url (string) | repository_url (string) | labels_url (string) | comments_url (string) | events_url (string) | html_url (string) | id (int64) | node_id (string) | number (int64) | title (string) | user (dict) | labels (list) | state (string) | locked (bool) | assignee (dict) | assignees (list) | comments (sequence) | created_at (int64) | updated_at (int64) | closed_at (int64, nullable) | author_association (string) | active_lock_reason (string) | body (string, nullable) | reactions (dict) | timeline_url (string) | state_reason (string) | draft (bool) | pull_request (dict) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/6120 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6120/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6120/comments | https://api.github.com/repos/huggingface/transformers/issues/6120/events | https://github.com/huggingface/transformers/issues/6120 | 667,667,599 | MDU6SXNzdWU2Njc2Njc1OTk= | 6,120 | Don't see how to use correct padding with QA pipeline | {
"login": "nathan-chappell",
"id": 36384302,
"node_id": "MDQ6VXNlcjM2Mzg0MzAy",
"avatar_url": "https://avatars.githubusercontent.com/u/36384302?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nathan-chappell",
"html_url": "https://github.com/nathan-chappell",
"followers_url": "https://api.github.com/users/nathan-chappell/followers",
"following_url": "https://api.github.com/users/nathan-chappell/following{/other_user}",
"gists_url": "https://api.github.com/users/nathan-chappell/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nathan-chappell/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nathan-chappell/subscriptions",
"organizations_url": "https://api.github.com/users/nathan-chappell/orgs",
"repos_url": "https://api.github.com/users/nathan-chappell/repos",
"events_url": "https://api.github.com/users/nathan-chappell/events{/privacy}",
"received_events_url": "https://api.github.com/users/nathan-chappell/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,596 | 1,601 | 1,601 | NONE | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarily intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
You can also try Stack Overflow (SO), where a whole community of PyTorch and
TensorFlow enthusiasts can help you out. In this case, make sure to tag your
question with the right deep learning framework as well as the
huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
-->
## Details
In some cases, when using a **QuestionAnsweringPipeline**, I get an error similar to the following:
*** ValueError: expected sequence of length 384 at dim 1 (got 260)
I've traced the problem to line 1496 of pipelines.py, introduced by [this commit](https://github.com/huggingface/transformers/commit/896300177bf9f35feac4698370212a80a5ab6138). The offending line is:
```python
fw_args = {k: torch.tensor(v, device=self.device) for (k, v) in fw_args.items()}
```
Basically, converting this **v** to a tensor throws an error because not every array in **v** has the same length, since the padding strategy has been changed (if I comment out the padding strategy in the call to **squad_convert_examples_to_features**, then the default value `"max_length"` takes effect and there is no problem). I guess this was done as some sort of optimization, but I'm not really sure how to use it. Every other argument to **squad_convert_examples_to_features** is passed as a *kwarg*, but this one is not. Maybe it should use a *kwarg* like everything else, so that if you need the padding (or don't want to have to deal with it) you can set the **padding_strategy** as you like? Or am I missing something?
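For illustration, here is a minimal sketch of that work-around: calling **squad_convert_examples_to_features** directly and forcing the `"max_length"` padding so that every feature array stacks into a tensor. This is a hypothetical usage example, not library code; `examples` (a list of `SquadExample`) and `tokenizer` are assumed to exist, and the numeric values are just typical defaults.
```python
from transformers import squad_convert_examples_to_features

# Sketch only: `examples` and `tokenizer` are assumed to exist in scope.
features = squad_convert_examples_to_features(
    examples=examples,
    tokenizer=tokenizer,
    max_seq_length=384,
    doc_stride=128,
    max_query_length=64,
    is_training=False,
    padding_strategy="max_length",  # pad every span to max_seq_length so torch.tensor() succeeds
)
```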
### Minimal code to reproduce:
```python
from transformers import QuestionAnsweringPipeline, AutoModelForQuestionAnswering, AutoTokenizer

model = AutoModelForQuestionAnswering.from_pretrained('twmkn9/distilbert-base-uncased-squad2')
tokenizer = AutoTokenizer.from_pretrained('twmkn9/distilbert-base-uncased-squad2')
pipeline = QuestionAnsweringPipeline(model=model, tokenizer=tokenizer)  # construct the pipeline (missing from the original snippet)

# I've omitted the context for brevity. You can, for example, take the "Plot" section from
# https://en.wikipedia.org/wiki/The_Matrix (it must be long enough to be split into several spans).
context = """..."""
pipeline({"question": "what was Neo's job?", "context": context})  # error as described above
```
<!-- Description of your issue -->
<!-- You should first ask your question on the forum or SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on the forum/Stack Overflow**: | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6120/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6120/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6119 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6119/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6119/comments | https://api.github.com/repos/huggingface/transformers/issues/6119/events | https://github.com/huggingface/transformers/issues/6119 | 667,646,929 | MDU6SXNzdWU2Njc2NDY5Mjk= | 6,119 | 🐛 Empty TypeError on BartTokenizerFast.decode(tensor) | {
"login": "astariul",
"id": 43774355,
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astariul",
"html_url": "https://github.com/astariul",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"repos_url": "https://api.github.com/users/astariul/repos",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I noticed this is due to `x` being a tensor. It works fine with a list :\r\n\r\n```python\r\nimport torch\r\nfrom transformers import BartTokenizerFast\r\n\r\nt = BartTokenizerFast.from_pretrained('facebook/bart-large')\r\n\r\nx = [0, 34, 45, 23, 54, 65, 765, 2]\r\nt.decode(x)\r\n```\r\n\r\n> `<s> has not at who one short</s>`\r\n\r\n---\r\n\r\nSo the current work-around is to first convert to a list :\r\n\r\n```python\r\nimport torch\r\nfrom transformers import BartTokenizerFast\r\n\r\nt = BartTokenizerFast.from_pretrained('facebook/bart-large')\r\n\r\nx = torch.tensor([0, 34, 45, 23, 54, 65, 765, 2])\r\nt.decode(x.tolist())\r\n```",
"Tokenizer bug. @mfuntowicz is this expected behavior?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,596 | 1,601 | 1,601 | CONTRIBUTOR | null | ## Environment info
`transformers` `3.0.2`
### Who can help
Summarization: @sshleifer
## To reproduce
```python
import torch
from transformers import BartTokenizerFast
t = BartTokenizerFast.from_pretrained('facebook/bart-large')
x = torch.tensor([0, 34, 45, 23, 54, 65, 765, 2])
t.decode(x)
```
will throw an empty `TypeError`:
```
File "/home/me/.venv/summarization/lib/python3.6/site-packages/tokenizers/implementations/base_tokenizer.py", line 267, in decode
return self._tokenizer.decode(ids, skip_special_tokens=skip_special_tokens)
TypeError
```
To reproduce: [Colab Notebook](https://colab.research.google.com/drive/1bnP8TvmRrHrMD-7H2MOQQSE8QrhCb7SC?usp=sharing)
## Expected behavior
No error is thrown, as with the regular `BartTokenizer`:
```python
import torch
from transformers import BartTokenizer
t = BartTokenizer.from_pretrained('facebook/bart-large')
x = torch.tensor([0, 34, 45, 23, 54, 65, 765, 2])
t.decode(x)
```
> `<s> has not at who one short</s>` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6119/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6119/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6118 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6118/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6118/comments | https://api.github.com/repos/huggingface/transformers/issues/6118/events | https://github.com/huggingface/transformers/issues/6118 | 667,582,560 | MDU6SXNzdWU2Njc1ODI1NjA= | 6,118 | Is `guid` allowed to be None in `InputExample`? | {
"login": "dnaaun",
"id": 52462475,
"node_id": "MDQ6VXNlcjUyNDYyNDc1",
"avatar_url": "https://avatars.githubusercontent.com/u/52462475?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dnaaun",
"html_url": "https://github.com/dnaaun",
"followers_url": "https://api.github.com/users/dnaaun/followers",
"following_url": "https://api.github.com/users/dnaaun/following{/other_user}",
"gists_url": "https://api.github.com/users/dnaaun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dnaaun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dnaaun/subscriptions",
"organizations_url": "https://api.github.com/users/dnaaun/orgs",
"repos_url": "https://api.github.com/users/dnaaun/repos",
"events_url": "https://api.github.com/users/dnaaun/events{/privacy}",
"received_events_url": "https://api.github.com/users/dnaaun/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"The guid is indeed optional, we can add this to the type annotation. We can't add the default `= None` however because it's before `text_a` in the dataclass, which is not optional.",
"Thanks! I'll probably shape up a bunch of type annotations into a PR sometime soon, so I'll make `guid` Optional(but without a default) in that PR if noone gets to it before me.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,596 | 1,602 | 1,602 | NONE | null | ## Environment
- `transformers` version: 3.0.2
Omitted the rest because they most likely don't affect this issue.
## To reproduce
```py
from transformers.data.processors.utils import SingleSentenceClassificationProcessor
processor = SingleSentenceClassificationProcessor(labels=["lbl1", "lbl2"])
processor.add_examples(texts_or_text_and_labels=["example1", "example2"]) # There's a default ids=None
print(processor[0])
```
prints
```InputExample(guid=None, text_a='example1', text_b=None, label=None)```
If `guid` is allowed to be `None`, that should be reflected in the type annotation (and documentation) of `InputExample`. If not, then `ids` should not be allowed to be `None`.
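For illustration, a minimal sketch of what the annotation fix discussed in the comments above could look like; the field order mirrors the library's `InputExample` dataclass, and this is a sketch rather than a definitive patch:
```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class InputExample:
    # Optional in type but with no `= None` default: `text_a` below has no
    # default, and dataclass fields with defaults cannot precede ones without.
    guid: Optional[str]
    text_a: str
    text_b: Optional[str] = None
    label: Optional[str] = None
```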
### Who can help
@sgugger because it's a documentation issue, @thomwolf because of `git blame`.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6118/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6118/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6117 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6117/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6117/comments | https://api.github.com/repos/huggingface/transformers/issues/6117/events | https://github.com/huggingface/transformers/issues/6117 | 667,524,556 | MDU6SXNzdWU2Njc1MjQ1NTY= | 6,117 | Using control codes for finetuning | {
"login": "tuhinjubcse",
"id": 3104771,
"node_id": "MDQ6VXNlcjMxMDQ3NzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3104771?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tuhinjubcse",
"html_url": "https://github.com/tuhinjubcse",
"followers_url": "https://api.github.com/users/tuhinjubcse/followers",
"following_url": "https://api.github.com/users/tuhinjubcse/following{/other_user}",
"gists_url": "https://api.github.com/users/tuhinjubcse/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tuhinjubcse/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tuhinjubcse/subscriptions",
"organizations_url": "https://api.github.com/users/tuhinjubcse/orgs",
"repos_url": "https://api.github.com/users/tuhinjubcse/repos",
"events_url": "https://api.github.com/users/tuhinjubcse/events{/privacy}",
"received_events_url": "https://api.github.com/users/tuhinjubcse/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"@julien-c ",
"It'd be best to re-read the paper and original implem, but I think you just prepend a control code to each of your samples.\r\n\r\nCc'ing @keskarnitish for information.\r\n\r\nPS/ for general questions, please use https://discuss.huggingface.co!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,595 | 1,601 | 1,601 | NONE | null | Hi
I have a use case of style-conditioned generation, where I ask the LM to generate a sentence based on the control code I provide. CTRL is well suited to that task.
Can you tell me how to use control codes for fine-tuning as well as inference? It should work like any other CLM such as GPT-2, but I specifically want to know about the style and control-code conditioning. What should the data format look like, and what else do I need to set up? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6117/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6117/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6116 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6116/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6116/comments | https://api.github.com/repos/huggingface/transformers/issues/6116/events | https://github.com/huggingface/transformers/issues/6116 | 667,521,606 | MDU6SXNzdWU2Njc1MjE2MDY= | 6,116 | No button for creating new post at the forum. | {
"login": "guotong1988",
"id": 4702353,
"node_id": "MDQ6VXNlcjQ3MDIzNTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4702353?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/guotong1988",
"html_url": "https://github.com/guotong1988",
"followers_url": "https://api.github.com/users/guotong1988/followers",
"following_url": "https://api.github.com/users/guotong1988/following{/other_user}",
"gists_url": "https://api.github.com/users/guotong1988/gists{/gist_id}",
"starred_url": "https://api.github.com/users/guotong1988/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/guotong1988/subscriptions",
"organizations_url": "https://api.github.com/users/guotong1988/orgs",
"repos_url": "https://api.github.com/users/guotong1988/repos",
"events_url": "https://api.github.com/users/guotong1988/events{/privacy}",
"received_events_url": "https://api.github.com/users/guotong1988/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"\r\n",
"I approved your post there @guotong1988 "
] | 1,595 | 1,596 | 1,596 | CONTRIBUTOR | null | 
discuss.huggingface.co | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6116/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6116/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6115 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6115/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6115/comments | https://api.github.com/repos/huggingface/transformers/issues/6115/events | https://github.com/huggingface/transformers/issues/6115 | 667,512,729 | MDU6SXNzdWU2Njc1MTI3Mjk= | 6,115 | Usage of Pytorch Native AMP in place of apex (Pytorch 1.6) in Trainer | {
"login": "prajjwal1",
"id": 24690051,
"node_id": "MDQ6VXNlcjI0NjkwMDUx",
"avatar_url": "https://avatars.githubusercontent.com/u/24690051?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/prajjwal1",
"html_url": "https://github.com/prajjwal1",
"followers_url": "https://api.github.com/users/prajjwal1/followers",
"following_url": "https://api.github.com/users/prajjwal1/following{/other_user}",
"gists_url": "https://api.github.com/users/prajjwal1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/prajjwal1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/prajjwal1/subscriptions",
"organizations_url": "https://api.github.com/users/prajjwal1/orgs",
"repos_url": "https://api.github.com/users/prajjwal1/repos",
"events_url": "https://api.github.com/users/prajjwal1/events{/privacy}",
"received_events_url": "https://api.github.com/users/prajjwal1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I had a query, Pytorch examples show loss being calculated as :\r\n```\r\nwith autocast():\r\n output = model(input)\r\n loss = loss_fn(output, target)\r\nscaler.scale(loss).backward()\r\n```\r\nBut in all `SequenceClassification` and other models, loss is calculated in the `forward pass`. We can use `@autocast` decorator on the forward pass as the docs suggest, but this introduce so many changes for one feature. Maybe, there's a workaround. Does computing loss in `autocast` scope affect the loss itself when `backward` is called upon it ?",
"Hi there,\r\n\r\nNote that we won't pin the version of PyTorch to 1.6 minimum, so the use of native mixed precision will have to be controlled by a test on the pytorch version (basically use native mixed precision when the version allows it and use apex otherwise). \r\n\r\nOtherwise, I don't think the loss being computed inside the model should be a problem, the line would probably be\r\n```\r\nwith autocast():\r\n outputs = model(**inputs)\r\n loss = outputs[0]\r\n```\r\ninside Trainer but I haven't run tests yet.\r\n\r\nYou're welcome to try to work on a PR with this, otherwise it is on my TODO for when I have time (hopefully in the next few weeks).",
"Hi Sylvain,\r\n\r\nI've opened up a PR. I know that pinning of version won't be done. To address, I've addressed this issue the same way as we handle `scheduler.get_lr()`. "
] | 1,595 | 1,596 | 1,596 | CONTRIBUTOR | null | # 🚀 Feature request
It would be nice to remove the Apex dependency for `fp16` training and use PyTorch's native [AMP methods](https://github.com/pytorch/pytorch/releases) in the `Trainer` class. PyTorch recommends that Apex users switch to its native implementation, and [even Apex says so](https://github.com/NVIDIA/apex/issues/818). Moreover, it would eliminate the need for users to build Apex themselves.
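For illustration, a minimal sketch (assuming nothing about the actual `Trainer` internals) of a version-gated training step: it uses native AMP when the installed PyTorch provides it and falls back to plain fp32 (or apex) otherwise, in the same spirit as the existing `scheduler.get_lr()` version handling mentioned in the comments. `model`, `inputs` (a dict of tensors), and `optimizer` are assumed to exist.
```python
import torch

# Native AMP ships with PyTorch 1.6+; gate on its presence instead of pinning the version.
use_native_amp = hasattr(torch.cuda, "amp") and hasattr(torch.cuda.amp, "autocast")
scaler = torch.cuda.amp.GradScaler() if use_native_amp else None

def training_step(model, inputs, optimizer):
    optimizer.zero_grad()
    if use_native_amp:
        with torch.cuda.amp.autocast():
            outputs = model(**inputs)
            loss = outputs[0]  # the loss is computed inside the model's forward pass
        scaler.scale(loss).backward()
        scaler.step(optimizer)
        scaler.update()
    else:  # pre-1.6 path: plain fp32 here; apex could be plugged in instead
        outputs = model(**inputs)
        loss = outputs[0]
        loss.backward()
        optimizer.step()
    return loss.item()
```
Computing the loss inside the `autocast` context (as the models already do in their forward pass) is fine; only `backward()` and the optimizer step need to run outside it.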
## Your contribution
I am happy to submit a PR if you think it would be a good addition. Please let me know.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6115/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6115/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6114 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6114/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6114/comments | https://api.github.com/repos/huggingface/transformers/issues/6114/events | https://github.com/huggingface/transformers/issues/6114 | 667,508,467 | MDU6SXNzdWU2Njc1MDg0Njc= | 6,114 | namespace object has no attribute to "enc_only" | {
"login": "zy329jy",
"id": 68934937,
"node_id": "MDQ6VXNlcjY4OTM0OTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/68934937?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zy329jy",
"html_url": "https://github.com/zy329jy",
"followers_url": "https://api.github.com/users/zy329jy/followers",
"following_url": "https://api.github.com/users/zy329jy/following{/other_user}",
"gists_url": "https://api.github.com/users/zy329jy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zy329jy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zy329jy/subscriptions",
"organizations_url": "https://api.github.com/users/zy329jy/orgs",
"repos_url": "https://api.github.com/users/zy329jy/repos",
"events_url": "https://api.github.com/users/zy329jy/events{/privacy}",
"received_events_url": "https://api.github.com/users/zy329jy/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"If you do not want to use encoder only, I think it is fine to just comment that elif clause out",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,595 | 1,602 | 1,602 | NONE | null | # ❓ Questions & Help
When I run distillation.py, I get the following error:
```
File "E:/transformers-master/examples/seq2seq/distillation.py", line 370, in create_module
    elif args.enc_only:
AttributeError: 'Namespace' object has no attribute 'enc_only'
```
How can I deal with this problem? Thanks a lot.
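For illustration, a minimal work-around sketch in the spirit of the suggestion in the comments above (this is hypothetical glue code, not part of the repository; `args` is the parsed `argparse.Namespace` that `create_module` receives):
```python
# Give the Namespace the attribute that create_module() expects before it runs;
# False keeps the full encoder-decoder student (equivalent to skipping the elif branch).
if not hasattr(args, "enc_only"):
    args.enc_only = False
```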
<!-- You should first ask your question on the forum or SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on the forum/Stack Overflow**: | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6114/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6114/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6113 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6113/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6113/comments | https://api.github.com/repos/huggingface/transformers/issues/6113/events | https://github.com/huggingface/transformers/issues/6113 | 667,486,468 | MDU6SXNzdWU2Njc0ODY0Njg= | 6,113 | 🌟 BigBird | {
"login": "astariul",
"id": 43774355,
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astariul",
"html_url": "https://github.com/astariul",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"repos_url": "https://api.github.com/users/astariul/repos",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"When will be getting this model?",
"Until the weights and code are not published I think we won't focus too much on adding the model",
"I am planning to start a small tight group of individuals who will work on implementing research papers for proper business use cases. \r\nPlease let me know if anyone is interested for the same.\r\n**Project 1 :** BigBert for Genomics Research",
"> I am planning to start a small tight group of individuals who will work on implementing research papers for proper business use cases.\r\n> Please let me know if anyone is interested for the same.\r\n> **Project 1 :** BigBert for Genomics Research\r\n\r\nI'll be up for this project",
"I'll be up for this project too. I got a slightly different use case idea, tho. :)\r\n",
"@sathvikask0 \r\nI am super interesting about the **BigBird for Genomics Research**. Are you planning to release the fixed-length embedding part as well?",
"I'm also doing some research on using Google BigBird for genomics research. There's a competition going on right now and we can definitely leverage BigBird for genomics sequencing. ",
"@sathvikask0 @nikhilbyte @seduerr91 \r\nWhat if we could meet together and talk about the BigBert implementation for Genomics Research?",
"Sure do you want to set up a google meet?",
"I'm in.",
"Hello @nikhilbyte @seduerr91 @ptynecki are we still doing this, I want to be a part of it!",
"> Hello @nikhilbyte @seduerr91 @ptynecki are we still doing this, I want to be a part of it!\r\n\r\nI'm up for this. Let me know how to connect with you.",
"@patrickvonplaten actually you can read on the paper (appendix E, section E.4) that for summarization, \"For the large size model, we lift weight from the state-of-the-art Pegasus model [107], which is pretrained using an objective designed for summarization task\". Do you think it would be possible to include the new architecture, using the weights already available of `google/pegasus-large`?",
"Is there an official code base by now? ",
"As soon as weights and codebase is out, we'll integrate! But it does not make much sense IMO to do it before that",
"> I am planning to start a small tight group of individuals who will work on implementing research papers for proper business use cases.\r\n> Please let me know if anyone is interested for the same.\r\n> **Project 1 :** BigBert for Genomics Research\r\n\r\nI would like to join the effort as well",
"It seems BigBird official [code](https://github.com/google-research/bigbird) and [pretrained models](https://console.cloud.google.com/storage/browser/bigbird-transformer) are finally out (well partially). The code seems to be written for TPUs mainly so not sure how easy to port to huggingface. Also I see a keras based BigBird implementation as part of [Tensorflow official models](https://github.com/tensorflow/models/tree/master/official/nlp/projects), which might be easier to port. So let's start working on it!",
"will try to allocate some time next week to start porting the model :-) ",
"Can you please add me to this group, I would also like to work on this project.",
"@patrickvonplaten, do you know when it will be ready? 🐦 ",
"Any update?",
"Has there been any progress on this? :)",
"@patrickvonplaten I see #10183 is passing all its checks, is it close to being able to merge? Looking forward to using with my project!",
"Hi, it will be merged by next week.",
"Is this model available before this weekend?",
"@DarthAnakin BidBird is available as of this morning on the `master` branch and will be in the next release",
"@LysandreJik Thanks!",
"@LysandreJik very excited to see this complete. When will the next release happen?",
"We expect to do it early next week!",
"Any plans to add a Fast Tokenizer for this model ?\r\nI would be happy to help integrate it.\r\n@patrickvonplaten "
] | 1,595 | 1,620 | 1,617 | CONTRIBUTOR | null | # 🌟 New model addition
## Model description
Paper: https://arxiv.org/pdf/2007.14062.pdf
Abstract:
> Transformers-based models, such as BERT, have been one of the most successful deep learning
models for NLP. Unfortunately, one of their core limitations is the quadratic dependency
(mainly in terms of memory) on the sequence length due to their full attention mechanism.
To remedy this, we propose, BigBird, a sparse attention mechanism that reduces this
quadratic dependency to linear. We show that BigBird is a universal approximator of
sequence functions and is Turing complete, thereby preserving these properties of the
quadratic, full attention model. Along the way, our theoretical analysis reveals some of the
benefits of having O(1) global tokens (such as CLS), that attend to the entire sequence
as part of the sparse attention mechanism. The proposed sparse attention can handle
sequences of length up to 8x of what was previously possible using similar hardware. As
a consequence of the capability to handle longer context, BigBird drastically improves
performance on various NLP tasks such as question answering and summarization. We also
propose novel applications to genomics data.
## Open source status
* [ ] the model implementation is available: *No*
* [ ] the model weights are available: *No*
* [ ] who are the authors: *?*
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6113/reactions",
"total_count": 183,
"+1": 77,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 34,
"rocket": 13,
"eyes": 59
} | https://api.github.com/repos/huggingface/transformers/issues/6113/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6112 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6112/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6112/comments | https://api.github.com/repos/huggingface/transformers/issues/6112/events | https://github.com/huggingface/transformers/issues/6112 | 667,469,044 | MDU6SXNzdWU2Njc0NjkwNDQ= | 6,112 | Is there any way that I can use the HuggingFace Transformers as Pyro models? | {
"login": "h56cho",
"id": 52889259,
"node_id": "MDQ6VXNlcjUyODg5MjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/52889259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/h56cho",
"html_url": "https://github.com/h56cho",
"followers_url": "https://api.github.com/users/h56cho/followers",
"following_url": "https://api.github.com/users/h56cho/following{/other_user}",
"gists_url": "https://api.github.com/users/h56cho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/h56cho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/h56cho/subscriptions",
"organizations_url": "https://api.github.com/users/h56cho/orgs",
"repos_url": "https://api.github.com/users/h56cho/repos",
"events_url": "https://api.github.com/users/h56cho/events{/privacy}",
"received_events_url": "https://api.github.com/users/h56cho/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,595 | 1,596 | 1,596 | NONE | null | Hello,
`Pyro` is a probabilistic programming library built on PyTorch that, among other things, allows users to convert a given (frequentist) neural network into a Bayesian neural network.
I can convert a HuggingFace Transformer into a Pyro model like below:
```python
import torch
from torch import distributions
from transformers import RobertaTokenizer, RobertaForMultipleChoice
import pyro
import pyro.infer
import pyro.optim
import pyro.distributions as dist
import pyro.nn.module as module
from torch import nn
from pyro.infer import SVI
from pyro.infer import autoguide as guides  # needed for guides.AutoDelta below; missing from the original snippet
# get the pre-trained HuggingFace RobertaForMultipleChoice
model_RobertaForMultipleChoice = RobertaForMultipleChoice.from_pretrained('roberta-large', output_hidden_states = True)
module.to_pyro_module_(model_RobertaForMultipleChoice)
# Now we can attempt to be fully Bayesian:
for m in model_RobertaForMultipleChoice.modules():
for name, value in list(m.named_parameters(recurse=False)):
setattr(m, name, module.PyroSample(prior=dist.Normal(0, 1)
.expand(value.shape)
.to_event(value.dim())))
# define parameters for training
guide_delta = guides.AutoDelta(model_RobertaForMultipleChoice)
```
But when I try to compute the mc_loss from this Bayesian Transformer, Python generates an error:
```python
mc_loss = model(input_ids = input_ids, attention_mask = attention_mask, labels = mc_labels)[0]
Traceback (most recent call last):
File "STAT946_final_project_code_v4.py", line 625, in <module>
success_rate_list_diag_normal = main_function_diag_normal('/home/ec2-user/test.txt', 'test_ans_num.txt', num_iter, log_interval)
File "STAT946_final_project_code_v4.py", line 415, in main_function_diag_normal
best_model_RobertaForMultipleChoice_diag_normal = train_loop(model_RobertaForMultipleChoice, tokenizer, optimizer_1, scheduler_1, log_interval, svi_diag_normal, guide_diag_normal, best_model_RobertaForMultipleChoice_diag_normal)
File "STAT946_final_project_code_v4.py", line 342, in train_loop
optimizer, scheduler, log_interval, svi, guide, epoch)
File "STAT946_final_project_code_v4.py", line 237, in train_mc_head
mc_loss = model(input_ids = input_ids, attention_mask = attention_mask, labels = mc_labels)[0]
File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/pyro/nn/module.py", line 413, in __call__
return super().__call__(*args, **kwargs)
File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/transformers/modeling_roberta.py", line 441, in forward
output_hidden_states=output_hidden_states,
File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/pyro/nn/module.py", line 413, in __call__
return super().__call__(*args, **kwargs)
File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/transformers/modeling_bert.py", line 732, in forward
extended_attention_mask: torch.Tensor = self.get_extended_attention_mask(attention_mask, input_shape, device)
File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/transformers/modeling_utils.py", line 228, in get_extended_attention_mask
extended_attention_mask = extended_attention_mask.to(dtype=self.dtype) # fp16 compatibility
File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/transformers/modeling_utils.py", line 159, in dtype
first_tuple = next(gen)
StopIteration
```
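For what it's worth, the traceback points at the `dtype` property in `modeling_utils.py` (line 159), which peeks at the first entry of the model's parameter iterator; once the loop above has replaced every parameter with a `PyroSample`, that iterator is empty and `next(gen)` raises `StopIteration`. A quick illustrative check (hypothetical snippet, not library code):
```python
# After the PyroSample conversion, the model no longer exposes plain
# nn.Parameter objects, so utilities that inspect them come up empty:
print(len(list(model_RobertaForMultipleChoice.parameters())))  # expected: 0
```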
Is there any way that I can compute the mc_loss in the regular way after converting a HuggingFace Transformer into a Bayesian Transformer?
Thank you. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6112/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6112/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6111 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6111/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6111/comments | https://api.github.com/repos/huggingface/transformers/issues/6111/events | https://github.com/huggingface/transformers/pull/6111 | 667,403,742 | MDExOlB1bGxSZXF1ZXN0NDU4MDc3MDc2 | 6,111 | Use FutureWarning to deprecate | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6111?src=pr&el=h1) Report\n> Merging [#6111](https://codecov.io/gh/huggingface/transformers/pull/6111?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b1c8b76907ad605c7b25bb12580cb46d70207b7a&el=desc) will **increase** coverage by `0.65%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6111?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6111 +/- ##\n==========================================\n+ Coverage 77.21% 77.86% +0.65% \n==========================================\n Files 146 146 \n Lines 26325 26325 \n==========================================\n+ Hits 20327 20499 +172 \n+ Misses 5998 5826 -172 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6111?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6111/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `82.04% <ø> (ø)` | |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6111/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.60% <ø> (+1.16%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6111/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6111/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.48% <0.00%> (+0.28%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6111/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.96% <0.00%> (+0.97%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6111/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `80.95% <0.00%> (+1.00%)` | :arrow_up: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6111/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `42.48% <0.00%> (+8.92%)` | :arrow_up: |\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6111/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `95.71% <0.00%> (+35.71%)` | :arrow_up: |\n| [src/transformers/tokenization\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6111/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `82.93% <0.00%> (+66.66%)` | :arrow_up: |\n| ... and [2 more](https://codecov.io/gh/huggingface/transformers/pull/6111/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6111?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6111?src=pr&el=footer). Last update [b1c8b76...fb6e785](https://codecov.io/gh/huggingface/transformers/pull/6111?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,595 | 1,596 | 1,596 | COLLABORATOR | null | As discussed, `DeprecationWarning` -> `FutureWarning` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6111/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6111/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6111",
"html_url": "https://github.com/huggingface/transformers/pull/6111",
"diff_url": "https://github.com/huggingface/transformers/pull/6111.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6111.patch",
"merged_at": 1596014454000
} |
https://api.github.com/repos/huggingface/transformers/issues/6110 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6110/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6110/comments | https://api.github.com/repos/huggingface/transformers/issues/6110/events | https://github.com/huggingface/transformers/pull/6110 | 667,388,277 | MDExOlB1bGxSZXF1ZXN0NDU4MDY0MjY0 | 6,110 | Doc tokenizer | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6110?src=pr&el=h1) Report\n> Merging [#6110](https://codecov.io/gh/huggingface/transformers/pull/6110?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/11792d7826854979bb532b6da09bc3796b09ea6a&el=desc) will **decrease** coverage by `1.54%`.\n> The diff coverage is `95.71%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6110?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6110 +/- ##\n==========================================\n- Coverage 78.73% 77.19% -1.55% \n==========================================\n Files 146 146 \n Lines 26314 26353 +39 \n==========================================\n- Hits 20719 20342 -377 \n- Misses 5595 6011 +416 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6110?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6110/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.32% <ø> (ø)` | |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6110/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `97.39% <71.42%> (-1.71%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6110/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.00% <88.88%> (+0.04%)` | :arrow_up: |\n| [src/transformers/configuration\\_encoder\\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/6110/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2VuY29kZXJfZGVjb2Rlci5weQ==) | `100.00% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6110/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.39% <100.00%> (+0.01%)` | :arrow_up: |\n| [src/transformers/modeling\\_encoder\\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/6110/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbmNvZGVyX2RlY29kZXIucHk=) | `92.20% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6110/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `90.90% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6110/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `89.45% <100.00%> (+0.01%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6110/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.31% <100.00%> (+0.05%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6110/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.53% <100.00%> (ø)` | |\n| ... and [25 more](https://codecov.io/gh/huggingface/transformers/pull/6110/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6110?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6110?src=pr&el=footer). Last update [11792d7...80a44a8](https://codecov.io/gh/huggingface/transformers/pull/6110?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,595 | 1,596 | 1,596 | COLLABORATOR | null | Improve the documentation of tokenizers, following what was done for the models last week, mainly:
- make sure all docstrings of public functions are properly formatted for sphinx
- make sure all args are properly documented
- add or fix type hints wherever necessary
The methods/classes that are not in the main `__init__` are all on the page `internal/tokenization_utils.html`. I had added `SpecialTokensMixin` to the `__init__` of transformers a while ago to easily document it, but it can be removed from there now if we want.
I rewrote a few dosctrings here and there so pinging @n1t0 and @mfuntowicz to make sure I didn't write anything bad.
[Preview](https://65719-155220641-gh.circle-artifacts.com/0/docs/_build/html/main_classes/tokenizer.html) of the tokenization page.
[Preview](https://65719-155220641-gh.circle-artifacts.com/0/docs/_build/html/internal/tokenization_utils.html) of the tokenization utils page. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6110/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6110/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6110",
"html_url": "https://github.com/huggingface/transformers/pull/6110",
"diff_url": "https://github.com/huggingface/transformers/pull/6110.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6110.patch",
"merged_at": 1596135080000
} |
https://api.github.com/repos/huggingface/transformers/issues/6109 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6109/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6109/comments | https://api.github.com/repos/huggingface/transformers/issues/6109/events | https://github.com/huggingface/transformers/issues/6109 | 667,353,926 | MDU6SXNzdWU2NjczNTM5MjY= | 6,109 | StopIteration error in RobertaForMultipleChoice | {
"login": "h56cho",
"id": 52889259,
"node_id": "MDQ6VXNlcjUyODg5MjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/52889259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/h56cho",
"html_url": "https://github.com/h56cho",
"followers_url": "https://api.github.com/users/h56cho/followers",
"following_url": "https://api.github.com/users/h56cho/following{/other_user}",
"gists_url": "https://api.github.com/users/h56cho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/h56cho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/h56cho/subscriptions",
"organizations_url": "https://api.github.com/users/h56cho/orgs",
"repos_url": "https://api.github.com/users/h56cho/repos",
"events_url": "https://api.github.com/users/h56cho/events{/privacy}",
"received_events_url": "https://api.github.com/users/h56cho/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,595 | 1,595 | 1,595 | NONE | null | Hello,
I am trying to execute the line below for my `RobertaForMultipleChoice` model:
```python
# retrieve the resulting mc_loss
mc_loss = model(input_ids = input_ids, attention_mask = attention_mask, labels = mc_labels)[0]
```
but this generates the following error:
```python
Traceback (most recent call last):
File "STAT946_final_project_code_v4.py", line 623, in <module>
success_rate_list_diag_normal = main_function_diag_normal('/home/ec2-user/test.txt', 'test_ans_num.txt', num_iter, log_interval)
File "STAT946_final_project_code_v4.py", line 414, in main_function_diag_normal
best_model_RobertaForMultipleChoice_diag_normal = train_loop(model_RobertaForMultipleChoice, tokenizer, optimizer_1, scheduler_1, log_interval, svi_diag_normal, guide_diag_normal, best_model_RobertaForMultipleChoice_diag_normal)
File "STAT946_final_project_code_v4.py", line 341, in train_loop
optimizer, scheduler, log_interval, svi, guide, epoch)
File "STAT946_final_project_code_v4.py", line 236, in train_mc_head
mc_loss = model(input_ids = input_ids, attention_mask = attention_mask, labels = mc_labels)[0]
File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/pyro/nn/module.py", line 413, in __call__
return super().__call__(*args, **kwargs)
File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/transformers/modeling_roberta.py", line 441, in forward
output_hidden_states=output_hidden_states,
File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/pyro/nn/module.py", line 413, in __call__
return super().__call__(*args, **kwargs)
File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/transformers/modeling_bert.py", line 732, in forward
extended_attention_mask: torch.Tensor = self.get_extended_attention_mask(attention_mask, input_shape, device)
File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/transformers/modeling_utils.py", line 228, in get_extended_attention_mask
extended_attention_mask = extended_attention_mask.to(dtype=self.dtype) # fp16 compatibility
File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/transformers/modeling_utils.py", line 159, in dtype
first_tuple = next(gen)
StopIteration
```
How can I get around this type of error? Thank you. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6109/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6109/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6108 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6108/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6108/comments | https://api.github.com/repos/huggingface/transformers/issues/6108/events | https://github.com/huggingface/transformers/issues/6108 | 667,338,760 | MDU6SXNzdWU2NjczMzg3NjA= | 6,108 | allenai/longformer-large-4096 unavailable | {
"login": "CMobley7",
"id": 10121829,
"node_id": "MDQ6VXNlcjEwMTIxODI5",
"avatar_url": "https://avatars.githubusercontent.com/u/10121829?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CMobley7",
"html_url": "https://github.com/CMobley7",
"followers_url": "https://api.github.com/users/CMobley7/followers",
"following_url": "https://api.github.com/users/CMobley7/following{/other_user}",
"gists_url": "https://api.github.com/users/CMobley7/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CMobley7/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CMobley7/subscriptions",
"organizations_url": "https://api.github.com/users/CMobley7/orgs",
"repos_url": "https://api.github.com/users/CMobley7/repos",
"events_url": "https://api.github.com/users/CMobley7/events{/privacy}",
"received_events_url": "https://api.github.com/users/CMobley7/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The issue seems to have resolved itself. So, I'm closing the issue."
] | 1,595 | 1,595 | 1,595 | NONE | null | For some reason, I'm unable to download allenai/longformer-large-4096. Everything was working an hour ago, but all of a sudden I get the error included below. It's still listed on https://huggingface.co/models?search=allenai%2Flongformer-large-4096. I'm not sure what's up. Any ideas?
```
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/transformers/configuration_utils.py", line 242, in get_config_dict
raise EnvironmentError
OSError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/ptcc/run_glue.py", line 246, in <module>
main()
File "/ptcc/run_glue.py", line 123, in main
cache_dir=model_args.cache_dir,
File "/usr/local/lib/python3.6/dist-packages/transformers/configuration_auto.py", line 203, in from_pretrained
config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/configuration_utils.py", line 251, in get_config_dict
raise EnvironmentError(msg)
OSError: Can't load config for 'allenai/longformer-large-4096'. Make sure that:
- 'allenai/longformer-large-4096' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'allenai/longformer-large-4096' is the correct path to a directory containing a config.json file
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6108/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6108/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6107 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6107/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6107/comments | https://api.github.com/repos/huggingface/transformers/issues/6107/events | https://github.com/huggingface/transformers/issues/6107 | 667,326,244 | MDU6SXNzdWU2NjczMjYyNDQ= | 6,107 | Where do the Masked Language Model perform mask on the input data | {
"login": "SusanSun8",
"id": 61705975,
"node_id": "MDQ6VXNlcjYxNzA1OTc1",
"avatar_url": "https://avatars.githubusercontent.com/u/61705975?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SusanSun8",
"html_url": "https://github.com/SusanSun8",
"followers_url": "https://api.github.com/users/SusanSun8/followers",
"following_url": "https://api.github.com/users/SusanSun8/following{/other_user}",
"gists_url": "https://api.github.com/users/SusanSun8/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SusanSun8/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SusanSun8/subscriptions",
"organizations_url": "https://api.github.com/users/SusanSun8/orgs",
"repos_url": "https://api.github.com/users/SusanSun8/repos",
"events_url": "https://api.github.com/users/SusanSun8/events{/privacy}",
"received_events_url": "https://api.github.com/users/SusanSun8/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hey @SusanSun8, we are trying to move \"non-bugs\" questions to the forum: https://discuss.huggingface.co/ . Could you maybe post your question there again? \r\n\r\nHere is the code that is responsible for Masked Language Modeling: https://github.com/huggingface/transformers/blob/f6cb0f806efecb64df40c946dacaad0adad33d53/src/transformers/data/data_collator.py#L107."
] | 1,595 | 1,597 | 1,597 | NONE | null | # ❓ Questions & Help
## Details
I am trying to pre-train a BERT model from scratch with my own vocabulary, using only the masked language modeling objective.
I am having trouble finding where exactly the code masks 15% of the tokens and replaces them with 80% [MASK], 10% random tokens, and 10% the original token.
I noticed that the "labels" input seems to indicate the positions where tokens are masked. Does that mean that when I preprocess the data I need to mask the tokens myself and then mark the positions of the masks in the "labels" input? If so, is "labels" the only input that would be affected? Are there any other input variables, such as the "masked_lm_positions" and "masked_lm_ids" in google-bert, that I need to take care of?
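For reference, here is a minimal sketch of the 80/10/10 masking as I understand it from `DataCollatorForLanguageModeling.mask_tokens` (simplified — the real code also excludes special tokens and padding, and the names here are mine):
```python
import torch

def mask_tokens(inputs, tokenizer, mlm_probability=0.15):
    labels = inputs.clone()
    # pick 15% of the tokens (special tokens are excluded in the real code)
    probability_matrix = torch.full(labels.shape, mlm_probability)
    masked_indices = torch.bernoulli(probability_matrix).bool()
    labels[~masked_indices] = -100  # loss is only computed on masked positions

    # 80% of the masked tokens -> [MASK]
    indices_replaced = torch.bernoulli(torch.full(labels.shape, 0.8)).bool() & masked_indices
    inputs[indices_replaced] = tokenizer.convert_tokens_to_ids(tokenizer.mask_token)

    # 10% -> a random token (half of the remaining 20%)
    indices_random = torch.bernoulli(torch.full(labels.shape, 0.5)).bool() & masked_indices & ~indices_replaced
    random_words = torch.randint(len(tokenizer), labels.shape, dtype=torch.long)
    inputs[indices_random] = random_words[indices_random]

    # the final 10% keep the original token
    return inputs, labels
```
If this is right, it looks like I would not need `masked_lm_positions`/`masked_lm_ids` at all — the collator builds both the masked `input_ids` and the `labels` (set to -100 everywhere except the masked positions). Can someone confirm?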
 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6107/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6107/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6106 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6106/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6106/comments | https://api.github.com/repos/huggingface/transformers/issues/6106/events | https://github.com/huggingface/transformers/issues/6106 | 667,318,411 | MDU6SXNzdWU2NjczMTg0MTE= | 6,106 | Weird Behavior on XLNetTokenizer after new tokens added | {
"login": "riven314",
"id": 21143399,
"node_id": "MDQ6VXNlcjIxMTQzMzk5",
"avatar_url": "https://avatars.githubusercontent.com/u/21143399?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/riven314",
"html_url": "https://github.com/riven314",
"followers_url": "https://api.github.com/users/riven314/followers",
"following_url": "https://api.github.com/users/riven314/following{/other_user}",
"gists_url": "https://api.github.com/users/riven314/gists{/gist_id}",
"starred_url": "https://api.github.com/users/riven314/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/riven314/subscriptions",
"organizations_url": "https://api.github.com/users/riven314/orgs",
"repos_url": "https://api.github.com/users/riven314/repos",
"events_url": "https://api.github.com/users/riven314/events{/privacy}",
"received_events_url": "https://api.github.com/users/riven314/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,595 | 1,601 | 1,601 | NONE | null | ## Environment info
- `transformers` version: 3.0.2
- Platform: ubuntu
- Python version: 3.7.7
- PyTorch version (GPU?): 1.5.1 (GPU)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@mfuntowicz
## Information
I am using a pretrained `XLNetTokenizer` and added a new token to it. After that, the output from `tokenizer.tokenize` looks weird.
(I am not sure whether the problem comes from `transformers` or `tokenizers`, but I am posting it here anyway.)
## To reproduce
```
from transformers import XLNetTokenizer
tokenizer = XLNetTokenizer.from_pretrained('xlnet-base-cased')
test = 'This is so awesome!!@username!'
out = tokenizer.tokenize(test)
# without new tokens added, @username is broken down as expected
out
>> ['▁This', '▁is', '▁so', '▁awesome', '!!', '@', 'user', 'name', '!']
```
```
from tokenizers import AddedToken
# introduce new tokens (with white-space on right only)
new_tokens = AddedToken(
'@username', single_word = True,
lstrip = False, rstrip = True
)
tokenizer.add_tokens(new_tokens)
out = tokenizer.tokenize(test)
# weird result about the white-space around new tokens
out
>> ['▁This', '▁is', '▁so', '▁awesome', '!!', '@username', '▁', '!']
```
Two things here look weird to me:
1. The new token "@username" was added with `single_word = True`, i.e. it should only match as a stand-alone word, so "!!@username" should not be broken down into "!!", "@username" (I think it should be broken down into "!!", "@", "user", "name", "!").
2. I am a bit confused about why a white-space token is produced after the "@username" token (i.e. '@username', '▁', '!').
And oddly, when I encode and decode the sentence back, the white-space token after "@username" is not translated into an actual whitespace. (Also note there is a white-space added before "@username" in the decoded output, which means the new token is correctly identified as requiring a white-space on its left):
```
enc = tokenizer.encode(test, add_special_tokens = False)
dec = tokenizer.decode(enc)
# in the encoding stage, the second-to-last token (id 17) is the whitespace piece
enc
>> [122, 27, 102, 8729, 7675, 32000, 17, 136]
# in the decoding stage, the whitespace disappears
dec
>> This is so awesome!! @username!
```
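For completeness, mapping the ids back to tokens makes the stray whitespace piece visible (I reconstructed this output by hand from the ids above, so treat it as illustrative):
```
tokenizer.convert_ids_to_tokens(enc)
>> ['▁This', '▁is', '▁so', '▁awesome', '!!', '@username', '▁', '!']
```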
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6106/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6106/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6105 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6105/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6105/comments | https://api.github.com/repos/huggingface/transformers/issues/6105/events | https://github.com/huggingface/transformers/issues/6105 | 667,314,961 | MDU6SXNzdWU2NjczMTQ5NjE= | 6,105 | Recursive error calling generate in forward | {
"login": "aclifton314",
"id": 53267795,
"node_id": "MDQ6VXNlcjUzMjY3Nzk1",
"avatar_url": "https://avatars.githubusercontent.com/u/53267795?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aclifton314",
"html_url": "https://github.com/aclifton314",
"followers_url": "https://api.github.com/users/aclifton314/followers",
"following_url": "https://api.github.com/users/aclifton314/following{/other_user}",
"gists_url": "https://api.github.com/users/aclifton314/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aclifton314/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aclifton314/subscriptions",
"organizations_url": "https://api.github.com/users/aclifton314/orgs",
"repos_url": "https://api.github.com/users/aclifton314/repos",
"events_url": "https://api.github.com/users/aclifton314/events{/privacy}",
"received_events_url": "https://api.github.com/users/aclifton314/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"cool workaround! I think you might have a cleaner solution, potentially, if you compose instead of inheriting from `GPT2WithLMHead`. This is not worthy of a bug report, (what's the bug), but it could be an interesting proposal/project for examples/ if it works well on a task with a public dataset.\r\n\r\nCould I hear more about your task?\r\nAre you successfully backpropagating through beam search?",
"Hi @sshleifer, thanks for your reply!\r\nI wasn't quite sure if it would warrant a bug report or feature suggestion (or neither). Thanks for clearing that up.\r\n\r\nThe task I am doing is text generation. I have a dataset of scientific abstracts that I want to finetune the GPT2 pretrained model on to generate similar abstracts. However, I wanted to replace the loss with a loss from a N-grams model I have. The procedure looks something like this:\r\n\r\n- Feed sample abstract into Pre-trained GPT2.\r\n- Generate a sequence of specified length based off that sample.\r\n- Calculate the loss using the N-grams model I have and use that loss for backpropagation.\r\n\r\nBasically I am replacing the loss function found in `GPT2LMHeadModel` with my own and utilizing the `generate` method in `GPT2Pretrained` to generate new abstracts. I was doing the generation one token at a time using a naive method, but the `generate` method is so handy for the generation that I really wanted to utilize it (and all the hard work the HF team has put in).\r\n\r\nI have not tried to backpropagate yet. You'll notice most of the arguments that go into `Trainer` are pretty lousy. Right now, I just want to see if it will start training with no errors. I hope to try to do some more thoughtful training later this week.",
"Any further thoughts on this?",
"Nope. Excited to see what code modifications are required to get this working!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,595 | 1,604 | 1,604 | NONE | null | ## System Info
Pop!_OS 20.04
Pytorch: 1.5.1
Transformers: 3.0.2
Python: 3.7.6
## Question
Here is the training loop:
```python
def sd_data_collator(dataset_samples_list):
tokenizer = GPT2Tokenizer.from_pretrained('gpt2', padding_side='right')
tokenizer.pad_token = tokenizer.eos_token
encoded_results = tokenizer(dataset_samples_list, padding=True, truncation=True, return_tensors='pt', return_attention_mask=True)
batch = {}
batch['input_ids'] = torch.stack([result for result in encoded_results['input_ids']])
batch['past'] = None
batch['attention_mask'] = torch.stack([result for result in encoded_results['attention_mask']])
batch['position_ids'] = None
batch['head_mask'] = None
batch['inputs_embeds'] = None
batch['labels'] = None
batch['use_cache'] = True
return batch
sd_dataset = SDAbstractsDataset('/path/to/sd_samples_64.csv')
training_args = TrainingArguments(
output_dir='/path/to/finetuned_gpt2',
do_train=True,
per_device_train_batch_size=4,
learning_rate=1e-3,
num_train_epochs=1
)
model = GPT2FinetunedWithNgrams.from_pretrained('gpt2')
trainer = Trainer(
model=model,
args=training_args,
train_dataset=sd_dataset,
data_collator = sd_data_collator
)
trainer.train()
```
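(A side note on the collator above: as written, it rebuilds the tokenizer on every batch. A sketch of hoisting it out — the behaviour should be equivalent, and `SD_TOKENIZER` is just a name I made up:)
```python
from transformers import GPT2Tokenizer

# built once, reused for every batch
SD_TOKENIZER = GPT2Tokenizer.from_pretrained('gpt2', padding_side='right')
SD_TOKENIZER.pad_token = SD_TOKENIZER.eos_token

def sd_data_collator(dataset_samples_list):
    encoded_results = SD_TOKENIZER(dataset_samples_list, padding=True, truncation=True,
                                   return_tensors='pt', return_attention_mask=True)
    return {
        'input_ids': encoded_results['input_ids'],
        'attention_mask': encoded_results['attention_mask'],
        'past': None, 'position_ids': None, 'head_mask': None,
        'inputs_embeds': None, 'labels': None, 'use_cache': True,
    }
```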
Here's the model class and its `forward` method:
```python
class GPT2FinetunedWithNgrams(GPT2LMHeadModel):
def __init__(self, config):
super().__init__(config)
self.tokenizer = GPT2Tokenizer.from_pretrained('gpt2', padding_side='right')
self.tokenizer.pad_token = self.tokenizer.eos_token
def load_ngrams_model(self, ngrams_model_path):
self.ngrams_model = NGrams(ngrams_model_path)
def forward(
self,
input_ids=None,
past=None,
attention_mask=None,
token_type_ids=None,
position_ids=None,
head_mask=None,
inputs_embeds=None,
labels=None,
use_cache=True,
):
output = self.generate(input_ids=input_ids, max_length=474)
decoded_output = self.tokenizer.decode(output[0], skip_special_tokens=True)
```
Here's the whole error. It's really lengthy and I cut out the repetitions:
```python
Some weights of GPT2FinetunedWithNgrams were not initialized from the model checkpoint at gpt2 and are newly initialized: ['h.0.attn.masked_bias', 'h.1.attn.masked_bias', 'h.2.attn.masked_bias', 'h.3.attn.masked_bias', 'h.4.attn.masked_bias', 'h.5.attn.masked_bias', 'h.6.attn.masked_bias', 'h.7.attn.masked_bias', 'h.8.attn.masked_bias', 'h.9.attn.masked_bias', 'h.10.attn.masked_bias', 'h.11.attn.masked_bias', 'lm_head.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Epoch: 0%| | 0/1 [00:00<?, ?it/s]
Iteration: 0%| | 0/16 [00:00<?, ?it/s]Setting `pad_token_id` to 50256 (first `eos_token_id`) to generate sequence
Setting `pad_token_id` to 50256 (first `eos_token_id`) to generate sequence
Setting `pad_token_id` to 50256 (first `eos_token_id`) to generate sequence
Setting `pad_token_id` to 50256 (first `eos_token_id`) to generate sequence
.
.
.
File "/path/to/anaconda3/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 15, in decorate_context
return func(*args, **kwargs)
File "/path/to/anaconda3/lib/python3.7/site-packages/transformers/generation_utils.py", line 480, in generate
model_specific_kwargs=model_specific_kwargs,
File "/path/to/anaconda3/lib/python3.7/site-packages/transformers/generation_utils.py", line 520, in _generate_no_beam_search
outputs = self(**model_inputs)
File "/path/to/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/path/to/ric-2020/text_gen_w_transformers/finetune_gpt2.py", line 33, in forward
File "/path/to/anaconda3/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 15, in decorate_context
return func(*args, **kwargs)
File "/path/to/anaconda3/lib/python3.7/site-packages/transformers/generation_utils.py", line 350, in generate
"Setting `pad_token_id` to {} (first `eos_token_id`) to generate sequence".format(eos_token_id)
.
.
.
File "/path/to/anaconda3/lib/python3.7/logging/__init__.py", line 1390, in warning
self._log(WARNING, msg, args, **kwargs)
File "/path/to/anaconda3/lib/python3.7/logging/__init__.py", line 1514, in _log
self.handle(record)
File "/path/to/anaconda3/lib/python3.7/logging/__init__.py", line 1524, in handle
self.callHandlers(record)
File "/path/to/anaconda3/lib/python3.7/logging/__init__.py", line 1594, in callHandlers
lastResort.handle(record)
File "/path/to/anaconda3/lib/python3.7/logging/__init__.py", line 894, in handle
self.emit(record)
File "/path/to/anaconda3/lib/python3.7/logging/__init__.py", line 1025, in emit
msg = self.format(record)
File "/path/to/anaconda3/lib/python3.7/logging/__init__.py", line 869, in format
return fmt.format(record)
File "/path/to/anaconda3/lib/python3.7/logging/__init__.py", line 608, in format
record.message = record.getMessage()
File "/path/to/anaconda3/lib/python3.7/logging/__init__.py", line 360, in getMessage
def getMessage(self):
RecursionError: maximum recursion depth exceeded while calling a Python object
```
My guess is that calling `self.generate()` inside the model's `forward` is what produces the recursion: `generate` eventually calls `self(**model_inputs)`, which invokes `forward` again, which calls `generate` again, and so on until the recursion limit is hit. I found this problematic because the `generate` method has some awesome functionality for beam search, greedy search, top-k, etc. that I really wanted to keep (and all the hard work the HF team has put in). To work around it, I added a flag to `generate` called `is_finetuning_current_model`:
```python
@torch.no_grad()
def generate(
self,
input_ids: Optional[torch.LongTensor] = None,
max_length: Optional[int] = None,
min_length: Optional[int] = None,
do_sample: Optional[bool] = None,
early_stopping: Optional[bool] = None,
num_beams: Optional[int] = None,
temperature: Optional[float] = None,
top_k: Optional[int] = None,
top_p: Optional[float] = None,
repetition_penalty: Optional[float] = None,
bad_words_ids: Optional[Iterable[int]] = None,
bos_token_id: Optional[int] = None,
pad_token_id: Optional[int] = None,
eos_token_id: Optional[int] = None,
length_penalty: Optional[float] = None,
no_repeat_ngram_size: Optional[int] = None,
num_return_sequences: Optional[int] = None,
attention_mask: Optional[torch.LongTensor] = None,
decoder_start_token_id: Optional[int] = None,
use_cache: Optional[bool] = None,
is_finetuning_current_model: Optional[bool] = None,
**model_specific_kwargs
) -> torch.LongTensor:
```
I propagated the flag down to the `num_beams` check:
```python
if num_beams > 1:
output = self._generate_beam_search(
input_ids,
cur_len=cur_len,
max_length=max_length,
min_length=min_length,
do_sample=do_sample,
early_stopping=early_stopping,
temperature=temperature,
top_k=top_k,
top_p=top_p,
repetition_penalty=repetition_penalty,
no_repeat_ngram_size=no_repeat_ngram_size,
bad_words_ids=bad_words_ids,
pad_token_id=pad_token_id,
eos_token_id=eos_token_id,
batch_size=effective_batch_size,
num_return_sequences=num_return_sequences,
length_penalty=length_penalty,
num_beams=num_beams,
vocab_size=vocab_size,
encoder_outputs=encoder_outputs,
attention_mask=attention_mask,
use_cache=use_cache,
is_finetuning_current_model=is_finetuning_current_model,
model_specific_kwargs=model_specific_kwargs
)
else:
output = self._generate_no_beam_search(
input_ids,
cur_len=cur_len,
max_length=max_length,
min_length=min_length,
do_sample=do_sample,
temperature=temperature,
top_k=top_k,
top_p=top_p,
repetition_penalty=repetition_penalty,
no_repeat_ngram_size=no_repeat_ngram_size,
bad_words_ids=bad_words_ids,
pad_token_id=pad_token_id,
eos_token_id=eos_token_id,
batch_size=effective_batch_size,
encoder_outputs=encoder_outputs,
attention_mask=attention_mask,
use_cache=use_cache,
is_finetuning_current_model=is_finetuning_current_model,
model_specific_kwargs=model_specific_kwargs
)
```
updated `_generate_no_beam_search` and `_generate_beam_search` with the following:
```python
if is_finetuning_current_model:
outputs = self.generate_text_while_finetuning(**model_inputs)
else:
outputs = self(**model_inputs)
```
For my model class, I just added the `generate_text_while_finetuning` method and set the `is_finetuning_current_model` flag:
```python
class GPT2FinetunedWithNgrams(GPT2LMHeadModel):
def __init__(self, config):
super().__init__(config)
self.tokenizer = GPT2Tokenizer.from_pretrained('gpt2', padding_side='right')
self.tokenizer.pad_token = self.tokenizer.eos_token
def load_ngrams_model(self, ngrams_model_path):
self.ngrams_model = NGrams(ngrams_model_path)
def generate_text_while_finetuning(self,
input_ids=None,
past=None,
attention_mask=None,
token_type_ids=None,
position_ids=None,
head_mask=None,
inputs_embeds=None,
labels=None,
use_cache=None,
output_attentions=None,
output_hidden_states=None,):
transformer_outputs = self.transformer(
input_ids,
past=past,
attention_mask=attention_mask,
token_type_ids=token_type_ids,
position_ids=position_ids,
head_mask=head_mask,
inputs_embeds=inputs_embeds,
use_cache=use_cache,
output_attentions=output_attentions,
output_hidden_states=output_hidden_states,
)
hidden_states = transformer_outputs[0]
lm_logits = self.lm_head(hidden_states)
outputs = (lm_logits,) + transformer_outputs[1:]
return outputs # (loss), lm_logits, presents, (all hidden_states), (attentions)
def forward(
self,
input_ids=None,
past=None,
attention_mask=None,
token_type_ids=None,
position_ids=None,
head_mask=None,
inputs_embeds=None,
labels=None,
use_cache=True,
):
output = self.generate(input_ids=input_ids, max_length=474, is_finetuning_current_model=True)
decoded_output = self.tokenizer.decode(output[0], skip_special_tokens=True)
```
This seems to resolve the recursive error and produces the expected `decoded_output` for me. My use case is fine-tuning GPT2 on a particular domain corpus with a different loss function. I imagine other people would be doing something similar with GPT2 and other models, so I also tested this approach using plain `GPT2LMHeadModel` and got the same expected results.
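For completeness: since `Trainer` reads the loss from `outputs[0]`, my plan is for `forward` to eventually return a tuple whose first element is the n-grams loss. A rough sketch (the `score` call is a hypothetical API on my own `NGrams` class, and whether gradients can flow back through decoded text at all is exactly what I still have to work out):
```python
    def forward(self, input_ids=None, **kwargs):
        output = self.generate(input_ids=input_ids, max_length=474,
                               is_finetuning_current_model=True)
        decoded_output = self.tokenizer.decode(output[0], skip_special_tokens=True)
        loss = self.ngrams_model.score(decoded_output)  # hypothetical; my NGrams API
        return (loss,)  # Trainer takes the loss from outputs[0]
```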
My question is: do contributors think I should open a bug report for this? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6105/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6105/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6104 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6104/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6104/comments | https://api.github.com/repos/huggingface/transformers/issues/6104/events | https://github.com/huggingface/transformers/pull/6104 | 667,286,495 | MDExOlB1bGxSZXF1ZXN0NDU3OTgyNTM4 | 6,104 | Fix zero-shot pipeline single seq output shape | {
"login": "joeddav",
"id": 9353833,
"node_id": "MDQ6VXNlcjkzNTM4MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joeddav",
"html_url": "https://github.com/joeddav",
"followers_url": "https://api.github.com/users/joeddav/followers",
"following_url": "https://api.github.com/users/joeddav/following{/other_user}",
"gists_url": "https://api.github.com/users/joeddav/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joeddav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joeddav/subscriptions",
"organizations_url": "https://api.github.com/users/joeddav/orgs",
"repos_url": "https://api.github.com/users/joeddav/repos",
"events_url": "https://api.github.com/users/joeddav/events{/privacy}",
"received_events_url": "https://api.github.com/users/joeddav/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6104?src=pr&el=h1) Report\n> Merging [#6104](https://codecov.io/gh/huggingface/transformers/pull/6104?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/06834bc33255f5fb8fabb72c9ff114764b3c7ce5&el=desc) will **decrease** coverage by `1.54%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6104?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6104 +/- ##\n==========================================\n- Coverage 77.77% 76.23% -1.55% \n==========================================\n Files 146 146 \n Lines 26325 26325 \n==========================================\n- Hits 20474 20068 -406 \n- Misses 5851 6257 +406 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6104?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6104/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `78.50% <ø> (ø)` | |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6104/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `19.82% <0.00%> (-77.59%)` | :arrow_down: |\n| [...c/transformers/modeling\\_tf\\_transfo\\_xl\\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/6104/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `9.90% <0.00%> (-76.24%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6104/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsLnB5) | `17.22% <0.00%> (-72.24%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6104/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: |\n| [...rc/transformers/data/datasets/language\\_modeling.py](https://codecov.io/gh/huggingface/transformers/pull/6104/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2xhbmd1YWdlX21vZGVsaW5nLnB5) | `34.69% <0.00%> (-57.15%)` | :arrow_down: |\n| [src/transformers/data/datasets/glue.py](https://codecov.io/gh/huggingface/transformers/pull/6104/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2dsdWUucHk=) | `50.74% <0.00%> (-35.83%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6104/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `60.00% <0.00%> (-35.72%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6104/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.42% <0.00%> (-29.91%)` | :arrow_down: |\n| [src/transformers/trainer\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6104/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `60.00% <0.00%> (-25.72%)` | :arrow_down: |\n| ... 
and [10 more](https://codecov.io/gh/huggingface/transformers/pull/6104/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6104?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6104?src=pr&el=footer). Last update [06834bc...23060b9](https://codecov.io/gh/huggingface/transformers/pull/6104?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,595 | 1,598 | 1,595 | CONTRIBUTOR | null | Fixes a zero-shot pipeline bug that returned the sequence as a list rather than a str when a single sequence was passed in as a list. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6104/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6104/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6104",
"html_url": "https://github.com/huggingface/transformers/pull/6104",
"diff_url": "https://github.com/huggingface/transformers/pull/6104.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6104.patch",
"merged_at": 1595961963000
} |
https://api.github.com/repos/huggingface/transformers/issues/6103 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6103/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6103/comments | https://api.github.com/repos/huggingface/transformers/issues/6103/events | https://github.com/huggingface/transformers/pull/6103 | 667,278,324 | MDExOlB1bGxSZXF1ZXN0NDU3OTc1ODA0 | 6,103 | rename prepare_translation_batch -> prepare_seq2seq_batch | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6103?src=pr&el=h1) Report\n> Merging [#6103](https://codecov.io/gh/huggingface/transformers/pull/6103?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/66fa8ceaeaa6fe12f1bd4a5e6b0a924f59f715d9&el=desc) will **decrease** coverage by `0.47%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6103?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6103 +/- ##\n==========================================\n- Coverage 79.90% 79.42% -0.48% \n==========================================\n Files 153 153 \n Lines 27877 27879 +2 \n==========================================\n- Hits 22276 22144 -132 \n- Misses 5601 5735 +134 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6103?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6103/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `90.00% <ø> (ø)` | |\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6103/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `95.77% <100.00%> (ø)` | |\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6103/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `99.14% <100.00%> (+6.97%)` | :arrow_up: |\n| [src/transformers/tokenization\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6103/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6103/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.20%)` | :arrow_down: |\n| [src/transformers/tokenization\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/6103/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY3RybC5weQ==) | `78.64% <0.00%> (-17.48%)` | :arrow_down: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6103/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `71.21% <0.00%> (-12.88%)` | :arrow_down: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6103/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `33.56% <0.00%> (-8.93%)` | :arrow_down: |\n| [src/transformers/tokenization\\_pegasus.py](https://codecov.io/gh/huggingface/transformers/pull/6103/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcGVnYXN1cy5weQ==) | `95.16% <0.00%> (+1.61%)` | :arrow_up: |\n| ... and [2 more](https://codecov.io/gh/huggingface/transformers/pull/6103/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6103?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6103?src=pr&el=footer). Last update [66fa8ce...56b0bf4](https://codecov.io/gh/huggingface/transformers/pull/6103?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Didn't know that `prepare_translation_batch` was already in master. Guess I'm not a huge fan of such a helper function in general and for me, it's a pure convenience function that does not really add functionality to the lib. Think we should lower maintenance costs and reduce the risk of future breaking backward compatibility by not adding such functions to the python tokenizers. But I don't have the best insight into the tokenizers. Maybe @LysandreJik and @thomwolf can have a better opinion here.",
"The better argument is not about convenience, but about managing special tokens when they are different on the encoder and decoder side. It's very hard to have finetuning code that supports multiple models if the tokenizers don't handle special tokens/language codes for you. \r\n\r\n",
"same spurious pabee failure as #6421 , merging!"
] | 1,595 | 1,597 | 1,597 | CONTRIBUTOR | null | cc @patil-suraj
Starts work on #6080, which suggests that all seq2seq tokenizers expose a `prepare_seq2seq_batch` method.
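Roughly, the shared API would look like the sketch below (exact kwarg and output key names are still up for discussion):
```python
batch = tokenizer.prepare_seq2seq_batch(
    src_texts=["UN Chief Says There Is No Plan to Stop Chemical Weapons in Syria"],
    tgt_texts=["Şeful ONU declară că nu există o soluţie militară în Siria"],
    return_tensors="pt",
)
# expected to hold input_ids, attention_mask and the tokenized targets
```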
TODO:
- add common test enforcing API consistency. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6103/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6103/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6103",
"html_url": "https://github.com/huggingface/transformers/pull/6103",
"diff_url": "https://github.com/huggingface/transformers/pull/6103.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6103.patch",
"merged_at": 1597175828000
} |
https://api.github.com/repos/huggingface/transformers/issues/6102 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6102/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6102/comments | https://api.github.com/repos/huggingface/transformers/issues/6102/events | https://github.com/huggingface/transformers/pull/6102 | 667,263,082 | MDExOlB1bGxSZXF1ZXN0NDU3OTYzNjYy | 6,102 | Fix deebert tests | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6102?src=pr&el=h1) Report\n> Merging [#6102](https://codecov.io/gh/huggingface/transformers/pull/6102?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/06834bc33255f5fb8fabb72c9ff114764b3c7ce5&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6102?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6102 +/- ##\n=======================================\n Coverage 77.77% 77.77% \n=======================================\n Files 146 146 \n Lines 26325 26325 \n=======================================\n+ Hits 20474 20475 +1 \n+ Misses 5851 5850 -1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6102?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6102/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+0.25%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6102?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6102?src=pr&el=footer). Last update [06834bc...c00d3b1](https://codecov.io/gh/huggingface/transformers/pull/6102?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6102/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6102/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6102",
"html_url": "https://github.com/huggingface/transformers/pull/6102",
"diff_url": "https://github.com/huggingface/transformers/pull/6102.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6102.patch",
"merged_at": 1595975417000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/6101 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6101/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6101/comments | https://api.github.com/repos/huggingface/transformers/issues/6101/events | https://github.com/huggingface/transformers/issues/6101 | 667,254,004 | MDU6SXNzdWU2NjcyNTQwMDQ= | 6,101 | Use HFArgParser instead of Fire | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2139563322,
"node_id": "MDU6TGFiZWwyMTM5NTYzMzIy",
"url": "https://api.github.com/repos/huggingface/transformers/labels/cleanup",
"name": "cleanup",
"color": "e7fc49",
"default": false,
"description": ""
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"added to my list",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"keepalive",
"I'm pretty happy with `fire`, closing."
] | 1,595 | 1,602 | 1,602 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6101/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6101/timeline | completed | null | null |
|
https://api.github.com/repos/huggingface/transformers/issues/6100 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6100/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6100/comments | https://api.github.com/repos/huggingface/transformers/issues/6100/events | https://github.com/huggingface/transformers/pull/6100 | 667,253,437 | MDExOlB1bGxSZXF1ZXN0NDU3OTU1OTMy | 6,100 | [Fix] position_ids tests again | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Failure is test_tokenization_auto.py, and spurious ."
] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | Yesterday, in my haste, I added `authorized_missing_keys` to `BertModel` rather than `BertPreTrainedModel`. Since `position_ids` are allowed to be missing for all Bert variants, we want the latter, not the former.
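Concretely, the attribute now lives on the base class so that every Bert head inherits it — a sketch:
```python
from transformers import BertConfig
from transformers.modeling_utils import PreTrainedModel

class BertPreTrainedModel(PreTrainedModel):
    config_class = BertConfig
    base_model_prefix = "bert"
    authorized_missing_keys = [r"position_ids"]  # inherited by all Bert variants
```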
I also improved the traceback for the failing test. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6100/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6100/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6100",
"html_url": "https://github.com/huggingface/transformers/pull/6100",
"diff_url": "https://github.com/huggingface/transformers/pull/6100.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6100.patch",
"merged_at": 1595975376000
} |
https://api.github.com/repos/huggingface/transformers/issues/6099 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6099/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6099/comments | https://api.github.com/repos/huggingface/transformers/issues/6099/events | https://github.com/huggingface/transformers/pull/6099 | 667,250,949 | MDExOlB1bGxSZXF1ZXN0NDU3OTUzOTE1 | 6,099 | [fix] add bart to LM_MAPPING | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6099?src=pr&el=h1) Report\n> Merging [#6099](https://codecov.io/gh/huggingface/transformers/pull/6099?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/06834bc33255f5fb8fabb72c9ff114764b3c7ce5&el=desc) will **increase** coverage by `0.16%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6099?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6099 +/- ##\n==========================================\n+ Coverage 77.77% 77.93% +0.16% \n==========================================\n Files 146 146 \n Lines 26325 26325 \n==========================================\n+ Hits 20474 20517 +43 \n+ Misses 5851 5808 -43 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6099?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6099/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `78.48% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6099/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.42% <0.00%> (-29.91%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6099/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.71% <0.00%> (-0.51%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6099/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+33.89%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6099?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6099?src=pr&el=footer). Last update [06834bc...1c7f573](https://codecov.io/gh/huggingface/transformers/pull/6099?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | This fixes 2/5 failing slow tests in #6094 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6099/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6099/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6099",
"html_url": "https://github.com/huggingface/transformers/pull/6099",
"diff_url": "https://github.com/huggingface/transformers/pull/6099.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6099.patch",
"merged_at": 1595975358000
} |
https://api.github.com/repos/huggingface/transformers/issues/6098 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6098/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6098/comments | https://api.github.com/repos/huggingface/transformers/issues/6098/events | https://github.com/huggingface/transformers/pull/6098 | 667,213,685 | MDExOlB1bGxSZXF1ZXN0NDU3OTI0MDI2 | 6,098 | Fix #6096: MBartTokenizer's mask token | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6098?src=pr&el=h1) Report\n> Merging [#6098](https://codecov.io/gh/huggingface/transformers/pull/6098?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/dafa296c952c08fca3686f1cf8f3a8f8eb116744&el=desc) will **decrease** coverage by `1.02%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6098?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6098 +/- ##\n==========================================\n- Coverage 78.80% 77.78% -1.03% \n==========================================\n Files 146 146 \n Lines 26325 26326 +1 \n==========================================\n- Hits 20746 20477 -269 \n- Misses 5579 5849 +270 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6098?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6098/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `95.77% <100.00%> (+0.06%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6098/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6098/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6098/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.48% <0.00%> (+0.28%)` | :arrow_up: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6098/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.11% <0.00%> (+0.28%)` | :arrow_up: |\n| [src/transformers/modeling\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6098/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `94.97% <0.00%> (+4.41%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6098/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6098?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6098?src=pr&el=footer). Last update [dafa296...30e83a7](https://codecov.io/gh/huggingface/transformers/pull/6098?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | add 3 regression tests. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6098/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6098/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6098",
"html_url": "https://github.com/huggingface/transformers/pull/6098",
"diff_url": "https://github.com/huggingface/transformers/pull/6098.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6098.patch",
"merged_at": 1595975279000
} |
https://api.github.com/repos/huggingface/transformers/issues/6097 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6097/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6097/comments | https://api.github.com/repos/huggingface/transformers/issues/6097/events | https://github.com/huggingface/transformers/pull/6097 | 667,207,402 | MDExOlB1bGxSZXF1ZXN0NDU3OTE4OTEx | 6,097 | Logs should not be hidden behind a logger.info | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6097?src=pr&el=h1) Report\n> Merging [#6097](https://codecov.io/gh/huggingface/transformers/pull/6097?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/dafa296c952c08fca3686f1cf8f3a8f8eb116744&el=desc) will **decrease** coverage by `0.31%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6097?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6097 +/- ##\n==========================================\n- Coverage 78.80% 78.48% -0.32% \n==========================================\n Files 146 146 \n Lines 26325 26325 \n==========================================\n- Hits 20746 20662 -84 \n- Misses 5579 5663 +84 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6097?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6097/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `41.00% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6097/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.27% <0.00%> (-74.92%)` | :arrow_down: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6097/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `71.21% <0.00%> (-12.88%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6097/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `84.71% <0.00%> (-1.76%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6097/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.02% <0.00%> (-1.29%)` | :arrow_down: |\n| [src/transformers/modeling\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6097/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `94.97% <0.00%> (+4.41%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6097/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6097?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6097?src=pr&el=footer). Last update [dafa296...2f0a5a6](https://codecov.io/gh/huggingface/transformers/pull/6097?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,595 | 1,595 | 1,595 | MEMBER | null | Currently, the logs emitted when the global step is a multiple of `logging_steps` are printed using `logger.info`. If a user did not set their logging level to INFO, these logs are not shown, even when the user has set `logging_steps > 0`. This PR fixes that by putting back a print statement.
Closes https://github.com/huggingface/transformers/issues/5901
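For illustration, here is a minimal sketch of the difference (the logger name and level setup are assumptions, not the actual `Trainer` code):
```python
import logging

logger = logging.getLogger("transformers.trainer")
logging.basicConfig(level=logging.WARNING)  # a user who never touches logging config

logger.info({"loss": 0.1, "step": 500})  # swallowed: INFO is below WARNING
print({"loss": 0.1, "step": 500})        # always visible, regardless of log level
```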
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6097/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6097/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6097",
"html_url": "https://github.com/huggingface/transformers/pull/6097",
"diff_url": "https://github.com/huggingface/transformers/pull/6097.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6097.patch",
"merged_at": 1595954666000
} |
https://api.github.com/repos/huggingface/transformers/issues/6096 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6096/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6096/comments | https://api.github.com/repos/huggingface/transformers/issues/6096/events | https://github.com/huggingface/transformers/issues/6096 | 667,117,856 | MDU6SXNzdWU2NjcxMTc4NTY= | 6,096 | mBART: incorrect <mask> token id | {
"login": "OlegPlatonov",
"id": 32016523,
"node_id": "MDQ6VXNlcjMyMDE2NTIz",
"avatar_url": "https://avatars.githubusercontent.com/u/32016523?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/OlegPlatonov",
"html_url": "https://github.com/OlegPlatonov",
"followers_url": "https://api.github.com/users/OlegPlatonov/followers",
"following_url": "https://api.github.com/users/OlegPlatonov/following{/other_user}",
"gists_url": "https://api.github.com/users/OlegPlatonov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/OlegPlatonov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/OlegPlatonov/subscriptions",
"organizations_url": "https://api.github.com/users/OlegPlatonov/orgs",
"repos_url": "https://api.github.com/users/OlegPlatonov/repos",
"events_url": "https://api.github.com/users/OlegPlatonov/events{/privacy}",
"received_events_url": "https://api.github.com/users/OlegPlatonov/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"great catch and incredibly detailed description, I'll fix this one! Thanks!"
] | 1,595 | 1,595 | 1,595 | NONE | null | # 🐛 Bug
## Information
Model I am using: mBART
## To reproduce
```
from transformers import MBartTokenizer
tokenizer = MBartTokenizer.from_pretrained('facebook/mbart-large-cc25')
print(tokenizer.convert_tokens_to_ids(['<mask>', 'ar_AR']))
```
The output for the above code is `[250001, 250001]` - two different special tokens are mapped to the same id.
## Expected behavior
As far as I can tell, the `<mask>` token should be mapped to id 250026.
I've checked [fairseq implementation](https://github.com/pytorch/fairseq/blob/master/fairseq/tasks/multilingual_denoising.py) and it seems that `<mask>` token is added after all the language codes, so it should be the last token in the vocab.
Currently, when I try to use mBART to denoise text with `<mask>` tokens, it mostly just ignores them, but if I replace mask ids with 250026, the model actually generates new text in place of `<mask>` tokens:
```
from transformers import MBartTokenizer, BartForConditionalGeneration
tokenizer = MBartTokenizer.from_pretrained('facebook/mbart-large-cc25')
model = BartForConditionalGeneration.from_pretrained('facebook/mbart-large-cc25')
text = 'I highly recommend <mask> - it is one of the best <mask> ever read!'
inputs = tokenizer.prepare_translation_batch([text], src_lang='en_XX')
outputs = model.generate(inputs['input_ids'], decoder_start_token_id=tokenizer.lang_code_to_id['en_XX'],
num_beams=5)
print(tokenizer.batch_decode(outputs)[0])
```
The output is:
```
en_XX<s> highly recommend - it is one of the best ever read!
```
Replacing mask ids:
```
where = (inputs['input_ids'] == 250001)
inputs['input_ids'][where] = 250026
outputs = model.generate(inputs['input_ids'], decoder_start_token_id=tokenizer.lang_code_to_id['en_XX'],
num_beams=5)
print(tokenizer.batch_decode(outputs)[0])
```
The output is:
```
en_XX<s> highly recommend this book - it is one of the best books I have ever read!
```
(In both cases, the model also skips the first input token when generating output, as discussed in #5755.)
I've also noticed that fairseq is using [language code tokens](https://github.com/pytorch/fairseq/blob/108bb2560b1ec01524ba723bc7c69186875afa0a/fairseq/tasks/multilingual_denoising.py#L62) of the form `[en_XX]` rather than just `en_XX`, which can lead to different tokenization if words like `en_XX` appear in the text, but that's a rather contrived case.
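For illustration, the collision can be seen directly with the same tokenizer as above (a sketch; the printed value is whatever language-code id the checkpoint assigns):
```python
# A literal 'en_XX' in pre-split input resolves to the language-code id,
# which bracketed codes like '[en_XX]' would avoid.
print(tokenizer.convert_tokens_to_ids(['en_XX']))
```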
## Environment info
- `transformers` version: 3.0.2
@sshleifer
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6096/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6096/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6095 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6095/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6095/comments | https://api.github.com/repos/huggingface/transformers/issues/6095/events | https://github.com/huggingface/transformers/pull/6095 | 667,104,170 | MDExOlB1bGxSZXF1ZXN0NDU3ODM0NjQz | 6,095 | Add BERTweet and PhoBERT | {
"login": "datquocnguyen",
"id": 2412555,
"node_id": "MDQ6VXNlcjI0MTI1NTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2412555?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/datquocnguyen",
"html_url": "https://github.com/datquocnguyen",
"followers_url": "https://api.github.com/users/datquocnguyen/followers",
"following_url": "https://api.github.com/users/datquocnguyen/following{/other_user}",
"gists_url": "https://api.github.com/users/datquocnguyen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/datquocnguyen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/datquocnguyen/subscriptions",
"organizations_url": "https://api.github.com/users/datquocnguyen/orgs",
"repos_url": "https://api.github.com/users/datquocnguyen/repos",
"events_url": "https://api.github.com/users/datquocnguyen/events{/privacy}",
"received_events_url": "https://api.github.com/users/datquocnguyen/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,595 | 1,596 | 1,596 | CONTRIBUTOR | null | I'd like to add BERTweet and PhoBERT to the Hugging Face transformers library.
Users can now use these models directly from transformers, e.g.:
tokenizer = BertweetTokenizer.from_pretrained("vinai/bertweet-base")
bertweet = BertweetModel.from_pretrained("vinai/bertweet-base") | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6095/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6095/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6095",
"html_url": "https://github.com/huggingface/transformers/pull/6095",
"diff_url": "https://github.com/huggingface/transformers/pull/6095.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6095.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/6094 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6094/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6094/comments | https://api.github.com/repos/huggingface/transformers/issues/6094/events | https://github.com/huggingface/transformers/issues/6094 | 667,102,788 | MDU6SXNzdWU2NjcxMDI3ODg= | 6,094 | 5 Slow test failures | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"ALL FIXED YEEHAW"
] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | traceback: [here](https://github.com/huggingface/transformers/runs/916838508?check_suite_focus=true)
```bash
=========================== short test summary info ============================
FAILED tests/test_modeling_auto.py::AutoModelTest::test_model_for_pretraining_from_pretrained
FAILED tests/test_modeling_auto.py::AutoModelTest::test_model_from_pretrained
FAILED tests/test_modeling_bart.py::BartModelIntegrationTests::test_bart_base_mask_filling
FAILED tests/test_modeling_bart.py::BartModelIntegrationTests::test_bart_large_mask_filling
FAILED tests/test_modeling_common.py::ModelUtilsTest::test_model_from_pretrained
==== 5 failed, 1423 passed, 489 skipped, 384 warnings in 1609.33s (0:26:49) ====
```
I'm investigating | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6094/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6094/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6093 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6093/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6093/comments | https://api.github.com/repos/huggingface/transformers/issues/6093/events | https://github.com/huggingface/transformers/pull/6093 | 667,076,894 | MDExOlB1bGxSZXF1ZXN0NDU3ODEyMTI3 | 6,093 | Fix #6092 | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6093?src=pr&el=h1) Report\n> Merging [#6093](https://codecov.io/gh/huggingface/transformers/pull/6093?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/54f49af4aef2b19aaf00ffa400ff6c1e4292e9dd&el=desc) will **decrease** coverage by `1.20%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6093?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6093 +/- ##\n==========================================\n- Coverage 78.62% 77.42% -1.21% \n==========================================\n Files 146 146 \n Lines 26324 26325 +1 \n==========================================\n- Hits 20698 20381 -317 \n- Misses 5626 5944 +318 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6093?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6093/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `97.41% <100.00%> (+0.02%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6093/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6093/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.21% <0.00%> (-1.01%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6093/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6093?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6093?src=pr&el=footer). Last update [54f49af...6fe884d](https://codecov.io/gh/huggingface/transformers/pull/6093?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,595 | 1,595 | 1,595 | COLLABORATOR | null | `BatchEncoding` objects are not instances of dictionaries, so we need a separate test.
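For reference, a quick sketch of the point (the model name is just an example):
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
encoding = tokenizer("hello world")  # a BatchEncoding, which is built on UserDict

print(isinstance(encoding, dict))  # False, so an isinstance(..., dict) check misses it
```
 | {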
"url": "https://api.github.com/repos/huggingface/transformers/issues/6093/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6093/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6093",
"html_url": "https://github.com/huggingface/transformers/pull/6093",
"diff_url": "https://github.com/huggingface/transformers/pull/6093.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6093.patch",
"merged_at": 1595944119000
} |
https://api.github.com/repos/huggingface/transformers/issues/6092 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6092/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6092/comments | https://api.github.com/repos/huggingface/transformers/issues/6092/events | https://github.com/huggingface/transformers/issues/6092 | 667,061,417 | MDU6SXNzdWU2NjcwNjE0MTc= | 6,092 | I don't know what Trainer's Dataset is. | {
"login": "Ted8000",
"id": 32102558,
"node_id": "MDQ6VXNlcjMyMTAyNTU4",
"avatar_url": "https://avatars.githubusercontent.com/u/32102558?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ted8000",
"html_url": "https://github.com/Ted8000",
"followers_url": "https://api.github.com/users/Ted8000/followers",
"following_url": "https://api.github.com/users/Ted8000/following{/other_user}",
"gists_url": "https://api.github.com/users/Ted8000/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ted8000/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ted8000/subscriptions",
"organizations_url": "https://api.github.com/users/Ted8000/orgs",
"repos_url": "https://api.github.com/users/Ted8000/repos",
"events_url": "https://api.github.com/users/Ted8000/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ted8000/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Can reproduce, it should be fixed by #6093. Thanks for flagging!"
] | 1,595 | 1,595 | 1,595 | NONE | null | # ❓ Questions & Help
## Details


I thought my custom dataset was wrong, but I don't know what a dataset item should return, i.e. what kind of dataset the Trainer expects.
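For context, a minimal sketch of the kind of dataset I am trying to pass (names are placeholders):
```python
import torch

class MyDataset(torch.utils.data.Dataset):
    # each item is a dict of tensors, which the default data collator expects
    def __init__(self, encodings, labels):
        self.encodings = encodings  # e.g. tokenizer(texts, truncation=True, padding=True)
        self.labels = labels

    def __getitem__(self, idx):
        item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item

    def __len__(self):
        return len(self.labels)
```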
 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6092/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6092/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6091 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6091/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6091/comments | https://api.github.com/repos/huggingface/transformers/issues/6091/events | https://github.com/huggingface/transformers/pull/6091 | 666,974,706 | MDExOlB1bGxSZXF1ZXN0NDU3NzI4MzEy | 6,091 | Fix local_files_only for TF | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Thanks, @LysandreJik . I manually applied this fix to a local installation of `transformers==3.3.0` and can confirm that this fix resolves #5016 . Can this please be merged?",
"This still doesn't work in 3.3.0. I am getting the same issue on running without internet with transformers==3.3.0",
"You're perfectly correct! This was merged the 1st of October, and 3.3.0 was released in September. Please install a more recent version to get that fix (version v3.4.0 being the first release which contained that fix).",
"Thanks, I'll check. "
] | 1,595 | 1,608 | 1,601 | MEMBER | null | The `local_files_only` flag was not working for TF. It is used as follows:
```py
from transformers import TFBertModel
model = TFBertModel.from_pretrained('bert-base-cased', local_files_only=True)
```
Setting it to `True` for any TF model would result in the following error:
```
Traceback (most recent call last):
File "/Users/jik/Library/Application Support/JetBrains/PyCharm2020.1/scratches/scratch_1.py", line 7, in <module>
model = TFBertModel.from_pretrained('bert-base-cased', local_files_only=True)
File "/Users/jik/Workspaces/python/transformers/src/transformers/modeling_tf_utils.py", line 578, in from_pretrained
local_files_only=local_files_only,
File "/Users/jik/Workspaces/python/transformers/src/transformers/file_utils.py", line 663, in cached_path
local_files_only=local_files_only,
File "/Users/jik/Workspaces/python/transformers/src/transformers/file_utils.py", line 801, in get_from_cache
"Cannot find the requested files in the cached path and outgoing traffic has been"
ValueError: Cannot find the requested files in the cached path and outgoing traffic has been disabled. To enable model look-ups and downloads online, set 'local_files_only' to False.
```
This is because the `file_utils.get_from_cache` method obtains the filename from the `url_to_filename` method:
https://github.com/huggingface/transformers/blob/1246b20f6d81bcd949078d26cf5ab3d0f3acccc6/src/transformers/file_utils.py#L777
The issue is that this method adds `.h5` to TF models:
https://github.com/huggingface/transformers/blob/1246b20f6d81bcd949078d26cf5ab3d0f3acccc6/src/transformers/file_utils.py#L585-L586
This works when the file is already built using `{filename}.{etag}`, resulting in `{filename}.{etag}.h5`. However, since the etag is `None` with `local_files_only=True`, this results in `{filename}.h5`.
The method tries to find the saved files using the filename followed by `.*`:
https://github.com/huggingface/transformers/blob/1246b20f6d81bcd949078d26cf5ab3d0f3acccc6/src/transformers/file_utils.py#L788-L792
This doesn't work since it's looking for `{filename}.h5.*` which doesn't exist. It should instead be looking for `{filename}.*`, which is what it's doing now.
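A simplified sketch of the corrected lookup (the helper name is made up; the real logic lives in `file_utils.get_from_cache` and also filters out `.json` and `.lock` files):
```python
import fnmatch
import os

def find_cached_file(cache_dir, filename):
    # Match "{filename}.*" rather than "{filename}.h5.*", so a TF weight cached
    # as "{filename}.{etag}.h5" is found even when the etag is unavailable offline.
    matching_files = [
        f for f in fnmatch.filter(os.listdir(cache_dir), filename + ".*")
        if not f.endswith(".json") and not f.endswith(".lock")
    ]
    return os.path.join(cache_dir, matching_files[0]) if matching_files else None
```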
Fix https://github.com/huggingface/transformers/issues/5016 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6091/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6091/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6091",
"html_url": "https://github.com/huggingface/transformers/pull/6091",
"diff_url": "https://github.com/huggingface/transformers/pull/6091.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6091.patch",
"merged_at": 1601543162000
} |
https://api.github.com/repos/huggingface/transformers/issues/6090 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6090/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6090/comments | https://api.github.com/repos/huggingface/transformers/issues/6090/events | https://github.com/huggingface/transformers/issues/6090 | 666,959,429 | MDU6SXNzdWU2NjY5NTk0Mjk= | 6,090 | customize special tokens | {
"login": "XiaoLiuAI",
"id": 1553482,
"node_id": "MDQ6VXNlcjE1NTM0ODI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1553482?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/XiaoLiuAI",
"html_url": "https://github.com/XiaoLiuAI",
"followers_url": "https://api.github.com/users/XiaoLiuAI/followers",
"following_url": "https://api.github.com/users/XiaoLiuAI/following{/other_user}",
"gists_url": "https://api.github.com/users/XiaoLiuAI/gists{/gist_id}",
"starred_url": "https://api.github.com/users/XiaoLiuAI/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/XiaoLiuAI/subscriptions",
"organizations_url": "https://api.github.com/users/XiaoLiuAI/orgs",
"repos_url": "https://api.github.com/users/XiaoLiuAI/repos",
"events_url": "https://api.github.com/users/XiaoLiuAI/events{/privacy}",
"received_events_url": "https://api.github.com/users/XiaoLiuAI/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"For special tokens that don't correspond to one of the attributes of tokenizers, you should pass them to `add_special_tokens` with the `additional_special_tokens` keyword arguments:\r\n```\r\ntokenizer.add_special_tokens(additional_special_tokens = ['TK1', 'TK2' ...])\r\n```\r\nYou can then change `tokenizer.additional_special_tokens` directly if you need to add or remove some of those tokens.",
"@sgugger Thank you very much."
] | 1,595 | 1,596 | 1,596 | NONE | null | # ❓ Questions & Help
## Details
I am trying to add more special tokens in order to train new language models without modifying the model architecture, simply by modifying the input data, for example adding more separators to encode more complex structural information.
Is there a simple, clear, consistent way to add/customize special tokens?
1. The function `add_special_tokens` can only add token string to token attribute specified in `SPECIAL_TOKENS_ATTRIBUTES`
2. I tried to extend this list and `add_special_tokens` works, but calling
`tokenizer.get_special_tokens_mask` reports that `_XXX` does not exist, where `XXX` is the new customized special token attribute.
Finally I found a solution: extending `tokenizer._additional_special_tokens` directly makes `tokenizer.get_special_tokens_mask` work, but I don't know why this works or whether it has any other side effects.
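For reference, a minimal sketch of the workaround described above (model name and token strings are placeholders):
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
# make the new strings real vocab entries first, then register them as special
tokenizer.add_tokens(["<sep1>", "<sep2>"])
tokenizer._additional_special_tokens.extend(["<sep1>", "<sep2>"])

print(tokenizer.additional_special_tokens)  # now includes the custom separators
print(tokenizer.all_special_ids)            # their ids are now treated as special
```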
Properties like `special_tokens_map` are also confusing: it looks like a `@property` implementation, which is not modifiable, and I don't know where it is used or to what effect.
I spent a lot of time reading the source code but still lack an overview of the special-token architecture in the tokenizers. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6090/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6090/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6089 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6089/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6089/comments | https://api.github.com/repos/huggingface/transformers/issues/6089/events | https://github.com/huggingface/transformers/pull/6089 | 666,914,024 | MDExOlB1bGxSZXF1ZXN0NDU3Njc3MjQ4 | 6,089 | Added capability to quantize a model while exporting through ONNX. | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc: @tianleiwu @yufenglee from Microsoft",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6089?src=pr&el=h1) Report\n> Merging [#6089](https://codecov.io/gh/huggingface/transformers/pull/6089?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/640550fc7a1e311915ead1bcca6dacea0c503faf&el=desc) will **increase** coverage by `0.68%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6089?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6089 +/- ##\n==========================================\n+ Coverage 77.85% 78.53% +0.68% \n==========================================\n Files 146 146 \n Lines 26326 26326 \n==========================================\n+ Hits 20496 20676 +180 \n+ Misses 5830 5650 -180 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6089?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6089/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `45.98% <0.00%> (-44.92%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6089/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.98% <0.00%> (-0.98%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6089/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.20% <0.00%> (-0.29%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6089/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `80.70% <0.00%> (+0.75%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6089/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.77% <0.00%> (+73.38%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6089?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6089?src=pr&el=footer). Last update [640550f...16ecccc](https://codecov.io/gh/huggingface/transformers/pull/6089?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,595 | 1,596 | 1,596 | MEMBER | null | Add quantization support as part of our collaboration with the ONNX team.
Quantization is available through the new method `quantize`, which takes the path to the initial ONNX model and performs the conversion.
From a CLI point of view, adding `--quantize` makes it possible to seamlessly export the quantized model.
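For reference, a usage sketch (the `convert` call is from the existing ONNX export utilities; treat the exact argument names as assumptions):
```python
from pathlib import Path
from transformers.convert_graph_to_onnx import convert, quantize

onnx_path = Path("onnx/bert-base-cased.onnx")
convert(framework="pt", model="bert-base-cased", output=onnx_path, opset=11)

# quantize() takes the path to the exported ONNX model and returns the path
# of the newly written quantized graph
quantized_path = quantize(onnx_path)
```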
Signed-off-by: Morgan Funtowicz <[email protected]> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6089/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6089/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6089",
"html_url": "https://github.com/huggingface/transformers/pull/6089",
"diff_url": "https://github.com/huggingface/transformers/pull/6089.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6089.patch",
"merged_at": 1596021690000
} |
https://api.github.com/repos/huggingface/transformers/issues/6088 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6088/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6088/comments | https://api.github.com/repos/huggingface/transformers/issues/6088/events | https://github.com/huggingface/transformers/issues/6088 | 666,907,886 | MDU6SXNzdWU2NjY5MDc4ODY= | 6,088 | Finetuning German BERT for QA on biomedical domain | {
"login": "sbhttchryy",
"id": 57942901,
"node_id": "MDQ6VXNlcjU3OTQyOTAx",
"avatar_url": "https://avatars.githubusercontent.com/u/57942901?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sbhttchryy",
"html_url": "https://github.com/sbhttchryy",
"followers_url": "https://api.github.com/users/sbhttchryy/followers",
"following_url": "https://api.github.com/users/sbhttchryy/following{/other_user}",
"gists_url": "https://api.github.com/users/sbhttchryy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sbhttchryy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sbhttchryy/subscriptions",
"organizations_url": "https://api.github.com/users/sbhttchryy/orgs",
"repos_url": "https://api.github.com/users/sbhttchryy/repos",
"events_url": "https://api.github.com/users/sbhttchryy/events{/privacy}",
"received_events_url": "https://api.github.com/users/sbhttchryy/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,595 | 1,614 | 1,614 | NONE | null | Hello there, and thank you very much for this wonderful work. I am relatively new to this field, so please bear with my amateur question. I want to perform question answering on German biomedical text. From what I understand so far, I need to fine-tune German BERT on biomedical QA datasets. Is there any script/pipeline that I should be using for this?
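For concreteness, a sketch of the starting point I have in mind (the model name is just an example; the biomedical QA data would need to be in SQuAD format):
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-german-cased")
model = AutoModelForQuestionAnswering.from_pretrained("bert-base-german-cased")
# fine-tuning would then follow the standard SQuAD recipe,
# e.g. examples/question-answering/run_squad.py with --train_file/--predict_file
```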
I have also posted on Stack Overflow and the Hugging Face forum, but to no avail so far.
Thank you very much in advance. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6088/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6088/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6087 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6087/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6087/comments | https://api.github.com/repos/huggingface/transformers/issues/6087/events | https://github.com/huggingface/transformers/pull/6087 | 666,874,780 | MDExOlB1bGxSZXF1ZXN0NDU3NjQ0MTE1 | 6,087 | fixed typos, added example question. | {
"login": "psorianom",
"id": 1085210,
"node_id": "MDQ6VXNlcjEwODUyMTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1085210?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/psorianom",
"html_url": "https://github.com/psorianom",
"followers_url": "https://api.github.com/users/psorianom/followers",
"following_url": "https://api.github.com/users/psorianom/following{/other_user}",
"gists_url": "https://api.github.com/users/psorianom/gists{/gist_id}",
"starred_url": "https://api.github.com/users/psorianom/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/psorianom/subscriptions",
"organizations_url": "https://api.github.com/users/psorianom/orgs",
"repos_url": "https://api.github.com/users/psorianom/repos",
"events_url": "https://api.github.com/users/psorianom/events{/privacy}",
"received_events_url": "https://api.github.com/users/psorianom/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | Added a new example question for the widget. Fixed some typos in the description. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6087/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6087/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6087",
"html_url": "https://github.com/huggingface/transformers/pull/6087",
"diff_url": "https://github.com/huggingface/transformers/pull/6087.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6087.patch",
"merged_at": 1595939633000
} |
https://api.github.com/repos/huggingface/transformers/issues/6086 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6086/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6086/comments | https://api.github.com/repos/huggingface/transformers/issues/6086/events | https://github.com/huggingface/transformers/pull/6086 | 666,831,108 | MDExOlB1bGxSZXF1ZXN0NDU3NjA3NDA5 | 6,086 | Replace mecab-python3 with fugashi for Japanese tokenization | {
"login": "polm",
"id": 286278,
"node_id": "MDQ6VXNlcjI4NjI3OA==",
"avatar_url": "https://avatars.githubusercontent.com/u/286278?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polm",
"html_url": "https://github.com/polm",
"followers_url": "https://api.github.com/users/polm/followers",
"following_url": "https://api.github.com/users/polm/following{/other_user}",
"gists_url": "https://api.github.com/users/polm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polm/subscriptions",
"organizations_url": "https://api.github.com/users/polm/orgs",
"repos_url": "https://api.github.com/users/polm/repos",
"events_url": "https://api.github.com/users/polm/events{/privacy}",
"received_events_url": "https://api.github.com/users/polm/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6086?src=pr&el=h1) Report\n> Merging [#6086](https://codecov.io/gh/huggingface/transformers/pull/6086?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/91cb95461e438dc57555c4f57f8ce95a56328036&el=desc) will **increase** coverage by `1.14%`.\n> The diff coverage is `0.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6086?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6086 +/- ##\n==========================================\n+ Coverage 78.35% 79.50% +1.14% \n==========================================\n Files 146 146 \n Lines 26454 26450 -4 \n==========================================\n+ Hits 20729 21030 +301 \n+ Misses 5725 5420 -305 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6086?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_bert\\_japanese.py](https://codecov.io/gh/huggingface/transformers/pull/6086/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydF9qYXBhbmVzZS5weQ==) | `32.05% <0.00%> (+1.56%)` | :arrow_up: |\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6086/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `60.56% <0.00%> (-35.22%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6086/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.42% <0.00%> (-29.91%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6086/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.45% <0.00%> (-2.76%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6086/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.77% <0.00%> (+73.38%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6086?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6086?src=pr&el=footer). Last update [91cb954...3f97763](https://codecov.io/gh/huggingface/transformers/pull/6086?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"My pleasure. It'd be great if you could edit the docs 👍 I think I gave edit permissions so you should be able to push normally.",
"Awesome, this has been a small but sore thorn in some issues posted here. Thanks a lot @polm!",
"If you ever have any other issues with it please feel free to tag me any time."
] | 1,595 | 1,596 | 1,596 | CONTRIBUTOR | null | This replaces mecab-python3 with fugashi for Japanese tokenization. I am
the maintainer of both projects.
Both projects are MeCab wrappers, so the underlying C++ code is the
same. fugashi is the newer wrapper and doesn't use SWIG, so for basic
use of the MeCab API it's easier to use.
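For reference, a minimal sketch of the fugashi API this change relies on (using the pip-installed ipadic; the tokenizer's exact option handling may differ):
```python
import fugashi
import ipadic

# GenericTagger is used because ipadic's feature format differs from unidic's
tagger = fugashi.GenericTagger(ipadic.MECAB_ARGS)
tokens = [word.surface for word in tagger("日本語のテキストを分かち書きする。")]
print(tokens)
```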
This code ensures the use of a version of ipadic installed via pip,
which should make versioning and tracking down issues easier.
fugashi has wheels for Windows, OSX, and Linux, which will help with
issues with installing old versions of mecab-python3 on Windows.
Compared to mecab-python3, because fugashi doesn't use SWIG, it doesn't
require a C++ runtime to be installed on Windows.
In adding this change I removed some code dealing with `cursor`,
`token_start`, and `token_end` variables. These variables didn't seem to
be used for anything; it is unclear to me why they were there.
I ran the tests and they passed. For reference, since I had trouble figuring it out, this is needed to run the tests:
RUN_CUSTOM_TOKENIZERS=yes RUN_SLOW=1 pytest -rs tests/test_tokenization_bert_japanese.py
This is a followup to #5375 .
It's not in this PR, but because installing MeCab separately is not required, it might be a good idea to have the docs changed to point directly to fugashi instead of MeCab. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6086/reactions",
"total_count": 3,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6086/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6086",
"html_url": "https://github.com/huggingface/transformers/pull/6086",
"diff_url": "https://github.com/huggingface/transformers/pull/6086.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6086.patch",
"merged_at": 1596184874000
} |
https://api.github.com/repos/huggingface/transformers/issues/6085 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6085/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6085/comments | https://api.github.com/repos/huggingface/transformers/issues/6085/events | https://github.com/huggingface/transformers/issues/6085 | 666,804,814 | MDU6SXNzdWU2NjY4MDQ4MTQ= | 6,085 | tf.saved_model.save does not work on the TFElectra* series. | {
"login": "seopbo",
"id": 19755607,
"node_id": "MDQ6VXNlcjE5NzU1NjA3",
"avatar_url": "https://avatars.githubusercontent.com/u/19755607?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/seopbo",
"html_url": "https://github.com/seopbo",
"followers_url": "https://api.github.com/users/seopbo/followers",
"following_url": "https://api.github.com/users/seopbo/following{/other_user}",
"gists_url": "https://api.github.com/users/seopbo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/seopbo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/seopbo/subscriptions",
"organizations_url": "https://api.github.com/users/seopbo/orgs",
"repos_url": "https://api.github.com/users/seopbo/repos",
"events_url": "https://api.github.com/users/seopbo/events{/privacy}",
"received_events_url": "https://api.github.com/users/seopbo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,595 | 1,601 | 1,601 | CONTRIBUTOR | null | # 🐛 Bug
## Information
Model I am using: TFElectraModel
Language I am using the model on: English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```python
import tensorflow as tf
from pathlib import Path
from transformers import TFAutoModel
dump_dir = Path("test_electra")
if not dump_dir.exists():
    dump_dir.mkdir(parents=True)
model = TFAutoModel.from_pretrained("google/electra-base-discriminator")
tf.saved_model.save(model, export_dir=str(dump_dir))
```
```bash
Traceback (most recent call last):
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 3343, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-2-a8a019d74543>", line 11, in <module>
tf.saved_model.save(model, export_dir=str(dump_dir))
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/saved_model/save.py", line 951, in save
obj, export_dir, signatures, options, meta_graph_def)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/saved_model/save.py", line 1008, in _build_meta_graph
checkpoint_graph_view)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/saved_model/signature_serialization.py", line 75, in find_function_to_export
functions = saveable_view.list_functions(saveable_view.root)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/saved_model/save.py", line 143, in list_functions
self._serialization_cache)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py", line 1656, in _list_functions_for_serialization
Model, self)._list_functions_for_serialization(serialization_cache)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py", line 2750, in _list_functions_for_serialization
.list_functions_for_serialization(serialization_cache))
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/base_serialization.py", line 87, in list_functions_for_serialization
fns = self.functions_to_serialize(serialization_cache)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/layer_serialization.py", line 77, in functions_to_serialize
serialization_cache).functions_to_serialize)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/layer_serialization.py", line 92, in _get_serialized_attributes
serialization_cache)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/model_serialization.py", line 53, in _get_serialized_attributes_internal
serialization_cache))
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/layer_serialization.py", line 101, in _get_serialized_attributes_internal
functions = save_impl.wrap_layer_functions(self.obj, serialization_cache)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py", line 153, in wrap_layer_functions
original_fns = _replace_child_layer_functions(layer, serialization_cache)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py", line 272, in _replace_child_layer_functions
serialization_cache).functions)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/layer_serialization.py", line 92, in _get_serialized_attributes
serialization_cache)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/model_serialization.py", line 53, in _get_serialized_attributes_internal
serialization_cache))
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/layer_serialization.py", line 101, in _get_serialized_attributes_internal
functions = save_impl.wrap_layer_functions(self.obj, serialization_cache)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py", line 163, in wrap_layer_functions
'{}_layer_call_and_return_conditional_losses'.format(layer.name))
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py", line 503, in add_function
self.add_trace(*self._input_signature)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py", line 418, in add_trace
trace_with_training(True)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py", line 416, in trace_with_training
fn.get_concrete_function(*args, **kwargs)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py", line 547, in get_concrete_function
return super(LayerCall, self).get_concrete_function(*args, **kwargs)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py", line 959, in get_concrete_function
concrete = self._get_concrete_function_garbage_collected(*args, **kwargs)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py", line 865, in _get_concrete_function_garbage_collected
self._initialize(args, kwargs, add_initializers_to=initializers)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py", line 506, in _initialize
*args, **kwds))
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 2446, in _get_concrete_function_internal_garbage_collected
graph_function, _, _ = self._maybe_define_function(args, kwargs)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 2777, in _maybe_define_function
graph_function = self._create_graph_function(args, kwargs)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 2667, in _create_graph_function
capture_by_value=self._capture_by_value),
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py", line 981, in func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py", line 441, in wrapped_fn
return weak_wrapped_fn().__wrapped__(*args, **kwds)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py", line 524, in wrapper
ret = method(*args, **kwargs)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/utils.py", line 170, in wrap_with_training_arg
lambda: replace_training_and_call(False))
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/utils/tf_utils.py", line 65, in smart_cond
pred, true_fn=true_fn, false_fn=false_fn, name=name)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/framework/smart_cond.py", line 54, in smart_cond
return true_fn()
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/utils.py", line 169, in <lambda>
lambda: replace_training_and_call(True),
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/utils.py", line 165, in replace_training_and_call
return wrapped_call(*args, **kwargs)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py", line 566, in call_and_return_conditional_losses
return layer_call(inputs, *args, **kwargs), layer.get_losses_for(inputs)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/transformers/modeling_tf_electra.py", line 292, in call
hidden_states = self.embeddings([input_ids, position_ids, token_type_ids, inputs_embeds], training=training)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py", line 968, in __call__
outputs = self.call(cast_inputs, *args, **kwargs)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/utils.py", line 71, in return_outputs_and_add_losses
outputs, losses = fn(inputs, *args, **kwargs)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/utils.py", line 170, in wrap_with_training_arg
lambda: replace_training_and_call(False))
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/utils/tf_utils.py", line 65, in smart_cond
pred, true_fn=true_fn, false_fn=false_fn, name=name)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/framework/smart_cond.py", line 54, in smart_cond
return true_fn()
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/utils.py", line 169, in <lambda>
lambda: replace_training_and_call(True),
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/utils.py", line 165, in replace_training_and_call
return wrapped_call(*args, **kwargs)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py", line 541, in __call__
self.call_collection.add_trace(*args, **kwargs)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py", line 418, in add_trace
trace_with_training(True)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py", line 416, in trace_with_training
fn.get_concrete_function(*args, **kwargs)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py", line 547, in get_concrete_function
return super(LayerCall, self).get_concrete_function(*args, **kwargs)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py", line 959, in get_concrete_function
concrete = self._get_concrete_function_garbage_collected(*args, **kwargs)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py", line 865, in _get_concrete_function_garbage_collected
self._initialize(args, kwargs, add_initializers_to=initializers)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py", line 506, in _initialize
*args, **kwds))
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 2446, in _get_concrete_function_internal_garbage_collected
graph_function, _, _ = self._maybe_define_function(args, kwargs)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 2777, in _maybe_define_function
graph_function = self._create_graph_function(args, kwargs)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 2667, in _create_graph_function
capture_by_value=self._capture_by_value),
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py", line 981, in func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py", line 441, in wrapped_fn
return weak_wrapped_fn().__wrapped__(*args, **kwds)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py", line 524, in wrapper
ret = method(*args, **kwargs)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/utils.py", line 170, in wrap_with_training_arg
lambda: replace_training_and_call(False))
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/utils/tf_utils.py", line 65, in smart_cond
pred, true_fn=true_fn, false_fn=false_fn, name=name)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/framework/smart_cond.py", line 54, in smart_cond
return true_fn()
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/utils.py", line 169, in <lambda>
lambda: replace_training_and_call(True),
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/utils.py", line 165, in replace_training_and_call
return wrapped_call(*args, **kwargs)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py", line 566, in call_and_return_conditional_losses
return layer_call(inputs, *args, **kwargs), layer.get_losses_for(inputs)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py", line 1627, in get_losses_for
reachable = tf_utils.get_reachable_from_inputs(inputs, losses)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/utils/tf_utils.py", line 140, in get_reachable_from_inputs
raise TypeError('Expected Operation, Variable, or Tensor, got ' + str(x))
TypeError: Expected Operation, Variable, or Tensor, got None
```
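Reading the last frames, one plausible interpretation of the failure: during SavedModel tracing, `TFElectraMainLayer.call` passes `[input_ids, position_ids, token_type_ids, inputs_embeds]` to the embeddings layer, and the `None` entries in that list appear to trip Keras's `get_losses_for`/`get_reachable_from_inputs` bookkeeping ("got None"). If the goal is only to persist the weights, a possible workaround (not a fix for the SavedModel export itself) is the transformers checkpoint format:
```python
# Workaround sketch: save in the transformers checkpoint format instead of a
# TF SavedModel. This writes tf_model.h5 + config.json into dump_dir and can
# be reloaded later with TFAutoModel.from_pretrained(str(dump_dir)).
model.save_pretrained(str(dump_dir))
```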
## Expected behavior
`tf.saved_model.save` should succeed for TFElectra models just as it does for DistilBERT with the equivalent script:
```python
import tensorflow as tf
from pathlib import Path
from transformers import TFAutoModel
dump_dir = Path("test_distilbert")
if not dump_dir.exists():
dump_dir.mkdir(parents=True)
model = TFAutoModel.from_pretrained("distilbert-base-uncased")
tf.saved_model.save(model, export_dir=str(dump_dir))
```
```bash
WARNING:tensorflow:Skipping full serialization of Keras layer <tensorflow.python.keras.layers.core.Dropout object at 0x16538b0d0>, because it is not built.
WARNING:tensorflow:Skipping full serialization of Keras layer <tensorflow.python.keras.layers.core.Dropout object at 0x165347250>, because it is not built.
WARNING:tensorflow:Skipping full serialization of Keras layer <tensorflow.python.keras.layers.core.Dropout object at 0x16531c2d0>, because it is not built.
WARNING:tensorflow:Skipping full serialization of Keras layer <tensorflow.python.keras.layers.core.Dropout object at 0x165e22f90>, because it is not built.
WARNING:tensorflow:Skipping full serialization of Keras layer <tensorflow.python.keras.layers.core.Dropout object at 0x16528bd90>, because it is not built.
WARNING:tensorflow:Skipping full serialization of Keras layer <tensorflow.python.keras.layers.core.Dropout object at 0x16526d850>, because it is not built.
INFO:tensorflow:Assets written to: test_distilbert/assets
```
## Environment info
- `transformers` version: 3.0.2
- Platform: Darwin-19.6.0-x86_64-i386-64bit
- Python version: 3.7.7
- PyTorch version (GPU?): 1.5.1 (False)
- Tensorflow version (GPU?): 2.2.0 (False)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6085/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6085/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6084 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6084/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6084/comments | https://api.github.com/repos/huggingface/transformers/issues/6084/events | https://github.com/huggingface/transformers/issues/6084 | 666,800,563 | MDU6SXNzdWU2NjY4MDA1NjM= | 6,084 | ValueError raises when load Flaubert from pre-train with Transformers >=3.0.0 | {
"login": "wzmJimmy",
"id": 35741367,
"node_id": "MDQ6VXNlcjM1NzQxMzY3",
"avatar_url": "https://avatars.githubusercontent.com/u/35741367?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wzmJimmy",
"html_url": "https://github.com/wzmJimmy",
"followers_url": "https://api.github.com/users/wzmJimmy/followers",
"following_url": "https://api.github.com/users/wzmJimmy/following{/other_user}",
"gists_url": "https://api.github.com/users/wzmJimmy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wzmJimmy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wzmJimmy/subscriptions",
"organizations_url": "https://api.github.com/users/wzmJimmy/orgs",
"repos_url": "https://api.github.com/users/wzmJimmy/repos",
"events_url": "https://api.github.com/users/wzmJimmy/events{/privacy}",
"received_events_url": "https://api.github.com/users/wzmJimmy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @wzmJimmy,\r\nDid you get any solution for this problem? I am also facing the same problem. Please let me know if you are able to make it run. Thanks.",
"Well, my solution is inside the expected behavior part. Downgrade transformers to version lower than 3.0.0 and it just works. I post the bug report just to ask what's wrong and how to fix it in the newer version."
] | 1,595 | 1,598 | 1,598 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): **Flaubert**
Language I am using the model on (English, Chinese ...): **French**
The problem arises when using: my own modified scripts
## To reproduce
**Just load the Flaubert model with `from_pretrained`:**
```python
from transformers import TFAutoModel

MODEL = "flaubert/flaubert_large_cased"
transformer_layer = TFAutoModel.from_pretrained(MODEL, from_pt=True)
```
**Then a ValueError is raised:**
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-22-4235ae313fbd> in <module>()
1 MODEL = "flaubert/flaubert_large_cased"
----> 2 transformer_layer = TFAutoModel.from_pretrained(MODEL, from_pt=True)
/usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
421 for config_class, model_class in TF_MODEL_MAPPING.items():
422 if isinstance(config, config_class):
--> 423 return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
424 raise ValueError(
425 "Unrecognized configuration class {} for this kind of TFAutoModel: {}.\n"
/usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
479 if from_pt:
480 # Load from a PyTorch checkpoint
--> 481 return load_pytorch_checkpoint_in_tf2_model(model, resolved_archive_file, allow_missing_keys=True)
482
483 model(model.dummy_inputs, training=False) # build the network with dummy inputs
/usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_pytorch_utils.py in load_pytorch_checkpoint_in_tf2_model(tf_model, pytorch_checkpoint_path, tf_inputs, allow_missing_keys)
91
92 return load_pytorch_weights_in_tf2_model(
---> 93 tf_model, pt_state_dict, tf_inputs=tf_inputs, allow_missing_keys=allow_missing_keys
94 )
95
/usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_pytorch_utils.py in load_pytorch_weights_in_tf2_model(tf_model, pt_state_dict, tf_inputs, allow_missing_keys)
123
124 if tf_inputs is not None:
--> 125 tf_model(tf_inputs, training=False) # Make sure model is built
126
127 # Adapt state dict - TODO remove this and update the AWS weights files instead
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py in __call__(self, *args, **kwargs)
966 with base_layer_utils.autocast_context_manager(
967 self._compute_dtype):
--> 968 outputs = self.call(cast_inputs, *args, **kwargs)
969 self._handle_activity_regularization(inputs, outputs)
970 self._set_mask_metadata(inputs, outputs, input_masks)
/usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_xlm.py in call(self, inputs, **kwargs)
635 heads.
636 """
--> 637 outputs = self.transformer(inputs, **kwargs)
638 return outputs
639
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py in __call__(self, *args, **kwargs)
966 with base_layer_utils.autocast_context_manager(
967 self._compute_dtype):
--> 968 outputs = self.call(cast_inputs, *args, **kwargs)
969 self._handle_activity_regularization(inputs, outputs)
970 self._set_mask_metadata(inputs, outputs, input_masks)
/usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_flaubert.py in call(self, inputs, attention_mask, langs, token_type_ids, position_ids, lengths, cache, head_mask, inputs_embeds, training, output_attentions, output_hidden_states)
267 tensor_normalized = self.layer_norm1[i](tensor)
268 attn_outputs = self.attentions[i](
--> 269 [tensor_normalized, attn_mask, None, cache, head_mask[i]], training=training
270 )
271 attn = attn_outputs[0]
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py in __call__(self, *args, **kwargs)
966 with base_layer_utils.autocast_context_manager(
967 self._compute_dtype):
--> 968 outputs = self.call(cast_inputs, *args, **kwargs)
969 self._handle_activity_regularization(inputs, outputs)
970 self._set_mask_metadata(inputs, outputs, input_masks)
/usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_xlm.py in call(self, inputs, training)
139 Self-attention (if kv is None) or attention over source sentence (provided by kv).
140 """
--> 141 input, mask, kv, cache, head_mask, output_attentions = inputs
142 # Input is (bs, qlen, dim)
143 # Mask is (bs, klen) (non-causal) or (bs, klen, klen)
ValueError: not enough values to unpack (expected 6, got 5)
```
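Reading the traceback, the mismatch appears to be that `modeling_tf_flaubert.py` still builds a five-element attention input (`[tensor_normalized, attn_mask, None, cache, head_mask[i]]`) while `modeling_tf_xlm.py` now unpacks six values, the last being `output_attentions`. A minimal, runnable illustration of that unpack error:
```python
# Five elements in, six names out -> the exact error from the traceback.
inputs = ["tensor_normalized", "attn_mask", None, None, None]  # what Flaubert passes
input_, mask, kv, cache, head_mask, output_attentions = inputs  # what XLM expects
# ValueError: not enough values to unpack (expected 6, got 5)
```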
## Expected behavior
Load the model successfully.
- It can be achieved with Transformers version <= 2.11.0
- I also tried CamemBERT, and BERT-like models in other languages like tr, ru, es, pt, it. None of them have this problem.
## Environment info
- `transformers` version: **>=3.0.0 ( 3.0.0, 3.0.1, 3.0.2 all have this problem.)**
(I try this on google colab with GPU and TPU. The environment does not matter here. But I still provide the one with GPU.)
- Platform: Linux-4.19.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.5.1+cu101 (True)
- Tensorflow version (GPU?): 2.2.0 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6084/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6084/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6083 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6083/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6083/comments | https://api.github.com/repos/huggingface/transformers/issues/6083/events | https://github.com/huggingface/transformers/issues/6083 | 666,718,466 | MDU6SXNzdWU2NjY3MTg0NjY= | 6,083 | Error when using np.where() during squad tokenization | {
"login": "Adawindcatcher",
"id": 15449292,
"node_id": "MDQ6VXNlcjE1NDQ5Mjky",
"avatar_url": "https://avatars.githubusercontent.com/u/15449292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Adawindcatcher",
"html_url": "https://github.com/Adawindcatcher",
"followers_url": "https://api.github.com/users/Adawindcatcher/followers",
"following_url": "https://api.github.com/users/Adawindcatcher/following{/other_user}",
"gists_url": "https://api.github.com/users/Adawindcatcher/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Adawindcatcher/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Adawindcatcher/subscriptions",
"organizations_url": "https://api.github.com/users/Adawindcatcher/orgs",
"repos_url": "https://api.github.com/users/Adawindcatcher/repos",
"events_url": "https://api.github.com/users/Adawindcatcher/events{/privacy}",
"received_events_url": "https://api.github.com/users/Adawindcatcher/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,595 | 1,601 | 1,601 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Bert
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] the official example scripts: squad1.1
* [ ] my own modified scripts: (give details below)
The task I am working on is:
* [x] an official GLUE/SQUaD task: SQUaD task
* [ ] my own task or dataset: (give details below)
## To reproduce
I'm confused about this line:
https://github.com/huggingface/transformers/blob/896300177bf9f35feac4698370212a80a5ab6138/src/transformers/data/processors/squad.py#L230
Should this be:
```python
pad_token_indices = np.where(np.array(span["input_ids"]) == tokenizer.pad_token_id)
```
Cause "p_mask: mask with 1 for token than cannot be in the answer (0 for token which can be in an answer)", and the padding positions should not be answers
Steps to reproduce the behavior:
```python
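import numpy as np  # assumes a tokenizer with tokenizer.pad_token_id == 0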
span["input_ids"] = [1,1,3,0,0,0]
np.where([1,1,3,0,0,0] == tokenizer.pad_token_id)
# return (array([], dtype=int64),)
np.where(np.array([1,1,3,0,0,0]) == tokenizer.pad_token_id)
# return (array([3, 4, 5]),)
```
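For completeness, a fully worked toy version of the fix (values taken from the reproduction above; the `p_mask` assignment mirrors how squad.py uses these indices):
```python
import numpy as np

input_ids = [1, 1, 3, 0, 0, 0]  # toy span["input_ids"], pad_token_id == 0
pad_token_id = 0

p_mask = np.zeros(len(input_ids))
pad_token_indices = np.where(np.array(input_ids) == pad_token_id)
p_mask[pad_token_indices] = 1  # padding positions can never be answers
print(p_mask)  # [0. 0. 0. 1. 1. 1.]
```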
## Expected behavior
As described above.
## Environment info
- `transformers` version: 7a68d401388bc68f10dfeb591709352736a6c0b6
- Platform:
- Python version: python3.6.8
- PyTorch version (GPU?):1.5.0 CPU
- Tensorflow version (GPU?):
- Using GPU in script?: NO
- Using distributed or parallel set-up in script?:NO
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6083/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6083/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6082 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6082/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6082/comments | https://api.github.com/repos/huggingface/transformers/issues/6082/events | https://github.com/huggingface/transformers/issues/6082 | 666,701,623 | MDU6SXNzdWU2NjY3MDE2MjM= | 6,082 | 🐛 Inconsistencies between BartTokenizer and BartTokenizerFast | {
"login": "astariul",
"id": 43774355,
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astariul",
"html_url": "https://github.com/astariul",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"repos_url": "https://api.github.com/users/astariul/repos",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Seems like this issue is for all fast tokenizers, this [line](https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_utils_fast.py#L312) directly raises that error when kwargs are passed",
"Can I close? Seems like your workaround is solid @Colanim ",
"If it's the expected behavior, sure go ahead ^^\r\n\r\nI opened because I thought `Tokenizer` and `TokenizerFast` were expected to have the same behaviors.",
"Fair. I think there is a medium-term plan to deprecate `add_prefix_space`, so at the moment it is left inconsistent and in the future it will be deleted from both."
] | 1,595 | 1,596 | 1,596 | CONTRIBUTOR | null | # 🐛 Bug
## Description
It's possible to use the argument `add_prefix_space` with `BartTokenizer`:
```python
from transformers import BartTokenizer
t = BartTokenizer.from_pretrained("facebook/bart-large")
x = t("This is an example.", add_prefix_space=True)
```
But when doing the same with `BartTokenizerFast`:
```python
from transformers import BartTokenizerFast
t = BartTokenizerFast.from_pretrained("facebook/bart-large")
x = t("This is an example.", add_prefix_space=True)
```
It throws the following error:
```
ValueError: Keyword arguments {'add_prefix_space': True} not recognized.
```
## To reproduce
[Colab notebook](https://colab.research.google.com/drive/1f0W2llsJfVIkXsYk0XK1C3oiy2xSqYOx?usp=sharing)
## Work-around
It works if the argument is specified in the constructor instead:
```python
from transformers import BartTokenizerFast
t = BartTokenizerFast.from_pretrained("facebook/bart-large", add_prefix_space=True)
x = t("This is an example.")
```
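One way to sanity-check the work-around (a quick check, not part of the original report) is to compare against the slow tokenizer; equality is what one would expect if the constructor flag behaves like the call-time argument:
```python
from transformers import BartTokenizer, BartTokenizerFast

slow = BartTokenizer.from_pretrained("facebook/bart-large")
fast = BartTokenizerFast.from_pretrained("facebook/bart-large", add_prefix_space=True)

slow_ids = slow("This is an example.", add_prefix_space=True)["input_ids"]
fast_ids = fast("This is an example.")["input_ids"]
assert slow_ids == fast_ids  # expected to match if the flag is applied the same way
```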
@sshleifer | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6082/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6082/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6081 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6081/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6081/comments | https://api.github.com/repos/huggingface/transformers/issues/6081/events | https://github.com/huggingface/transformers/pull/6081 | 666,672,237 | MDExOlB1bGxSZXF1ZXN0NDU3NDc1MjIw | 6,081 | [s2s] Delete useless method, log tokens_per_batch | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6081?src=pr&el=h1) Report\n> Merging [#6081](https://codecov.io/gh/huggingface/transformers/pull/6081?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/dc4755c6d59238ffea4843d06610a29c522257fb&el=desc) will **increase** coverage by `0.07%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6081?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6081 +/- ##\n==========================================\n+ Coverage 78.13% 78.20% +0.07% \n==========================================\n Files 146 146 \n Lines 26325 26318 -7 \n==========================================\n+ Hits 20569 20582 +13 \n+ Misses 5756 5736 -20 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6081?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6081/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.52% <0.00%> (-2.09%)` | :arrow_down: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/6081/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `96.37% <0.00%> (-1.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6081/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.14% <0.00%> (-0.76%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6081/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `95.58% <0.00%> (-0.13%)` | :arrow_down: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6081/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `97.77% <0.00%> (ø)` | |\n| [src/transformers/modeling\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6081/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `89.45% <0.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6081/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.02% <0.00%> (ø)` | |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6081/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6081/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.48% <0.00%> (+0.28%)` | :arrow_up: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6081/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `99.09% <0.00%> (+1.68%)` | :arrow_up: |\n| ... and [1 more](https://codecov.io/gh/huggingface/transformers/pull/6081/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6081?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6081?src=pr&el=footer). Last update [dc4755c...51c962b](https://codecov.io/gh/huggingface/transformers/pull/6081?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"fixed typo!"
] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | - trimming is already performed in `collate_fn`, there is no need to call it again.
- `val_summ_len` can be renamed to `gen_len` so that it makes sense for a translation model.
- new metric: `tpb` = how many non-pad tokens per batch. Useful for debugging/verifying speedups (see the sketch below).
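A minimal sketch of the `tpb` metric, assuming PyTorch batches keyed by `input_ids` and an available `pad_token_id` (names here are illustrative):
```python
import torch

def tokens_per_batch(batch: dict, pad_token_id: int) -> int:
    """Count non-pad tokens in a batch of input_ids (illustrative helper)."""
    return batch["input_ids"].ne(pad_token_id).sum().item()

batch = {"input_ids": torch.tensor([[5, 6, 7, 0], [5, 6, 0, 0]])}
print(tokens_per_batch(batch, pad_token_id=0))  # 5
```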
cc @patil-suraj | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6081/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6081/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6081",
"html_url": "https://github.com/huggingface/transformers/pull/6081",
"diff_url": "https://github.com/huggingface/transformers/pull/6081.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6081.patch",
"merged_at": 1595949864000
} |
https://api.github.com/repos/huggingface/transformers/issues/6080 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6080/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6080/comments | https://api.github.com/repos/huggingface/transformers/issues/6080/events | https://github.com/huggingface/transformers/issues/6080 | 666,660,613 | MDU6SXNzdWU2NjY2NjA2MTM= | 6,080 | Proposal: seq2seq tokenizers expose a prepare_seq2seq_batch method | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"Motivation:\r\nless important: only call tokenizer once\r\nmore important: the best time to think about special tokens is when you are writing the tokenizer, not when you are designing your training loop. We've had a lot of bugs recently involving special tokens in seq2seq training loops that likely could have been avoided if when the tokenizer was added the author had written a method that said: \"This is how you make a batch\".",
"This will be actually really helpful. Many people run into bugs by forgetting to add special tokens, or shift labels to right. Should this also prepare `labels` along with `decoder_input_ids` as needed by seq2seq models ? ",
"This sounds useful, and looks like it could then directly be used as a `data_collator` in `Trainer`.",
"yes, optionally making `labels` would be very useful!\r\n",
"What do you think @thomwolf ?",
"I am not really in favor of adding this code...I think the user should have understood the library (tokenizers) well enough to not need such a helper function. Think we could add it in a notebook / example and point to it if people ask, but IMO the tokenizers already have too many functions exposed to the user...",
"> I think the user should have understood the library (tokenizers) well enough to not need such a helper function.\r\n\r\n1) I got this wrong the first time for marian and the only way to support finetuning multilingual models was to add `prepare_translation_batch`. \r\n2) Suraj just fixed a bug (that was also my fault, but nobody noticed) about adding `decoder_start_token_id` to the beginning of `t5.decoder_input_ids`. Both of us know the library pretty well and had trouble getting this right.\r\n3) The tokenizers are already setup to create model-ready batches for GLUE tasks (an awesome feature IMO). Why shouldn't they create model-ready batches for seq2seq tasks?\r\n",
"@patrickvonplaten said that he is ok with this change if @mfuntowicz and @n1t0 are ok with it.\r\nWhat do you guys think?\r\n\r\n@patil-suraj we can move forward on your PR once we get some consensus that this is an OK direction to go in.",
"Im feeling pretty strongly about this as I just messed up decoder special tokens for pegasus finetuning in a very avoidable way :(; the tokenizers should handle special tokens, not some random barely tested examples/processor* code.",
"After thinking a bit more about this and talking to @sshleifer, I am fine with the PR. I agree now that there are a lot of use cases when `prepare_seq2seq_batch` is used and since it's going to be added to each model specifically, it's clean as well. Also since it will replace `prepare_translation_batch`, it does not really increase function exposure. \r\n\r\nTwo things that I'm still a bit uncertain about is: \r\na) It does add a \"layer\" on top of the `__call__` function. But I guess we do the same with `generate()` on top of `forward()`\r\nb) will we have to add this functionality in Rust then as well? Will it be difficult to add? @n1t0 \r\n\r\nOverall I'm in favor of this design as well though now",
"Moving forward, as discussed with @LysandreJik .",
"The problem here is that the inputs are retokenized every time right, instead of pre-tokenizing and fixing padding to fit the max size of the batch as with [DataCollatorForSeq2Seq](https://huggingface.co/docs/transformers/main_classes/data_collator#transformers.DataCollatorForSeq2Seq) \r\nNot a huge deal I guess. ",
"I feel like Transformers would maybe benefit from having a better advertised, \"one\" good way to do seq2seq that it could put in all of it's tutorials and everything. I had kind of a harder time than I would have expected to figure this out",
"> The problem here is that the inputs are retokenized every time right, instead of pre-tokenizing and fixing padding to fit the max size of the batch as with [DataCollatorForSeq2Seq](https://huggingface.co/docs/transformers/main_classes/data_collator#transformers.DataCollatorForSeq2Seq)\r\n\r\nThe inputs are not retokenized every time. `DataCollatorForSeq2Seq` just pads them dynamically to reduce excessive padding which can be more efficient rather than always padding to max len.\r\n\r\n> I feel like Transformers would maybe benefit from having a better advertised, \"one\" good way to do seq2seq that it could put in all of it's tutorials and everything\r\n\r\nYes, good idea!",
"I meant, with `prepare_seq2seq_batch` they are tokenized every time, not with `DataCollatorForSeq2Seq`"
] | 1,595 | 1,646 | 1,598 | CONTRIBUTOR | null | This has been useful in `MarianTokenizer.prepare_translation_batch` and `MBartTokenizer.prepare_translation_batch`.
It would also be a useful addition for `T5Tokenizer` and `BartTokenizer`.
@LysandreJik mentioned this in a PR so I wanted to see what others thought before starting work.
@patrickvonplaten @sgugger @mfuntowicz @patil-suraj ?
Proposed signature (I would not add this to PretrainedModel; just rename the Marian/MBart implems and add implems for Bart and T5. When pegasus and blenderbot get merged, I would also add implems there, or consider a helper function.)
```python
def prepare_seq2seq_batch(
self,
src_texts: List[str],
tgt_texts: Optional[List[str]] = None,
max_length: Optional[int] = None,
    max_target_length: Optional[int] = None,
    return_tensors: str = "pt",
    padding: str = "longest",
    **kwargs,
) -> BatchEncoding:
if max_length is None:
max_length = self.max_len
if max_target_length is None: # default to max_length
        max_target_length = max_length  # no need to specify twice for translation
model_inputs: BatchEncoding = self(
src_texts,
add_special_tokens=True,
return_tensors=return_tensors,
max_length=max_length,
padding=padding,
truncation=True,
**kwargs,
)
if tgt_texts is None:
return model_inputs
# Here classes can implement logic to put decoder_start_token_id at the front if they want,
# or deal with language codes for multilingual models.
decoder_inputs: BatchEncoding = self(
tgt_texts,
add_special_tokens=True,
return_tensors=return_tensors,
padding=padding,
max_length=max_target_length,
truncation=True,
**kwargs,
)
for k, v in decoder_inputs.items():
model_inputs[f"decoder_{k}"] = v
return model_inputs # still a BatchEncoding
```
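For concreteness, a hypothetical call site (the method does not exist yet; `tokenizer` and the `decoder_*` layout follow the proposal above):
```python
batch = tokenizer.prepare_seq2seq_batch(
    src_texts=["I love seq2seq."],
    tgt_texts=["J'adore le seq2seq."],
    max_length=64,
    return_tensors="pt",
)
# batch now holds input_ids / attention_mask plus decoder_input_ids /
# decoder_attention_mask, ready to pass as model(**batch).
```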
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6080/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6080/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6079 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6079/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6079/comments | https://api.github.com/repos/huggingface/transformers/issues/6079/events | https://github.com/huggingface/transformers/pull/6079 | 666,659,666 | MDExOlB1bGxSZXF1ZXN0NDU3NDY0OTAz | 6,079 | [s2s] Don't mention packed data in README | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6079/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6079/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6079",
"html_url": "https://github.com/huggingface/transformers/pull/6079",
"diff_url": "https://github.com/huggingface/transformers/pull/6079.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6079.patch",
"merged_at": 1595894842000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/6078 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6078/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6078/comments | https://api.github.com/repos/huggingface/transformers/issues/6078/events | https://github.com/huggingface/transformers/issues/6078 | 666,646,176 | MDU6SXNzdWU2NjY2NDYxNzY= | 6,078 | model.roberta.from_pretrained() fails to change the parameters | {
"login": "wyin-Salesforce",
"id": 53835505,
"node_id": "MDQ6VXNlcjUzODM1NTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/53835505?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wyin-Salesforce",
"html_url": "https://github.com/wyin-Salesforce",
"followers_url": "https://api.github.com/users/wyin-Salesforce/followers",
"following_url": "https://api.github.com/users/wyin-Salesforce/following{/other_user}",
"gists_url": "https://api.github.com/users/wyin-Salesforce/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wyin-Salesforce/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wyin-Salesforce/subscriptions",
"organizations_url": "https://api.github.com/users/wyin-Salesforce/orgs",
"repos_url": "https://api.github.com/users/wyin-Salesforce/repos",
"events_url": "https://api.github.com/users/wyin-Salesforce/events{/privacy}",
"received_events_url": "https://api.github.com/users/wyin-Salesforce/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,595 | 1,601 | 1,601 | NONE | null | # ❓ Questions & Help
## Details
Hi all, my problem is "how to use a pretrained 3-way sequence classification model to fine-tune on a 2-way classification task". I am using "https://github.com/huggingface/transformers/blob/master/examples/text-classification/run_glue.py";
in the code I change:
```python
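# inside examples/text-classification/run_glue.py; model_args and config come from the script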
model_args.model_name_or_path = 'roberta-large'
model = AutoModelForSequenceClassification.from_pretrained(
model_args.model_name_or_path,
from_tf=bool(".ckpt" in model_args.model_name_or_path),
config=config,
cache_dir=model_args.cache_dir,
)
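# caution: from_pretrained is a classmethod that returns a new model; it does not modify model.roberta in place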
model.roberta.from_pretrained(path_to_my_3way_pretrained_model)
```
This is what I tried. Please note that I cannot use `AutoModelForSequenceClassification.from_pretrained()` to load "path_to_my_3way_pretrained_model" directly because there would be a class mismatch (i.e., a 3-way head does not apply to a 2-way task); but no matter whether it is 3-way or 2-way sequence classification, the architectures share the roberta part, so I used `model.roberta` to load the parameters from my pretrained model.
However, I found the roberta parameters do not change; I checked as follows:
```python
for name, param in model.named_parameters():
if param.requires_grad and name == 'roberta.encoder.layer.16.attention.self.value.weight':
print('new:', name, param.data)
```
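A note on why the check shows no change: `from_pretrained` is a classmethod; it builds and returns a *new* model rather than loading weights into `model.roberta` in place, so its return value is simply discarded above. A sketch of one way to transfer just the shared encoder (the 3-way checkpoint path is a placeholder):
```python
# Load the 3-way model once, then copy its encoder weights into the 2-way model.
pretrained_3way = AutoModelForSequenceClassification.from_pretrained(
    path_to_my_3way_pretrained_model
)
model.roberta.load_state_dict(pretrained_3way.roberta.state_dict())
```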
Any clue why it doesn't work? Or any solution for how to fine-tune a pretrained 3-way model on a 2-way task? Thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6078/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6078/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6077 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6077/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6077/comments | https://api.github.com/repos/huggingface/transformers/issues/6077/events | https://github.com/huggingface/transformers/pull/6077 | 666,622,544 | MDExOlB1bGxSZXF1ZXN0NDU3NDMzODQy | 6,077 | [s2s] dont document packing because it hurts performance | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6077/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6077/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6077",
"html_url": "https://github.com/huggingface/transformers/pull/6077",
"diff_url": "https://github.com/huggingface/transformers/pull/6077.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6077.patch",
"merged_at": 1595888760000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/6076 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6076/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6076/comments | https://api.github.com/repos/huggingface/transformers/issues/6076/events | https://github.com/huggingface/transformers/pull/6076 | 666,614,082 | MDExOlB1bGxSZXF1ZXN0NDU3NDI2Nzgy | 6,076 | Create README.md | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6076?src=pr&el=h1) Report\n> Merging [#6076](https://codecov.io/gh/huggingface/transformers/pull/6076?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9d0d3a6645384e236c55d311f3f8b7dd67d58562&el=desc) will **decrease** coverage by `1.21%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6076?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6076 +/- ##\n==========================================\n- Coverage 78.59% 77.37% -1.22% \n==========================================\n Files 146 146 \n Lines 26314 26314 \n==========================================\n- Hits 20681 20361 -320 \n- Misses 5633 5953 +320 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6076?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6076/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6076/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `84.96% <0.00%> (-1.26%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6076/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.20% <0.00%> (-0.29%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6076/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6076?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6076?src=pr&el=footer). Last update [9d0d3a6...45cc428](https://codecov.io/gh/huggingface/transformers/pull/6076?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6076/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6076/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6076",
"html_url": "https://github.com/huggingface/transformers/pull/6076",
"diff_url": "https://github.com/huggingface/transformers/pull/6076.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6076.patch",
"merged_at": 1595943360000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/6075 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6075/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6075/comments | https://api.github.com/repos/huggingface/transformers/issues/6075/events | https://github.com/huggingface/transformers/pull/6075 | 666,612,864 | MDExOlB1bGxSZXF1ZXN0NDU3NDI1NzY3 | 6,075 | Create README.md | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6075?src=pr&el=h1) Report\n> Merging [#6075](https://codecov.io/gh/huggingface/transformers/pull/6075?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9d0d3a6645384e236c55d311f3f8b7dd67d58562&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6075?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6075 +/- ##\n=======================================\n Coverage 78.59% 78.59% \n=======================================\n Files 146 146 \n Lines 26314 26314 \n=======================================\n Hits 20681 20681 \n Misses 5633 5633 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6075?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6075/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.20% <0.00%> (-0.29%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6075/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+0.25%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6075?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6075?src=pr&el=footer). Last update [9d0d3a6...de221b0](https://codecov.io/gh/huggingface/transformers/pull/6075?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"What's the difference between this and `_v2`?",
"v2 includes ingredients too. V1 just includes instructions (the dataset for this version was provided by https://twitter.com/alexcg/status/1286464335867346946?s=19 But then I found a version with ingredients and instructions and created v2.",
"👍 "
] | 1,595 | 1,596 | 1,596 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6075/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6075/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6075",
"html_url": "https://github.com/huggingface/transformers/pull/6075",
"diff_url": "https://github.com/huggingface/transformers/pull/6075.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6075.patch",
"merged_at": 1596577283000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/6074 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6074/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6074/comments | https://api.github.com/repos/huggingface/transformers/issues/6074/events | https://github.com/huggingface/transformers/issues/6074 | 666,612,332 | MDU6SXNzdWU2NjY2MTIzMzI= | 6,074 | Cannot use the RobertaForMultipleChoice model for processing multiple choice questions with 4 options | {
"login": "h56cho",
"id": 52889259,
"node_id": "MDQ6VXNlcjUyODg5MjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/52889259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/h56cho",
"html_url": "https://github.com/h56cho",
"followers_url": "https://api.github.com/users/h56cho/followers",
"following_url": "https://api.github.com/users/h56cho/following{/other_user}",
"gists_url": "https://api.github.com/users/h56cho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/h56cho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/h56cho/subscriptions",
"organizations_url": "https://api.github.com/users/h56cho/orgs",
"repos_url": "https://api.github.com/users/h56cho/repos",
"events_url": "https://api.github.com/users/h56cho/events{/privacy}",
"received_events_url": "https://api.github.com/users/h56cho/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Good question. `xxxForMultipleChoice` models are actually a bit tricky. The way you should provide the data to the tokenizer is as follows:\r\n\r\n```\r\nprompt = \"In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced.\"\r\nchoice0 = \"It is eaten with a fork and a knife.\"\r\nchoice1 = \"It is eaten while held in the hand.\"\r\nchoice2 = \"It is eathen with a napkin.\"\r\nchoice3 = \"It is not eatable.\"\r\n\r\nencoded_dict = bert_tokenizer([prompt, prompt, prompt, prompt], \r\n [choice0, choice1, choice2, choice3], \r\n return_tensors='pt', \r\n padding='max_length')\r\n```\r\nNote the difference: we provide 2 lists to the tokenizer rather than one. The first list contains the first sequence of every training example, the second list contains the second sequence. Every training example will then be encoded as `[CLS] prompt [SEP] choice x [SEP]`.\r\n\r\nFor more details, see [here](https://github.com/huggingface/transformers/issues/7701#issuecomment-707149546)."
] | 1,595 | 1,602 | 1,601 | NONE | null | Hello,
When I try to use the `RobertaForMultipleChoice` pretrained model with the code below, it generates an error:
```python
from transformers import RobertaTokenizer, RobertaForMultipleChoice
import torch
tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
model = RobertaForMultipleChoice.from_pretrained('roberta-base')
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
choice0 = "It is eaten with a fork and a knife."
choice1 = "It is eaten while held in the hand."
choice2 = "It is eathen with a napkin."
choice3 = "It is not eatable."
labels = torch.tensor(0).unsqueeze(0) # choice0 is correct (according to Wikipedia ;)), batch size 1
encoding = tokenizer([[prompt, prompt,prompt,prompt], [choice0, choice1, choice2, choice3]], return_tensors='pt', return_token_type_ids=True,padding=True)
```
The error message is:
```python
File "/Users/hyunjindominiquecho/opt/anaconda3/lib/python3.7/site-packages/transformers/tokenization_utils.py", line 534, in _batch_encode_plus
ids, pair_ids = ids_or_pair_ids
ValueError: too many values to unpack (expected 2)
```
What am I doing wrong here? Thank you.
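For reference, a minimal sketch of the call that avoids this error, following the fix suggested in the comments above (it reuses the variables defined in the snippet and passes two parallel lists, one per sequence position):
```python
# Sketch: one list with the prompt repeated, one list with the four choices.
encoding = tokenizer([prompt, prompt, prompt, prompt],
                     [choice0, choice1, choice2, choice3],
                     return_tensors='pt',
                     return_token_type_ids=True,
                     padding=True)
```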
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6074/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6074/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6073 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6073/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6073/comments | https://api.github.com/repos/huggingface/transformers/issues/6073/events | https://github.com/huggingface/transformers/pull/6073 | 666,611,648 | MDExOlB1bGxSZXF1ZXN0NDU3NDI0Nzc2 | 6,073 | Create README.md | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"I forgot to add the ```widget``` keyword"
] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6073/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6073/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6073",
"html_url": "https://github.com/huggingface/transformers/pull/6073",
"diff_url": "https://github.com/huggingface/transformers/pull/6073.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6073.patch",
"merged_at": null
} |
|
https://api.github.com/repos/huggingface/transformers/issues/6072 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6072/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6072/comments | https://api.github.com/repos/huggingface/transformers/issues/6072/events | https://github.com/huggingface/transformers/issues/6072 | 666,597,837 | MDU6SXNzdWU2NjY1OTc4Mzc= | 6,072 | Error in the RobertaTokenizer? | {
"login": "h56cho",
"id": 52889259,
"node_id": "MDQ6VXNlcjUyODg5MjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/52889259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/h56cho",
"html_url": "https://github.com/h56cho",
"followers_url": "https://api.github.com/users/h56cho/followers",
"following_url": "https://api.github.com/users/h56cho/following{/other_user}",
"gists_url": "https://api.github.com/users/h56cho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/h56cho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/h56cho/subscriptions",
"organizations_url": "https://api.github.com/users/h56cho/orgs",
"repos_url": "https://api.github.com/users/h56cho/repos",
"events_url": "https://api.github.com/users/h56cho/events{/privacy}",
"received_events_url": "https://api.github.com/users/h56cho/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,595 | 1,601 | 1,601 | NONE | null | Hello,
I think I spotted an error in the `RobertaTokenizer` (`AutoTokenizer`).
I tested the `tokenizer` function by using the code below:
```python
import torch
from torch.nn import CrossEntropyLoss
from matplotlib import pyplot as plt
from transformers import RobertaTokenizer, RobertaForMultipleChoice, AdamW, get_constant_schedule
from transformers import AutoTokenizer
import numpy as np
import pandas as pd
import pickle
import dill
from matplotlib.pyplot import plot, savefig, xlim, figure, ylim, legend, boxplot, setp, axes, xlabel, ylabel, xticks
import gc
import math
import time
from random import seed
from random import randint
import sys
import statistics
from numpy import nan
import scipy.stats as ss
from statistics import mode
from pylab import *
from numpy import arange
# load the pre-trained HuggingFace RobertaTokenizer
tokenizer = AutoTokenizer.from_pretrained('roberta-base')
# load the pre-trained HuggingFace RobertaForMultipleChoice, returning hidden states
best_model_roberta = RobertaForMultipleChoice.from_pretrained('roberta-base', output_hidden_states=True)
sequence_a = 'china is a very large country .'
sequence_b = 'indeed'
tokenizer(sequence_a, sequence_b, padding=True, return_token_type_ids=True, return_tensors="pt")
```
and below is the corresponding output:
```
{
'input_ids': tensor([[ 0, 611, 1243, 16, 10, 182, 739, 247, 479, 2, 2, 2028, 10247, 2]]),
'token_type_ids': tensor([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]),
'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]])
}
```
The output above is not what I was expecting from the `tokenizer` function. Since I am passing `sequence_b` along with `sequence_a`, I was expecting the `token_type_ids` to be `tensor([[0,0,0,0,0,0,0,0,0,0,0,1,1,1]])`, but the output does not include any 1s; everything is 0. Is this an error in the `RobertaTokenizer`?
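For reference, a quick check (a minimal sketch, assuming the stock `roberta-base` configuration) suggests this may be expected behavior: RoBERTa is configured with a single token type, and sequence pairs are instead delimited by separator tokens in `input_ids`.
```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained('roberta-base')
print(config.type_vocab_size)  # 1 -> RoBERTa only has a single segment type
```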
Thank you, | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6072/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6072/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6071 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6071/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6071/comments | https://api.github.com/repos/huggingface/transformers/issues/6071/events | https://github.com/huggingface/transformers/issues/6071 | 666,590,555 | MDU6SXNzdWU2NjY1OTA1NTU= | 6,071 | Loading and running on CPU, the RoBERTa model traced/saved on GPU. | {
"login": "HamidShojanazeri",
"id": 9162336,
"node_id": "MDQ6VXNlcjkxNjIzMzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9162336?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HamidShojanazeri",
"html_url": "https://github.com/HamidShojanazeri",
"followers_url": "https://api.github.com/users/HamidShojanazeri/followers",
"following_url": "https://api.github.com/users/HamidShojanazeri/following{/other_user}",
"gists_url": "https://api.github.com/users/HamidShojanazeri/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HamidShojanazeri/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HamidShojanazeri/subscriptions",
"organizations_url": "https://api.github.com/users/HamidShojanazeri/orgs",
"repos_url": "https://api.github.com/users/HamidShojanazeri/repos",
"events_url": "https://api.github.com/users/HamidShojanazeri/events{/privacy}",
"received_events_url": "https://api.github.com/users/HamidShojanazeri/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
}
] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"@mfuntowicz I was wondering if there is any update/ work around or a PR in progress to solve this issue. Thanks."
] | 1,595 | 1,602 | 1,601 | CONTRIBUTOR | null | # ❓ Questions & Help
## Details
I have installed transformers from source and I am trying to trace/save a roberta-base model for sequence classification on GPU and load it on CPU, similar to issue [#5664](https://github.com/huggingface/transformers/issues/5664), which was addressed in [PR #5773](https://github.com/huggingface/transformers/pull/5773), and I am facing similar issues with position_embeddings. I was wondering if any step is missing, or whether RoBERTa is also covered by [PR #5773](https://github.com/huggingface/transformers/pull/5773). Thanks.
## Sample Script
```python
import transformers
from pathlib import Path
import os
import json
import torch
from transformers import (AutoModelForSequenceClassification, AutoTokenizer, AutoModelForQuestionAnswering,
AutoModelForTokenClassification, AutoConfig)
print('Transformers version', transformers.__version__)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
dummy_input = "This is a dummy input for torch jit trace"
max_length=20
config = AutoConfig.from_pretrained('roberta-base', num_labels=2, torchscript=True)
model = AutoModelForSequenceClassification.from_pretrained('roberta-base', config=config)
tokenizer = AutoTokenizer.from_pretrained('roberta-base', do_lower_case=True)
inputs = tokenizer.encode_plus(dummy_input, max_length=max_length, pad_to_max_length=True, truncation=True, add_special_tokens=True, return_tensors='pt')
print(inputs.keys())
input_ids = inputs["input_ids"].to(device)
attention_mask = inputs["attention_mask"].to(device)
model.to(device).eval()
traced_model = torch.jit.trace(model, (input_ids,attention_mask))
torch.jit.save(traced_model, "Roberta_cuda.pt")
print(traced_model.graph)
print("\n")
print("Load model onto CPU")
loaded = torch.jit.load("Roberta_cuda.pt", map_location=torch.device("cpu"))
print("\n")
print(loaded.graph)
outputs = loaded(input_ids.to("cpu"),attention_mask.to("cpu"))
print(outputs)
```
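A possible workaround, as an untested sketch reusing the variables above: trace on CPU so that no `cuda:0` constants are baked into the serialized graph (the error below shows what happens when the GPU-traced artifact is run on CPU).
```python
# Untested workaround sketch: tracing on CPU keeps the saved graph device-agnostic.
model_cpu = model.to("cpu").eval()
traced_cpu = torch.jit.trace(model_cpu, (input_ids.to("cpu"), attention_mask.to("cpu")))
torch.jit.save(traced_cpu, "Roberta_cpu.pt")
loaded_cpu = torch.jit.load("Roberta_cpu.pt", map_location="cpu")
print(loaded_cpu(input_ids.to("cpu"), attention_mask.to("cpu")))
```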
## Error
```
Traceback (most recent call last):
File "test.py", line 100, in <module>
outputs = loaded(input_ids.to("cpu"),attention_mask.to("cpu"))
File "/home/ubuntu/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript, serialized code (most recent call last):
File "code/__torch__/transformers/modeling_roberta.py", line 10, in forward
attention_mask: Tensor) -> Tuple[Tensor]:
_0 = self.classifier
_1 = (self.roberta).forward(input_ids, attention_mask, )
~~~~~~~~~~~~~~~~~~~~~ <--- HERE
return ((_0).forward(_1, ),)
class RobertaModel(Module):
File "code/__torch__/transformers/modeling_roberta.py", line 32, in forward
_9 = torch.to(extended_attention_mask, 6, False, False, None)
attention_mask0 = torch.mul(torch.rsub(_9, 1., 1), CONSTANTS.c0)
_10 = (_3).forward((_4).forward(input_ids, input, ), attention_mask0, )
~~~~~~~~~~~ <--- HERE
_11 = (_2).forward(_10, )
return _10
File "code/__torch__/transformers/modeling_roberta.py", line 59, in forward
input0 = torch.to(_19, dtype=4, layout=0, device=torch.device("cuda:0"), pin_memory=False, non_blocking=False, copy=False, memory_format=None)
_20 = (_16).forward(input_ids, )
_21 = (_15).forward(input0, )
~~~~~~~~~~~~ <--- HERE
_22 = (_14).forward(input, )
input1 = torch.add(torch.add(_20, _21, alpha=1), _22, alpha=1)
File "code/__torch__/torch/nn/modules/sparse/___torch_mangle_0.py", line 7, in forward
def forward(self: __torch__.torch.nn.modules.sparse.___torch_mangle_0.Embedding,
input: Tensor) -> Tensor:
position_embeddings = torch.embedding(self.weight, input, 1, False, False)
~~~~~~~~~~~~~~~ <--- HERE
return position_embeddings
Traceback of TorchScript, original code (most recent call last):
/home/ubuntu/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/functional.py(1724): embedding
/home/ubuntu/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/modules/sparse.py(114): forward
/home/ubuntu/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/modules/module.py(534): _slow_forward
/home/ubuntu/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/modules/module.py(548): __call__
/home/ubuntu/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/transformers/modeling_bert.py(202): forward
/home/ubuntu/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/transformers/modeling_roberta.py(76): forward
/home/ubuntu/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/modules/module.py(534): _slow_forward
/home/ubuntu/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/modules/module.py(548): __call__
/home/ubuntu/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/transformers/modeling_bert.py(792): forward
/home/ubuntu/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/modules/module.py(534): _slow_forward
/home/ubuntu/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/modules/module.py(548): __call__
/home/ubuntu/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/transformers/modeling_roberta.py(344): forward
/home/ubuntu/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/modules/module.py(534): _slow_forward
/home/ubuntu/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/modules/module.py(548): __call__
/home/ubuntu/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/jit/__init__.py(1027): trace_module
/home/ubuntu/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/jit/__init__.py(875): trace
test.py(91): <module>
RuntimeError: Expected object of device type cuda but got device type cpu for argument #1 'self' in call to _th_index_select
```
 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6071/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6071/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6070 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6070/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6070/comments | https://api.github.com/repos/huggingface/transformers/issues/6070/events | https://github.com/huggingface/transformers/issues/6070 | 666,575,860 | MDU6SXNzdWU2NjY1NzU4NjA= | 6,070 | lightning_base: new clarg: lr_scheduler=polynomial_decay | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@stas00 as discussed on slack",
"got it",
"PR for stage 1 to support the existing schedulers: https://github.com/huggingface/transformers/pull/6232\r\nonce this is happy and merged, I will work on importing new ones.",
"poly: https://github.com/huggingface/transformers/pull/6361",
"@sshleifer, so now that get_polynomial_decay_schedule_with_warmup is done (about to be merged) - do we need any others?",
"Not that I know of. We can close this once the associated PR ismerged."
] | 1,595 | 1,597 | 1,597 | CONTRIBUTOR | null | `lr_scheduler` should be a string with many options, including `polynomial_decay`, like this [command](https://github.com/pytorch/fairseq/blob/master/examples/mbart/README.md#finetune-on-en-ro); a possible dispatch is sketched below
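A minimal sketch of what that dispatch could look like (an assumed mapping, not the final implementation; the polynomial factory landed via the PR linked in the comments above):
```python
# Sketch: map --lr_scheduler strings to transformers scheduler factories.
from transformers import (
    get_linear_schedule_with_warmup,
    get_cosine_schedule_with_warmup,
    get_polynomial_decay_schedule_with_warmup,
)

SCHEDULERS = {
    "linear": get_linear_schedule_with_warmup,
    "cosine": get_cosine_schedule_with_warmup,
    "polynomial_decay": get_polynomial_decay_schedule_with_warmup,
}
```
 | {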
"url": "https://api.github.com/repos/huggingface/transformers/issues/6070/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6070/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6069 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6069/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6069/comments | https://api.github.com/repos/huggingface/transformers/issues/6069/events | https://github.com/huggingface/transformers/issues/6069 | 666,574,858 | MDU6SXNzdWU2NjY1NzQ4NTg= | 6,069 | lightning_base: new clarg: adam_betas | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,595 | 1,601 | 1,601 | CONTRIBUTOR | null | Should be able to pass `adam_betas` through the command line, like this [command](https://github.com/pytorch/fairseq/blob/master/examples/mbart/README.md#finetune-on-en-ro); a sketch of the wiring is below
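A minimal sketch of the proposed clarg (hypothetical flag wiring, assuming `torch.optim.AdamW`, which accepts a `betas` tuple):
```python
# Hypothetical sketch: expose adam betas as a command-line argument.
import argparse
import torch

parser = argparse.ArgumentParser()
parser.add_argument("--adam_betas", nargs=2, type=float, default=[0.9, 0.999])
args = parser.parse_args(["--adam_betas", "0.9", "0.98"])

model = torch.nn.Linear(8, 2)  # stand-in for the fine-tuned model
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5, betas=tuple(args.adam_betas))
```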
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6069/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6069/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6068 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6068/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6068/comments | https://api.github.com/repos/huggingface/transformers/issues/6068/events | https://github.com/huggingface/transformers/pull/6068 | 666,570,323 | MDExOlB1bGxSZXF1ZXN0NDU3MzkwMjc1 | 6,068 | link to README.md | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6068/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6068/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6068",
"html_url": "https://github.com/huggingface/transformers/pull/6068",
"diff_url": "https://github.com/huggingface/transformers/pull/6068.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6068.patch",
"merged_at": 1595939699000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/6067 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6067/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6067/comments | https://api.github.com/repos/huggingface/transformers/issues/6067/events | https://github.com/huggingface/transformers/pull/6067 | 666,567,551 | MDExOlB1bGxSZXF1ZXN0NDU3Mzg3OTM3 | 6,067 | fix typo | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for catching the typo! Alas, I think this one has already been fixed by recent commits."
] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | wrong filename passed to `tar` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6067/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6067/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6067",
"html_url": "https://github.com/huggingface/transformers/pull/6067",
"diff_url": "https://github.com/huggingface/transformers/pull/6067.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6067.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/6066 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6066/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6066/comments | https://api.github.com/repos/huggingface/transformers/issues/6066/events | https://github.com/huggingface/transformers/pull/6066 | 666,488,191 | MDExOlB1bGxSZXF1ZXN0NDU3MzIyMzgz | 6,066 | Add fire to setup.cfg to make isort happy | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,595 | 1,595 | 1,595 | COLLABORATOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6066/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6066/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6066",
"html_url": "https://github.com/huggingface/transformers/pull/6066",
"diff_url": "https://github.com/huggingface/transformers/pull/6066.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6066.patch",
"merged_at": 1595877454000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/6065 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6065/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6065/comments | https://api.github.com/repos/huggingface/transformers/issues/6065/events | https://github.com/huggingface/transformers/pull/6065 | 666,451,739 | MDExOlB1bGxSZXF1ZXN0NDU3MjkzMzU0 | 6,065 | Make all data collators accept dict | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6065?src=pr&el=h1) Report\n> Merging [#6065](https://codecov.io/gh/huggingface/transformers/pull/6065?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/11792d7826854979bb532b6da09bc3796b09ea6a&el=desc) will **decrease** coverage by `0.54%`.\n> The diff coverage is `71.42%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6065?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6065 +/- ##\n==========================================\n- Coverage 78.73% 78.19% -0.55% \n==========================================\n Files 146 146 \n Lines 26314 26318 +4 \n==========================================\n- Hits 20719 20579 -140 \n- Misses 5595 5739 +144 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6065?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6065/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `97.39% <71.42%> (-1.71%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6065/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.27% <0.00%> (-74.92%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6065/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.02% <0.00%> (-69.52%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6065/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.96% <0.00%> (-1.30%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6065/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.82% <0.00%> (-0.29%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6065/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.48% <0.00%> (ø)` | |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6065/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.96% <0.00%> (+2.00%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6065/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+33.89%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6065/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6065?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6065?src=pr&el=footer). Last update [11792d7...f8cc060](https://codecov.io/gh/huggingface/transformers/pull/6065?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"LGTM! Agree on the user subclass, especially for language modeling which should be mostly always the same."
] | 1,595 | 1,595 | 1,595 | COLLABORATOR | null | + one file changed by make style for some reason.
I'm only keeping the inputs; for more involved data collation, I think it's best to let the user write their own subclass, as sketched below.
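A minimal sketch of such a user-side collator (illustrative names only, not part of this PR):
```python
# Illustrative user-written collator: pads input_ids and returns a dict;
# labels or other task-specific fields can be added by the subclass author.
import torch

class SimplePaddingCollator:
    def __init__(self, pad_token_id: int):
        self.pad_token_id = pad_token_id

    def __call__(self, examples):
        ids = [torch.tensor(e["input_ids"]) for e in examples]
        input_ids = torch.nn.utils.rnn.pad_sequence(
            ids, batch_first=True, padding_value=self.pad_token_id
        )
        return {"input_ids": input_ids}
```
 | {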
"url": "https://api.github.com/repos/huggingface/transformers/issues/6065/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6065/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6065",
"html_url": "https://github.com/huggingface/transformers/pull/6065",
"diff_url": "https://github.com/huggingface/transformers/pull/6065.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6065.patch",
"merged_at": 1595941700000
} |
https://api.github.com/repos/huggingface/transformers/issues/6064 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6064/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6064/comments | https://api.github.com/repos/huggingface/transformers/issues/6064/events | https://github.com/huggingface/transformers/pull/6064 | 666,417,381 | MDExOlB1bGxSZXF1ZXN0NDU3MjY1Mjk3 | 6,064 | [Performance improvement] "Bad tokens ids" optimization | {
"login": "guillaume-be",
"id": 27071604,
"node_id": "MDQ6VXNlcjI3MDcxNjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/27071604?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/guillaume-be",
"html_url": "https://github.com/guillaume-be",
"followers_url": "https://api.github.com/users/guillaume-be/followers",
"following_url": "https://api.github.com/users/guillaume-be/following{/other_user}",
"gists_url": "https://api.github.com/users/guillaume-be/gists{/gist_id}",
"starred_url": "https://api.github.com/users/guillaume-be/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/guillaume-be/subscriptions",
"organizations_url": "https://api.github.com/users/guillaume-be/orgs",
"repos_url": "https://api.github.com/users/guillaume-be/repos",
"events_url": "https://api.github.com/users/guillaume-be/events{/privacy}",
"received_events_url": "https://api.github.com/users/guillaume-be/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6064?src=pr&el=h1) Report\n> Merging [#6064](https://codecov.io/gh/huggingface/transformers/pull/6064?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8a8ae27617e3c4dafb34bcbbaadf4ceee28583bd&el=desc) will **decrease** coverage by `0.10%`.\n> The diff coverage is `27.77%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6064?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6064 +/- ##\n==========================================\n- Coverage 78.49% 78.38% -0.11% \n==========================================\n Files 146 147 +1 \n Lines 26335 26384 +49 \n==========================================\n+ Hits 20671 20681 +10 \n- Misses 5664 5703 +39 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6064?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/data/test\\_generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6064/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Rlc3RfZ2VuZXJhdGlvbl91dGlscy5weQ==) | `0.00% <0.00%> (ø)` | |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6064/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.64% <93.75%> (-0.19%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6064?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6064?src=pr&el=footer). Last update [8a8ae27...7d9767a](https://codecov.io/gh/huggingface/transformers/pull/6064?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"@sshleifer Thank you very much for the review. I have added unit tests for the modified method that hopefully aligns with what you had in mind. I have re-run the Marian integration tests that run without issue. I somehow have issues running the BART integration tests (even on master) due to an `ImportError` and unable to see if these still run:\r\n```python\r\nfrom .test_configuration_common import ConfigTester\r\nImportError: attempted relative import with no known parent package\r\n```\r\n\r\nregarding point 2) could you please clarify? The `calc_banned_bad_words_ids` still exists (and is used) in the proposed PR. Would you recommend making a copy of it instead of changing its behaviour? Then the original `calc_banned_bad_words_ids` would no longer be used anywhere\r\n\r\n@JetRunner I have added a comment to clarify the mask tensor generation\r\n\r\nI am currently running into issues with Tensorflow test failing - but I do not see how it relates to the proposed changes\r\n\r\nThank you!",
"I misread your code, sorry.\r\nMy point 2 should be that it feels like the new masking logic could be put into a helper method like\r\n```python\r\ndef set_scores_to_inf_for_banned_tokens(self, scores, bad_words_ids) -> None:\r\n\r\n```\r\njust for the sake of namespace control.\r\nYou could also test that method without running `generate`.\r\n\r\nAlso, how significant is the speedup here?",
"@sshleifer This makes sense, just pushed a few more changes:\r\n- Moved the masking to a utility function\r\n- Updated the unit test to let it fail if it hits timeout. As this is configuration dependent, the limit was increased to 10 if the CI compute power available fluctuates. In general I am not sure if unit tests are the best way to perform performance regression tests\r\n- I have created a gist to share the performance difference between the current and the proposed approach: https://gist.github.com/guillaume-be/e335b099005e9bf38448d0e2eb02f74f . On this simple example with a GPU on Colab, the proposed approach is twice as fast. This actually has a significant impact on the entire generation process, but I did not manage to create a good example on Colab (the resources fluctuate too much from notebook to notebook, and not aware of a way to change a library version within a same notebook). Running locally with a consumer-grade Turing GPU (2070), I observe a time reduction of around 20% for the end-to-end generation process. ",
"@sshleifer Thank you again for the thorough review! Tried to address the latest comments - I believe it cleans it up quite a bit thank you for the suggestions",
"This is ready to be merged @LysandreJik !"
] | 1,595 | 1,597 | 1,597 | CONTRIBUTOR | null | While running benchmarks, I noticed that the generation pipeline varied quite a bit in execution time. The banned-token masking in particular seems fairly expensive: in some experiments, up to 30% of the time for an entire generation process was spent in this step, which seems too high considering its expected simplicity.
This PR accelerates the entire generation pipeline by around 20% on a GPU-enabled node for models that use `bad_words_ids` in their configuration (this includes, for example, translation with the Marian models).
The following changes contribute to the performance improvement:
- A single conversion from tensor to list. The previous approach accessed the GPU buffer for every banned token and every batch element, making this operation slower than the entire forward pass through the model
- Vectorized update of the banned tokens using a masked fill
- Skipping the EOS token for the banned tokens (avoiding a potential duplicate masking); the masking approach is sketched below
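A simplified sketch of the vectorized idea (illustrative, not the PR's exact code; the helper name follows the review comments, it covers single-token banned words only, and `scores` is assumed to have shape `(batch_size, vocab_size)`):
```python
import torch

def set_scores_to_inf_for_banned_tokens(scores: torch.Tensor, banned_token_ids) -> torch.Tensor:
    # Build one boolean mask and apply a single masked_fill instead of
    # writing -inf into the (possibly GPU-resident) tensor once per token.
    mask = torch.zeros_like(scores, dtype=torch.bool)
    for token_id in banned_token_ids:
        mask[:, token_id] = True
    return scores.masked_fill(mask, float("-inf"))
```
 | {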
"url": "https://api.github.com/repos/huggingface/transformers/issues/6064/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6064/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6064",
"html_url": "https://github.com/huggingface/transformers/pull/6064",
"diff_url": "https://github.com/huggingface/transformers/pull/6064.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6064.patch",
"merged_at": 1597139800000
} |
https://api.github.com/repos/huggingface/transformers/issues/6063 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6063/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6063/comments | https://api.github.com/repos/huggingface/transformers/issues/6063/events | https://github.com/huggingface/transformers/pull/6063 | 666,414,016 | MDExOlB1bGxSZXF1ZXN0NDU3MjYyNTE2 | 6,063 | [fix] no warning for position_ids buffer | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6063?src=pr&el=h1) Report\n> Merging [#6063](https://codecov.io/gh/huggingface/transformers/pull/6063?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c8bdf7f4ecd73680cb0751d9efc8fa3a992c2c2d&el=desc) will **increase** coverage by `1.19%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6063?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6063 +/- ##\n==========================================\n+ Coverage 77.39% 78.58% +1.19% \n==========================================\n Files 146 146 \n Lines 26314 26314 \n==========================================\n+ Hits 20366 20680 +314 \n+ Misses 5948 5634 -314 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6063?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6063/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.32% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6063/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `43.98% <0.00%> (-49.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6063/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.77% <0.00%> (+73.38%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6063?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6063?src=pr&el=footer). Last update [c8bdf7f...5de946f](https://codecov.io/gh/huggingface/transformers/pull/6063?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"gunna merge this so that github actions catches it."
] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | Fixes [#6044](https://github.com/huggingface/transformers/issues/6044)
The failing tests check that there are no missing keys for BART, but Morgan's recent change started registering a `position_ids` buffer in `__init__` for four models, with more expected.
Since we do not expect weights to be saved for `position_ids`, this PR adds the pattern `position_ids` to `authorized_missing_keys` for all PyTorch models.
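A minimal sketch of how such a pattern entry behaves (the attribute name is from the PR text; its exact placement and the matching logic shown here are assumptions for illustration):

```python
import re

# hypothetical excerpt: regex patterns matched against state-dict keys that are
# allowed to be absent when loading pretrained weights
authorized_missing_keys = [r"position_ids"]

missing_keys = ["bert.embeddings.position_ids"]
missing_keys = [k for k in missing_keys if not any(re.search(p, k) for p in authorized_missing_keys)]
assert missing_keys == []  # the position_ids buffer no longer triggers a warning
```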
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6063/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6063/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6063",
"html_url": "https://github.com/huggingface/transformers/pull/6063",
"diff_url": "https://github.com/huggingface/transformers/pull/6063.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6063.patch",
"merged_at": 1595894444000
} |
https://api.github.com/repos/huggingface/transformers/issues/6062 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6062/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6062/comments | https://api.github.com/repos/huggingface/transformers/issues/6062/events | https://github.com/huggingface/transformers/pull/6062 | 666,393,830 | MDExOlB1bGxSZXF1ZXN0NDU3MjQ1ODgw | 6,062 | Add new AutoModel classes in pipeline | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6062?src=pr&el=h1) Report\n> Merging [#6062](https://codecov.io/gh/huggingface/transformers/pull/6062?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f7f03b22dc15543317635770f312adf4513303d0&el=desc) will **increase** coverage by `0.17%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6062?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6062 +/- ##\n==========================================\n+ Coverage 78.50% 78.67% +0.17% \n==========================================\n Files 146 146 \n Lines 26251 26251 \n==========================================\n+ Hits 20609 20654 +45 \n+ Misses 5642 5597 -45 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6062?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6062/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `77.00% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6062/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.90% <0.00%> (-33.90%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6062/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.95% <0.00%> (-1.51%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6062/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `81.19% <0.00%> (-0.30%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6062/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.82% <0.00%> (-0.29%)` | :arrow_down: |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6062/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `78.48% <0.00%> (+4.06%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6062/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.53% <0.00%> (+69.51%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6062?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6062?src=pr&el=footer). Last update [f7f03b2...6385c16](https://codecov.io/gh/huggingface/transformers/pull/6062?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,595 | 1,595 | 1,595 | MEMBER | null | This PR removes the about to be deprecated `AutModelWithLMHead` class and uses the new `AutoModelForSeq2SeqLM`, `AutoModelForCausalLM` and `AutoModelForMaskedLM` for `translation`, `text-generation` and `fill-mask` pipelines respectively.
Regarding issue #6060
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6062/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6062/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6062",
"html_url": "https://github.com/huggingface/transformers/pull/6062",
"diff_url": "https://github.com/huggingface/transformers/pull/6062.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6062.patch",
"merged_at": 1595865009000
} |
https://api.github.com/repos/huggingface/transformers/issues/6061 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6061/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6061/comments | https://api.github.com/repos/huggingface/transformers/issues/6061/events | https://github.com/huggingface/transformers/pull/6061 | 666,391,335 | MDExOlB1bGxSZXF1ZXN0NDU3MjQzODM1 | 6,061 | Pipelines should use tuples instead of namedtuples | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,595 | 1,595 | 1,595 | MEMBER | null | Fix https://github.com/huggingface/transformers/issues/5713
The ONNX conversion cannot handle objects other than tuples, lists, and variables. Since calling the conversion script via the command line uses a pipeline, and pipelines cannot be configured with specific model kwargs, this change makes pipelines manage tuples instead of namedtuples (not a breaking change). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6061/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6061/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6061",
"html_url": "https://github.com/huggingface/transformers/pull/6061",
"diff_url": "https://github.com/huggingface/transformers/pull/6061.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6061.patch",
"merged_at": 1595920471000
} |
https://api.github.com/repos/huggingface/transformers/issues/6060 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6060/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6060/comments | https://api.github.com/repos/huggingface/transformers/issues/6060/events | https://github.com/huggingface/transformers/issues/6060 | 666,349,698 | MDU6SXNzdWU2NjYzNDk2OTg= | 6,060 | new AutoModel classes in pipeline | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,595 | 1,595 | 1,595 | MEMBER | null | The `translation`, `text-generation` and `fill-mask` pipelines still use the `AutoModelWithLMHead` class but from the warning it seems that it's going to be deprecated. Is there any specific reason for this or can we use the new `AutoModelForSeq2SeqLM` for translation, `AutoModelCausalLM` for text-generation and `AutoModelForMaskedLM` for fill-mask pipeline ? If yes then I'll be happy to open a PR.
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6060/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6060/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6059 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6059/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6059/comments | https://api.github.com/repos/huggingface/transformers/issues/6059/events | https://github.com/huggingface/transformers/issues/6059 | 666,342,225 | MDU6SXNzdWU2NjYzNDIyMjU= | 6,059 | Errors while using TFAutoModelForMultipleChoice and TFTrainer on winogrande dataset | {
"login": "QixinLi",
"id": 25460447,
"node_id": "MDQ6VXNlcjI1NDYwNDQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/25460447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/QixinLi",
"html_url": "https://github.com/QixinLi",
"followers_url": "https://api.github.com/users/QixinLi/followers",
"following_url": "https://api.github.com/users/QixinLi/following{/other_user}",
"gists_url": "https://api.github.com/users/QixinLi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/QixinLi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/QixinLi/subscriptions",
"organizations_url": "https://api.github.com/users/QixinLi/orgs",
"repos_url": "https://api.github.com/users/QixinLi/repos",
"events_url": "https://api.github.com/users/QixinLi/events{/privacy}",
"received_events_url": "https://api.github.com/users/QixinLi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,595 | 1,601 | 1,601 | NONE | null | # ❓ Questions & Help
I'm using `TFAutoModelForMultipleChoice` and `TFTrainer` to train on the winogrande dataset.
The dataset is from [here](https://leaderboard.allenai.org/winogrande/submissions/get-started)
I got an error when starting training.
```python
Traceback (most recent call last):
  File "run_wsc.py", line 233, in <module>
    main()
  File "run_wsc.py", line 208, in main
    trainer.train()
  File "/Users/qixinli/anaconda/envs/a2/lib/python3.7/site-packages/transformers/trainer_tf.py", line 358, in train
    for step, training_loss in enumerate(self._training_steps(train_ds, optimizer)):
  File "/Users/qixinli/anaconda/envs/a2/lib/python3.7/site-packages/transformers/trainer_tf.py", line 401, in _training_steps
    for i, loss in enumerate(self._accumulate_next_gradients(ds)):
  File "/Users/qixinli/anaconda/envs/a2/lib/python3.7/site-packages/transformers/trainer_tf.py", line 434, in _accumulate_next_gradients
    yield _accumulate_next()
  File "/Users/qixinli/anaconda/envs/a2/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py", line 568, in __call__
    result = self._call(*args, **kwds)
  File "/Users/qixinli/anaconda/envs/a2/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py", line 615, in _call
    self._initialize(args, kwds, add_initializers_to=initializers)
  File "/Users/qixinli/anaconda/envs/a2/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py", line 497, in _initialize
    *args, **kwds))
  File "/Users/qixinli/anaconda/envs/a2/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py", line 2389, in _get_concrete_function_internal_garbage_collected
    graph_function, _, _ = self._maybe_define_function(args, kwargs)
  File "/Users/qixinli/anaconda/envs/a2/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py", line 2703, in _maybe_define_function
    graph_function = self._create_graph_function(args, kwargs)
  File "/Users/qixinli/anaconda/envs/a2/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py", line 2593, in _create_graph_function
    capture_by_value=self._capture_by_value),
  File "/Users/qixinli/anaconda/envs/a2/lib/python3.7/site-packages/tensorflow_core/python/framework/func_graph.py", line 978, in func_graph_from_py_func
    func_outputs = python_func(*func_args, **func_kwargs)
  File "/Users/qixinli/anaconda/envs/a2/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py", line 439, in wrapped_fn
    return weak_wrapped_fn().__wrapped__(*args, **kwds)
  File "/Users/qixinli/anaconda/envs/a2/lib/python3.7/site-packages/tensorflow_core/python/framework/func_graph.py", line 968, in wrapper
    raise e.ag_error_metadata.to_exception(e)
TypeError: in converted code:

    /Users/qixinli/anaconda/envs/a2/lib/python3.7/site-packages/transformers/trainer_tf.py:430 _accumulate_next  *
        return self._accumulate_gradients(per_replica_features, per_replica_labels)
    /Users/qixinli/anaconda/envs/a2/lib/python3.7/site-packages/transformers/trainer_tf.py:440 _accumulate_gradients  *
        per_replica_loss = self.args.strategy.experimental_run_v2(
    /Users/qixinli/anaconda/envs/a2/lib/python3.7/site-packages/tensorflow_core/python/distribute/one_device_strategy.py:180 experimental_run_v2
        return super(OneDeviceStrategy, self).experimental_run_v2(fn, args, kwargs)
    /Users/qixinli/anaconda/envs/a2/lib/python3.7/site-packages/transformers/trainer_tf.py:453 _forward  *
        per_example_loss, _ = self._run_model(features, labels, True)
    /Users/qixinli/anaconda/envs/a2/lib/python3.7/site-packages/transformers/trainer_tf.py:474 _run_model  *
        loss, logits = self.model(features, labels=labels, training=training)[:2]
    /Users/qixinli/anaconda/envs/a2/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py:778 __call__
        outputs = call_fn(cast_inputs, *args, **kwargs)
    /Users/qixinli/anaconda/envs/a2/lib/python3.7/site-packages/transformers/modeling_tf_bert.py:1114 call  *
        loss = self.compute_loss(labels, reshaped_logits)
    /Users/qixinli/anaconda/envs/a2/lib/python3.7/site-packages/transformers/modeling_tf_utils.py:134 compute_loss  *
        if shape_list(logits)[1] == 1:
    /Users/qixinli/anaconda/envs/a2/lib/python3.7/site-packages/tensorflow_core/python/autograph/operators/control_flow.py:918 if_stmt
        basic_symbol_names, composite_symbol_names)
    /Users/qixinli/anaconda/envs/a2/lib/python3.7/site-packages/tensorflow_core/python/autograph/operators/control_flow.py:956 tf_if_stmt
        error_checking_orelse)
    /Users/qixinli/anaconda/envs/a2/lib/python3.7/site-packages/tensorflow_core/python/util/deprecation.py:507 new_func
        return func(*args, **kwargs)
    /Users/qixinli/anaconda/envs/a2/lib/python3.7/site-packages/tensorflow_core/python/ops/control_flow_ops.py:1174 cond
        return cond_v2.cond_v2(pred, true_fn, false_fn, name)
    /Users/qixinli/anaconda/envs/a2/lib/python3.7/site-packages/tensorflow_core/python/ops/cond_v2.py:83 cond_v2
        op_return_value=pred)
    /Users/qixinli/anaconda/envs/a2/lib/python3.7/site-packages/tensorflow_core/python/framework/func_graph.py:983 func_graph_from_py_func
        expand_composites=True)
    /Users/qixinli/anaconda/envs/a2/lib/python3.7/site-packages/tensorflow_core/python/util/nest.py:568 map_structure
        structure[0], [func(*x) for x in entries],
    /Users/qixinli/anaconda/envs/a2/lib/python3.7/site-packages/tensorflow_core/python/util/nest.py:568 <listcomp>
        structure[0], [func(*x) for x in entries],
    /Users/qixinli/anaconda/envs/a2/lib/python3.7/site-packages/tensorflow_core/python/framework/func_graph.py:943 convert
        (str(python_func), type(x)))

    TypeError: To be compatible with tf.contrib.eager.defun, Python functions must return zero or more Tensors; in compilation of <function tf_if_stmt.<locals>.error_checking_body at 0x158e993b0>, found return value of type <class 'tensorflow.python.keras.losses.MeanSquaredError'>, which is not a Tensor.
```
## Details
```python
import logging
import os
from dataclasses import dataclass, field
from enum import Enum
from typing import Dict, Optional

import numpy as np
import tensorflow as tf

from transformers import (
    AutoConfig,
    AutoTokenizer,
    EvalPrediction,
    HfArgumentParser,
    PreTrainedTokenizer,
    TFTrainer,
    TFTrainingArguments,
    TFAutoModelForMultipleChoice,
)
from wsc_processor import (
    WinograndeProcessor,
    convert_multiple_choice_examples_to_features,
    compute_metrics,
)


class Split(Enum):
    train = "train"
    dev = "validation"
    test = "test"


def get_wsc_ds(
    tokenizer: PreTrainedTokenizer, max_seq_length: Optional[int] = None, mode: Split = Split.train, data_dir: str = None
):
    processor = WinograndeProcessor()
    if mode == Split.train:
        examples = processor.get_train_examples(data_dir)
    elif mode == Split.dev:
        examples = processor.get_dev_examples(data_dir)
    else:
        examples = processor.get_test_examples(data_dir)
    features = convert_multiple_choice_examples_to_features(
        examples,
        label_list=processor.get_labels(),
        tokenizer=tokenizer,
        max_seq_length=max_seq_length,
        output_mode="multiple_choice",
    )

    def gen():
        for ex in features:
            inputs = []
            masks = []
            segments = []
            for feature in ex.option_features:
                inputs.append(feature["input_ids"])
                masks.append(feature["input_mask"])
                segments.append(feature["segment_ids"])
            yield (
                {
                    "input_ids": inputs,
                    "attention_mask": masks,
                    "token_type_ids": segments,
                },
                ex.label,
            )

    ds = tf.data.Dataset.from_generator(
        gen,
        ({"input_ids": tf.int32, "attention_mask": tf.int32, "token_type_ids": tf.int32}, tf.int64),
        (
            {
                "input_ids": tf.TensorShape([None, None]),
                "attention_mask": tf.TensorShape([None, None]),
                "token_type_ids": tf.TensorShape([None, None]),
            },
            tf.TensorShape([]),
        ),
    )
    return ds


logger = logging.getLogger(__name__)


@dataclass
class GlueDataTrainingArguments:
    task_name: str = field(default="wsc")
    data_dir: str = field(default="./data/WSC")
    max_seq_length: int = field(
        default=128,
        metadata={
            "help": "The maximum total input sequence length after tokenization. Sequences longer "
            "than this will be truncated, sequences shorter will be padded."
        },
    )
    overwrite_cache: bool = field(
        default=False, metadata={"help": "Overwrite the cached training and evaluation sets"}
    )


@dataclass
class ModelArguments:
    model_name_or_path: str = field(
        default="roberta-base",
        metadata={"help": "Path to pretrained model or model identifier from huggingface.co/models"},
    )
    config_name: Optional[str] = field(
        default=None, metadata={"help": "Pretrained config name or path if not the same as model_name"}
    )
    tokenizer_name: Optional[str] = field(
        default=None, metadata={"help": "Pretrained tokenizer name or path if not the same as model_name"}
    )
    use_fast: bool = field(default=False, metadata={"help": "Set this flag to use fast tokenization."})
    cache_dir: Optional[str] = field(
        default="./model/", metadata={"help": "Where do you want to store the pretrained models downloaded from s3"}
    )


def main():
    parser = HfArgumentParser((ModelArguments, GlueDataTrainingArguments, TFTrainingArguments))
    model_args, data_args, training_args = parser.parse_args_into_dataclasses()

    if (
        os.path.exists(training_args.output_dir)
        and os.listdir(training_args.output_dir)
        and training_args.do_train
        and not training_args.overwrite_output_dir
    ):
        raise ValueError(
            f"Output directory ({training_args.output_dir}) already exists and is not empty. Use --overwrite_output_dir to overcome."
        )

    # Setup logging
    logging.basicConfig(
        format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
        datefmt="%m/%d/%Y %H:%M:%S",
        level=logging.INFO,
    )
    logger.info("Training/evaluation parameters %s", training_args)

    output_mode = "classification"

    config = AutoConfig.from_pretrained(
        model_args.config_name if model_args.config_name else model_args.model_name_or_path,
        # num_labels=1,
        finetuning_task=data_args.task_name,
        cache_dir=model_args.cache_dir,
    )
    tokenizer = AutoTokenizer.from_pretrained(
        model_args.tokenizer_name if model_args.tokenizer_name else model_args.model_name_or_path,
        cache_dir=model_args.cache_dir,
    )
    # with training_args.strategy.scope():
    model = TFAutoModelForMultipleChoice.from_pretrained(
        model_args.model_name_or_path,
        from_pt=bool(".bin" in model_args.model_name_or_path),
        config=config,
        cache_dir=model_args.cache_dir,
    )

    # Get datasets
    train_dataset = (
        get_wsc_ds(tokenizer=tokenizer, max_seq_length=data_args.max_seq_length, data_dir=data_args.data_dir)
        if training_args.do_train
        else None
    )
    eval_dataset = (
        get_wsc_ds(tokenizer=tokenizer, max_seq_length=data_args.max_seq_length, mode=Split.dev, data_dir=data_args.data_dir)
        if training_args.do_eval
        else None
    )

    def metric(p: EvalPrediction) -> Dict:
        if output_mode == "classification":
            preds = np.argmax(p.predictions, axis=1)
        else:
            preds = np.squeeze(p.predictions)
        return compute_metrics("winogrande", preds, p.label_ids)

    # Initialize our Trainer
    trainer = TFTrainer(
        model=model,
        args=training_args,
        train_dataset=train_dataset,
        eval_dataset=eval_dataset,
        compute_metrics=metric,
    )

    # Training
    if training_args.do_train:
        trainer.train()
        trainer.save_model()
        tokenizer.save_pretrained(training_args.output_dir)


if __name__ == "__main__":
    main()
```
The `wsc_processor` module is from [here](https://github.com/allenai/winogrande/blob/master/scripts/utils.py).
I wonder what is wrong in the code (is my `get_wsc_ds()` method at fault?) and how to fix it.
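For reference, a minimal sketch of the per-batch shapes a TF multiple-choice head generally expects (all concrete values below are assumptions for illustration, not taken from the script above):

```python
import tensorflow as tf

batch_size, num_choices, max_seq_length = 2, 2, 128
features = {
    "input_ids": tf.zeros((batch_size, num_choices, max_seq_length), dtype=tf.int32),
    "attention_mask": tf.ones((batch_size, num_choices, max_seq_length), dtype=tf.int32),
    "token_type_ids": tf.zeros((batch_size, num_choices, max_seq_length), dtype=tf.int32),
}
labels = tf.constant([0, 1], dtype=tf.int64)  # one label index per example
```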
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6059/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6059/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6058 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6058/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6058/comments | https://api.github.com/repos/huggingface/transformers/issues/6058/events | https://github.com/huggingface/transformers/pull/6058 | 666,305,702 | MDExOlB1bGxSZXF1ZXN0NDU3MTczMDUz | 6,058 | Create README.md | {
"login": "Drisya-Ponmari",
"id": 42796375,
"node_id": "MDQ6VXNlcjQyNzk2Mzc1",
"avatar_url": "https://avatars.githubusercontent.com/u/42796375?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Drisya-Ponmari",
"html_url": "https://github.com/Drisya-Ponmari",
"followers_url": "https://api.github.com/users/Drisya-Ponmari/followers",
"following_url": "https://api.github.com/users/Drisya-Ponmari/following{/other_user}",
"gists_url": "https://api.github.com/users/Drisya-Ponmari/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Drisya-Ponmari/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Drisya-Ponmari/subscriptions",
"organizations_url": "https://api.github.com/users/Drisya-Ponmari/orgs",
"repos_url": "https://api.github.com/users/Drisya-Ponmari/repos",
"events_url": "https://api.github.com/users/Drisya-Ponmari/events{/privacy}",
"received_events_url": "https://api.github.com/users/Drisya-Ponmari/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,595 | 1,595 | 1,595 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6058/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6058/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6058",
"html_url": "https://github.com/huggingface/transformers/pull/6058",
"diff_url": "https://github.com/huggingface/transformers/pull/6058.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6058.patch",
"merged_at": null
} |
|
https://api.github.com/repos/huggingface/transformers/issues/6057 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6057/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6057/comments | https://api.github.com/repos/huggingface/transformers/issues/6057/events | https://github.com/huggingface/transformers/issues/6057 | 666,191,254 | MDU6SXNzdWU2NjYxOTEyNTQ= | 6,057 | Transformer layers + Functional API is failed but subclass is successful | {
"login": "Douboo",
"id": 32014271,
"node_id": "MDQ6VXNlcjMyMDE0Mjcx",
"avatar_url": "https://avatars.githubusercontent.com/u/32014271?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Douboo",
"html_url": "https://github.com/Douboo",
"followers_url": "https://api.github.com/users/Douboo/followers",
"following_url": "https://api.github.com/users/Douboo/following{/other_user}",
"gists_url": "https://api.github.com/users/Douboo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Douboo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Douboo/subscriptions",
"organizations_url": "https://api.github.com/users/Douboo/orgs",
"repos_url": "https://api.github.com/users/Douboo/repos",
"events_url": "https://api.github.com/users/Douboo/events{/privacy}",
"received_events_url": "https://api.github.com/users/Douboo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Don't use transformers in tensorflow when use functional API and use pre-training vector. Help!!!",
"Might be of interest to @jplu ",
"> Might be of interest to @jplu\r\n\r\nThanks! Finally someone replied to me. I am a beginner.",
"Hello!\r\n\r\nYou are not creating you input properly, `inputs` cannot be `None`. This piece of code should work:\r\n```\r\nimport tensorflow as tf\r\nfrom transformers import TFBertMainLayer, BertConfig\r\n\r\nconfig = BertConfig.from_pretrained(\"bert-base-cased\")\r\nbert = TFBertMainLayer(config)\r\ninputs = tf.keras.layers.Input(shape=(None,), name='input_embeds', dtype='int32')\r\nseq_emb = bert(inputs)[0]\r\nlast_token_emb = seq_emb[:, -1, :]\r\noutputs = tf.keras.layers.Dense(1, activation='sigmoid')(last_token_emb)\r\nmodel = tf.keras.Model(inputs=inputs, outputs=outputs)\r\n```",
"Ok after a deeper investigation, this partially solve the issue, it might have a bug in the way the model handles the Keras symbolic tensors, I need to do more tests to be sure but we certainly need to review this on our side.",
"Ok I found a workaround for you:\r\n\r\n```\r\nimport tensorflow as tf\r\nfrom transformers import TFBertMainLayer, BertConfig\r\n\r\nconfig = BertConfig.from_pretrained(\"bert-base-cased\")\r\nbert = TFBertMainLayer(config)\r\ninputs_embeds = tf.keras.layers.Input(shape=(None,768), name='inputs_embeds', dtype='float32')\r\ninputs = {\"input_embeds\": inputs_embeds}\r\nseq_emb = bert(inputs)[0]\r\nlast_token_emb = seq_emb[:, -1, :]\r\noutputs = tf.keras.layers.Dense(1, activation='sigmoid')(last_token_emb)\r\nmodel = tf.keras.Model(inputs=inputs, outputs=[outputs])\r\n```",
"> Ok I found a workaround for you:\r\n> \r\n> ```\r\n> import tensorflow as tf\r\n> from transformers import TFBertMainLayer, BertConfig\r\n> \r\n> config = BertConfig.from_pretrained(\"bert-base-cased\")\r\n> bert = TFBertMainLayer(config)\r\n> inputs_embeds = tf.keras.layers.Input(shape=(None,768), name='inputs_embeds', dtype='float32')\r\n> inputs = {\"input_embeds\": inputs_embeds}\r\n> seq_emb = bert(inputs)[0]\r\n> last_token_emb = seq_emb[:, -1, :]\r\n> outputs = tf.keras.layers.Dense(1, activation='sigmoid')(last_token_emb)\r\n> model = tf.keras.Model(inputs=inputs, outputs=[outputs])\r\n> ```\r\n\r\nThank you very much! This method successfully solved my problem. Transformers is very awesome project. Thank you again! @jplu @LysandreJik "
] | 1,595 | 1,600 | 1,600 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert):
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
**successful**
```python
# define model via subclassing
class MyModel(tf.keras.Model):
    def __init__(self, item_dim, num_layers, num_heads, max_len, **kwargs):
        super(MyModel, self).__init__(**kwargs)
        self.item_dim = item_dim
        self.num_layers = num_layers
        self.num_heads = num_heads
        self.max_len = max_len
        self.config = BertConfig(hidden_size=item_dim, num_hidden_layers=num_layers,
                                 num_attention_heads=num_heads, intermediate_size=item_dim * 4,
                                 max_position_embeddings=max_len)
        self.bert = TFBertMainLayer(config=self.config)
        self.dense = Dense(1, activation='sigmoid')

    def call(self, inputs):
        seq_emb = self.bert(inputs=None, inputs_embeds=inputs)[0]
        last_token_emb = seq_emb[:, -1, :]
        outputs = self.dense(last_token_emb)
        return outputs
```
**failed**
```python
def build_model(item_dim, num_layers, num_heads, max_len):
    config = BertConfig(hidden_size=item_dim, num_hidden_layers=num_layers,
                        num_attention_heads=num_heads, intermediate_size=item_dim * 4,
                        max_position_embeddings=max_len)
    bert = TFBertMainLayer(config=config)
    inputs = Input(shape=(max_len, item_dim), dtype=tf.float32, name='inputs')
    # pre-training vectors to bert
    seq_emb = bert(inputs=None, inputs_embeds=inputs)[0]
    last_token_emb = seq_emb[:, -1, :]
    outputs = Dense(1, activation='sigmoid')(last_token_emb)
    model = Model(inputs=inputs, outputs=outputs)
    return model
```
Errors with:
`ValueError: It appears you are trying to construct a functional model, but not all of the inputs in the first positional argument of your layer call are symbolic tensors. (Input objects, or the output of another layer) Functional models cannot correctly track custom layers unless all values in the first call argument are symbolic.`
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
* Run the code successfully when using transformer layers with the functional API.
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
https://colab.research.google.com/gist/Douboo/80dfa91917c176c530ef64be244a99a6/untitled2.ipynb
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.1.0
- Platform: google colab
- Python version: 3.7
- Tensorflow version (GPU?): 2.3.0
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6057/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6057/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6056 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6056/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6056/comments | https://api.github.com/repos/huggingface/transformers/issues/6056/events | https://github.com/huggingface/transformers/pull/6056 | 666,151,027 | MDExOlB1bGxSZXF1ZXN0NDU3MDQ0NDE0 | 6,056 | Empty assert hunt | {
"login": "TevenLeScao",
"id": 26709476,
"node_id": "MDQ6VXNlcjI2NzA5NDc2",
"avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TevenLeScao",
"html_url": "https://github.com/TevenLeScao",
"followers_url": "https://api.github.com/users/TevenLeScao/followers",
"following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}",
"gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions",
"organizations_url": "https://api.github.com/users/TevenLeScao/orgs",
"repos_url": "https://api.github.com/users/TevenLeScao/repos",
"events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}",
"received_events_url": "https://api.github.com/users/TevenLeScao/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6056?src=pr&el=h1) Report\n> Merging [#6056](https://codecov.io/gh/huggingface/transformers/pull/6056?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/12f14710cea72a2104ff698762a8fc68a5dc0a0b&el=desc) will **decrease** coverage by `0.35%`.\n> The diff coverage is `22.91%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6056?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6056 +/- ##\n==========================================\n- Coverage 78.82% 78.47% -0.36% \n==========================================\n Files 146 146 \n Lines 26200 26204 +4 \n==========================================\n- Hits 20653 20564 -89 \n- Misses 5547 5640 +93 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6056?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/commands/train.py](https://codecov.io/gh/huggingface/transformers/pull/6056/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy90cmFpbi5weQ==) | `0.00% <ø> (ø)` | |\n| [src/transformers/data/metrics/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/6056/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL21ldHJpY3MvX19pbml0X18ucHk=) | `26.66% <0.00%> (ø)` | |\n| [src/transformers/data/metrics/squad\\_metrics.py](https://codecov.io/gh/huggingface/transformers/pull/6056/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL21ldHJpY3Mvc3F1YWRfbWV0cmljcy5weQ==) | `0.00% <0.00%> (ø)` | |\n| [src/transformers/data/processors/utils.py](https://codecov.io/gh/huggingface/transformers/pull/6056/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvdXRpbHMucHk=) | `27.63% <0.00%> (ø)` | |\n| [src/transformers/data/processors/xnli.py](https://codecov.io/gh/huggingface/transformers/pull/6056/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMveG5saS5weQ==) | `27.08% <0.00%> (-2.47%)` | :arrow_down: |\n| [src/transformers/modeling\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6056/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `82.04% <0.00%> (ø)` | |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6056/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.39% <0.00%> (ø)` | |\n| [src/transformers/modeling\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6056/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbGVjdHJhLnB5) | `81.55% <0.00%> (ø)` | |\n| [src/transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6056/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `85.88% <0.00%> (ø)` | |\n| [src/transformers/modeling\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6056/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `89.45% <0.00%> (ø)` | |\n| ... and [17 more](https://codecov.io/gh/huggingface/transformers/pull/6056/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6056?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6056?src=pr&el=footer). 
Last update [12f1471...09c7880](https://codecov.io/gh/huggingface/transformers/pull/6056?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"I'm in love 😍 😍. Thanks @TevenLeScao !",
"@LysandreJik can I merge this?"
] | 1,595 | 1,596 | 1,596 | CONTRIBUTOR | null | Empty asserts are bad for debugging. I tried to remove them all and to add helpful PyTorch-style messages: the shapes of the corresponding objects for mismatched-length checks (e.g., turning `assert len(a) == len(b)` into `assert len(a) == len(b), f"{len(a)} != {len(b)}"`), plus the file paths for file-not-found checks. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6056/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6056/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6056",
"html_url": "https://github.com/huggingface/transformers/pull/6056",
"diff_url": "https://github.com/huggingface/transformers/pull/6056.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6056.patch",
"merged_at": 1596442744000
} |
https://api.github.com/repos/huggingface/transformers/issues/6055 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6055/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6055/comments | https://api.github.com/repos/huggingface/transformers/issues/6055/events | https://github.com/huggingface/transformers/pull/6055 | 666,139,248 | MDExOlB1bGxSZXF1ZXN0NDU3MDM0NjEw | 6,055 | add another e.g. to avoid confusion | {
"login": "orena1",
"id": 8983713,
"node_id": "MDQ6VXNlcjg5ODM3MTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/8983713?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/orena1",
"html_url": "https://github.com/orena1",
"followers_url": "https://api.github.com/users/orena1/followers",
"following_url": "https://api.github.com/users/orena1/following{/other_user}",
"gists_url": "https://api.github.com/users/orena1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/orena1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/orena1/subscriptions",
"organizations_url": "https://api.github.com/users/orena1/orgs",
"repos_url": "https://api.github.com/users/orena1/repos",
"events_url": "https://api.github.com/users/orena1/events{/privacy}",
"received_events_url": "https://api.github.com/users/orena1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6055?src=pr&el=h1) Report\n> Merging [#6055](https://codecov.io/gh/huggingface/transformers/pull/6055?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b9b11795cfdce7bb8dd8a01ec5efa602589a78b2&el=desc) will **increase** coverage by `0.11%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6055?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6055 +/- ##\n==========================================\n+ Coverage 78.26% 78.37% +0.11% \n==========================================\n Files 146 146 \n Lines 26253 26253 \n==========================================\n+ Hits 20546 20576 +30 \n+ Misses 5707 5677 -30 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6055?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6055/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `89.30% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6055/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `43.98% <0.00%> (-49.38%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6055/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.96% <0.00%> (+2.00%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6055/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+33.89%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6055?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6055?src=pr&el=footer). Last update [b9b1179...7264c50](https://codecov.io/gh/huggingface/transformers/pull/6055?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,595 | 1,596 | 1,596 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6055/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6055/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6055",
"html_url": "https://github.com/huggingface/transformers/pull/6055",
"diff_url": "https://github.com/huggingface/transformers/pull/6055.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6055.patch",
"merged_at": 1596113615000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/6054 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6054/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6054/comments | https://api.github.com/repos/huggingface/transformers/issues/6054/events | https://github.com/huggingface/transformers/issues/6054 | 666,112,127 | MDU6SXNzdWU2NjYxMTIxMjc= | 6,054 | Errors while creating a subclass of BertForTokenClassification in run_ner.py file | {
"login": "vikas95",
"id": 25675079,
"node_id": "MDQ6VXNlcjI1Njc1MDc5",
"avatar_url": "https://avatars.githubusercontent.com/u/25675079?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vikas95",
"html_url": "https://github.com/vikas95",
"followers_url": "https://api.github.com/users/vikas95/followers",
"following_url": "https://api.github.com/users/vikas95/following{/other_user}",
"gists_url": "https://api.github.com/users/vikas95/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vikas95/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vikas95/subscriptions",
"organizations_url": "https://api.github.com/users/vikas95/orgs",
"repos_url": "https://api.github.com/users/vikas95/repos",
"events_url": "https://api.github.com/users/vikas95/events{/privacy}",
"received_events_url": "https://api.github.com/users/vikas95/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,595 | 1,601 | 1,601 | NONE | null |
Question 1) I want to skip calling AutoModelForTokenClassification ( https://github.com/huggingface/transformers/blob/f7f03b22dc15543317635770f312adf4513303d0/examples/token-classification/run_ner.py#L158 ) and directly use BertForTokenClassification. When I use BertForTokenClassification directly in run_ner.py with the same parameters that were fed to AutoModelForTokenClassification, it does not load the weights from the pretrained BERT checkpoint.
Can you suggest what changes I have to make to the input parameters (most likely in the `config`) so that it loads the weights from the pretrained BERT checkpoint?
Question 2) I also tried to create a subclass of BertForTokenClassification and call it within AutoModelForTokenClassification. In this case, it throws `AttributeError: 'BertConfig' object has no attribute 'use_return_tuple'`.
Can anyone suggest a fix for this error, which occurs when simply using a subclass of BertForTokenClassification instead of the original BertForTokenClassification class? Note that the subclass is exactly the same as BertForTokenClassification.
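For context, a minimal sketch of the subclassing pattern in question (`MyTokenClassifier`, the checkpoint name, and the label count are hypothetical):

```python
from transformers import BertForTokenClassification

class MyTokenClassifier(BertForTokenClassification):
    pass  # body identical to the parent for this reproduction

# calling from_pretrained on the subclass should load the pretrained weights
model = MyTokenClassifier.from_pretrained("bert-base-cased", num_labels=9)
```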
Thanks.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6054/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6054/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6053 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6053/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6053/comments | https://api.github.com/repos/huggingface/transformers/issues/6053/events | https://github.com/huggingface/transformers/issues/6053 | 666,110,069 | MDU6SXNzdWU2NjYxMTAwNjk= | 6,053 | Errors when creating a subclass of "BertForTokenClassification" | {
"login": "vikas95",
"id": 25675079,
"node_id": "MDQ6VXNlcjI1Njc1MDc5",
"avatar_url": "https://avatars.githubusercontent.com/u/25675079?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vikas95",
"html_url": "https://github.com/vikas95",
"followers_url": "https://api.github.com/users/vikas95/followers",
"following_url": "https://api.github.com/users/vikas95/following{/other_user}",
"gists_url": "https://api.github.com/users/vikas95/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vikas95/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vikas95/subscriptions",
"organizations_url": "https://api.github.com/users/vikas95/orgs",
"repos_url": "https://api.github.com/users/vikas95/repos",
"events_url": "https://api.github.com/users/vikas95/events{/privacy}",
"received_events_url": "https://api.github.com/users/vikas95/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,595 | 1,595 | 1,595 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6053/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6053/timeline | completed | null | null |
|
https://api.github.com/repos/huggingface/transformers/issues/6052 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6052/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6052/comments | https://api.github.com/repos/huggingface/transformers/issues/6052/events | https://github.com/huggingface/transformers/issues/6052 | 665,980,923 | MDU6SXNzdWU2NjU5ODA5MjM= | 6,052 | add gpt2 padding for tflite | {
"login": "gyin94",
"id": 67664443,
"node_id": "MDQ6VXNlcjY3NjY0NDQz",
"avatar_url": "https://avatars.githubusercontent.com/u/67664443?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gyin94",
"html_url": "https://github.com/gyin94",
"followers_url": "https://api.github.com/users/gyin94/followers",
"following_url": "https://api.github.com/users/gyin94/following{/other_user}",
"gists_url": "https://api.github.com/users/gyin94/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gyin94/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gyin94/subscriptions",
"organizations_url": "https://api.github.com/users/gyin94/orgs",
"repos_url": "https://api.github.com/users/gyin94/repos",
"events_url": "https://api.github.com/users/gyin94/events{/privacy}",
"received_events_url": "https://api.github.com/users/gyin94/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"the latest TF2.3 and tflite can support dynamic input now. Close this."
] | 1,595 | 1,596 | 1,596 | NONE | null | # ❓ Questions & Help
Since TFLite requires a fixed input length, how can we correctly pad GPT-2 inputs to a fixed length so that the padding doesn't affect the final result?
```python
import tensorflow as tf
from transformers import *

gpt2_model = TFGPT2LMHeadModel.from_pretrained('distilgpt2')


class WrapModel(tf.keras.models.Model):
    def __init__(self, transformer):
        super(WrapModel, self).__init__()
        self.transformer = transformer

    def call(self, input_ids, **kwargs):
        inputs = {"inputs": input_ids}
        outputs = self.transformer(**inputs)
        next_token_logits = outputs[0][:, -1, :]
        # Greedy decoding
        next_token = tf.math.argmax(next_token_logits, axis=-1, output_type=tf.int32)
        return {"decoded_ids": next_token}


w = WrapModel(gpt2_model)
input_layer = tf.keras.layers.Input(shape=(1, 10), dtype=tf.int32, name='input_ids')
prediction_model = w(input_layer)
tf_model = tf.keras.models.Model(inputs=input_layer, outputs=prediction_model)

export_dir = "./gpt2_tmp/"
tf.keras.models.save_model(tf_model, export_dir)

saved_model_dir = "./gpt2_tmp/"
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.experimental_new_converter = True
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_quant_model = converter.convert()
```
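One hedged sketch of what fixed-length padding with a matching `attention_mask` could look like (the token ids are made up, and GPT-2 has no dedicated pad token, so padding with id 0 is an assumption):

```python
import tensorflow as tf

max_len = 10
real_ids = [15496, 995, 11]  # hypothetical token ids
pad = max_len - len(real_ids)
input_ids = tf.constant([[0] * pad + real_ids], dtype=tf.int32)                  # left-padded
attention_mask = tf.constant([[0] * pad + [1] * len(real_ids)], dtype=tf.int32)  # 0 = padding
# pass attention_mask alongside input_ids so the padded positions are masked out
```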
Even so, it's not clear to me how we can use `attention_mask` to address this, based on https://github.com/huggingface/transformers/issues/2630 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6052/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6052/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6051 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6051/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6051/comments | https://api.github.com/repos/huggingface/transformers/issues/6051/events | https://github.com/huggingface/transformers/issues/6051 | 665,947,785 | MDU6SXNzdWU2NjU5NDc3ODU= | 6,051 | examples/seq2seq: add a dataloader that supports dynamic batch size | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [] | 1,595 | 1,650 | 1,600 | CONTRIBUTOR | null | Should follow the logic of `load_langpair_dataset` in fairseq, roughly.
Batches should be created such that they include N (default 1024) tokens of source documents, using however many examples that requires.
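A minimal sketch of the token-budget idea (the class name, the `src_lens` input, and the wiring are hypothetical, not part of the existing seq2seq dataset API):
```python
from torch.utils.data import Sampler

class MaxTokensBatchSampler(Sampler):
    """Yield lists of example indices whose summed source length stays
    under ``max_tokens``, in the spirit of fairseq's batch_by_size."""

    def __init__(self, src_lens, max_tokens=1024):
        self.src_lens = list(src_lens)  # source token count per example
        self.max_tokens = max_tokens

    def __iter__(self):
        # sort by length so each batch is densely packed after padding
        order = sorted(range(len(self.src_lens)), key=self.src_lens.__getitem__)
        batch, n_tokens = [], 0
        for idx in order:
            if batch and n_tokens + self.src_lens[idx] > self.max_tokens:
                yield batch
                batch, n_tokens = [], 0
            batch.append(idx)
            n_tokens += self.src_lens[idx]
        if batch:
            yield batch

    def __len__(self):
        return sum(1 for _ in self)
```
Passing it as `DataLoader(dataset, batch_sampler=MaxTokensBatchSampler(src_lens), collate_fn=...)` (assuming the dataset exposes a suitable collate function) would then produce the variable-size batches. | {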
"url": "https://api.github.com/repos/huggingface/transformers/issues/6051/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6051/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6050 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6050/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6050/comments | https://api.github.com/repos/huggingface/transformers/issues/6050/events | https://github.com/huggingface/transformers/issues/6050 | 665,946,711 | MDU6SXNzdWU2NjU5NDY3MTE= | 6,050 | CI: run tests against torch=1.6 | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649053,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Help%20wanted",
"name": "Help wanted",
"color": "008672",
"default": false,
"description": "Extra attention is needed, help appreciated"
},
{
"id": 1834088753,
"node_id": "MDU6TGFiZWwxODM0MDg4NzUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Tests",
"name": "Tests",
"color": "a6fcca",
"default": false,
"description": "Related to tests"
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"we just did!"
] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | Through github or circleci.
If GitHub Actions:
copy `.github/self-scheduled.yml` to `.github/torch_future.yml` and modify the install steps. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6050/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6050/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6049 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6049/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6049/comments | https://api.github.com/repos/huggingface/transformers/issues/6049/events | https://github.com/huggingface/transformers/issues/6049 | 665,946,280 | MDU6SXNzdWU2NjU5NDYyODA= | 6,049 | examples/seq2seq/test_bash_script.py :: actually learn something | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649053,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Help%20wanted",
"name": "Help wanted",
"color": "008672",
"default": false,
"description": "Extra attention is needed, help appreciated"
},
{
"id": 1834088753,
"node_id": "MDU6TGFiZWwxODM0MDg4NzUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Tests",
"name": "Tests",
"color": "a6fcca",
"default": false,
"description": "Related to tests"
}
] | closed | false | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This is still an issue :)",
"This command runs in 3 mins (without including downloads)\r\n\r\n```bash\r\n# export WANDB_PROJECT=dmar\r\nexport MAX_LEN=64\r\npython finetune.py \\\r\n --learning_rate=3e-4 \\\r\n --do_train \\\r\n --do_predict \\\r\n --fp16 \\\r\n --val_check_interval 0.25 --n_train 100000 --n_val 500 --n_test 500 \\\r\n --data_dir wmt_en_ro \\\r\n --max_source_length $MAX_LEN --max_target_length $MAX_LEN --val_max_target_length $MAX_LEN --test_max_target_length $MAX_LEN \\\r\n --freeze_encoder --freeze_embeds \\\r\n --train_batch_size=64 --eval_batch_size=64 \\\r\n --tokenizer_name Helsinki-NLP/opus-mt-en-ro \\\r\n --model_name_or_path sshleifer/mar_enro_6_3_student \\\r\n --warmup_steps 500 --sortish_sampler \\\r\n --gpus 1 --fp16_opt_level=O1 --task translation --num_sanity_val_steps=0 --output_dir dmar_utest_1gpu --num_train_epochs=1 \\\r\n --overwrite_output_dir\r\n```\r\n#### Test results\r\n```bash\r\ncat dmar_utest_1gpu/test_results.txt\r\n```\r\n```\r\nbs: 32.000000\r\nloss: 1.122035\r\nsrc_pad_frac: 0.322266\r\nsrc_pad_tok: 330.000000\r\nstep_count: 5.000000\r\ntest_avg_bleu: 20.660713\r\ntest_avg_gen_len: 63.750000\r\ntest_avg_gen_time: 0.033025\r\ntest_avg_loss: 2.215564\r\ntest_bleu: 20.660713\r\ntest_loss: 2.215564\r\ntpb: 1475.000000\r\nval_avg_bleu: 20.099000\r\nval_avg_gen_len: 63.125000\r\nval_avg_gen_time: 0.031409\r\nval_avg_loss: 2.265883\r\nval_bleu: 20.099001\r\nval_loss: 2.265883\r\n```\r\n\r\nThe validation BLEU also improves over the course of training:\r\n```\r\n cat dmar_utest_1gpu/metrics.json | grep bleu\r\n```\r\n\r\n```\r\n \"val_avg_bleu\": 16.3561625,\r\n \"val_avg_bleu\": 19.0204625,\r\n \"val_avg_bleu\": 19.704875,\r\n \"val_avg_bleu\": 20.099,\r\n \"test_avg_bleu\": 20.660712500000002\r\n```\r\n\r\nSo this would be a good template for the test.\r\n\r\n\r\n### Spec\r\n\r\n+ convert the command into a unit-test (With programatic download of the right amount of data, possibly through via a new s3 `.tgz` file.\r\n+ Replace existing `test_bash_script` marian test.\r\n+ Try to cut n_train/further\r\n+ Anything less than 15 mins is fine, but the faster the better.\r\n+ Minimum learning requirement 1: BLEU improves over the course of training by more than 2 pts\r\n+ Minimum learning requirement 2: BLEU finishes above 17\r\n+ Minimum learning requirement 3: test bleu and val bleu within 1 pt.\r\n\r\n(this command meets all 3 learning requirements).\r\n\r\n\r\n\r\nWdyt @stas00 ?",
"I will work on that, thank you.",
"@sshleifer, could you please validate that this is the [command you run](https://github.com/huggingface/transformers/issues/6049#issuecomment-721782069)? \r\n\r\nI get very different (bad) results:\r\n```\r\nbs: 9.000000\r\nloss: 6.701375\r\nsrc_pad_frac: 0.118056\r\nsrc_pad_tok: 68.000000\r\nstep_count: 2.000000\r\ntest_avg_bleu: 0.021700\r\ntest_avg_gen_len: 512.000000\r\ntest_avg_gen_time: 0.439663\r\ntest_avg_loss: 5.679669\r\ntest_bleu: 0.021700\r\ntest_loss: 5.679669\r\ntpb: 1025.000000\r\nval_avg_bleu: 0.082700\r\nval_avg_gen_len: 512.000000\r\nval_avg_gen_time: 0.483860\r\nval_avg_loss: 5.404536\r\nval_bleu: 0.082700\r\nval_loss: 5.404536\r\n```\r\n\r\nIn the OP you mentioned \" sshleifer/student_marian_6_3\" but here you used \"sshleifer/mar_enro_6_3_student\" - not sure if that's the difference.",
"Also for the second time you use `wmt_en_ro` instead of `test_data/wmt_en_ro` - do you use a different dataset?",
"Your spec on timing would be a small issue, since I get what you said 3min on your hw in 33secs (unoptimized rtx3090), so might have to re-test on CI. But again I'm not sure we are testing against the same dataset, since my results are terrible.",
"Retested with `sshleifer/student_marian_en_ro_6_3` and 5 epochs - still under < 1 bleu - so this is probably an issue of insufficient data and you must be using a different dataset.",
"I am using full dataset (as in README.md) ",
"Ah, that explains it.\r\n\r\nSo run the slow test with the full dataset downloaded at runtime, right? ",
"OK, I was able to reproduce your results with the full dataset, slightly under 3min and slightly better bleu scores. ",
"Not sure if there is a point to it, but 7zip shaves off about 35% in download size (but CI might not have it installed).\r\n```\r\n-rw-rw-r-- 1 stas stas 58M Nov 4 11:40 wmt_en_ro.tar.gz\r\n-rw-rw-r-- 1 stas stas 37M Nov 4 11:39 wmt_en_ro.7z\r\n```\r\n",
"Another way to save download time would be to only zip up 100k (or fewer) training examples, 500 val examples, 500 test examples. Those are all we use given the `--ntrain --nval --ntest` flags.\r\nI would also check whether 10k/25k/50k meet the learning requirements.",
"While trying to match the suggested hparams to the ones in `train_mbart_cc25_enro.sh` I've been wondering - I think I'm missing the point of this whole thing - if the intention is to test a bash script with specific fixed hparams, but the test replaces half of these presets and adds quite a few new ones, how are we testing this script? \r\n\r\n",
"Why do we use \"--foo=bar\" and \"--foo bar\" both seemingly totally at random - half the args are set the first way, the other half the other way.",
"question: do you want this as a new test or modify the existing `test_train_mbart_cc25_enro_script` - I'm trying to do the latter at the moment - perhaps that's why I'm questioning what do we test here.",
"The high level goal originally was to test that the bash scripts we check in work.\r\nI have a secondary goal of making sure the training code is actually good at training models.\r\nI am fine with any setup that accomplishes both of those tasks, with bonus points for enough traceback control that a developer could tell that they have made performance/vs. speed worse or some such.\r\n\r\nAs I slacked, we want a test to detect if we've regressed the training code. For example, if you set dropout=0.95 or freeze all parameters, or set the LR too low, or mess up the special tokens logic, the test should fail. Does that make sense? I didn't test all these for my command line, but it would be nice.\r\n\r\nRelatedly, we just had a 2x slowdown in the `finetune_trainer.py` code that was not detected by unit-tests.\r\n\r\n\r\nI know this is something of a scope expansion, so feel free to break it up/ignore parts as you see fit. I trust you to make reasonable decisions.",
"Thank you for this useful brain dump. Let's take it point by point.\r\n\r\n1. the bash scripts\r\n\r\n If a test rewrites the script's guts before doing the testing should we not just modify those scripts themselves - we want to test that the script works, so we should test it as is, with the only changes allowed in some numerical settings to make tests faster.\r\n\r\n If we want different pre-sets for different purposes - then have a set of scripts rather then do-them-all in one?\r\n\r\n2. Best regression tests are written when an actual regression is discovered because then you know exactly which side of things to put under the \"magnifier glass\". When another regression is discovered down the road a new test should be made that focuses just on that part. Over time one ends up with a great coverage and the test suite becomes strong. Trying to accomplish all of these in one test will over time lose the very specific setups that exposed very specific side of things. It also helps to annotate that this test solves a regression in this git sha, so that it flags to future developers to not try to refactor or slip extra checks or slightly change things in the existing test.\r\n\r\n It's very difficult to think of all the possible things that could regress in the void, but surely it is a great start.\r\n\r\n> Relatedly, we just had a 2x slowdown in the finetune_trainer.py code that was not detected by unit-tests.\r\n\r\nThat's great! Let's find a git sha before and after and write a test that detects that regression.\r\n\r\nI hope this approach makes sense?\r\n",
"Yeah you are right, let me try to isolate the bad commit https://github.com/huggingface/transformers/commits/master/examples/seq2seq\r\n\r\nrelated issue: #8154 ",
"I don't think there was an actual regression, I think my command lines are subtly different.\r\nI still think the current test in the linked PR is more aggressive/better and should be added to the test suite in some form, but I am open to other opinions.",
"**edit**: reusing the same ouput_dir during debug is a terrible idea - it gives total bogus test results - basically remembers the very first run and generates test reports based on it all the subsequent times, ignoring the actual test results. Why is that?\r\n\r\nI am growing to dislike `--overwrite_output_dir` - it's so ambiguous - but I guess it was never meant to be used as a debug flag. \r\n\r\nThis works for debug:\r\n```\r\n if DEBUG:\r\n output_dir = self.get_auto_remove_tmp_dir(\"./xxx\", before=True, after=False)\r\n```\r\n\r\nSo after re-evaluating:\r\n\r\n> Try to cut n_train/further\r\n\r\n40k works.\r\n\r\n25k w/ 2 epochs is almost there, but it's slower, than adding a bit more data, so went with 40k\r\n\r\ngoing with a subset \"tr40k-va0.5k-te0.5k\"",
"Created https://cdn-datasets.huggingface.co/translation/wmt_en_ro-tr40k-va0.5k-te0.5k.tar.gz - hope the name is intuitive - self-documenting. It's just 3.6M (vs 56M original)\r\n\r\nI made it using this script:\r\nhttps://github.com/stas00/porting/blob/master/transformers/translation/make-wmt_en_ro-subset.md\r\n",
"In all these tests where we measure a relatively exact quality metrics - should we use a fixed seed?"
] | 1,595 | 1,604 | 1,604 | CONTRIBUTOR | null | At the moment validation bleu barely gets above zero in the tests, so they don't really prove much about our code.
we could use a larger model like sshleifer/student_marian_6_3, and more data, and train for 10 minutes. This would allow us to test whether changing default parameters/batch techniques obviously degrades performance.
The GitHub Actions CI reuses its own disk, so this will only run there and hopefully not have super slow downloads.
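A rough sketch of what the eventual test could assert; `run_finetune` is a hypothetical wrapper around `finetune.py`, the `metrics.json` layout is assumed from the grep output in the comments, and the thresholds mirror the learning requirements proposed there:
```python
import json

def test_marian_enro_actually_learns(tmp_path):
    output_dir = tmp_path / "marian_utest"
    # run_finetune is a hypothetical helper that invokes finetune.py's main()
    run_finetune(
        model_name_or_path="sshleifer/student_marian_en_ro_6_3",
        data_dir="wmt_en_ro",
        output_dir=str(output_dir),
        num_train_epochs=1,
    )
    metrics = json.loads((output_dir / "metrics.json").read_text())
    val_bleu = [step["val_avg_bleu"] for step in metrics["val"]]
    assert val_bleu[-1] - val_bleu[0] > 2     # BLEU improves during training
    assert val_bleu[-1] > 17                  # and ends above a sane floor
    test_bleu = metrics["test"][-1]["test_avg_bleu"]
    assert abs(test_bleu - val_bleu[-1]) < 1  # val and test roughly agree
```
Anything under 15 minutes of wall time would be acceptable for such a test, per the thread. | {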
"url": "https://api.github.com/repos/huggingface/transformers/issues/6049/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6049/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6048 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6048/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6048/comments | https://api.github.com/repos/huggingface/transformers/issues/6048/events | https://github.com/huggingface/transformers/issues/6048 | 665,945,615 | MDU6SXNzdWU2NjU5NDU2MTU= | 6,048 | examples/seq2seq/test_bash_script.py covers summarization | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649053,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Help%20wanted",
"name": "Help wanted",
"color": "008672",
"default": false,
"description": "Extra attention is needed, help appreciated"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834088753,
"node_id": "MDU6TGFiZWwxODM0MDg4NzUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Tests",
"name": "Tests",
"color": "a6fcca",
"default": false,
"description": "Related to tests"
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hey @sshleifer, I would like to take this issue, if there no objection",
"Absolutely!\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,595 | 1,602 | 1,602 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6048/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6048/timeline | completed | null | null |
|
https://api.github.com/repos/huggingface/transformers/issues/6047 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6047/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6047/comments | https://api.github.com/repos/huggingface/transformers/issues/6047/events | https://github.com/huggingface/transformers/issues/6047 | 665,941,613 | MDU6SXNzdWU2NjU5NDE2MTM= | 6,047 | Feed to forward new parameters as computed manually by update rule | {
"login": "meryemmhamdi1",
"id": 11432288,
"node_id": "MDQ6VXNlcjExNDMyMjg4",
"avatar_url": "https://avatars.githubusercontent.com/u/11432288?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meryemmhamdi1",
"html_url": "https://github.com/meryemmhamdi1",
"followers_url": "https://api.github.com/users/meryemmhamdi1/followers",
"following_url": "https://api.github.com/users/meryemmhamdi1/following{/other_user}",
"gists_url": "https://api.github.com/users/meryemmhamdi1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/meryemmhamdi1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meryemmhamdi1/subscriptions",
"organizations_url": "https://api.github.com/users/meryemmhamdi1/orgs",
"repos_url": "https://api.github.com/users/meryemmhamdi1/repos",
"events_url": "https://api.github.com/users/meryemmhamdi1/events{/privacy}",
"received_events_url": "https://api.github.com/users/meryemmhamdi1/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,595 | 1,602 | 1,602 | NONE | null | Hi,
I would like to update the parameters of BertModel manually during training, without using an automatic optimizer to do it. I am trying meta-learning on BertModel, and if I use automatic optimization on both the inner and outer loops I get in-place operation errors. I also tried deep-copying, but I cannot use deepcopy with DataParallel. So I have opted for configuring my own manual optimization mechanism. How can I manually feed into the forward pass of BertModel the new parameters computed via autograd of the loss with respect to the old parameters and the update rule? Is this an existing feature? Otherwise, is there a workaround?
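For concreteness, a minimal sketch of applying such an update rule by hand, assuming plain SGD (this is not an existing transformers feature, just one way to write the step back manually):
```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

batch = tokenizer("a toy example", return_tensors="pt")
loss = model(**batch)[0].pow(2).mean()  # stand-in for a real task loss

params = [p for p in model.parameters() if p.requires_grad]
grads = torch.autograd.grad(loss, params, allow_unused=True)

# manual update rule: p <- p - lr * g, written back without an optimizer
with torch.no_grad():
    for p, g in zip(params, grads):
        if g is not None:  # e.g. the pooler does not feed this toy loss
            p.sub_(0.01 * g)
```
Note that an in-place write like this breaks the graph, so for a second-order inner loop the updated parameters would have to be fed to the forward pass functionally, e.g. via the `higher` library, rather than stored on the module.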
Thanks, | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6047/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6047/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6046 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6046/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6046/comments | https://api.github.com/repos/huggingface/transformers/issues/6046/events | https://github.com/huggingface/transformers/issues/6046 | 665,880,077 | MDU6SXNzdWU2NjU4ODAwNzc= | 6,046 | is_pretokenized seems to work incorrectly | {
"login": "Zhylkaaa",
"id": 18054828,
"node_id": "MDQ6VXNlcjE4MDU0ODI4",
"avatar_url": "https://avatars.githubusercontent.com/u/18054828?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Zhylkaaa",
"html_url": "https://github.com/Zhylkaaa",
"followers_url": "https://api.github.com/users/Zhylkaaa/followers",
"following_url": "https://api.github.com/users/Zhylkaaa/following{/other_user}",
"gists_url": "https://api.github.com/users/Zhylkaaa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Zhylkaaa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Zhylkaaa/subscriptions",
"organizations_url": "https://api.github.com/users/Zhylkaaa/orgs",
"repos_url": "https://api.github.com/users/Zhylkaaa/repos",
"events_url": "https://api.github.com/users/Zhylkaaa/events{/privacy}",
"received_events_url": "https://api.github.com/users/Zhylkaaa/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"We face a similar issue with the distilbert tokenizer.\r\n\r\n```\r\nfrom transformers import AutoTokenizer\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"distilbert-base-german-cased\")\r\ntokens = ['1980', 'kam', 'der', 'Crow', '##n', 'von', 'Toy', '##ota']\r\nresult = tokenizer.encode_plus(text=tokens,\r\n text_pair=None,\r\n add_special_tokens=True,\r\n truncation=False,\r\n return_special_tokens_mask=True,\r\n return_token_type_ids=True,\r\n is_pretokenized=True\r\n )\r\nresult[\"input_ids\"]\r\n# returns:\r\n[102,\r\n 3827,\r\n 1396,\r\n 125,\r\n 28177,\r\n 1634,\r\n 1634,\r\n 151,\r\n 195,\r\n 25840,\r\n 1634,\r\n 1634,\r\n 23957,\r\n 30887,\r\n 103]\r\n\r\ntokenizer.decode(result[\"input_ids\"])\r\n# returns:\r\n'[CLS] 1980 kam der Crow # # n von Toy # # ota [SEP]'\r\n```\r\n\r\nIt seems that subword tokens (here ##n and ##ota) get split into further tokens even though we set `is_pretokenized=True`. This seems unexpected to me but maybe I am missing something?",
"As I mentioned before we used `is_pretokenized` to create sliding window, but recently discovered that this can be achieved using:\r\n```\r\nstride = max_seq_length - 2 - int(max_seq_length*stride)\r\ntokenized_examples = tokenizer(examples, return_overflowing_tokens=True, \r\n max_length=max_seq_length, stride=stride, truncation=True)\r\n```\r\n\r\nthis returns `dict` with `input_ids`, `attention_mask` and `overflow_to_sample_mapping` (this helps to map between windows and example, but you should check for its presence, if you pass 1 short example it might not be there). \r\n\r\nHope this will help someone 🤗",
"I have the same issue as @tholor - there seem to be some nasty differences between slow and fast tokenizer implementations.",
"Just got the same issue with `bert-base-uncased`, However if when `is_pretokenized=False` it seems to be OK. Is this expected behaviour?\r\n\r\n```python\r\nfrom transformers import AutoTokenizer\r\ntokenizer = AutoTokenizer.from_pretrained(\"bert-base-uncased\")\r\ntext = \"huggingface transformers\"\r\ntok = tokenizer.tokenize(text)\r\nprint(tok)\r\n# ['hugging', '##face', 'transformers']\r\n\r\noutput = tokenizer.encode_plus(tok, is_pretokenized=True)\r\ntokenizer.convert_ids_to_tokens(output[\"input_ids\"])\r\n# ['[CLS]', 'hugging', '#', '#', 'face', 'transformers', '[SEP]']\r\n```\r\nwhen `is_pretokenized=False`\r\n```python\r\noutput2 = tokenizer.encode_plus(tok, is_pretokenized=False)\r\ntokenizer.convert_ids_to_tokens(output2[\"input_ids\"])\r\n# ['[CLS]', 'hugging', '##face', 'transformers', '[SEP]']\r\n```\r\n",
"I believe that this issue can be closed because of explanation in #6575 stating that `is_pretokenized` expect just list of words spited by white space not actual tokens. So this is \"kind of expected\" behaviour :) "
] | 1,595 | 1,598 | 1,598 | CONTRIBUTOR | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): roberta
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
I use `RobertaTokenizerFast` on pretokenized text; the problem also arises when I switch to the slow version.
The task I am working on is:
* an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
I am trying to implement a sliding window for RoBERTa
## To reproduce
I use the `tokenizer.tokenize(text)` method to tokenize the whole text (1-3 sentences), then I divide the tokens into chunks and try to use the `__call__` method (I also tried `encode`) with the `is_pretokenized=True` argument, but this creates additional tokens (about 3 times more than there should be). I worked around this by using a `tokenize` -> `convert_tokens_to_ids` -> `prepare_for_model` -> `pad` pipeline, but I believe the batch methods should be faster and more memory efficient.
Steps to reproduce the behavior:
0. `tokenizer = AutoTokenizer.from_pretrained('roberta-base', add_prefix_space=True, use_fast=True)`
1. `ex_text = 'long text'`
2. `tokens = tokenizer.tokenize(ex_text)`
3. `examples = [tokens[i:i+126] for i in range(0, len(tokens), 100)]`
4. `print(len(tokenizer(examples, is_pretokenized=True)['input_ids'][0])) # this prints more than 128`
## Expected behavior
I would expect to get a result similar to the one I get when I use
```
tokens = tokenizer.tokenize(ex_text)
inputs = tokenizer.convert_tokens_to_ids(tokens)
inputs = [inputs[i:i+126] for i in range(0, len(tokens), 100)]
inputs = [tokenizer.prepare_for_model(example) for example in inputs]
inputs = tokenizer.pad(inputs, padding='longest')
```
Am I doing something wrong, or is this unexpected behaviour?
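(As an aside, the comments point out that the sliding window itself can be produced without pre-tokenizing at all; a sketch matching the 126-token window with a 26-token overlap used above, fast tokenizer assumed:)
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('roberta-base', use_fast=True)
windows = tokenizer(
    ex_text,
    max_length=128,                # 126 content tokens + 2 special tokens
    stride=26,                     # token overlap between adjacent windows
    truncation=True,
    return_overflowing_tokens=True,
)
print(len(windows['input_ids'][0]))  # 128
```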
## Environment info
- `transformers` version: 3.0.2
- Platform: MacOs
- Python version: 3.8.3
- PyTorch version (GPU?): 1.5.1 (no GPU)
- Tensorflow version (GPU?): NO
- Using GPU in script?: NO
- Using distributed or parallel set-up in script?: NO
EDIT:
I see that when I use `__call__` it actually treats ` Ġ` as 2 tokens:
`tokenizer(tokenizer.tokenize('How'), is_pretokenized=True)['input_ids']`
`out: [0, 4236, 21402, 6179, 2]`, where 4236 and 21402 are ` Ġ` split into two tokens | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6046/reactions",
"total_count": 4,
"+1": 3,
"-1": 0,
"laugh": 1,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6046/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6045 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6045/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6045/comments | https://api.github.com/repos/huggingface/transformers/issues/6045/events | https://github.com/huggingface/transformers/issues/6045 | 665,862,946 | MDU6SXNzdWU2NjU4NjI5NDY= | 6,045 | Test BART's memory consumption | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649053,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Help%20wanted",
"name": "Help wanted",
"color": "008672",
"default": false,
"description": "Extra attention is needed, help appreciated"
},
{
"id": 1834088753,
"node_id": "MDU6TGFiZWwxODM0MDg4NzUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Tests",
"name": "Tests",
"color": "a6fcca",
"default": false,
"description": "Related to tests"
},
{
"id": 2604155188,
"node_id": "MDU6TGFiZWwyNjA0MTU1MTg4",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Benchmarks",
"name": "Benchmarks",
"color": "2DF372",
"default": false,
"description": "Issues related to Memory regressions in tests and scripts"
},
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | open | false | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
},
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"@stas00 , this might be up your alley!",
"Excellent, will work on that. \r\n\r\nThank you, @sshleifer!",
"I will post my work in progress here, so others could chime in with ideas.\r\n\r\nThis notebook measures memory consumption and speed of `__init__`, a single fwd and fwd+bwd pass\r\nhttps://colab.research.google.com/drive/1n6J3tc8FT4ER1vBCTAtU4__U5Px2mxwI?usp=sharing\r\n\r\nThe current difficulty is how to establish a baseline for memory and speed, so that it can be validated against in the future - detecting any regressions.\r\n\r\nSeveral issues have been encountered:\r\n\r\n1. there is a large discrepancy between results on different GPU cards. Currently tested my own Titan X and Colab's Tesla T4:\r\n\r\nThe following data is for `MODEL_ID = \"sshleifer/tinier_bart\"`\r\n\r\nMemory:\r\n\r\n| func | T4 | Titan X |\r\n| --------|------|---------|\r\n| init | 0 | 0 |\r\n| fwd | 22MB | 62MB |\r\n| fwd-bwd | 22MB | 70MB |\r\n\r\nCurrently only GPU-memory was measured and the initial loading of cudnn memory was subtracted. The latter varies hugely - ~600MB on Titan X and ~1GB on T4.\r\n\r\nExec speed:\r\n\r\n| func | T4 | Titan X |\r\n|---------|----------|----------|\r\n| init | 0.05978s | 0.03808s |\r\n| fwd | 0.00375s | 0.00273s |\r\n| fwd-bwd | 0.00904s | 0.00563s |\r\n\r\nSpeed measurements are an average of 30 (!) runs.\r\n\r\nThe following data is for `MODEL_ID = \"tacebook/bart-base\"`\r\n\r\nMemory:\r\n\r\n| func | T4 | Titan X |\r\n|---------|------|---------|\r\n| init | 0 | 0 |\r\n| fwd | 596MB | 636MB |\r\n| fwd-bwd | 1576MB | 1624MB |\r\n\r\nExec speed:\r\n\r\n| func | T4 | Titan X |\r\n|---------|----------|----------|\r\n| init | 3.28328s | 2.24100s |\r\n| fwd | 0.01698s | 0.00998s |\r\n| fwd-bwd | 0.03551s | 0.03039s |\r\n\r\n\r\n2. Inconsistent results on the same card:\r\n\r\njust one function of fwd+bwd on Tesla T4:\r\n\r\n(memory is the same)\r\n\r\nExec time:\r\n```\r\nrun 1: 0.00904s\r\nrun 2: 0.00769s\r\nrun 3: 0.00740s\r\nrun 4: 0.00768s\r\nrun 5: 0.00753s\r\n```\r\n\r\nthe notebook was fully restarted for each measurement. Each report is an average of 30 runs.\r\n\r\nRandomness is not fixed, but the input data is small and fake so it shouldn't make much of a difference.\r\n\r\nSo, given that most users have different cards, how can we approach making such a test that would work for everybody (or enough people to warrant this test's usefulness).\r\n\r\nWhat would also help if you could download the notebook and share which measurement numbers you get and your card name. So that we could compare different cards. Though perhaps just one or 2 - the real testing will be on the full models and not the tiny ones. Thank you.",
"Is the variance similar for larger models?",
"I updated the comment above to add the same data for the full bart-base model.\r\n\r\nThe difference in memory seems to be \"relatively\" fixed - i.e. larger model still has about the same diff as the small one between two cards memory-wise.\r\n\r\nSpeed on the other hand has a much wider variation. Note, I added a comment that speed measurements are already an average of 30 re-runs. ",
"Hey @stas00,\r\n\r\nThanks for running the benchmarks. As discussed, I think the way to go here is:\r\n\r\n**1) GPU Memory:**\r\nSlightly adapt the memory measurement to return 3 results:\r\n- Required mem for CUDA/CUDNN init \r\n- Required mem to run function (fwd or bwd)\r\n- Sum of both these functions (this number should be the one that is returned now)\r\n=> so we only need to add a function that measures how much memory is required to load CUDA/CUDNN kernels into GPU.\r\n\r\nThen, I think we should report GPU mem requirement for a fixed GPU with fixed CUDA/CUDNN library (at the moment the CUDA/CUDNN version is not reported when running the benchmark, but they should be IMO) and make sure that these numbers are stable. Then we can loosen restrictions on different libraries and see how performance changes I guess and then finally compare on different GPUs. \r\n\r\n2) Speed measurements:\r\n\r\nMy measurements on notebooks regarding speed also always varied a lot, but was very stable on one of our GPUs...not sure why the notebooks were much more unstable. I also think we can reduce 30 to something like 10, which should be fine and maybe always do 2-5 warmup runs (this is currently only done for TPU/torchscript)? What do you think? \r\n\r\nHere we should again start with same libraries / same GPU IMO.",
"Your proposals sound good to me, @patrickvonplaten \r\n\r\nBased on my experimentation and your input - it seems that stable speed measurements are the tricky part here.\r\n\r\nIt looks like this issue (including speed measurements) is now where the discussion is on: https://github.com/huggingface/transformers/issues/6218 So perhaps let's continue there and return to this issue once there is an updated API and then we can proceed tuning this one.\r\n\r\nThe complication here is that different cards give different results and we may have to do the painful process of maintaining different baselines for different cards.\r\n\r\n",
"Unless someone would like to contribute some new ideas, this issue is stalled at the moment. The data is too divergent between different cards to have a solid test that also does a proper regression measurement. ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I will reopen this for now - as we have come back to it in https://github.com/huggingface/transformers/issues/9261",
"We are still just talking about how to approach this - time being the main missing element at the moment. "
] | 1,595 | 1,614 | null | CONTRIBUTOR | null | - this can run on GPU only and be marked `@slow`
- check how much memory bart is using at `__init__`
- assert that it doesn't use more than 110% of that.
- check how much memory bart uses on a single forward pass. (optionally test this in fp16).
- assert that it doesn't use more than 110% of that.
- check how much memory bart uses on a single forward and backward pass.
- assert that it doesn't use more than 110% of that.
### Bonus:
- add similar asserts for timing!
- let the test run and check memory on CPU (make sure that if pytest is run with `-n 8` the test still passes!)
- add a test to `test_modeling_common.py` to make it easy for all models to test this.
- add a test to `test_modeling_common_tf.py` to make it easy for all TF models to test this.
The benchmarking utilities may help.
It may also help to use `torch.cuda.max_memory...`
@patrickvonplaten may have further thoughts!
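A rough sketch of the GPU half (the baseline constants are placeholders to be filled in from a first recorded run, not measured values):
```python
import torch
from transformers import BartForConditionalGeneration

def peak_gpu_mb(fn):
    torch.cuda.reset_peak_memory_stats()
    fn()
    torch.cuda.synchronize()
    return torch.cuda.max_memory_allocated() / 2 ** 20

model = BartForConditionalGeneration.from_pretrained("facebook/bart-base").cuda()
ids = torch.randint(0, 1000, (4, 128), device="cuda")

fwd_mb = peak_gpu_mb(lambda: model(input_ids=ids, decoder_input_ids=ids))

def fwd_bwd():
    loss = model(input_ids=ids, decoder_input_ids=ids, labels=ids)[0]
    loss.backward()

fwd_bwd_mb = peak_gpu_mb(fwd_bwd)

# regression guard: stay within 110% of previously recorded baselines
FWD_BASELINE_MB, FWD_BWD_BASELINE_MB = 650, 1650  # placeholders
assert fwd_mb < 1.1 * FWD_BASELINE_MB
assert fwd_bwd_mb < 1.1 * FWD_BWD_BASELINE_MB
```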
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6045/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6045/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6044 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6044/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6044/comments | https://api.github.com/repos/huggingface/transformers/issues/6044/events | https://github.com/huggingface/transformers/issues/6044 | 665,858,769 | MDU6SXNzdWU2NjU4NTg3Njk= | 6,044 | slow from_pretrained failures | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"I didn't add any ignore missing key flag to BERT, so that's weird.",
"Cool. I'll take this one.",
"It's `position_ids`, from Morgan's change. Will fix.\r\n```\r\nWARNING transformers.modeling_utils:modeling_utils.py:885 Some weights of BertModel were not initialized from the model checkpoint at bert-base-uncased and are newly initialized: ['bert.embeddings.position_ids']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\nWARNING transformers.modeling_utils:modeling_utils.py:885 Some weights of BertModel were not initialized from the model checkpoint at bert-base-uncased and are newly initialized: ['bert.embeddings.position_ids']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\n```\r\n"
] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | github actions has [full traceback](https://github.com/huggingface/transformers/runs/910644181?check_suite_focus=true)
Failures are all like
```python
for model_name in BERT_PRETRAINED_MODEL_ARCHIVE_LIST[:1]:
config = BertConfig.from_pretrained(model_name)
self.assertIsNotNone(config)
self.assertIsInstance(config, PretrainedConfig)
model = BertModel.from_pretrained(model_name)
model, loading_info = BertModel.from_pretrained(model_name, output_loading_info=True)
self.assertIsNotNone(model)
self.assertIsInstance(model, PreTrainedModel)
for value in loading_info.values():
> self.assertEqual(len(value), 0)
E AssertionError: 1 != 0
```
@sgugger is that from your change?
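(For reference, the stray key is easy to surface directly from `loading_info`; the expected output below is taken from the thread, which traced it to the new `position_ids` buffer:)
```python
from transformers import BertModel

model, loading_info = BertModel.from_pretrained(
    "bert-base-uncased", output_loading_info=True
)
print(loading_info)
# per the thread: {'missing_keys': ['bert.embeddings.position_ids'],
#                  'unexpected_keys': [], 'error_msgs': []}
```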
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6044/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6044/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6043 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6043/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6043/comments | https://api.github.com/repos/huggingface/transformers/issues/6043/events | https://github.com/huggingface/transformers/pull/6043 | 665,823,881 | MDExOlB1bGxSZXF1ZXN0NDU2Nzc5NDQw | 6,043 | docs(pretrained_models): fix num parameters | {
"login": "amineabdaoui",
"id": 17952908,
"node_id": "MDQ6VXNlcjE3OTUyOTA4",
"avatar_url": "https://avatars.githubusercontent.com/u/17952908?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amineabdaoui",
"html_url": "https://github.com/amineabdaoui",
"followers_url": "https://api.github.com/users/amineabdaoui/followers",
"following_url": "https://api.github.com/users/amineabdaoui/following{/other_user}",
"gists_url": "https://api.github.com/users/amineabdaoui/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amineabdaoui/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amineabdaoui/subscriptions",
"organizations_url": "https://api.github.com/users/amineabdaoui/orgs",
"repos_url": "https://api.github.com/users/amineabdaoui/repos",
"events_url": "https://api.github.com/users/amineabdaoui/events{/privacy}",
"received_events_url": "https://api.github.com/users/amineabdaoui/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6043?src=pr&el=h1) Report\n> Merging [#6043](https://codecov.io/gh/huggingface/transformers/pull/6043?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7f65daa2e155ecdd8594e19862dac8b322ed3b73&el=desc) will **increase** coverage by `0.06%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6043?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6043 +/- ##\n==========================================\n+ Coverage 79.57% 79.63% +0.06% \n==========================================\n Files 146 146 \n Lines 26597 26597 \n==========================================\n+ Hits 21164 21181 +17 \n+ Misses 5433 5416 -17 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6043?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6043/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `84.52% <0.00%> (-10.72%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6043/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `90.09% <0.00%> (+23.42%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6043?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6043?src=pr&el=footer). Last update [7f65daa...ee22cce](https://codecov.io/gh/huggingface/transformers/pull/6043?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"@LysandreJik I created a new Pull Request (https://github.com/huggingface/transformers/pull/7575) to resolve the above mentioned conflicts and start from a dedicated branch.\r\n\r\nThanks.\r\n"
] | 1,595 | 1,601 | 1,601 | NONE | null | @julien-c This PR corrects the number of parameters of BERT based models.
Sometimes the difference between a given model and its pairs is important. For instance:
`bert-base-uncased` has **110M parameters** but `bert-base-multilingual-cased` has more than **178M parameters**, even if both models share the same architecture (12-layer, 768-hidden, 12-heads).
The difference is due to the vocabulary size:
`bert-base-uncased` uses a vocabulary of **30k** entries while `bert-base-multilingual-cased` uses a vocabulary of **119k** entries.
To compute the number of parameters:
``` python
from transformers import AutoModelForMaskedLM

# bert-base-uncased: ~110M parameters
bert_base = AutoModelForMaskedLM.from_pretrained('bert-base-uncased')
print(bert_base.num_parameters())

# bert-base-multilingual-cased: ~178M parameters, same 12-layer architecture
bert_multiling = AutoModelForMaskedLM.from_pretrained('bert-base-multilingual-cased')
print(bert_multiling.num_parameters())
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6043/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6043/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6043",
"html_url": "https://github.com/huggingface/transformers/pull/6043",
"diff_url": "https://github.com/huggingface/transformers/pull/6043.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6043.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/6042 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6042/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6042/comments | https://api.github.com/repos/huggingface/transformers/issues/6042/events | https://github.com/huggingface/transformers/pull/6042 | 665,817,109 | MDExOlB1bGxSZXF1ZXN0NDU2Nzc0OTk3 | 6,042 | Update README.md of my model | {
"login": "rdenadai",
"id": 917516,
"node_id": "MDQ6VXNlcjkxNzUxNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/917516?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rdenadai",
"html_url": "https://github.com/rdenadai",
"followers_url": "https://api.github.com/users/rdenadai/followers",
"following_url": "https://api.github.com/users/rdenadai/following{/other_user}",
"gists_url": "https://api.github.com/users/rdenadai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rdenadai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rdenadai/subscriptions",
"organizations_url": "https://api.github.com/users/rdenadai/orgs",
"repos_url": "https://api.github.com/users/rdenadai/repos",
"events_url": "https://api.github.com/users/rdenadai/events{/privacy}",
"received_events_url": "https://api.github.com/users/rdenadai/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6042/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6042/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6042",
"html_url": "https://github.com/huggingface/transformers/pull/6042",
"diff_url": "https://github.com/huggingface/transformers/pull/6042.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6042.patch",
"merged_at": 1595799109000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/6041 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6041/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6041/comments | https://api.github.com/repos/huggingface/transformers/issues/6041/events | https://github.com/huggingface/transformers/pull/6041 | 665,812,025 | MDExOlB1bGxSZXF1ZXN0NDU2NzcxODI4 | 6,041 | docs(pretrained_models): fix num parameters | {
"login": "amineabdaoui",
"id": 17952908,
"node_id": "MDQ6VXNlcjE3OTUyOTA4",
"avatar_url": "https://avatars.githubusercontent.com/u/17952908?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amineabdaoui",
"html_url": "https://github.com/amineabdaoui",
"followers_url": "https://api.github.com/users/amineabdaoui/followers",
"following_url": "https://api.github.com/users/amineabdaoui/following{/other_user}",
"gists_url": "https://api.github.com/users/amineabdaoui/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amineabdaoui/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amineabdaoui/subscriptions",
"organizations_url": "https://api.github.com/users/amineabdaoui/orgs",
"repos_url": "https://api.github.com/users/amineabdaoui/repos",
"events_url": "https://api.github.com/users/amineabdaoui/events{/privacy}",
"received_events_url": "https://api.github.com/users/amineabdaoui/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I created a new PR with the correct formatting: https://github.com/huggingface/transformers/pull/6043"
] | 1,595 | 1,595 | 1,595 | NONE | null | Correct the number of parameters of BERT-based models.
Sometimes the difference is significant. For instance:
`bert-base-uncased` has **110M parameters** but `bert-base-multilingual-cased` has more than **178M parameters**, even though both models share the same architecture (12-layer, 768-hidden, 12-heads).
The difference is due to the vocabulary size:
`bert-base-uncased` uses a vocabulary of 30k entries while `bert-base-multilingual-cased` uses a vocabulary of 119k entries.
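As a quick sanity check (a minimal sketch; 30,522 and 119,547 are the published vocabulary sizes, and 768 is the shared hidden size), the extra word-embedding rows alone account for the gap:
```python
hidden_size = 768
vocab_small, vocab_large = 30_522, 119_547  # bert-base-uncased vs. bert-base-multilingual-cased

# the extra embedding rows explain roughly the 178M - 110M difference
extra_params = (vocab_large - vocab_small) * hidden_size
print(f"~{extra_params / 1e6:.0f}M extra embedding parameters")  # ~68M
```
| {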
"url": "https://api.github.com/repos/huggingface/transformers/issues/6041/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6041/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6041",
"html_url": "https://github.com/huggingface/transformers/pull/6041",
"diff_url": "https://github.com/huggingface/transformers/pull/6041.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6041.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/6040 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6040/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6040/comments | https://api.github.com/repos/huggingface/transformers/issues/6040/events | https://github.com/huggingface/transformers/pull/6040 | 665,789,719 | MDExOlB1bGxSZXF1ZXN0NDU2NzU2NDI5 | 6,040 | Draft Etalab QA model | {
"login": "psorianom",
"id": 1085210,
"node_id": "MDQ6VXNlcjEwODUyMTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1085210?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/psorianom",
"html_url": "https://github.com/psorianom",
"followers_url": "https://api.github.com/users/psorianom/followers",
"following_url": "https://api.github.com/users/psorianom/following{/other_user}",
"gists_url": "https://api.github.com/users/psorianom/gists{/gist_id}",
"starred_url": "https://api.github.com/users/psorianom/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/psorianom/subscriptions",
"organizations_url": "https://api.github.com/users/psorianom/orgs",
"repos_url": "https://api.github.com/users/psorianom/repos",
"events_url": "https://api.github.com/users/psorianom/events{/privacy}",
"received_events_url": "https://api.github.com/users/psorianom/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"Is this ready to merge? I'll merge now and feel free to update later (we'll make it way easier to iterate on model cards in the medium-term future)",
"(You can input model-specific examples to the inference QA widget if you need to, see https://huggingface.co/docs#how-can-i-control-my-models-widgets-example-inputs)",
"Top ! Thanks @julien-c ! Yes, I will update it with model-specific examples and fixing some typos also."
] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | Card for the model https://huggingface.co/etalab-ia/camembert-base-squadFR-fquad-piaf | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6040/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6040/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6040",
"html_url": "https://github.com/huggingface/transformers/pull/6040",
"diff_url": "https://github.com/huggingface/transformers/pull/6040.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6040.patch",
"merged_at": 1595841309000
} |
https://api.github.com/repos/huggingface/transformers/issues/6039 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6039/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6039/comments | https://api.github.com/repos/huggingface/transformers/issues/6039/events | https://github.com/huggingface/transformers/issues/6039 | 665,756,459 | MDU6SXNzdWU2NjU3NTY0NTk= | 6,039 | MarianMT - How to find out the actual names of the languages? - Only language-codes are available | {
"login": "sunnyville01",
"id": 33743210,
"node_id": "MDQ6VXNlcjMzNzQzMjEw",
"avatar_url": "https://avatars.githubusercontent.com/u/33743210?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sunnyville01",
"html_url": "https://github.com/sunnyville01",
"followers_url": "https://api.github.com/users/sunnyville01/followers",
"following_url": "https://api.github.com/users/sunnyville01/following{/other_user}",
"gists_url": "https://api.github.com/users/sunnyville01/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sunnyville01/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sunnyville01/subscriptions",
"organizations_url": "https://api.github.com/users/sunnyville01/orgs",
"repos_url": "https://api.github.com/users/sunnyville01/repos",
"events_url": "https://api.github.com/users/sunnyville01/events{/privacy}",
"received_events_url": "https://api.github.com/users/sunnyville01/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"https://huggingface.co/languages",
"@patil-suraj Thank you."
] | 1,595 | 1,596 | 1,596 | CONTRIBUTOR | null | # Question
## Information
I want to use the [MarianMT](https://huggingface.co/transformers/model_doc/marian.html) library for creating a translation application. There are lots of languages available with their language codes listed, such as "en" and "es", but nowhere can I find the actual names of the languages for each code. I know "en" is English and "fr" is French, but what are "bg", "lu", and the many others that are available? [This resource](https://huggingface.co/Helsinki-NLP) doesn't mention the actual names either.
Is there any place where I can find the corresponding names for these codes? I tried using an online source, but there seems to be a mismatch and I can't be sure I got the matches correct.
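For illustration, what I'm after is something like this (a minimal sketch; the dictionary is my own hand-made guess based on ISO 639-1, not an official mapping shipped with the library):
```python
# illustrative, hand-made mapping from (assumed) ISO 639-1 codes to names
code_to_name = {
    "en": "English",
    "fr": "French",
    "bg": "Bulgarian",
    "lu": "Luba-Katanga",
}

print(code_to_name.get("bg", "unknown"))  # Bulgarian
```
Note that some Helsinki-NLP pairs seem to use three-letter or group codes, so a plain two-letter table like this may not cover everything.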
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6039/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6039/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6038 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6038/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6038/comments | https://api.github.com/repos/huggingface/transformers/issues/6038/events | https://github.com/huggingface/transformers/pull/6038 | 665,742,707 | MDExOlB1bGxSZXF1ZXN0NDU2NzI0NjI4 | 6,038 | Rework TF trainer | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6038?src=pr&el=h1) Report\n> Merging [#6038](https://codecov.io/gh/huggingface/transformers/pull/6038?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/dafa296c952c08fca3686f1cf8f3a8f8eb116744&el=desc) will **decrease** coverage by `0.88%`.\n> The diff coverage is `9.23%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6038?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6038 +/- ##\n==========================================\n- Coverage 78.80% 77.92% -0.89% \n==========================================\n Files 146 146 \n Lines 26325 26332 +7 \n==========================================\n- Hits 20746 20519 -227 \n- Misses 5579 5813 +234 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6038?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/training\\_args\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6038/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzX3RmLnB5) | `47.45% <0.00%> (ø)` | |\n| [src/transformers/trainer\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6038/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `16.54% <8.66%> (+0.06%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6038/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `89.47% <100.00%> (+0.51%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6038/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6038/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.42% <0.00%> (-29.91%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6038/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.96% <0.00%> (-0.51%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6038/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.48% <0.00%> (+0.28%)` | :arrow_up: |\n| [src/transformers/modeling\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6038/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `94.97% <0.00%> (+4.41%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6038/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+33.89%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6038/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6038?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6038?src=pr&el=footer). 
Last update [dafa296...0e56209](https://codecov.io/gh/huggingface/transformers/pull/6038?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"@jplu I tried, but unfortunately I'm using a **TFRecord dataset**, which has an **unknown cardinality** ([doc link](https://www.tensorflow.org/api_docs/python/tf/data/experimental/cardinality)).\r\n\r\nSo I'm meeting the following error :\r\n\r\n```\r\nValueError: The training dataset must have an asserted cardinality\r\n```",
"Yes, you now have to use `assert_cardinality` on your dataset, as it is done in all the examples now.",
"I can rework it to make it more similar to the PT one, but I don't think they must be strictly the same has they don't really have the same way to work... In any case I can rethink it :)\r\n\r\nAbout the issue I don't get what it means so I asked to elaborate more.",
"@jplu With my custom code I'm meeting the following error :\r\n\r\n```\r\nTypeError: in user code:\r\n\r\n /home/me/.venv/test/lib/python3.7/site-packages/transformers/trainer_tf.py:551 distributed_training_steps *\r\n self.args.strategy.experimental_run_v2(apply_gradients, batch)\r\n /home/me/.venv/test/lib/python3.7/site-packages/transformers/trainer_tf.py:531 apply_gradients *\r\n reduced_features = features[: self.args.train_batch_size / self.args.n_replicas]\r\n\r\n TypeError: unhashable type: 'slice'\r\n```\r\n\r\nNot sure if it's from my code or this PR though...",
"Humm this is weird... I never got this error myself. How did you come to this? in order to try to replicate it.",
"@LysandreJik @sgugger Did I properly address all your comments?"
] | 1,595 | 1,600 | 1,596 | CONTRIBUTOR | null | This PR brings several improvements to the TensorFlow Trainer:
- make the trainer more intuitive, with fewer small scattered functions.
- make the trainer compliant with TPU.
- raise the minimum TensorFlow version for the trainer to 2.2.
- speed up dataset preprocessing.

@Colanim Can you please test this PR with TPU to see if it looks ok?
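As noted in the discussion above, datasets now need an asserted cardinality before training (a minimal sketch, assuming TF >= 2.2; the file name and example count below are placeholders):
```python
import tensorflow as tf

# TFRecord datasets report an UNKNOWN cardinality, so it must be asserted explicitly
num_examples = 10_000  # placeholder: count or track this yourself
dataset = tf.data.TFRecordDataset("train.tfrecord")
dataset = dataset.apply(tf.data.experimental.assert_cardinality(num_examples))
```
| {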
"url": "https://api.github.com/repos/huggingface/transformers/issues/6038/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6038/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6038",
"html_url": "https://github.com/huggingface/transformers/pull/6038",
"diff_url": "https://github.com/huggingface/transformers/pull/6038.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6038.patch",
"merged_at": 1596047522000
} |
https://api.github.com/repos/huggingface/transformers/issues/6037 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6037/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6037/comments | https://api.github.com/repos/huggingface/transformers/issues/6037/events | https://github.com/huggingface/transformers/pull/6037 | 665,737,864 | MDExOlB1bGxSZXF1ZXN0NDU2NzIxNjAw | 6,037 | Model card for Vamsi/T5_Paraphrase_Paws | {
"login": "Vamsi995",
"id": 52487689,
"node_id": "MDQ6VXNlcjUyNDg3Njg5",
"avatar_url": "https://avatars.githubusercontent.com/u/52487689?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Vamsi995",
"html_url": "https://github.com/Vamsi995",
"followers_url": "https://api.github.com/users/Vamsi995/followers",
"following_url": "https://api.github.com/users/Vamsi995/following{/other_user}",
"gists_url": "https://api.github.com/users/Vamsi995/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Vamsi995/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Vamsi995/subscriptions",
"organizations_url": "https://api.github.com/users/Vamsi995/orgs",
"repos_url": "https://api.github.com/users/Vamsi995/repos",
"events_url": "https://api.github.com/users/Vamsi995/events{/privacy}",
"received_events_url": "https://api.github.com/users/Vamsi995/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6037?src=pr&el=h1) Report\n> Merging [#6037](https://codecov.io/gh/huggingface/transformers/pull/6037?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c69ea5efc4eac65b183e8d07b1bf91d20bbe0c8c&el=desc) will **increase** coverage by `0.01%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6037?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6037 +/- ##\n==========================================\n+ Coverage 78.50% 78.51% +0.01% \n==========================================\n Files 146 146 \n Lines 26249 26249 \n==========================================\n+ Hits 20606 20610 +4 \n+ Misses 5643 5639 -4 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6037?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6037/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+1.00%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6037?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6037?src=pr&el=footer). Last update [c69ea5e...4cf27e8](https://codecov.io/gh/huggingface/transformers/pull/6037?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks!"
] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | Creating a model card for my uploaded model on the transformers hub. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6037/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6037/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6037",
"html_url": "https://github.com/huggingface/transformers/pull/6037",
"diff_url": "https://github.com/huggingface/transformers/pull/6037.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6037.patch",
"merged_at": 1595841166000
} |
https://api.github.com/repos/huggingface/transformers/issues/6036 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6036/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6036/comments | https://api.github.com/repos/huggingface/transformers/issues/6036/events | https://github.com/huggingface/transformers/pull/6036 | 665,685,151 | MDExOlB1bGxSZXF1ZXN0NDU2Njg1NDg3 | 6,036 | don't complain about missing W&B when WANDB_DISABLED=true | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6036?src=pr&el=h1) Report\n> Merging [#6036](https://codecov.io/gh/huggingface/transformers/pull/6036?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c69ea5efc4eac65b183e8d07b1bf91d20bbe0c8c&el=desc) will **decrease** coverage by `1.18%`.\n> The diff coverage is `50.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6036?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6036 +/- ##\n==========================================\n- Coverage 78.50% 77.31% -1.19% \n==========================================\n Files 146 146 \n Lines 26249 26251 +2 \n==========================================\n- Hits 20606 20297 -309 \n- Misses 5643 5954 +311 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6036?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6036/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `16.47% <0.00%> (-0.07%)` | :arrow_down: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6036/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `41.00% <100.00%> (+0.12%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6036/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6036/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `81.49% <0.00%> (+0.29%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6036/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (+0.75%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6036/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6036?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6036?src=pr&el=footer). Last update [c69ea5e...f1463a0](https://codecov.io/gh/huggingface/transformers/pull/6036?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"LGTM",
"Note, the project has a failing test - I did 2 PRs updates yesterday and today - both unrelated to my work\r\nhttps://app.circleci.com/pipelines/github/huggingface/transformers/9539/workflows/15fcbc84-9cfe-4970-943f-935bce800c98/jobs/64765\r\n\r\nanother one from last night:\r\nhttps://app.circleci.com/pipelines/github/huggingface/transformers/9527/workflows/73306d70-4190-48cd-b24a-b73619cd2002/jobs/64665/steps\r\n\r\nPlease kindly trigger a re-run of CI for this PR. Thank you.\r\n",
"Yeah, the CI is flaky sometimes. Re-triggered the failing test.",
"> Yeah, the CI is flaky sometimes. \r\n\r\nWhen it happens it's the same test that fails - perhaps it could be fixed?\r\n\r\n> Re-triggered the failing test.\r\n\r\nThank you, @sgugger \r\n",
"Yeah we try to fix those flaky tests as we catch them. Don't hesitate to report the name and logs on your own too."
] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | I get a lot of crashes with W&B in transformers examples: https://github.com/huggingface/transformers/pull/5835 so I have to use `WANDB_DISABLED=true` - this PR removes a complaint that shouldn't be there when this env var is used.
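For reference, the same switch can be flipped from Python (a minimal sketch; the variable just needs to be set before the Trainer is created):
```python
import os

# disable the Weights & Biases integration for this run
os.environ["WANDB_DISABLED"] = "true"
```
| {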
"url": "https://api.github.com/repos/huggingface/transformers/issues/6036/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6036/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6036",
"html_url": "https://github.com/huggingface/transformers/pull/6036",
"diff_url": "https://github.com/huggingface/transformers/pull/6036.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6036.patch",
"merged_at": 1595788194000
} |
https://api.github.com/repos/huggingface/transformers/issues/6035 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6035/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6035/comments | https://api.github.com/repos/huggingface/transformers/issues/6035/events | https://github.com/huggingface/transformers/pull/6035 | 665,670,273 | MDExOlB1bGxSZXF1ZXN0NDU2Njc1MzQy | 6,035 | add a summary report flag for run_examples on CI | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6035?src=pr&el=h1) Report\n> Merging [#6035](https://codecov.io/gh/huggingface/transformers/pull/6035?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c69ea5efc4eac65b183e8d07b1bf91d20bbe0c8c&el=desc) will **increase** coverage by `0.01%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6035?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6035 +/- ##\n==========================================\n+ Coverage 78.50% 78.52% +0.01% \n==========================================\n Files 146 146 \n Lines 26249 26249 \n==========================================\n+ Hits 20606 20611 +5 \n+ Misses 5643 5638 -5 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6035?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6035/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.71% <0.00%> (+1.25%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6035?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6035?src=pr&el=footer). Last update [c69ea5e...9bca1bc](https://codecov.io/gh/huggingface/transformers/pull/6035?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | Currently, it's hard to tell which example tests were run on CI and which weren't. Adding the `-rA` flag to `pytest` will include a summary like:
```
==================================================================== short test summary info =====================================================================
PASSED examples/test_examples.py::ExamplesTests::test_generation
PASSED examples/test_examples.py::ExamplesTests::test_run_glue
PASSED examples/test_examples.py::ExamplesTests::test_run_language_modeling
PASSED examples/test_examples.py::ExamplesTests::test_run_squad
FAILED examples/test_examples.py::ExamplesTests::test_run_pl_glue - AttributeError: 'Namespace' object has no attribute 'gpus'
============================================================ 1 failed, 4 passed, 8 warnings in 42.96s ============================================================
```
which makes it easier to validate whether a given example is covered by CI or not.
The PR came about following the discussion at https://github.com/huggingface/transformers/pull/6027#issuecomment-663894148 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6035/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6035/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6035",
"html_url": "https://github.com/huggingface/transformers/pull/6035",
"diff_url": "https://github.com/huggingface/transformers/pull/6035.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6035.patch",
"merged_at": 1595786955000
} |
https://api.github.com/repos/huggingface/transformers/issues/6034 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6034/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6034/comments | https://api.github.com/repos/huggingface/transformers/issues/6034/events | https://github.com/huggingface/transformers/pull/6034 | 665,657,240 | MDExOlB1bGxSZXF1ZXN0NDU2NjY2NTQ5 | 6,034 | add pl_glue example test | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"Looks like this PR needs one of these merged first: https://github.com/huggingface/transformers/pull/6028 or https://github.com/huggingface/transformers/pull/6027, as `-n_gpus` is now required in `lightning_base.py`",
"No matter what I try CI can't get to acc/f1>=0.75, even though I get 1.0 on my machine. Suggestions?",
"OK, as suggested by @sshleifer I changed the PR to only test that `run_pl_glue.py` is able to run and complete its job. Will have to figure out quality testing another time. It's very inefficient to try to tune something that works just fine on my machine but fails on CI.",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6034?src=pr&el=h1) Report\n> Merging [#6034](https://codecov.io/gh/huggingface/transformers/pull/6034?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1bbc54a87c293761823ff1c1833a4f77353af9a9&el=desc) will **decrease** coverage by `0.05%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6034?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6034 +/- ##\n==========================================\n- Coverage 79.53% 79.47% -0.06% \n==========================================\n Files 148 148 \n Lines 27196 27196 \n==========================================\n- Hits 21630 21614 -16 \n- Misses 5566 5582 +16 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6034?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6034/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `76.71% <0.00%> (-21.92%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6034/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `86.43% <0.00%> (-7.42%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6034/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `92.14% <0.00%> (-2.15%)` | :arrow_down: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6034/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.57% <0.00%> (-1.52%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6034/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.07% <0.00%> (-0.45%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6034/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.00% <0.00%> (-0.41%)` | :arrow_down: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6034/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `38.73% <0.00%> (+5.16%)` | :arrow_up: |\n| [src/transformers/tokenization\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6034/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `95.23% <0.00%> (+10.71%)` | :arrow_up: |\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6034/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `92.17% <0.00%> (+25.21%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6034?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6034?src=pr&el=footer). Last update [1bbc54a...bf693dd](https://codecov.io/gh/huggingface/transformers/pull/6034?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"CI failure is totally unrelated."
] | 1,595 | 1,597 | 1,597 | CONTRIBUTOR | null | Currently the PL glue example isn't being tested by CI. This PR fixes that.
The added test currently fails (goodness) and fixes are being PR'ed in:
1. https://github.com/huggingface/transformers/pull/6027
2. https://github.com/huggingface/transformers/pull/6028
At the moment this is a basic 'can-run' test - I will add actual validation once the `run_pl_glue.py` script is made to work.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6034/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6034/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6034",
"html_url": "https://github.com/huggingface/transformers/pull/6034",
"diff_url": "https://github.com/huggingface/transformers/pull/6034.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6034.patch",
"merged_at": 1597130213000
} |
https://api.github.com/repos/huggingface/transformers/issues/6033 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6033/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6033/comments | https://api.github.com/repos/huggingface/transformers/issues/6033/events | https://github.com/huggingface/transformers/issues/6033 | 665,632,577 | MDU6SXNzdWU2NjU2MzI1Nzc= | 6,033 | Is there an easy way to access the multiple choice head of the RobertaForMultipleChoice? | {
"login": "h56cho",
"id": 52889259,
"node_id": "MDQ6VXNlcjUyODg5MjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/52889259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/h56cho",
"html_url": "https://github.com/h56cho",
"followers_url": "https://api.github.com/users/h56cho/followers",
"following_url": "https://api.github.com/users/h56cho/following{/other_user}",
"gists_url": "https://api.github.com/users/h56cho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/h56cho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/h56cho/subscriptions",
"organizations_url": "https://api.github.com/users/h56cho/orgs",
"repos_url": "https://api.github.com/users/h56cho/repos",
"events_url": "https://api.github.com/users/h56cho/events{/privacy}",
"received_events_url": "https://api.github.com/users/h56cho/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"There is no direct way to do this from the model, you will have to edit the code directly. You can just copy the code from the file `modeling_roberta.py` and then change the forward method of `RobertaForMultipleChoice`. You can then directly feed the hidden layer of your choice to the final layer.",
"Hello,\r\nWould the code below do the job that I am looking for?\r\n```python\r\n# get the pre-trained HuggingFace RobertaForMultipleChoice and resize the token embeddings after adding the special token\r\nbest_model_roberta = RobertaForMultipleChoice.from_pretrained('roberta-base', output_hidden_states = True)\r\n\r\n# for each layer j = 1,...,12, extract the hidden states at the layer j\r\nhidden_states = best_model_roberta(input_ids=input_ids, token_type_ids=token_type_ids, attention_mask=attention_mask, labels=mc_labels)[2][j][:,:,:].detach()\r\n\r\nmc_logits = best_model_roberta.classifier(hidden_states).detach()\r\n```\r\n\r\nThank you,\r\n",
"Yes, but if you detach the history, you won't be able to do a backward pass through this model.",
"Hello,\r\n\r\nThank you for your reply. If I want to calculate the mc_loss generated after inputting the hidden state vector directly into the multiple-choice head, would the code below be appropriate?\r\n```python\r\nimport torch\r\nfrom torch.nn import CrossEntropyLoss\r\nfrom matplotlib import pyplot as plt\r\nfrom transformers import RobertaTokenizer, RobertaForMultipleChoice, AdamW, get_constant_schedule\r\nfrom transformers import PreTrainedTokenizer\r\nimport numpy as np\r\nimport pandas as pd\r\nimport pickle\r\nimport dill\r\nfrom matplotlib.pyplot import plot, savefig, xlim, figure, ylim, legend, boxplot, setp, axes, xlabel, ylabel, xticks\r\nimport gc \r\nimport math\r\nimport time\r\nfrom random import seed\r\nfrom random import randint\r\nimport sys\r\nimport statistics\r\nfrom numpy import nan\r\nimport scipy.stats as ss\r\n\r\n# import the pre-trained HuggingFace RobertaTokenizer\r\ntokenizer = RobertaTokenizer.from_pretrained('roberta-base')\r\n \r\n# get the encoding for the special tokens\r\npub2_pad_token_id = tokenizer.convert_tokens_to_ids(tokenizer.pad_token)\r\n \r\n# sanity check\r\nlen(tokenizer) \r\n\r\n# get the pre-trained HuggingFace RobertaForMultipleChoice and resize the token embeddings after adding the special token\r\nbest_model_roberta = RobertaForMultipleChoice.from_pretrained('roberta-base', output_hidden_states = True)\r\n\r\n# Turn on the evaluation mode\r\nbest_model_roberta.eval()\r\n \r\n# extract the hidden states at the layer 1\r\nhidden_states = best_model_roberta(input_ids=input_ids, attention_mask=attention_mask, labels=mc_labels, output_hidden_states=True)[2][1][:,:,:].detach()\r\n\r\n# access the multiple-choice head \r\nmc_logits = best_model_roberta.classifier(hidden_states).detach()\r\n \r\n# define the loss function\r\nloss_fct = CrossEntropyLoss()\r\n\r\n# calculate the mc_loss\r\nmc_loss = loss_fct(mc_logits.view(1,-1), mc_labels)\r\n```\r\n\r\nThe code above works without error, but I am particularly wondering if `mc_logits.view(1,-1)` is correct. The original HuggingFace code for RobertaForMultipleChoice uses `mc_logits.view(-1,num_choice)` to calculate the resulting error, but I am wondering if it is correct to specify `mc_logits.view(1,-1)` instead, if I am inputting the hidden state vectors directly into the multiple-choice head (i.e. instead of `pooled_output`...not sure what the `pooled_output` in the HuggingFace code is).\r\n\r\nThank you,\r\n\r\n ",
"I think you should use the pooler layer too, which is there to convert all your sequence tokens in one summary state. So\r\n```python\r\npooled_output = self.roberta.pooler(hidden_state)\r\nlogits = self.classifier(pooled_output)\r\nreshaped_logits = logits.view(-1, num_choices)\r\nloss_fct = CrossEntropyLoss()\r\nmc_loss = loss_fct(reshaped_logits, mc_labels)\r\n```"
] | 1,595 | 1,596 | 1,596 | NONE | null | Hello,
This question is specifically for the ```RobertaForMultipleChoice``` model.
I know how to extract the hidden state vectors from each layer of the ```RobertaForMultipleChoice``` model, but is there any way that I can directly feed the hidden state vectors of layer `j` as an input to the multiple-choice head of the model?
In other words, I would like to know if there is any way that I can control the input of the multiple-choice head of the ```RobertaForMultipleChoice``` model (or if there is any easy way that I can access the multiple-choice head directly).
Thank you, | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6033/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6033/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6032 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6032/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6032/comments | https://api.github.com/repos/huggingface/transformers/issues/6032/events | https://github.com/huggingface/transformers/pull/6032 | 665,628,177 | MDExOlB1bGxSZXF1ZXN0NDU2NjQ1NDg1 | 6,032 | Create README.md | {
"login": "ramsrigouthamg",
"id": 1754080,
"node_id": "MDQ6VXNlcjE3NTQwODA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1754080?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ramsrigouthamg",
"html_url": "https://github.com/ramsrigouthamg",
"followers_url": "https://api.github.com/users/ramsrigouthamg/followers",
"following_url": "https://api.github.com/users/ramsrigouthamg/following{/other_user}",
"gists_url": "https://api.github.com/users/ramsrigouthamg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ramsrigouthamg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ramsrigouthamg/subscriptions",
"organizations_url": "https://api.github.com/users/ramsrigouthamg/orgs",
"repos_url": "https://api.github.com/users/ramsrigouthamg/repos",
"events_url": "https://api.github.com/users/ramsrigouthamg/events{/privacy}",
"received_events_url": "https://api.github.com/users/ramsrigouthamg/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6032?src=pr&el=h1) Report\n> Merging [#6032](https://codecov.io/gh/huggingface/transformers/pull/6032?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c69ea5efc4eac65b183e8d07b1bf91d20bbe0c8c&el=desc) will **increase** coverage by `0.01%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6032?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6032 +/- ##\n==========================================\n+ Coverage 78.50% 78.51% +0.01% \n==========================================\n Files 146 146 \n Lines 26249 26249 \n==========================================\n+ Hits 20606 20610 +4 \n+ Misses 5643 5639 -4 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6032?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6032/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+1.00%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6032?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6032?src=pr&el=footer). Last update [c69ea5e...cbb3748](https://codecov.io/gh/huggingface/transformers/pull/6032?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"@sgugger ! Hi Sylvain, Wanted to get the pull request for this model card approved. If you can take a look that would be great :) Thanks for your time.\r\n\r\nSincerely,\r\nRamsri"
] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | Adding model card - readme | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6032/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6032/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6032",
"html_url": "https://github.com/huggingface/transformers/pull/6032",
"diff_url": "https://github.com/huggingface/transformers/pull/6032.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6032.patch",
"merged_at": 1595881538000
} |
https://api.github.com/repos/huggingface/transformers/issues/6031 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6031/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6031/comments | https://api.github.com/repos/huggingface/transformers/issues/6031/events | https://github.com/huggingface/transformers/issues/6031 | 665,556,984 | MDU6SXNzdWU2NjU1NTY5ODQ= | 6,031 | Error with batch_encode_plus | {
"login": "Nithin-Holla",
"id": 19574344,
"node_id": "MDQ6VXNlcjE5NTc0MzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/19574344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Nithin-Holla",
"html_url": "https://github.com/Nithin-Holla",
"followers_url": "https://api.github.com/users/Nithin-Holla/followers",
"following_url": "https://api.github.com/users/Nithin-Holla/following{/other_user}",
"gists_url": "https://api.github.com/users/Nithin-Holla/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Nithin-Holla/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Nithin-Holla/subscriptions",
"organizations_url": "https://api.github.com/users/Nithin-Holla/orgs",
"repos_url": "https://api.github.com/users/Nithin-Holla/repos",
"events_url": "https://api.github.com/users/Nithin-Holla/events{/privacy}",
"received_events_url": "https://api.github.com/users/Nithin-Holla/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, if want to encode as sentence pair then you should do it like this\r\n```python3\r\n[ [\"sent1\", \"sent2\"] ]\r\n```\r\nelse just pass the examples in a single list\r\n```python3\r\n[\"example1\", \"example2\"]\r\n```\r\n\r\nAlso, consider switching to the new tokenizer API. You can find the docs [here](https://huggingface.co/transformers/preprocessing.html)",
"@patil-suraj I didn't intend to encode it as a sentence pair, but rather a batch of single sentences of size 2.",
"Then you can just pass a list of examples. No need for nested list",
"Okay, thanks!"
] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | # 🐛 Bug
## Information
When using the tokenizer method `batch_encode_plus` on the new `transformers` version, I run into an error.
Model I am using (Bert, XLNet ...): BertTokenizer
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Here is a sample script to reproduce the error:
```python
import torch
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
text = [['sample text 1'], ['sample text 2']]
encode = tokenizer.batch_encode_plus(text)
```
The stack trace is:
```
Traceback (most recent call last):
File "bug.py", line 8, in <module>
encode = tokenizer.batch_encode_plus(text)
File "/home/nithinh/debug/lib/python3.6/site-packages/transformers/tokenization_utils_base.py", line 1832, in batch_encode_plus
**kwargs,
File "/home/nithinh/debug/lib/python3.6/site-packages/transformers/tokenization_utils.py", line 534, in _batch_encode_plus
ids, pair_ids = ids_or_pair_ids
ValueError: not enough values to unpack (expected 2, got 1)
```
## Expected behavior
Successfully tokenize without errors. The script runs fine with `transformers` v2.11.0.
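For reference, the resolution from the comments above as a runnable sketch - a flat list is what a batch of single sentences should look like, while nested two-element lists are reserved for sentence pairs:
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

# a batch of two single sentences: pass a flat list, not a nested one
encode = tokenizer.batch_encode_plus(['sample text 1', 'sample text 2'])

# a batch of sentence pairs: each inner list holds the two paired sentences
encode_pairs = tokenizer.batch_encode_plus([['sent 1a', 'sent 1b'], ['sent 2a', 'sent 2b']])
```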
## Environment info
- `transformers` version: 3.0.2
- Platform: Linux-4.19.0-9-amd64-x86_64-with-debian-10.4
- Python version: 3.6.3
- PyTorch version (GPU?): 1.5.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6031/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6031/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6030 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6030/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6030/comments | https://api.github.com/repos/huggingface/transformers/issues/6030/events | https://github.com/huggingface/transformers/pull/6030 | 665,554,075 | MDExOlB1bGxSZXF1ZXN0NDU2NTkzNDg2 | 6,030 | create model-card for lordtt13/emo-mobilebert | {
"login": "lordtt13",
"id": 35500534,
"node_id": "MDQ6VXNlcjM1NTAwNTM0",
"avatar_url": "https://avatars.githubusercontent.com/u/35500534?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lordtt13",
"html_url": "https://github.com/lordtt13",
"followers_url": "https://api.github.com/users/lordtt13/followers",
"following_url": "https://api.github.com/users/lordtt13/following{/other_user}",
"gists_url": "https://api.github.com/users/lordtt13/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lordtt13/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lordtt13/subscriptions",
"organizations_url": "https://api.github.com/users/lordtt13/orgs",
"repos_url": "https://api.github.com/users/lordtt13/repos",
"events_url": "https://api.github.com/users/lordtt13/events{/privacy}",
"received_events_url": "https://api.github.com/users/lordtt13/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6030?src=pr&el=h1) Report\n> Merging [#6030](https://codecov.io/gh/huggingface/transformers/pull/6030?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c69ea5efc4eac65b183e8d07b1bf91d20bbe0c8c&el=desc) will **increase** coverage by `0.14%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6030?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6030 +/- ##\n==========================================\n+ Coverage 78.50% 78.64% +0.14% \n==========================================\n Files 146 146 \n Lines 26249 26249 \n==========================================\n+ Hits 20606 20643 +37 \n+ Misses 5643 5606 -37 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6030?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6030/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.90% <0.00%> (-33.90%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6030/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.20% <0.00%> (-2.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6030/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.53% <0.00%> (+69.51%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6030?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6030?src=pr&el=footer). Last update [c69ea5e...fa74b6b](https://codecov.io/gh/huggingface/transformers/pull/6030?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Interesting project, thanks for sharing. **[Model page](https://huggingface.co/lordtt13/emo-mobilebert)**"
] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6030/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6030/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6030",
"html_url": "https://github.com/huggingface/transformers/pull/6030",
"diff_url": "https://github.com/huggingface/transformers/pull/6030.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6030.patch",
"merged_at": 1595944824000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/6029 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6029/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6029/comments | https://api.github.com/repos/huggingface/transformers/issues/6029/events | https://github.com/huggingface/transformers/issues/6029 | 665,553,912 | MDU6SXNzdWU2NjU1NTM5MTI= | 6,029 | tensorflow bert model can‘t return all hidden_states | {
"login": "SunYanCN",
"id": 42198591,
"node_id": "MDQ6VXNlcjQyMTk4NTkx",
"avatar_url": "https://avatars.githubusercontent.com/u/42198591?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunYanCN",
"html_url": "https://github.com/SunYanCN",
"followers_url": "https://api.github.com/users/SunYanCN/followers",
"following_url": "https://api.github.com/users/SunYanCN/following{/other_user}",
"gists_url": "https://api.github.com/users/SunYanCN/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunYanCN/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunYanCN/subscriptions",
"organizations_url": "https://api.github.com/users/SunYanCN/orgs",
"repos_url": "https://api.github.com/users/SunYanCN/repos",
"events_url": "https://api.github.com/users/SunYanCN/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunYanCN/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I'm encountering the exact same error - and it also happens when trying to output the attention keys. \r\nI would really like to know where the problem comes from as well! ",
"@sshleifer ",
"@julien-c ",
"transformers==2.7.0 solved the issue in my case",
"> transformers==2.7.0 solved the issue in my case\r\n\r\nThanks a lot. It solved my case too",
"Thank you so much @VietHoang1710, worked for me - The same problem with tensorflow roberta model, lost a lot of time to this one!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,595 | 1,603 | 1,603 | NONE | null | ```
import tensorflow as tf
import tensorflow_datasets
from transformers import *
from tensorflow.keras import layers
configuration = BertConfig.from_pretrained('bert-base-cased', output_hidden_states=True)
# Load dataset, tokenizer, model from pretrained model/vocabulary
tokenizer = BertTokenizer.from_pretrained('bert-base-cased', config=configuration)
# BERT encoder
encoder = TFBertModel.from_pretrained('bert-base-cased', config=configuration)
# Model
input_ids = layers.Input(shape=(100,), dtype=tf.int32)
token_type_ids = layers.Input(shape=(100,), dtype=tf.int32)
attention_mask = layers.Input(shape=(100,), dtype=tf.int32)
outputs = encoder(
input_ids, token_type_ids=token_type_ids, attention_mask=attention_mask
)
print(outputs)
_, _, hidden_states = outputs[0], outputs[1], outputs[2]
```
output:
```
If your task is similar to the task the model of the ckeckpoint was trained on, you can already use TFBertModel for predic
(<tf.Tensor 'tf_bert_model_2/Identity:0' shape=(None, 100, 768) dtype=float32>, <tf.Tensor 'tf_bert_model_2/Identity_1:0' shape=(None, 768) dtype=float32>)
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-7-63944e137dcd> in <module>()
17 )
18 print(outputs)
---> 19 _, _, hidden_states = outputs[0], outputs[1], outputs[2]
IndexError: tuple index out of range
```
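As an aside, the workaround confirmed in the comments above was pinning `transformers==2.7.0`. Separately, since the failure above happens with Keras symbolic `Input` placeholders, here is an untested sketch of an eager call on concrete tensors - whether this sidesteps the issue on 3.0.x is an assumption, not something verified in this thread:
```python
from transformers import BertConfig, BertTokenizer, TFBertModel

config = BertConfig.from_pretrained('bert-base-cased', output_hidden_states=True)
tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
encoder = TFBertModel.from_pretrained('bert-base-cased', config=config)

# eager call on concrete tensors instead of Keras symbolic Input placeholders
inputs = tokenizer('a short test sentence', max_length=100,
                   padding='max_length', truncation=True, return_tensors='tf')
outputs = encoder(inputs)
print(len(outputs))  # expecting 3: last_hidden_state, pooler_output, hidden_states
```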
You can check it:
[colab code](https://colab.research.google.com/drive/1zZo2SwBuGoCHsbvL3_fVR5rygWiuR16-?usp=sharing) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6029/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6029/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6028 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6028/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6028/comments | https://api.github.com/repos/huggingface/transformers/issues/6028/events | https://github.com/huggingface/transformers/pull/6028 | 665,504,416 | MDExOlB1bGxSZXF1ZXN0NDU2NTU4MTAz | 6,028 | examples/text-classification/run_pl.sh multiple problems | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6028?src=pr&el=h1) Report\n> Merging [#6028](https://codecov.io/gh/huggingface/transformers/pull/6028?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c69ea5efc4eac65b183e8d07b1bf91d20bbe0c8c&el=desc) will **increase** coverage by `0.18%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6028?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6028 +/- ##\n==========================================\n+ Coverage 78.50% 78.68% +0.18% \n==========================================\n Files 146 146 \n Lines 26249 26249 \n==========================================\n+ Hits 20606 20655 +49 \n+ Misses 5643 5594 -49 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6028?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6028/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.90% <0.00%> (-33.90%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6028/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (+0.75%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6028/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.53% <0.00%> (+69.51%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6028?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6028?src=pr&el=footer). Last update [c69ea5e...414fbe9](https://codecov.io/gh/huggingface/transformers/pull/6028?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Looks like both of us worked on this at the same time - a different solution - https://github.com/huggingface/transformers/pull/6027",
"The code of this PR eventually got in the hard way via 3 PRs: https://github.com/huggingface/transformers/pull/6027, https://github.com/huggingface/transformers/pull/6307 and https://github.com/huggingface/transformers/pull/6314"
] | 1,595 | 1,596 | 1,596 | CONTRIBUTOR | null | Fixing this sequence of errors - each fix required for the next error
running:
```
cd examples/text-classification
./run_pl.sh
```
error 1:
```
Traceback (most recent call last):
File "run_pl_glue.py", line 183, in <module>
trainer = generic_train(model, args)
File "/mnt/nvme1/code/huggingface/transformers-issue-1/examples/lightning_base.py", line 289, in generic_train
if args.gpus > 1:
AttributeError: 'Namespace' object has no attribute 'gpus'
```
solution: added `--n_gpus` arg
error 2:
```
Traceback (most recent call last):
File "run_pl_glue.py", line 183, in <module>
trainer = generic_train(model, args)
File "/mnt/nvme1/code/huggingface/transformers-issue-1/examples/lightning_base.py", line 300, in generic_train
**train_params,
File "/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 853, in from_argparse_args
return cls(**trainer_kwargs)
File "/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 468, in __init__
self.tpu_cores = _parse_tpu_cores(tpu_cores)
File "/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/pytorch_lightning/trainer/distrib_parts.py", line 526, in _parse_tpu_cores
raise MisconfigurationException("`tpu_cores` can only be 1, 8 or [<1-8>]")
pytorch_lightning.utilities.exceptions.MisconfigurationException: `tpu_cores` can only be 1, 8 or [<1-8>]
```
solution: removed `default=0` for `tpu_cores`
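A hypothetical one-line illustration of that change (argument names assumed here, not quoted from the actual diff):
```python
import argparse

parser = argparse.ArgumentParser()
# before (triggers the MisconfigurationException above when no TPU is requested):
# parser.add_argument("--n_tpu_cores", dest="tpu_cores", type=int, default=0)
# after: with no default the value stays None, which PL accepts as "no TPU"
parser.add_argument("--n_tpu_cores", dest="tpu_cores", type=int)
```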
error 3:
```
Traceback (most recent call last):
File "run_pl_glue.py", line 183, in <module>
trainer = generic_train(model, args)
File "/mnt/nvme1/code/huggingface/transformers-issue-1/examples/lightning_base.py", line 304, in generic_train
trainer.fit(model)
File "/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1038, in fit
model.setup('fit')
File "/mnt/nvme1/code/huggingface/transformers-issue-1/examples/lightning_base.py", line 125, in setup
dataloader = self.get_dataloader("train", train_batch_size)
File "/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/torch/nn/modules/module.py", line 594, in __getattr__
type(self).__name__, name))
AttributeError: 'GLUETransformer' object has no attribute 'get_dataloader'
```
solution: added a wrapper - but it's incomplete - what to do with the `shuffle` arg?
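A hypothetical shape of that wrapper (the method and helper names are assumptions for illustration, and the `shuffle` question stands):
```python
def get_dataloader(self, mode: str, batch_size: int, shuffle: bool = False):
    # minimal sketch: delegate to the existing mode-specific loader on the module;
    # open question from above: should `shuffle` be honoured here, and how?
    return self.load_dataset(mode, batch_size)
```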
error 4:
```
Traceback (most recent call last):
File "run_pl_glue.py", line 187, in <module>
trainer = generic_train(model, args)
File "/mnt/nvme1/code/huggingface/transformers-issue-1/examples/lightning_base.py", line 306, in generic_train
trainer.fit(model)
File "/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1044, in fit
results = self.run_pretrain_routine(model)
File "/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1213, in run_pretrain_routine
self.train()
File "/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 370, in train
self.run_training_epoch()
File "/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 452, in run_training_epoch
batch_output = self.run_training_batch(batch, batch_idx)
File "/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 632, in run_training_batch
self.hiddens
File "/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 776, in optimizer_closure
hiddens)
File "/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 956, in training_forward
output = self.model.training_step(*args)
File "run_pl_glue.py", line 44, in training_step
tensorboard_logs = {"loss": loss, "rate": self.lr_scheduler.get_last_lr()[-1]}
File "/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/torch/nn/modules/module.py", line 594, in __getattr__
type(self).__name__, name))
AttributeError: 'GLUETransformer' object has no attribute 'lr_scheduler'
```
solution: I'm not sure how this used to work, but there is no `self.lr_scheduler` in pytorch-lightning (PL). I found one at `self.trainer.lr_schedulers[0]["scheduler"]` and set that attribute, though I have no idea whether this always works. Someone who wrote this script would probably know better where the missing attribute has gone. It is set inside `def fit` (CPU path), but on the `trainer` object and not on the `nn.Module`.
Further notes:
`run_pl.sh` invokes PL in CPU mode despite an available GPU. I haven't tested this on GPU yet - I just saw during debugging that PL [inits optimizers](https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pytorch_lightning/trainer/trainer.py#L1096) just before it runs `run_pretrain_routine`, so I didn't find an easy predefined PL hook where one could preset `self.lr_scheduler`.
Perhaps the PL API has changed and caused this issue?
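One hypothetical way to preset the attribute - a sketch only, where names like `self.total_steps` and the hparams fields are assumptions rather than what the script actually defines:
```python
from transformers import AdamW, get_linear_schedule_with_warmup

def configure_optimizers(self):
    optimizer = AdamW(self.parameters(), lr=self.hparams.learning_rate)
    scheduler = get_linear_schedule_with_warmup(
        optimizer,
        num_warmup_steps=self.hparams.warmup_steps,
        num_training_steps=self.total_steps,  # assumed to be computed elsewhere
    )
    self.lr_scheduler = scheduler  # keep a handle for the logging callbacks
    return [optimizer], [{"scheduler": scheduler, "interval": "step"}]
```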
error 5:
```
Traceback (most recent call last):
File "run_pl_glue.py", line 218, in <module>
trainer = generic_train(model, args)
File "/mnt/nvme1/code/huggingface/transformers-issue-1/examples/lightning_base.py", line 305, in generic_train
trainer.fit(model)
File "/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1044, in fit
results = self.run_pretrain_routine(model)
File "/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1213, in run_pretrain_routine
self.train()
File "/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 370, in train
self.run_training_epoch()
File "/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 452, in run_training_epoch
batch_output = self.run_training_batch(batch, batch_idx)
File "/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 671, in run_training_batch
self.on_batch_end()
File "/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/pytorch_lightning/trainer/callback_hook.py", line 82, in on_batch_end
callback.on_batch_end(self, self.get_model())
File "/mnt/nvme1/code/huggingface/transformers-issue-1/examples/lightning_base.py", line 198, in on_batch_end
lrs = {f"lr_group_{i}": lr for i, lr in enumerate(self.lr_scheduler.get_lr())}
AttributeError: 'LoggingCallback' object has no attribute 'lr_scheduler'
```
solution: see notes for error 4.
With these fixes the code at least starts training. I didn't test further, since clearly there is a better way. Only the fixes for the first 2 errors are obviously correct to merge.
All the fixes are in one PR, as one can't move to the next error before fixing the previous ones.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6028/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6028/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6028",
"html_url": "https://github.com/huggingface/transformers/pull/6028",
"diff_url": "https://github.com/huggingface/transformers/pull/6028.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6028.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/6027 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6027/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6027/comments | https://api.github.com/repos/huggingface/transformers/issues/6027/events | https://github.com/huggingface/transformers/pull/6027 | 665,494,213 | MDExOlB1bGxSZXF1ZXN0NDU2NTUwNTQ0 | 6,027 | [Fix] text-classification PL example | {
"login": "bhashithe",
"id": 13556459,
"node_id": "MDQ6VXNlcjEzNTU2NDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/13556459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhashithe",
"html_url": "https://github.com/bhashithe",
"followers_url": "https://api.github.com/users/bhashithe/followers",
"following_url": "https://api.github.com/users/bhashithe/following{/other_user}",
"gists_url": "https://api.github.com/users/bhashithe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhashithe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhashithe/subscriptions",
"organizations_url": "https://api.github.com/users/bhashithe/orgs",
"repos_url": "https://api.github.com/users/bhashithe/repos",
"events_url": "https://api.github.com/users/bhashithe/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhashithe/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1936351150,
"node_id": "MDU6TGFiZWwxOTM2MzUxMTUw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Examples",
"name": "Examples",
"color": "d4c5f9",
"default": false,
"description": "Which is related to examples in general"
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6027?src=pr&el=h1) Report\n> Merging [#6027](https://codecov.io/gh/huggingface/transformers/pull/6027?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e168488a74a41db0eddfa4699239c6f7b301c933&el=desc) will **increase** coverage by `1.04%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6027?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6027 +/- ##\n==========================================\n+ Coverage 77.46% 78.50% +1.04% \n==========================================\n Files 146 146 \n Lines 26243 26243 \n==========================================\n+ Hits 20330 20603 +273 \n+ Misses 5913 5640 -273 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6027?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6027/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.02% <0.00%> (-69.52%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6027/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `43.98% <0.00%> (-49.38%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6027/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (+1.25%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6027/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+33.89%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6027/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.77% <0.00%> (+73.38%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6027?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6027?src=pr&el=footer). Last update [e168488...4fbcde5](https://codecov.io/gh/huggingface/transformers/pull/6027?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"> Why wasn't any of this breaking tests? Is there some test coverage we could add to find these sorts of errors earlier?\r\n\r\n `run_pl_glue.py` isn't being tested. It's not in [`test_examples.py`](https://github.com/huggingface/transformers/blob/master/examples/test_examples.py) and it doesn't have a dedicated test file like some other examples do.\r\n\r\nBesides, looking at the output for examples test:\r\n\r\nhttps://circleci.com/gh/huggingface/transformers/64559?utm_campaign=vcs-integration-link&utm_medium=referral&utm_source=github-build-link\r\n\r\nit's impossible to tell which examples are being run and which aren't. It will only indicate the name on failure. Perhaps at the very least adding a pytest option so that it can announce which tests were run? Submitted PR to do just that: https://github.com/huggingface/transformers/pull/6035\r\n",
"Here is a PR that adds the missing PL glue test: https://github.com/huggingface/transformers/pull/6034 (which obviously fails by CI - a good thing).\r\n",
"@stas00 you can at least see all the files that are run with\r\n`ls examples/**/test*.py`. ",
"> @stas00 you can at least see all the files that are run with\r\n> `ls examples/**/test*.py`.\r\n\r\nyou want one dir up as well, so:\r\n\r\n`ls -1 examples/test*.py examples/*/test*.py`\r\n\r\nbut it tells only part of the story, since most info is hidden in `examples/test_examples.py`. e.g. you can't tell pl glue is not being there.\r\n\r\n",
"How does this break tf tests? Looks like the model save still has issues with the state_dict its saving.",
"TF failures are spurious.",
"Merging this, thanks @bhashithe, @stas00 @laibamehnaz and everyone else who helped!",
"The merged https://github.com/huggingface/transformers/pull/6027 broke `examples/seq2seq/test_seq2seq_examples.py::test_finetune_lr_shedulers` - which I think was flagged by failing CI of that PR.\r\n\r\nyeah, PL already has `--gpus` - so it conflicts with the one added by 6027. So I will look at how to rework that need in a different way.",
"Let's continue the discussion here: https://github.com/huggingface/transformers/issues/6310"
] | 1,595 | 1,596 | 1,596 | CONTRIBUTOR | null | The text-classification example needed several edits to get it working. The main one was that the hparams are loaded from the checkpoint as a dict instead of a Namespace object, so this needed to be fixed by recasting the hparams to a Namespace object.
Though this is not the ideal solution, it works for now.
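A minimal sketch of the recast described above (illustrative only, not the merged diff):
```python
from argparse import Namespace

hparams = {"learning_rate": 5e-5}  # hparams restored from a checkpoint may be a plain dict
if isinstance(hparams, dict):
    hparams = Namespace(**hparams)
```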
I also have some other fixes, such as the gpus argument which needed to be added to the generic arguments list in `lightning_base.py`, and removing the default value for `n_tpu_cores`. The lr_scheduler was also not accessed correctly by the logging callback. These have all been fixed and the example works correctly. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6027/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6027/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6027",
"html_url": "https://github.com/huggingface/transformers/pull/6027",
"diff_url": "https://github.com/huggingface/transformers/pull/6027.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6027.patch",
"merged_at": 1596743203000
} |
https://api.github.com/repos/huggingface/transformers/issues/6026 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6026/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6026/comments | https://api.github.com/repos/huggingface/transformers/issues/6026/events | https://github.com/huggingface/transformers/pull/6026 | 665,477,337 | MDExOlB1bGxSZXF1ZXN0NDU2NTM3NDY0 | 6,026 | Fix tokenizer saving and loading error | {
"login": "yobekiko",
"id": 25716647,
"node_id": "MDQ6VXNlcjI1NzE2NjQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/25716647?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yobekiko",
"html_url": "https://github.com/yobekiko",
"followers_url": "https://api.github.com/users/yobekiko/followers",
"following_url": "https://api.github.com/users/yobekiko/following{/other_user}",
"gists_url": "https://api.github.com/users/yobekiko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yobekiko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yobekiko/subscriptions",
"organizations_url": "https://api.github.com/users/yobekiko/orgs",
"repos_url": "https://api.github.com/users/yobekiko/repos",
"events_url": "https://api.github.com/users/yobekiko/events{/privacy}",
"received_events_url": "https://api.github.com/users/yobekiko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> This is great! Do you mind adding a test under `tests/test_tokenization_common.py` so that we may ensure this doesn't fail in the future?\r\n\r\nHi I added into an existing test. Please check if it's appropriate.",
"Hello! I added a more robust test and fixed the style issue. Thanks a lot for your contribution, merging as soon as all the tests show green!"
] | 1,595 | 1,597 | 1,597 | CONTRIBUTOR | null | This PR is to address #6025 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6026/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6026/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6026",
"html_url": "https://github.com/huggingface/transformers/pull/6026",
"diff_url": "https://github.com/huggingface/transformers/pull/6026.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6026.patch",
"merged_at": 1597135756000
} |
https://api.github.com/repos/huggingface/transformers/issues/6025 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6025/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6025/comments | https://api.github.com/repos/huggingface/transformers/issues/6025/events | https://github.com/huggingface/transformers/issues/6025 | 665,477,025 | MDU6SXNzdWU2NjU0NzcwMjU= | 6,025 | Failed to save tokenizer with AddedToken in additional_special_tokens | {
"login": "yobekiko",
"id": 25716647,
"node_id": "MDQ6VXNlcjI1NzE2NjQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/25716647?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yobekiko",
"html_url": "https://github.com/yobekiko",
"followers_url": "https://api.github.com/users/yobekiko/followers",
"following_url": "https://api.github.com/users/yobekiko/following{/other_user}",
"gists_url": "https://api.github.com/users/yobekiko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yobekiko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yobekiko/subscriptions",
"organizations_url": "https://api.github.com/users/yobekiko/orgs",
"repos_url": "https://api.github.com/users/yobekiko/repos",
"events_url": "https://api.github.com/users/yobekiko/events{/privacy}",
"received_events_url": "https://api.github.com/users/yobekiko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Fixed by #6026"
] | 1,595 | 1,597 | 1,597 | CONTRIBUTOR | null | # 🐛 Bug
I tried to add new special tokens to a tokenizer. I wanted them to be `AddedToken` objects in order to handle specific whitespace stripping, but I got the following error when saving the tokenizer.
## To reproduce
Steps to reproduce the behavior:
```python
>>> from transformers import BertTokenizer
>>> from tokenizers import AddedToken
>>> tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
>>> new_token = AddedToken('new_token', lstrip=True)
>>> tokenizer.add_special_tokens({'additional_special_tokens': [new_token]})
1
>>> tokenizer.save_pretrained('.')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/junyuan/anaconda3/envs/deep/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 1371, in save_pretrained
f.write(json.dumps(write_dict, ensure_ascii=False))
File "/home/junyuan/anaconda3/envs/deep/lib/python3.7/json/__init__.py", line 238, in dumps
**kw).encode(obj)
File "/home/junyuan/anaconda3/envs/deep/lib/python3.7/json/encoder.py", line 199, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/home/junyuan/anaconda3/envs/deep/lib/python3.7/json/encoder.py", line 257, in iterencode
return _iterencode(o, 0)
File "/home/junyuan/anaconda3/envs/deep/lib/python3.7/json/encoder.py", line 179, in default
raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type AddedToken is not JSON serializable
```
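Until a fix lands (see the "Fixed by #6026" note in the comments above), a workaround sketch is to register the token as a plain string - strings serialize fine, at the cost of the `lstrip` behaviour that motivated using `AddedToken`:
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
# a plain string (unlike an AddedToken) is JSON serializable, but loses lstrip control
tokenizer.add_special_tokens({'additional_special_tokens': ['new_token']})
tokenizer.save_pretrained('.')
```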
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6025/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6025/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6024 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6024/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6024/comments | https://api.github.com/repos/huggingface/transformers/issues/6024/events | https://github.com/huggingface/transformers/pull/6024 | 665,386,498 | MDExOlB1bGxSZXF1ZXN0NDU2NDY1NjE5 | 6,024 | Feed forward chunking | {
"login": "Pradhy729",
"id": 49659913,
"node_id": "MDQ6VXNlcjQ5NjU5OTEz",
"avatar_url": "https://avatars.githubusercontent.com/u/49659913?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Pradhy729",
"html_url": "https://github.com/Pradhy729",
"followers_url": "https://api.github.com/users/Pradhy729/followers",
"following_url": "https://api.github.com/users/Pradhy729/following{/other_user}",
"gists_url": "https://api.github.com/users/Pradhy729/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Pradhy729/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Pradhy729/subscriptions",
"organizations_url": "https://api.github.com/users/Pradhy729/orgs",
"repos_url": "https://api.github.com/users/Pradhy729/repos",
"events_url": "https://api.github.com/users/Pradhy729/events{/privacy}",
"received_events_url": "https://api.github.com/users/Pradhy729/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@patrickvonplaten - here's an initial implementation I have. My first step is to get the model to work with chunked feed forward - and it works! I still need to run the benchmark test to find out the benefits in terms of memory.\r\n\r\nHowever, I see a problem. The new architecture causes some of the nn.Module weights and bias parameter-names to change - which would be a problem with loading existing pretrained weights from checkpoints. \r\nFor example:\r\n`bert.encoder.layer.0.intermediate.dense.weight` --> becomes `bert.encoder.layer.0.feed_forward.dense.dense.weight`\r\n\r\nSee the failing tests for more details. Any thoughts/ideas on how to get around this?\r\n",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6024?src=pr&el=h1) Report\n> Merging [#6024](https://codecov.io/gh/huggingface/transformers/pull/6024?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/175cd45e13b2e33d1efec9e2ac217cba99f6ae58&el=desc) will **decrease** coverage by `0.31%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6024?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6024 +/- ##\n==========================================\n- Coverage 79.44% 79.12% -0.32% \n==========================================\n Files 148 148 \n Lines 27193 27198 +5 \n==========================================\n- Hits 21604 21521 -83 \n- Misses 5589 5677 +88 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6024?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6024/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6024/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.57% <100.00%> (+0.02%)` | :arrow_up: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6024/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.49% <100.00%> (+0.09%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6024/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `69.31% <0.00%> (-26.18%)` | :arrow_down: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6024/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `76.71% <0.00%> (-21.92%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6024/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `86.43% <0.00%> (-7.42%)` | :arrow_down: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6024/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `38.73% <0.00%> (-3.76%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6024/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `92.14% <0.00%> (-2.15%)` | :arrow_down: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6024/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.57% <0.00%> (-1.52%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6024/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.07% <0.00%> (-0.45%)` | :arrow_down: |\n| ... 
and [4 more](https://codecov.io/gh/huggingface/transformers/pull/6024/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6024?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6024?src=pr&el=footer). Last update [175cd45...406d621](https://codecov.io/gh/huggingface/transformers/pull/6024?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"hh",
"Hey @Pradhy729, thanks a lot for continuing the PR. I made a couple of changes: fix the docs and added tests for all models, whereas only Reformer and Bert tests are on for now. \r\n\r\n Would be great if @LysandreJik @sgugger @thomwolf @sshleifer can review. \r\n\r\nThis PR shows how `feed_forward_chunking` can be employed for all models. Feed forward chunking is explained here: https://huggingface.co/blog/reformer#2-chunked-feed-forward-layers in combination with some benchmarking. It can give good memory improvements for certain model architectures.\r\nFor Bert a test is added showing that the model gives equal results. This function can easily be added to other models, the same way it was done for BERT. There is no real drawback in implementing this IMO. \r\n\r\n**To-Do after review is positive**:\r\n1. Add feed forward chunking to more models. @Pradhy729, feel free to add it to as many models as you want. The rest can also be added in a new PR or we open a \"first good issue\" for it.\r\n2. Add feed forward chunking for language modeling loss. Chunking of feed forward layers in the attention block is often not really helpful to save memory - only if the model has very few attention heads. On the other hand, a real bottleneck is often the last word embedding layer. When training the loss does not have to be calculated in one huge batch (over time dim), but can be chunked the same way it is done here for Feed forward layers. This is not even really implemented in Reformer yet and would definitely require a new PR.",
"Great! Thanks @patrickvonplaten \r\nWill wait for reviewers and start working on the others.\r\n"
] | 1,595 | 1,599 | 1,597 | CONTRIBUTOR | null | Official PR for #5928 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6024/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6024/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6024",
"html_url": "https://github.com/huggingface/transformers/pull/6024",
"diff_url": "https://github.com/huggingface/transformers/pull/6024.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6024.patch",
"merged_at": 1597129965000
} |
https://api.github.com/repos/huggingface/transformers/issues/6023 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6023/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6023/comments | https://api.github.com/repos/huggingface/transformers/issues/6023/events | https://github.com/huggingface/transformers/pull/6023 | 665,366,697 | MDExOlB1bGxSZXF1ZXN0NDU2NDQ5MTMw | 6,023 | Remove unused file | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6023?src=pr&el=h1) Report\n> Merging [#6023](https://codecov.io/gh/huggingface/transformers/pull/6023?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a884b7fa38de9082a6f3f7889b9f7348a8dadbf5&el=desc) will **decrease** coverage by `1.37%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6023?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6023 +/- ##\n==========================================\n- Coverage 78.68% 77.31% -1.38% \n==========================================\n Files 146 146 \n Lines 26249 26249 \n==========================================\n- Hits 20655 20295 -360 \n- Misses 5594 5954 +360 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6023?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6023/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6023/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.02% <0.00%> (-69.52%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6023/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+33.89%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6023/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6023?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6023?src=pr&el=footer). Last update [a884b7f...0cb4788](https://codecov.io/gh/huggingface/transformers/pull/6023?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,595 | 1,595 | 1,595 | COLLABORATOR | null | This seems to be the old script to deploy the docs, new one is `.circleci/deploy.sh`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6023/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6023/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6023",
"html_url": "https://github.com/huggingface/transformers/pull/6023",
"diff_url": "https://github.com/huggingface/transformers/pull/6023.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6023.patch",
"merged_at": 1595853084000
} |
https://api.github.com/repos/huggingface/transformers/issues/6022 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6022/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6022/comments | https://api.github.com/repos/huggingface/transformers/issues/6022/events | https://github.com/huggingface/transformers/pull/6022 | 665,330,741 | MDExOlB1bGxSZXF1ZXN0NDU2NDE5NTI2 | 6,022 | Fix the return documentation rendering for all model outputs | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6022?src=pr&el=h1) Report\n> Merging [#6022](https://codecov.io/gh/huggingface/transformers/pull/6022?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3996041d0ae23ce23dfb8a343e6344f2f8d54c16&el=desc) will **increase** coverage by `0.40%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6022?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6022 +/- ##\n==========================================\n+ Coverage 78.29% 78.70% +0.40% \n==========================================\n Files 146 146 \n Lines 26249 26268 +19 \n==========================================\n+ Hits 20552 20674 +122 \n+ Misses 5697 5594 -103 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6022?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6022/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `79.12% <ø> (ø)` | |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6022/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.20% <100.00%> (+1.00%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6022/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `43.98% <0.00%> (-49.38%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6022/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (ø)` | |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6022/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.25% <0.00%> (+1.29%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6022/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `95.18% <0.00%> (+74.91%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6022?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6022?src=pr&el=footer). Last update [3996041...345372d](https://codecov.io/gh/huggingface/transformers/pull/6022?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,595 | 1,595 | 1,595 | COLLABORATOR | null | All PyTorch model outputs are documented from their output types. A problem is that just using the docstrings of the output class doesn't render properly on sphinx (this was also the case before the new model outputs were introduced).
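A toy sketch of the conversion idea described in the next paragraph - every name here is invented for illustration; the PR's actual helper lives in `file_utils.py` per the coverage diff above:
```python
import re

def splice_return_docstring(output_type):
    """Hypothetical decorator: turn the output class's args docstring into an
    indented Returns section on the decorated forward method."""
    def decorator(fn):
        args_section = (output_type.__doc__ or "").split("Args:", 1)[-1]
        returns = "\n    Returns:\n" + re.sub(r"^", "        ", args_section, flags=re.M)
        fn.__doc__ = (fn.__doc__ or "") + returns
        return fn
    return decorator
```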
This PR adds a function that converts the args part of the docstrings of the output class to render properly on our doc. You can see the transformation by looking [here](https://huggingface.co/transformers/master/model_doc/bert.html#transformers.BertModel.forward) for the docs before this PR and [here](https://64423-155220641-gh.circle-artifacts.com/0/docs/_build/html/model_doc/bert.html#transformers.BertModel.forward) for after it's merged (it's one example, but it will give the same result for all models). Scroll a bit to get to the return part of the doc. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6022/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6022/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6022",
"html_url": "https://github.com/huggingface/transformers/pull/6022",
"diff_url": "https://github.com/huggingface/transformers/pull/6022.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6022.patch",
"merged_at": 1595855940000
} |
https://api.github.com/repos/huggingface/transformers/issues/6021 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6021/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6021/comments | https://api.github.com/repos/huggingface/transformers/issues/6021/events | https://github.com/huggingface/transformers/pull/6021 | 665,299,071 | MDExOlB1bGxSZXF1ZXN0NDU2MzkzOTM1 | 6,021 | [CI] Don't test apex | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6021?src=pr&el=h1) Report\n> Merging [#6021](https://codecov.io/gh/huggingface/transformers/pull/6021?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3996041d0ae23ce23dfb8a343e6344f2f8d54c16&el=desc) will **increase** coverage by `0.39%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6021?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6021 +/- ##\n==========================================\n+ Coverage 78.29% 78.68% +0.39% \n==========================================\n Files 146 146 \n Lines 26249 26249 \n==========================================\n+ Hits 20552 20655 +103 \n+ Misses 5697 5594 -103 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6021?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6021/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `43.98% <0.00%> (-49.38%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6021/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (ø)` | |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6021/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.25% <0.00%> (+1.29%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6021/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `95.18% <0.00%> (+74.91%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6021?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6021?src=pr&el=footer). Last update [3996041...14292a2](https://codecov.io/gh/huggingface/transformers/pull/6021?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6021/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6021/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6021",
"html_url": "https://github.com/huggingface/transformers/pull/6021",
"diff_url": "https://github.com/huggingface/transformers/pull/6021.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6021.patch",
"merged_at": 1595619257000
} |