Column schema (name, dtype, observed length/value range or number of classes):

url                  stringlengths   62-66
repository_url       stringclasses   1 value
labels_url           stringlengths   76-80
comments_url         stringlengths   71-75
events_url           stringlengths   69-73
html_url             stringlengths   50-56
id                   int64           377M-2.15B
node_id              stringlengths   18-32
number               int64           1-29.2k
title                stringlengths   1-487
user                 dict
labels               list
state                stringclasses   2 values
locked               bool            2 classes
assignee             dict
assignees            list
comments             sequence
created_at           int64           1.54k-1.71k
updated_at           int64           1.54k-1.71k
closed_at            int64           1.54k-1.71k
author_association   stringclasses   4 values
active_lock_reason   stringclasses   2 values
body                 stringlengths   0-234k
reactions            dict
timeline_url         stringlengths   71-75
state_reason         stringclasses   3 values
draft                bool            2 classes
pull_request         dict
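Each record below lists these fields in this order. As a minimal sketch of how such records could be inspected programmatically (assuming they are available as a JSON Lines file named `transformers_issues.jsonl` — the file name and storage format are assumptions, not stated here):

```python
from datasets import load_dataset

# Assumption: the records are stored as JSON Lines; the actual storage format is not given.
ds = load_dataset("json", data_files="transformers_issues.jsonl", split="train")

# Keep only closed pull requests, using the `state` and `pull_request` columns above.
closed_prs = ds.filter(lambda row: row["state"] == "closed" and row["pull_request"] is not None)
print(len(closed_prs), closed_prs[0]["title"])
```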
url: https://api.github.com/repos/huggingface/transformers/issues/4820
repository_url: https://api.github.com/repos/huggingface/transformers
labels_url: https://api.github.com/repos/huggingface/transformers/issues/4820/labels{/name}
comments_url: https://api.github.com/repos/huggingface/transformers/issues/4820/comments
events_url: https://api.github.com/repos/huggingface/transformers/issues/4820/events
html_url: https://github.com/huggingface/transformers/pull/4820
id: 632,707,174
node_id: MDExOlB1bGxSZXF1ZXN0NDI5NDIwNzY3
number: 4,820
title: Updates args in tf squad example.
{ "login": "daniel-shan", "id": 8588419, "node_id": "MDQ6VXNlcjg1ODg0MTk=", "avatar_url": "https://avatars.githubusercontent.com/u/8588419?v=4", "gravatar_id": "", "url": "https://api.github.com/users/daniel-shan", "html_url": "https://github.com/daniel-shan", "followers_url": "https://api.github.com/users/daniel-shan/followers", "following_url": "https://api.github.com/users/daniel-shan/following{/other_user}", "gists_url": "https://api.github.com/users/daniel-shan/gists{/gist_id}", "starred_url": "https://api.github.com/users/daniel-shan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/daniel-shan/subscriptions", "organizations_url": "https://api.github.com/users/daniel-shan/orgs", "repos_url": "https://api.github.com/users/daniel-shan/repos", "events_url": "https://api.github.com/users/daniel-shan/events{/privacy}", "received_events_url": "https://api.github.com/users/daniel-shan/received_events", "type": "User", "site_admin": false }
labels: []
state: closed
locked: false
assignee: null
assignees: []
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4820?src=pr&el=h1) Report\n> Merging [#4820](https://codecov.io/gh/huggingface/transformers/pull/4820?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c58e6c129a153ca1a5021e5d7e642d00bf011e20&el=desc) will **increase** coverage by `1.63%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4820/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4820?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4820 +/- ##\n==========================================\n+ Coverage 74.52% 76.15% +1.63% \n==========================================\n Files 128 128 \n Lines 21497 21497 \n==========================================\n+ Hits 16021 16372 +351 \n+ Misses 5476 5125 -351 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4820?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4820/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.00% <0.00%> (-1.36%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4820/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.20% <0.00%> (-0.16%)` | :arrow_down: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4820/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `86.42% <0.00%> (+0.18%)` | :arrow_up: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/4820/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `98.64% <0.00%> (+2.71%)` | :arrow_up: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/4820/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.26% <0.00%> (+6.29%)` | :arrow_up: |\n| [src/transformers/configuration\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4820/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `100.00% <0.00%> (+25.00%)` | :arrow_up: |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4820/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `93.02% <0.00%> (+75.48%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4820?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4820?src=pr&el=footer). Last update [c58e6c1...e7a60ca](https://codecov.io/gh/huggingface/transformers/pull/4820?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "LGTM, thanks! (cc'ing @jplu)" ]
created_at: 1,591
updated_at: 1,591
closed_at: 1,591
author_association: CONTRIBUTOR
active_lock_reason: null
body: Updates example for execution of `run-tf-squad.py` due to changes in https://github.com/huggingface/transformers/pull/4530, particularly removal of `mode` and `optimizer_name`.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4820/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/transformers/issues/4820/timeline
state_reason: null
draft: false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4820", "html_url": "https://github.com/huggingface/transformers/pull/4820", "diff_url": "https://github.com/huggingface/transformers/pull/4820.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4820.patch", "merged_at": 1591608970000 }
url: https://api.github.com/repos/huggingface/transformers/issues/4819
repository_url: https://api.github.com/repos/huggingface/transformers
labels_url: https://api.github.com/repos/huggingface/transformers/issues/4819/labels{/name}
comments_url: https://api.github.com/repos/huggingface/transformers/issues/4819/comments
events_url: https://api.github.com/repos/huggingface/transformers/issues/4819/events
html_url: https://github.com/huggingface/transformers/pull/4819
id: 632,664,006
node_id: MDExOlB1bGxSZXF1ZXN0NDI5MzgyNjU3
number: 4,819
title: Export PretrainedBartModel from __init__
{ "login": "BramVanroy", "id": 2779410, "node_id": "MDQ6VXNlcjI3Nzk0MTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BramVanroy", "html_url": "https://github.com/BramVanroy", "followers_url": "https://api.github.com/users/BramVanroy/followers", "following_url": "https://api.github.com/users/BramVanroy/following{/other_user}", "gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}", "starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions", "organizations_url": "https://api.github.com/users/BramVanroy/orgs", "repos_url": "https://api.github.com/users/BramVanroy/repos", "events_url": "https://api.github.com/users/BramVanroy/events{/privacy}", "received_events_url": "https://api.github.com/users/BramVanroy/received_events", "type": "User", "site_admin": false }
labels: []
state: closed
locked: false
assignee: null
assignees: []
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4819?src=pr&el=h1) Report\n> Merging [#4819](https://codecov.io/gh/huggingface/transformers/pull/4819?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c58e6c129a153ca1a5021e5d7e642d00bf011e20&el=desc) will **increase** coverage by `1.62%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4819/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4819?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4819 +/- ##\n==========================================\n+ Coverage 74.52% 76.15% +1.62% \n==========================================\n Files 128 128 \n Lines 21497 21497 \n==========================================\n+ Hits 16021 16371 +350 \n+ Misses 5476 5126 -350 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4819?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/4819/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.13% <ø> (ø)` | |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4819/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.00% <0.00%> (-1.36%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4819/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.85% <0.00%> (-0.42%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4819/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.20% <0.00%> (-0.16%)` | :arrow_down: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4819/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `86.42% <0.00%> (+0.18%)` | :arrow_up: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/4819/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `98.64% <0.00%> (+2.71%)` | :arrow_up: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/4819/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.26% <0.00%> (+6.29%)` | :arrow_up: |\n| [src/transformers/configuration\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4819/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `100.00% <0.00%> (+25.00%)` | :arrow_up: |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4819/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `93.02% <0.00%> (+75.48%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4819?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4819?src=pr&el=footer). Last update [c58e6c1...98b2dde](https://codecov.io/gh/huggingface/transformers/pull/4819?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
created_at: 1,591
updated_at: 1,591
closed_at: 1,591
author_association: COLLABORATOR
active_lock_reason: null
body: `PretrainedBartModel` is currently not being exported, so one has to do it manually:

```python
from transformers.modeling_bart import PretrainedBartModel
```

This behaviour is different from other models, which do expose their PretrainedModel in `__init__`.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4819/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/transformers/issues/4819/timeline
state_reason: null
draft: false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4819", "html_url": "https://github.com/huggingface/transformers/pull/4819", "diff_url": "https://github.com/huggingface/transformers/pull/4819.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4819.patch", "merged_at": 1591545310000 }
url: https://api.github.com/repos/huggingface/transformers/issues/4818
repository_url: https://api.github.com/repos/huggingface/transformers
labels_url: https://api.github.com/repos/huggingface/transformers/issues/4818/labels{/name}
comments_url: https://api.github.com/repos/huggingface/transformers/issues/4818/comments
events_url: https://api.github.com/repos/huggingface/transformers/issues/4818/events
html_url: https://github.com/huggingface/transformers/pull/4818
id: 632,656,593
node_id: MDExOlB1bGxSZXF1ZXN0NDI5Mzc2MzQ3
number: 4,818
title: Enable multiprocessing in glue datasets
{ "login": "zrxbeijing", "id": 38594797, "node_id": "MDQ6VXNlcjM4NTk0Nzk3", "avatar_url": "https://avatars.githubusercontent.com/u/38594797?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zrxbeijing", "html_url": "https://github.com/zrxbeijing", "followers_url": "https://api.github.com/users/zrxbeijing/followers", "following_url": "https://api.github.com/users/zrxbeijing/following{/other_user}", "gists_url": "https://api.github.com/users/zrxbeijing/gists{/gist_id}", "starred_url": "https://api.github.com/users/zrxbeijing/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zrxbeijing/subscriptions", "organizations_url": "https://api.github.com/users/zrxbeijing/orgs", "repos_url": "https://api.github.com/users/zrxbeijing/repos", "events_url": "https://api.github.com/users/zrxbeijing/events{/privacy}", "received_events_url": "https://api.github.com/users/zrxbeijing/received_events", "type": "User", "site_admin": false }
labels: []
state: closed
locked: false
assignee: null
assignees: []
comments: []
created_at: 1,591
updated_at: 1,591
closed_at: 1,591
author_association: NONE
active_lock_reason: null
body: The preprocessing of GLUE datasets is too slow. This change enables multiprocessing to speed up the conversion of examples to features by utilizing multiple CPU cores.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4818/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/transformers/issues/4818/timeline
state_reason: null
draft: true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4818", "html_url": "https://github.com/huggingface/transformers/pull/4818", "diff_url": "https://github.com/huggingface/transformers/pull/4818.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4818.patch", "merged_at": null }
url: https://api.github.com/repos/huggingface/transformers/issues/4817
repository_url: https://api.github.com/repos/huggingface/transformers
labels_url: https://api.github.com/repos/huggingface/transformers/issues/4817/labels{/name}
comments_url: https://api.github.com/repos/huggingface/transformers/issues/4817/comments
events_url: https://api.github.com/repos/huggingface/transformers/issues/4817/events
html_url: https://github.com/huggingface/transformers/issues/4817
id: 632,364,225
node_id: MDU6SXNzdWU2MzIzNjQyMjU=
number: 4,817
title: Question: Where do I find the Transformer model from the paper "Attention is all you need" ?
{ "login": "abhisheksgumadi", "id": 1021734, "node_id": "MDQ6VXNlcjEwMjE3MzQ=", "avatar_url": "https://avatars.githubusercontent.com/u/1021734?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abhisheksgumadi", "html_url": "https://github.com/abhisheksgumadi", "followers_url": "https://api.github.com/users/abhisheksgumadi/followers", "following_url": "https://api.github.com/users/abhisheksgumadi/following{/other_user}", "gists_url": "https://api.github.com/users/abhisheksgumadi/gists{/gist_id}", "starred_url": "https://api.github.com/users/abhisheksgumadi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abhisheksgumadi/subscriptions", "organizations_url": "https://api.github.com/users/abhisheksgumadi/orgs", "repos_url": "https://api.github.com/users/abhisheksgumadi/repos", "events_url": "https://api.github.com/users/abhisheksgumadi/events{/privacy}", "received_events_url": "https://api.github.com/users/abhisheksgumadi/received_events", "type": "User", "site_admin": false }
labels: []
state: closed
locked: false
assignee: null
assignees: []
[ "You don't need this library if you only want the transformer module specifically.\r\n\r\nPyTorch: https://pytorch.org/docs/master/generated/torch.nn.Transformer.html\r\nTensorFlow: https://www.tensorflow.org/tutorials/text/transformer#create_the_transformer", "Thanks @BramVanroy " ]
created_at: 1,591
updated_at: 1,591
closed_at: 1,591
author_association: NONE
active_lock_reason: null
body: Hello

Firstly, thanks for supporting all questions here. I read the paper "Attention is all you need" and am wondering which class I should use in the HuggingFace library to get the Transformer architecture used in the paper. Can you please advise?

Thanks
Abhishek
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4817/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/transformers/issues/4817/timeline
state_reason: completed
draft: null
pull_request: null
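The answer in the thread points to `torch.nn.Transformer`; a minimal usage sketch of that module, with shapes following the PyTorch documentation:

```python
import torch
import torch.nn as nn

# The vanilla encoder-decoder Transformer from "Attention Is All You Need".
model = nn.Transformer(d_model=512, nhead=8, num_encoder_layers=6, num_decoder_layers=6)

src = torch.rand(10, 32, 512)  # (source length, batch, d_model)
tgt = torch.rand(20, 32, 512)  # (target length, batch, d_model)
out = model(src, tgt)          # (target length, batch, d_model)
print(out.shape)
```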
url: https://api.github.com/repos/huggingface/transformers/issues/4816
repository_url: https://api.github.com/repos/huggingface/transformers
labels_url: https://api.github.com/repos/huggingface/transformers/issues/4816/labels{/name}
comments_url: https://api.github.com/repos/huggingface/transformers/issues/4816/comments
events_url: https://api.github.com/repos/huggingface/transformers/issues/4816/events
html_url: https://github.com/huggingface/transformers/issues/4816
id: 632,311,698
node_id: MDU6SXNzdWU2MzIzMTE2OTg=
number: 4,816
title: NER pipeline: Inconsistent entity grouping
{ "login": "dav009", "id": 1659415, "node_id": "MDQ6VXNlcjE2NTk0MTU=", "avatar_url": "https://avatars.githubusercontent.com/u/1659415?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dav009", "html_url": "https://github.com/dav009", "followers_url": "https://api.github.com/users/dav009/followers", "following_url": "https://api.github.com/users/dav009/following{/other_user}", "gists_url": "https://api.github.com/users/dav009/gists{/gist_id}", "starred_url": "https://api.github.com/users/dav009/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dav009/subscriptions", "organizations_url": "https://api.github.com/users/dav009/orgs", "repos_url": "https://api.github.com/users/dav009/repos", "events_url": "https://api.github.com/users/dav009/events{/privacy}", "received_events_url": "https://api.github.com/users/dav009/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "@dav009 Thanks for posting this issue!\r\n\r\n1. **Inconsistent grouping** - correct that `B` and `I` tokens are not yet considered. Will have to include this in a new PR.\r\n2. **Lost tokens** - the skipped tokens are those with an entity type found in the `ignore_labels` argument for `TokenClassificationPipeline`, which is set as `[\"O\"]` by default. If you don't want to skip any token, you can just set `ignore_labels=[]`.\r\n\r\nI'm happy to work on `1` within the next week or so since I've already been planning to apply this fix. ", "@enzoampil 👋 thanks for your prompt answer\r\n\r\n> Lost tokens - the skipped tokens are those with an entity type found in the ignore_labels argument for TokenClassificationPipeline, which is set as [\"O\"] by default. If you don't want to skip any token, you can just set ignore_labels=[].\r\n\r\nin the given sample, the missing entity is not tagged as `O` : \r\n\r\n- `##c` is tagged as `I-ORG` in (`grouped_entities =False`)\r\n `{'word': '##c', 'score': 0.7188423275947571, 'entity': 'I-ORG', 'index': 29}]`\r\n\r\nhowever it did not get included in the grouping results (`grouped_entities =True`)", "@dav009 Understand now! Thanks for clarifying. Yes, it does seem to be related to the I and B issue. Think can handle this in the same PR.", "@dav009 I handled a similar scenario for grouping the Begin and Info tags. \r\nThe below code helps to **merge the tokens between Begin and Info tags**. Please adapt to your use\r\n\r\n`def group_entities(self, prediction_results_list: List[Dict] ) -> List[RecordDataResponse]:\r\n final_prediction_list = []\r\n\r\n # Group the prediction list by the last 3 characters of the tag\r\n # and group the results appropriately\r\n # B-PER-TAG -> TAG\r\n # B-PER -> PER\r\n tmp_dict = defaultdict(list)\r\n added_index = 0\r\n prev_index = 0\r\n for index, entity in enumerate(prediction_results_list):\r\n try:\r\n if entity['entity_group'].startswith(\"B\") and \\\r\n prediction_results_list[index + 1]['entity_group'].startswith(\"I\"):\r\n tmp_dict[index].append(entity)\r\n added_index = index\r\n elif entity['entity_group'].startswith(\"I\"):\r\n if (1 == abs(added_index - index)) or (1 == abs(prev_index - index)):\r\n tmp_dict[added_index].append(entity)\r\n prev_index = index\r\n else:\r\n tmp_dict[index].append(entity)\r\n except IndexError:\r\n tmp_dict[index].append(entity)\r\n\r\n # Flatten the sub-lists\r\n final_grouped_list = list(map(list, map(itertools.chain, tmp_dict.values())))\r\n\r\n for entity_group_list in final_grouped_list:\r\n\r\n # Get the unique number of entities per list\r\n _entity_count = len(\r\n set(\r\n [\r\n prediction_input[\"entity_group\"]\r\n for prediction_input in entity_group_list\r\n ]\r\n )\r\n )\r\n\r\n if entity_group_list:\r\n if len(entity_group_list) > 1:\r\n\r\n # Get the tag name\r\n tag_name = str(entity_group_list[0][\"entity_group\"][-3:])\r\n\r\n # Join the entities\r\n entity_value = \" \".join(\r\n [\r\n prediction_input[\"word\"]\r\n for prediction_input in entity_group_list\r\n ]\r\n )\r\n\r\n # Remove duplicate names\r\n _temp_entities = entity_value.split()\r\n entity_value = \" \".join(\r\n sorted(set(_temp_entities), key=_temp_entities.index)\r\n )\r\n\r\n # Compute the average of confidence scores\r\n mean_score = np.mean(\r\n [\r\n prediction_input[\"score\"]\r\n for prediction_input in entity_group_list\r\n ]\r\n )\r\n\r\n # Frame the entities and ensure name is atleast has more than 1 character\r\n if len(entity_value) > 1:\r\n final_prediction_list.append(\r\n 
RecordDataResponse(\r\n entity_group=tag_name,\r\n score=mean_score,\r\n word=entity_value,\r\n )\r\n )\r\n else:\r\n [\r\n final_prediction_list.append(\r\n RecordDataResponse(\r\n entity_group=entity_group[\"entity_group\"][-3:],\r\n score=entity_group[\"score\"],\r\n word=entity_group[\"word\"],\r\n )\r\n )\r\n for entity_group in entity_group_list if len(re.sub(r\"(?i)[^-0-9a-z\\\\s.,]+\", \"\", entity_group[\"word\"])) > 1\r\n ]\r\n\r\n # Sort the by the list by confidence score and return in descending order\r\n return sorted(final_prediction_list, key=lambda x: x.score, reverse=True)\r\n`\r\n\r\nThe code is invoked from the **pipeline**:\r\n\r\n `prediction_results_list = [\r\n prediction\r\n for prediction_input in prediction_input_list\r\n for prediction in self.model_prediction_pipeline(prediction_input)\r\n if prediction\r\n and prediction[\"word\"] not in self.stop_list\r\n ]\r\n\r\n # Return the predictions\r\n return (\r\n self.group_entities(prediction_results_list)\r\n if prediction_results_list\r\n else []\r\n )`", "@dav009 Opened a PR (above) that should resolve this :smile:", "@enzoampil Just curious to know if your PR can handle the merging of multiple entities.\r\n\r\n`entities_list = [\r\n {\"word\": \"Patient\", \"score\": 0.9977793097496033, \"entity_group\": \"B-PER-TAG\"},\r\n {\"word\": \"Name\", \"score\": 0.9968074560165405, \"entity_group\": \"I-PER-TAG\"},\r\n {\"word\": \"Cecil\", \"score\": 0.9995920658111572, \"entity_group\": \"B-PER\"},\r\n {\"word\": \"D . Thomas\", \"score\": 0.9938908666372299, \"entity_group\": \"I-PER\"},\r\n {\"word\": \"Thomas\", \"score\": 0.9993066191673279, \"entity_group\": \"B-PER\"}\r\n]`\r\n\r\nIn this case, I would expect the below output after the entities are grouped:\r\n`[\r\n {\"word\": \"Patient Name\", \"score\": 0.9977793097496033, \"entity_group\": \"PER-TAG\"},\r\n {\"word\": \"Cecil D . Thomas\", \"score\": 0.9995920658111572, \"entity_group\": \"PER\"},\r\n {\"word\": \"Thomas\", \"score\": 0.9993066191673279, \"entity_group\": \"PER\"}\r\n]`", "@enzoampil gonna check it out. \r\n\r\nmaybe part of another issue but do you get `word` fields containing `##` is that expected?", "@sudharsan2020 Setting `grouped_entities=True` should work for your example under the new PR, since similar entities w/ different prefixes are now grouped (e.g. \"I-PER\" and \"B-PER\") :smile:", "@dav009 This is even after grouping correct? I suspect this is possible when word pieces have different core entity types (e.g. 
`ORG` vs `PER`).\r\n\r\nCan you give an example?", "Hello, I think @dav009 is refering to this : \r\n\r\n```Python\r\nfrom transformers import AutoModelForTokenClassification, AutoTokenizer\r\nimport torch\r\nfrom transformers import TokenClassificationPipeline\r\n\r\nmodel = AutoModelForTokenClassification.from_pretrained(\"dbmdz/bert-large-cased-finetuned-conll03-english\")\r\ntokenizer = AutoTokenizer.from_pretrained(\"bert-base-cased\")\r\n\r\nnlp = TokenClassificationPipeline(\r\n model=model,\r\n tokenizer=tokenizer,\r\n grouped_entities=True\r\n)\r\n\r\nsequence = \"In addition , the Blabla Group has completed the acquisition of ISO / TS16949 certification .\"\r\n\r\nres = nlp(sequence)\r\nprint(res)\r\n```\r\nI have this as a result : \r\n\r\n`[{'entity_group': 'I-ORG', 'score': 0.9988919496536255, 'word': 'Blabla Group'}, {'entity_group': 'I-MISC', 'score': 0.9711909890174866, 'word': 'ISO'}, {'entity_group': 'I-ORG', 'score': 0.6591967344284058, 'word': 'T'}, {'entity_group': 'I-MISC', 'score': 0.5822997689247131, 'word': '##S16'}, {'entity_group': 'I-MISC', 'score': 0.5067382454872131, 'word': '##9'}]`\r\n\r\nSome word fields still have ## in it. I have just installed transformers right now (version 2.11.0) with a pip install command then paste the pipelines.py fixed in my transformers folder. ", "@Nighthyst can you share the result when `grouped_entities=False`?", "Yes, here is a comparison of the resultats with `grouped_entities=False` or when `grouped_entities=True` : \r\n\r\n```Python\r\nfrom transformers import AutoModelForTokenClassification, AutoTokenizer\r\nimport torch\r\nfrom transformers import TokenClassificationPipeline\r\n\r\nmodel = AutoModelForTokenClassification.from_pretrained(\"dbmdz/bert-large-cased-finetuned-conll03-english\")\r\ntokenizer = AutoTokenizer.from_pretrained(\"bert-base-cased\")\r\n\r\nnlp_not_grouped = TokenClassificationPipeline(\r\n model=model,\r\n tokenizer=tokenizer,\r\n grouped_entities=False\r\n)\r\n\r\nnlp_grouped = TokenClassificationPipeline(\r\n model=model,\r\n tokenizer=tokenizer,\r\n grouped_entities=True\r\n)\r\n\r\nseq1 = \"In addition , the Blabla Group has completed the acquisition of ISO / TS16949 certification .\"\r\nseq2 = \"Directors and certain categories of personnel , who are all included in a regularly updated list\"\\\r\n\", must disclose any trades they carry out in Faurecia\"\r\nseq3 = \"Hugging Face Inc. is a company based in New York City. 
Its headquarters are in DUMBO, therefore very\" \\\r\n \"close to the Manhattan Bridge.\"\r\n\r\nsequences = [seq1, seq2, seq3]\r\n\r\nfor i, seq in enumerate(sequences):\r\n ngrouped, grouped = nlp_not_grouped(seq), nlp_grouped(seq)\r\n print(f\"===================== sentence n°{i+1}\")\r\n print(\"---Not grouped entities---\")\r\n print(ngrouped)\r\n print(\"---Grouped entities---\")\r\n print(grouped)\r\n```\r\nThis is the results:\r\n\r\n```\r\n===================== sentence n°1\r\n---Not grouped entities---\r\n[{'word': 'B', 'score': 0.9997261762619019, 'entity': 'I-ORG', 'index': 5}, \r\n{'word': '##la', 'score': 0.997683048248291, 'entity': 'I-ORG', 'index': 6}, \r\n{'word': '##bla', 'score': 0.99888014793396, 'entity': 'I-ORG', 'index': 7}, \r\n{'word': 'Group', 'score': 0.9992784261703491, 'entity': 'I-ORG', 'index': 8}, \r\n{'word': 'ISO', 'score': 0.9711909890174866, 'entity': 'I-MISC', 'index': 14},\r\n{'word': 'T', 'score': 0.6591967344284058, 'entity': 'I-ORG', 'index': 16}, \r\n{'word': '##S', 'score': 0.658642053604126, 'entity': 'I-MISC', 'index': 17}, \r\n{'word': '##16', 'score': 0.5059574842453003, 'entity': 'I-MISC', 'index': 18}, \r\n{'word': '##9', 'score': 0.5067382454872131, 'entity': 'I-MISC', 'index': 21}]\r\n---Grouped entities---\r\n[{'entity_group': 'I-ORG', 'score': 0.9988919496536255, 'word': 'Blabla Group'}, \r\n{'entity_group': 'I-MISC', 'score': 0.9711909890174866, 'word': 'ISO'}, \r\n{'entity_group': 'I-ORG', 'score': 0.6591967344284058, 'word': 'T'}, \r\n{'entity_group': 'I-MISC', 'score': 0.5822997689247131, 'word': '##S16'}, \r\n{'entity_group': 'I-MISC', 'score': 0.5067382454872131, 'word': '##9'}]\r\n===================== sentence n°2\r\n---Not grouped entities---\r\n[{'word': 'F', 'score': 0.6292181611061096, 'entity': 'I-ORG', 'index': 27}, \r\n{'word': '##au', 'score': 0.7241453528404236, 'entity': 'I-LOC', 'index': 28}, \r\n{'word': '##re', 'score': 0.49484530091285706, 'entity': 'I-LOC', 'index': 29}, \r\n{'word': '##cia', 'score': 0.6472106575965881, 'entity': 'I-LOC', 'index': 30}]\r\n---Grouped entities---\r\n[{'entity_group': 'I-ORG', 'score': 0.6292181611061096, 'word': 'F'}, \r\n{'entity_group': 'I-LOC', 'score': 0.6220671037832896, 'word': '##aurecia'}]\r\n===================== sentence n°3\r\n---Not grouped entities---\r\n[{'word': 'Hu', 'score': 0.9995108246803284, 'entity': 'I-ORG', 'index': 1}, \r\n{'word': '##gging', 'score': 0.989597499370575, 'entity': 'I-ORG', 'index': 2}, \r\n{'word': 'Face', 'score': 0.9979704022407532, 'entity': 'I-ORG', 'index': 3}, \r\n{'word': 'Inc', 'score': 0.9993758797645569, 'entity': 'I-ORG', 'index': 4}, \r\n{'word': 'New', 'score': 0.9993405938148499, 'entity': 'I-LOC', 'index': 11}, \r\n{'word': 'York', 'score': 0.9991927742958069, 'entity': 'I-LOC', 'index': 12}, \r\n{'word': 'City', 'score': 0.9993411302566528, 'entity': 'I-LOC', 'index': 13}, \r\n{'word': 'D', 'score': 0.986336350440979, 'entity': 'I-LOC', 'index': 19}, \r\n{'word': '##UM', 'score': 0.9396238923072815, 'entity': 'I-LOC', 'index': 20}, \r\n{'word': '##BO', 'score': 0.9121386408805847, 'entity': 'I-LOC', 'index': 21}, \r\n{'word': 'Manhattan', 'score': 0.9839190244674683, 'entity': 'I-LOC', 'index': 29}, \r\n{'word': 'Bridge', 'score': 0.9924242496490479, 'entity': 'I-LOC', 'index': 30}]\r\n---Grouped entities---\r\n[{'entity_group': 'I-ORG', 'score': 0.9966136515140533, 'word': 'Hugging Face Inc'}, \r\n{'entity_group': 'I-LOC', 'score': 0.9992914994557699, 'word': 'New York City'}, \r\n{'entity_group': 'I-LOC', 'score': 
0.9460329612096151, 'word': 'DUMBO'}, \r\n{'entity_group': 'I-LOC', 'score': 0.9881716370582581, 'word': 'Manhattan Bridge'}]\r\n```\r\n\r\nEverything is fine for seq3 but seq1 and seq2 have the issue.\r\n", "@Nighthyst I see, you're bringing up a different issue now. This is the case where the entity type of a word's word piece, is different from other word pieces.\r\n\r\nA fix I can apply here is to automatically group word pieces together regardless of entity type. I can apply this to a new PR after merging this existing one.", "@Nighthyst @enzoampil indeed that's exactly the other issue I came accross. Thanks for digging a sample for it.", "Ok, I think we should open another issue for this problem : I've noticed other related issues", "@Nighthyst sounds good, thanks! :) ", "@enzoampil I was testing with your **ner_grouping** branch locally and these are the results **before** and **after grouping**. Do you think this is the expected behaviour?\r\n\r\n**Without grouping:**\r\n\r\n`[{'word': 'Peterson', 'score': 0.999268114566803, 'entity': 'B-PER', 'index': 17}, \r\n{'word': ',', 'score': 0.9992983937263489, 'entity': 'I-PER', 'index': 18}, \r\n{'word': '##David', 'score': 0.6536518931388855, 'entity': 'I-PER', 'index': 21}, \r\n{'word': 'David', 'score': 0.974104642868042, 'entity': 'B-PER', 'index': 37}, \r\n{'word': 'Peterson', 'score': 0.9984731078147888, 'entity': 'B-PER', 'index': 106}, \r\n{'word': 'David', 'score': 0.74308180809021, 'entity': 'B-PER', 'index': 393}, \r\n{'word': 'Peterson', 'score': 0.9972764253616333, 'entity': 'B-PER', 'index': 394}]`\r\n\r\n**With grouping:**\r\n\r\n`[{'entity_group': 'B-PER', 'score': 0.9992832541465759, 'word': 'Peterson ,'}, \r\n{'entity_group': 'I-PER', 'score': 0.6536518931388855, 'word': '##David'}, \r\n{'entity_group': 'B-PER', 'score': 0.974104642868042, 'word': 'David'}, \r\n{'entity_group': 'B-PER', 'score': 0.9984731078147888, 'word': 'Peterson'}, \r\n{'entity_group': 'B-PER', 'score': 0.8701791167259216, 'word': 'David Peterson'}]`\r\n\r\nThe **two I-PER entities weren't merged.**\r\n\r\nAlso observed few scenarios, in which the list **filtered_labels_idx** is **empty** which throws **IndexError**.\r\n**src/transformers/pipelines.py**\r\n`last_idx, _ = filtered_labels_idx[-1]`\r\n\r\nScreenshot: https://ibb.co/JHxYgWn", "Hi everyone, this PR was recently merged to resolve the original issue #4987.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
created_at: 1,591
updated_at: 1,600
closed_at: 1,600
author_association: NONE
active_lock_reason: null
# 🐛 Bug ## Information "mrm8488/bert-spanish-cased-finetuned-ner" Language I am using the model on (English, Chinese ...): Spanish The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [x] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. create a `ner` pipeline 2. pass flag `grouped_entities` 3. entities are not grouped as expected see sample below ```python NER_MODEL = "mrm8488/bert-spanish-cased-finetuned-ner" nlp_ner = pipeline("ner", model=NER_MODEL, grouped_entities=True, tokenizer=(NER_MODEL, {"use_fast": False})) t = """Consuelo Araújo Noguera, ministra de cultura del presidente Andrés Pastrana (1998.2002) fue asesinada por las Farc luego de haber permanecido secuestrada por algunos meses.""" ner(t) >>> [ {'entity_group': 'B-PER', 'score': 0.901019960641861, 'word': 'Consuelo'}, {'entity_group': 'I-PER', 'score': 0.9990904808044434, 'word': 'Araújo Noguera'}, {'entity_group': 'B-PER', 'score': 0.9998136162757874, 'word': 'Andrés'}, {'entity_group': 'I-PER', 'score': 0.9996985991795858, 'word': 'Pastrana'}, {'entity_group': 'B-ORG', 'score': 0.9989739060401917, 'word': 'Far'}] ``` ## Expected behavior ### Inconsistent grouping I expect the first two items of the given sample( `B-PER`, and `I-PER`) to be grouped. As they are contiguous tokens and correspond to a single entity spot. It seems the current code does not take into account `B` and `I` tokens. expected output: ``` {'entity_group': 'I-PER', 'score': 0.9990904808044434, 'word': ' Consuelo Araújo Noguera'}, {'entity_group': 'I-PER', 'score': 0.9998136162757874, 'word': 'Andrés Pastrana'}, {'entity_group': 'B-ORG', 'score': 0.9989739060401917, 'word': 'Farc'}] ``` ### Lost tokens? for the same input, passing `grouped_entities=False` generates the following output: ``` [ {'word': 'Cons', 'score': 0.9994944930076599, 'entity': 'B-PER', 'index': 1}, {'word': '##uelo', 'score': 0.802545428276062, 'entity': 'B-PER', 'index': 2}, {'word': 'Ara', 'score': 0.9993102550506592, 'entity': 'I-PER', 'index': 3}, {'word': '##új', 'score': 0.9993743896484375, 'entity': 'I-PER', 'index': 4}, {'word': '##o', 'score': 0.9992871880531311, 'entity': 'I-PER', 'index': 5}, {'word': 'No', 'score': 0.9993029236793518, 'entity': 'I-PER', 'index': 6}, {'word': '##guera', 'score': 0.9981776475906372, 'entity': 'I-PER', 'index': 7}, {'word': 'Andrés', 'score': 0.9998136162757874, 'entity': 'B-PER', 'index': 15}, {'word': 'Pas', 'score': 0.999740719795227, 'entity': 'I-PER', 'index': 16}, {'word': '##tran', 'score': 0.9997414350509644, 'entity': 'I-PER', 'index': 17}, {'word': '##a', 'score': 0.9996136426925659, 'entity': 'I-PER', 'index': 18}, {'word': 'Far', 'score': 0.9989739060401917, 'entity': 'B-ORG', 'index': 28}, {'word': '##c', 'score': 0.7188423275947571, 'entity': 'I-ORG', 'index': 29}] ``` when using `grouped_entities` the last entity `word` (`##c`) got lost, it is not even considered as a different group ` {'entity_group': 'B-ORG', 'score': 0.9989739060401917, 'word': 'Far'}]` ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! 
--> - `transformers` version: 2.11.0 - Platform: OSX - Python version: 3.7 - PyTorch version (GPU?): 1.5.0 - Tensorflow version (GPU?): - Using GPU in script?: no - Using distributed or parallel set-up in script?: no
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4816/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/transformers/issues/4816/timeline
state_reason: completed
draft: null
pull_request: null
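A hypothetical sketch of the B-/I- merging this issue asks for, not the pipeline's actual implementation. It assumes "O" tokens were already filtered out and ignores wordpiece ("##") joining, which the thread discusses separately:

```python
def group_bi_entities(tokens):
    # Start a new span on each B- tag and extend it with matching I- tags.
    spans = []
    for tok in tokens:
        prefix, _, etype = tok["entity"].partition("-")
        if spans and prefix == "I" and spans[-1]["entity_group"] == etype:
            spans[-1]["word"] += " " + tok["word"]
        else:
            spans.append({"entity_group": etype, "word": tok["word"]})
    return spans

print(group_bi_entities([
    {"word": "Consuelo", "entity": "B-PER"},
    {"word": "Araújo", "entity": "I-PER"},
    {"word": "Noguera", "entity": "I-PER"},
]))
# [{'entity_group': 'PER', 'word': 'Consuelo Araújo Noguera'}]
```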
url: https://api.github.com/repos/huggingface/transformers/issues/4815
repository_url: https://api.github.com/repos/huggingface/transformers
labels_url: https://api.github.com/repos/huggingface/transformers/issues/4815/labels{/name}
comments_url: https://api.github.com/repos/huggingface/transformers/issues/4815/comments
events_url: https://api.github.com/repos/huggingface/transformers/issues/4815/events
html_url: https://github.com/huggingface/transformers/pull/4815
id: 632,166,431
node_id: MDExOlB1bGxSZXF1ZXN0NDI4OTQ1Mzk2
number: 4,815
title: [marian tests ] pass device to pipeline
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
labels: []
state: closed
locked: false
assignee: null
assignees: []
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4815?src=pr&el=h1) Report\n> Merging [#4815](https://codecov.io/gh/huggingface/transformers/pull/4815?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/56d5d160cdd177ae6e644506535b56e79feccf68&el=desc) will **decrease** coverage by `1.60%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4815/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4815?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4815 +/- ##\n==========================================\n- Coverage 76.15% 74.54% -1.61% \n==========================================\n Files 128 128 \n Lines 21497 21497 \n==========================================\n- Hits 16371 16026 -345 \n- Misses 5126 5471 +345 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4815?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4815/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `17.54% <0.00%> (-75.49%)` | :arrow_down: |\n| [src/transformers/configuration\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4815/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `75.00% <0.00%> (-25.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/4815/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `76.96% <0.00%> (-6.30%)` | :arrow_down: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4815/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `86.23% <0.00%> (-0.19%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4815/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.20% <0.00%> (ø)` | |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4815/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `74.27% <0.00%> (+0.41%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4815/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.35% <0.00%> (+1.35%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4815?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4815?src=pr&el=footer). Last update [56d5d16...7ab0469](https://codecov.io/gh/huggingface/transformers/pull/4815?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
created_at: 1,591
updated_at: 1,591
closed_at: 1,591
author_association: CONTRIBUTOR
active_lock_reason: null
body: fixes self-hosted-runner failure
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4815/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/transformers/issues/4815/timeline
state_reason: null
draft: false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4815", "html_url": "https://github.com/huggingface/transformers/pull/4815", "diff_url": "https://github.com/huggingface/transformers/pull/4815.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4815.patch", "merged_at": 1591419137000 }
url: https://api.github.com/repos/huggingface/transformers/issues/4814
repository_url: https://api.github.com/repos/huggingface/transformers
labels_url: https://api.github.com/repos/huggingface/transformers/issues/4814/labels{/name}
comments_url: https://api.github.com/repos/huggingface/transformers/issues/4814/comments
events_url: https://api.github.com/repos/huggingface/transformers/issues/4814/events
html_url: https://github.com/huggingface/transformers/issues/4814
id: 632,135,419
node_id: MDU6SXNzdWU2MzIxMzU0MTk=
number: 4,814
title: TPU Training fails with --evaluate_during_training
{ "login": "misrasaurabh1", "id": 1271289, "node_id": "MDQ6VXNlcjEyNzEyODk=", "avatar_url": "https://avatars.githubusercontent.com/u/1271289?v=4", "gravatar_id": "", "url": "https://api.github.com/users/misrasaurabh1", "html_url": "https://github.com/misrasaurabh1", "followers_url": "https://api.github.com/users/misrasaurabh1/followers", "following_url": "https://api.github.com/users/misrasaurabh1/following{/other_user}", "gists_url": "https://api.github.com/users/misrasaurabh1/gists{/gist_id}", "starred_url": "https://api.github.com/users/misrasaurabh1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/misrasaurabh1/subscriptions", "organizations_url": "https://api.github.com/users/misrasaurabh1/orgs", "repos_url": "https://api.github.com/users/misrasaurabh1/repos", "events_url": "https://api.github.com/users/misrasaurabh1/events{/privacy}", "received_events_url": "https://api.github.com/users/misrasaurabh1/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[ { "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false } ]
[ "Had the same problem with TPU. `--logging_step` seems to freeze everything. \r\nI have removed logging and then evaluate it after training. ", "Hi, I fail to reproduce this on `master` following your steps. Can you try pulling from master and letting me know if the issue is resolved? If it's not I'll take a deeper look.\r\n\r\nYou can set `--logging_steps=10` so that to reduce the time it takes to get to the hang.\r\n\r\nI can, however, reproduce the issue with wandb. I'm looking into it now.", "Interesting, I retried the same instruction from master with --logging_steps as 50 and it did evaluate the first time but then it again got stuck at the second evaluation attempt at step 99. Something is flaky and not right...\r\nAlso now that I got at least one step of evaluation working, I notice that it prints 8 different eval_loss values, one for each process. Not sure how to interpret this. I haven't looked into the logic but looks like the evaluator also splits the eval_data into 8 parts and calculates the eval_loss on them individually without aggregating them into a single final eval_loss for the whole eval dataset. This defeats the purpose of evaluating during training.", "Indeed, something's not right. I'm taking a look.", "This was working well on 26-27th May. I tried going back to that commit but same error. Maybe something with XLA?", "I don't really know, now for some reason it decides to not hang, while it did hang the first time this morning. Even with a clean environment, it doesn't hang anymore on my side.\r\n\r\nI'm still investigating", "Another really weird bug is that setting --logging_steps to 0 leads to the training hanging up at step 99. I reproduced this same behavior in two different setups. I was using this option to stop logging which would hopefully bypass this above bug with this line of trainer:493\r\n```\r\nif (self.args.logging_steps > 0 and self.global_step % self.args.logging_steps == 0) or (\r\n self.global_step == 1 and self.args.logging_first_step\r\n ):\r\n``` \r\nI believe this is causing that bug\r\n```\r\n if os.getenv(\"WANDB_WATCH\") != \"false\":\r\n wandb.watch(\r\n self.model, log=os.getenv(\"WANDB_WATCH\", \"gradients\"), log_freq=max(100, self.args.logging_steps)\r\n )\r\n```", "Setting WANDB_WATCH = false fixed the bug, it also evaluates during training now. Starting a PR...", "Great. \r\nBut Maybe there can be something with XLA? that WandB gradients are not logged and the training freezes?", "I am not sure if wandb supports logging of gradients with Pytorch/XLA. I reached out to Wandb to ask about this, should get a reply by tomorrow. It is possible that Pytorch/XLA does not support gradient logging as well. I looked at the XLA github repo and couldn't find a mention of gradients logging with TPUs. I am unfamiliar with XLA interface with wandb and not keen on digging deeper into this. Hopefully wandb offers more clarity soon.", "I'm one of the founders of wandb. We're digging into the root cause of this now. We're planning to issue a new release ASAP to ensure users can never get into this hung state. I'll update the thread here. 
For anyone finding this thread online and hitting the issue, you can add the following code to disable the gradient monitoring in wandb with huggingface.\r\n\r\n```\r\nimport os\r\nos.environ[\"WANDB_WATCH\"] = \"false\"\r\n```\r\n\r\nOr if you're shelling out to a python script:\r\n\r\n```\r\nexport WANDB_WATCH=false\r\npython your_script.py\r\n```", "Thank you Chris for looking into this!", "@vanpelt The wandb gradient logging has been disabled with PR https://github.com/huggingface/transformers/pull/4926 . Once the Wandb fixes the gradient logging for Pytorch/XLA, we can re-enable this.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
created_at: 1,591
updated_at: 1,598
closed_at: 1,598
author_association: CONTRIBUTOR
active_lock_reason: null
# 🐛 Bug TPU Trainer does not seem to support `--evaluate_during_training`. When the training loop goes into logging part, the whole process just hangs up stalling training. The same code/dataset with a multi-gpu setup works well. I am trying to move my company to Huggingface so want to train models on TPUs on our dataset which hung during the logging step. I was able to replicate the behavior with run_langugage_modelling.py, and the steps to replicate this are shown below. Other observations are - I felt that multiprocessing way of doing TPU training wastes a lot of CPU memory because with large datasets one has to use a machine with 100s of GBs of RAM because the features are being replicated 8 times in memory. Another bug is that with TPU training there are 8 WandB runs generated and it creates a lot of clutter. Suggestions to fix this would be to only do wandb logging from a single process. If its unavoidable to generate 8 wandb runs, tag all the runs to belong to a single 'group' that leads to better organization of the runs. (https://docs.wandb.com/library/advanced/grouping) ## Information Model I am using (Bert, XLNet ...): Roberta with run_language_modelling.py to replicate, T5 with our internal data. Language I am using the model on (English, Chinese ...): English The problem arises when using: * [x] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [x] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Create a new n1-highmem-32 machine with debian-9-torch-xla OS image in us-central1-c zone 2. `conda activate torch-xla-nightly` and start a v2-8 TPU in us-central1-c zone. Set the TPU env vars 3. Use the master branch of transformers 4. Download Wikitext 103 raw char level data from https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-103-raw-v1.zip (according to examples for run_language_modelling). Extract it 5. Run the example script ``` export TRAIN_FILE=/path/to/dataset/wiki.train.raw export TEST_FILE=/path/to/dataset/wiki.test.raw python xla_spawn.py --num_cores 8 language_modelling/run_language_modeling.py \ --output_dir=output \ --model_type=roberta \ --model_name_or_path=roberta-base \ --do_train \ --train_data_file=$TRAIN_FILE \ --do_eval \ --eval_data_file=$TEST_FILE \ --mlm --evaluate_during_training --per_device_train_batch_size=4 --per_device_eval_batch_size=4 ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> When it hangs, the tqdm counter is stuck at step 499 (with 500 as the logging interval) and nothing happens. When I do a Keyboard Interrupt, I get this stack trace. 
``` main() File "../../../vendor/transformers/examples/xla_spawn.py", line 68, in main xmp.spawn(mod._mp_fn, args=(), nprocs=args.num_cores) File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 296, in spawn start_method=start_method) File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 158, in start_processes while not context.join(): File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 78, in join timeout=timeout, File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/multiprocessing/connection.py", line 911, in wait ready = selector.select(timeout) File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/selectors.py", line 376, in select fd_event_list = self._poll.poll(timeout) KeyboardInterrupt ``` ## Expected behavior Being able to log validation set loss during training <!-- A clear and concise description of what you would expect to happen. --> ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> `transformers` version: 2.11.0 - Platform: Linux-4.9.0-12-amd64-x86_64-with-debian-9.12 - Python version: 3.6.10 - PyTorch version (GPU?): 1.6.0a0+03eca38 (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: no - Using distributed or parallel set-up in script?: Yes, 8 core parallelism with xla_spawn.py
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4814/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/transformers/issues/4814/timeline
state_reason: completed
draft: null
pull_request: null
url: https://api.github.com/repos/huggingface/transformers/issues/4813
repository_url: https://api.github.com/repos/huggingface/transformers
labels_url: https://api.github.com/repos/huggingface/transformers/issues/4813/labels{/name}
comments_url: https://api.github.com/repos/huggingface/transformers/issues/4813/comments
events_url: https://api.github.com/repos/huggingface/transformers/issues/4813/events
html_url: https://github.com/huggingface/transformers/issues/4813
id: 632,052,581
node_id: MDU6SXNzdWU2MzIwNTI1ODE=
number: 4,813
title: Is albert lm finetuning with SOP in Pytorch supported?
{ "login": "faddyai", "id": 47020306, "node_id": "MDQ6VXNlcjQ3MDIwMzA2", "avatar_url": "https://avatars.githubusercontent.com/u/47020306?v=4", "gravatar_id": "", "url": "https://api.github.com/users/faddyai", "html_url": "https://github.com/faddyai", "followers_url": "https://api.github.com/users/faddyai/followers", "following_url": "https://api.github.com/users/faddyai/following{/other_user}", "gists_url": "https://api.github.com/users/faddyai/gists{/gist_id}", "starred_url": "https://api.github.com/users/faddyai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/faddyai/subscriptions", "organizations_url": "https://api.github.com/users/faddyai/orgs", "repos_url": "https://api.github.com/users/faddyai/repos", "events_url": "https://api.github.com/users/faddyai/events{/privacy}", "received_events_url": "https://api.github.com/users/faddyai/received_events", "type": "User", "site_admin": false }
labels: []
state: closed
locked: false
assignee: null
assignees: []
[ "Hi, `run_language_modeling.py` does support the Albert model. It only does MLM though, no SOP.", "I see, am i correct in assuming that pretraining/finetuning the albert model with run_language_modeling.py which only supports MLM task, would result in lower performance, vs training with a script from another library (such as the original Albert repo from google) which supports SOP? \r\n\r\nThank you ", "It might result in lower performance, indeed. Adding the SOP task shouldn't be too hard, as the layer used for SOP are implemented. You can check this issue for more information https://github.com/huggingface/transformers/issues/2671." ]
1,591
1,591
1,591
NONE
null
# ❓ Questions & Help Hello, I am trying to use transfer learning on the albert language model. Before i train it on Squad. Does run_language_modeling.py support albert models and SOP ? Thank you <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. Make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers If your question wasn't answered after a period of time on Stack Overflow, you can always open a question on GitHub. You should then link to the SO question that you posted. --> ## Details <!-- Description of your issue --> <!-- You should first ask your question on SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on Stack Overflow**:
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4813/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4813/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4812
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4812/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4812/comments
https://api.github.com/repos/huggingface/transformers/issues/4812/events
https://github.com/huggingface/transformers/pull/4812
632,023,990
MDExOlB1bGxSZXF1ZXN0NDI4ODE4Mjcy
4,812
[cleanup/marian] pipelines test and new kwarg
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4812?src=pr&el=h1) Report\n> Merging [#4812](https://codecov.io/gh/huggingface/transformers/pull/4812?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/875288b344d2181b789746e27e7b5bc62df8cae1&el=desc) will **decrease** coverage by `0.02%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4812/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4812?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4812 +/- ##\n==========================================\n- Coverage 76.18% 76.15% -0.03% \n==========================================\n Files 128 128 \n Lines 21497 21497 \n==========================================\n- Hits 16377 16371 -6 \n- Misses 5120 5126 +6 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4812?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/4812/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `92.79% <ø> (ø)` | |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4812/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.00% <0.00%> (-1.36%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4812/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.85% <0.00%> (-0.42%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4812/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.68% <0.00%> (-0.12%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4812?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4812?src=pr&el=footer). Last update [875288b...b7b9470](https://codecov.io/gh/huggingface/transformers/pull/4812?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,591
1,591
1,591
CONTRIBUTOR
null
avoids DeprecationWarning (because `max_len` kwarg is being deprecated)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4812/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4812/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4812", "html_url": "https://github.com/huggingface/transformers/pull/4812", "diff_url": "https://github.com/huggingface/transformers/pull/4812.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4812.patch", "merged_at": 1591397120000 }
https://api.github.com/repos/huggingface/transformers/issues/4811
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4811/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4811/comments
https://api.github.com/repos/huggingface/transformers/issues/4811/events
https://github.com/huggingface/transformers/pull/4811
631,996,137
MDExOlB1bGxSZXF1ZXN0NDI4NzkzNTc2
4,811
Add model and doc badges
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4811?src=pr&el=h1) Report\n> Merging [#4811](https://codecov.io/gh/huggingface/transformers/pull/4811?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/875288b344d2181b789746e27e7b5bc62df8cae1&el=desc) will **decrease** coverage by `0.02%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4811/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4811?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4811 +/- ##\n==========================================\n- Coverage 76.18% 76.15% -0.03% \n==========================================\n Files 128 128 \n Lines 21497 21497 \n==========================================\n- Hits 16377 16372 -5 \n- Misses 5120 5125 +5 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4811?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4811/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.41% <0.00%> (-0.79%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4811/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `74.27% <0.00%> (ø)` | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4811?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4811?src=pr&el=footer). Last update [875288b...8ccd73a](https://codecov.io/gh/huggingface/transformers/pull/4811?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Great" ]
1,591
1,591
1,591
COLLABORATOR
null
Add badges at each model for: - the page with all community models - the documentation of the model Remove the manual doc links as a result.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4811/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4811/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4811", "html_url": "https://github.com/huggingface/transformers/pull/4811", "diff_url": "https://github.com/huggingface/transformers/pull/4811.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4811.patch", "merged_at": 1591397143000 }
https://api.github.com/repos/huggingface/transformers/issues/4810
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4810/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4810/comments
https://api.github.com/repos/huggingface/transformers/issues/4810/events
https://github.com/huggingface/transformers/pull/4810
631,991,489
MDExOlB1bGxSZXF1ZXN0NDI4Nzg5MjAz
4,810
[Benchmark] Add encoder decoder to benchmark and clean labels
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4810?src=pr&el=h1) Report\n> Merging [#4810](https://codecov.io/gh/huggingface/transformers/pull/4810?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b6f365a8ed32eca20034084f74450723414b5de6&el=desc) will **increase** coverage by `1.19%`.\n> The diff coverage is `76.19%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4810/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4810?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4810 +/- ##\n==========================================\n+ Coverage 75.36% 76.55% +1.19% \n==========================================\n Files 128 128 \n Lines 21497 21531 +34 \n==========================================\n+ Hits 16201 16484 +283 \n+ Misses 5296 5047 -249 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4810?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/benchmark/benchmark.py](https://codecov.io/gh/huggingface/transformers/pull/4810/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrLnB5) | `68.68% <70.00%> (+26.22%)` | :arrow_up: |\n| [src/transformers/benchmark/benchmark\\_args\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4810/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX2FyZ3NfdXRpbHMucHk=) | `96.87% <100.00%> (+0.10%)` | :arrow_up: |\n| [src/transformers/benchmark/benchmark\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4810/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3V0aWxzLnB5) | `67.24% <100.00%> (+23.67%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4810/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.20% <0.00%> (ø)` | |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4810/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.15% <0.00%> (+0.24%)` | :arrow_up: |\n| [src/transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/4810/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `85.75% <0.00%> (+54.43%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4810?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4810?src=pr&el=footer). Last update [b6f365a...49713d9](https://codecov.io/gh/huggingface/transformers/pull/4810?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,591
1,591
1,591
MEMBER
null
This PR cleans the benchmark utils a bit more: - tracing is made independent from CPU memory benchmarking - possibility to benchmark encoder-decoder models is added - 3 new tests - general refactoring
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4810/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4810/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4810", "html_url": "https://github.com/huggingface/transformers/pull/4810", "diff_url": "https://github.com/huggingface/transformers/pull/4810.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4810.patch", "merged_at": 1591623073000 }
https://api.github.com/repos/huggingface/transformers/issues/4809
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4809/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4809/comments
https://api.github.com/repos/huggingface/transformers/issues/4809/events
https://github.com/huggingface/transformers/pull/4809
631,946,809
MDExOlB1bGxSZXF1ZXN0NDI4NzQ4OTUw
4,809
[EncoderDecoderConfig] automatically set decoder config to decoder
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4809?src=pr&el=h1) Report\n> Merging [#4809](https://codecov.io/gh/huggingface/transformers/pull/4809?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/47a551d17b6ed2eaf03301f049006d559fca5cf3&el=desc) will **decrease** coverage by `1.41%`.\n> The diff coverage is `0.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4809/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4809?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4809 +/- ##\n==========================================\n- Coverage 77.14% 75.72% -1.42% \n==========================================\n Files 128 128 \n Lines 21073 21075 +2 \n==========================================\n- Hits 16256 15959 -297 \n- Misses 4817 5116 +299 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4809?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_encoder\\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/4809/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2VuY29kZXJfZGVjb2Rlci5weQ==) | `35.71% <0.00%> (-2.75%)` | :arrow_down: |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4809/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `18.70% <0.00%> (-74.83%)` | :arrow_down: |\n| [src/transformers/configuration\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4809/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `76.92% <0.00%> (-23.08%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/4809/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `77.18% <0.00%> (-6.35%)` | :arrow_down: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4809/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `86.77% <0.00%> (-0.19%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4809/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.83% <0.00%> (+0.16%)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4809/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.17% <0.00%> (+0.23%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4809/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.78% <0.00%> (+1.37%)` | :arrow_up: |\n| [src/transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/4809/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `86.21% <0.00%> (+14.10%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4809?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4809?src=pr&el=footer). 
Last update [47a551d...6d8a589](https://codecov.io/gh/huggingface/transformers/pull/4809?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Merging for now since this is still unreleased code.", "@LysandreJik - not sure what codecov complains about." ]
1,591
1,591
1,591
MEMBER
null
When instantiating an encoder-decoder configuration from two pretrained configs, the decoder config should automatically be set to `config.is_decoder=True`. In general, whenever we instantiate an encoder-decoder model, no matter how, the resulting decoder config should have the attribute `decoder.is_decoder=True`. This PR also adds a couple of tests to make sure that an encoder-decoder model can be instantiated from two configs via the encoder-decoder config class.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4809/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4809/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4809", "html_url": "https://github.com/huggingface/transformers/pull/4809", "diff_url": "https://github.com/huggingface/transformers/pull/4809.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4809.patch", "merged_at": 1591391798000 }
https://api.github.com/repos/huggingface/transformers/issues/4808
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4808/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4808/comments
https://api.github.com/repos/huggingface/transformers/issues/4808/events
https://github.com/huggingface/transformers/pull/4808
631,935,471
MDExOlB1bGxSZXF1ZXN0NDI4NzM5MDYx
4,808
Expose classes used in documentation
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4808?src=pr&el=h1) Report\n> Merging [#4808](https://codecov.io/gh/huggingface/transformers/pull/4808?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5c0cfc2cf0941d2db368767fd232d8712449c7f8&el=desc) will **increase** coverage by `0.39%`.\n> The diff coverage is `100.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4808/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4808?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4808 +/- ##\n==========================================\n+ Coverage 76.29% 76.69% +0.39% \n==========================================\n Files 128 128 \n Lines 21495 21495 \n==========================================\n+ Hits 16400 16485 +85 \n+ Misses 5095 5010 -85 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4808?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/4808/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.13% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4808/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.39% <0.00%> (+0.24%)` | :arrow_up: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4808/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `89.17% <0.00%> (+2.01%)` | :arrow_up: |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/4808/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `78.40% <0.00%> (+4.80%)` | :arrow_up: |\n| [src/transformers/configuration\\_encoder\\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/4808/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2VuY29kZXJfZGVjb2Rlci5weQ==) | `100.00% <0.00%> (+61.53%)` | :arrow_up: |\n| [src/transformers/modeling\\_encoder\\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/4808/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbmNvZGVyX2RlY29kZXIucHk=) | `92.20% <0.00%> (+64.93%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4808?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4808?src=pr&el=footer). Last update [5c0cfc2...e2a7c2d](https://codecov.io/gh/huggingface/transformers/pull/4808?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Can we tell sphinx to look at more than just init? ", "Looking at the sphinx documentation, there seems to be an option to use the modules and specify which parts of the module we want documented. 
Will try this as an alternative!", "Looked further, but the workaround to use automodule and specifying a few functions will add the docstring of `tokenization_utils` and make the names longer (it becomes `transformers.tokenization_utils.SpecialTokensMixin` instead of `transformers.SpecialTokensMixin` which is fair enough, since it's not in transformers anymore).\r\n\r\nAvoiding the module docstring seems possible by hacking something in conf.py but it then will be done globally, and may impact some other pages...\r\n\r\nSo merging this as is, and we can revisit if we really want to remove some of those things from `__init__`." ]
1,591
1,591
1,591
COLLABORATOR
null
Currently, the documentation page of the tokenizers has three methods lacking documentation (see [here](https://huggingface.co/transformers/main_classes/tokenizer.html#pretrainedtokenizerfast)). This PR adds them to the `__init__` so sphinx can see them. If there is one that should not be public, we should remove it from the documentation.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4808/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4808/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4808", "html_url": "https://github.com/huggingface/transformers/pull/4808", "diff_url": "https://github.com/huggingface/transformers/pull/4808.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4808.patch", "merged_at": 1591618473000 }
https://api.github.com/repos/huggingface/transformers/issues/4807
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4807/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4807/comments
https://api.github.com/repos/huggingface/transformers/issues/4807/events
https://github.com/huggingface/transformers/pull/4807
631,929,438
MDExOlB1bGxSZXF1ZXN0NDI4NzMzNzU1
4,807
Use labels to remove deprecation warnings
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4807?src=pr&el=h1) Report\n> Merging [#4807](https://codecov.io/gh/huggingface/transformers/pull/4807?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5c0cfc2cf0941d2db368767fd232d8712449c7f8&el=desc) will **decrease** coverage by `0.34%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4807/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4807?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4807 +/- ##\n==========================================\n- Coverage 76.29% 75.95% -0.35% \n==========================================\n Files 128 128 \n Lines 21495 21495 \n==========================================\n- Hits 16400 16326 -74 \n- Misses 5095 5169 +74 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4807?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/4807/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `71.83% <0.00%> (-14.56%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4807/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.00% <0.00%> (-2.04%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4807/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `94.73% <0.00%> (-0.96%)` | :arrow_down: |\n| [src/transformers/modeling\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/4807/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `76.65% <0.00%> (-0.92%)` | :arrow_down: |\n| [src/transformers/modeling\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/4807/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbGVjdHJhLnB5) | `76.00% <0.00%> (-0.80%)` | :arrow_down: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4807/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `86.42% <0.00%> (-0.74%)` | :arrow_down: |\n| [src/transformers/modeling\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/4807/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `97.56% <0.00%> (-0.61%)` | :arrow_down: |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4807/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `93.02% <0.00%> (-0.49%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/4807/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.26% <0.00%> (-0.40%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/4807/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `96.43% <0.00%> (-0.38%)` | :arrow_down: |\n| ... 
and [2 more](https://codecov.io/gh/huggingface/transformers/pull/4807/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4807?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4807?src=pr&el=footer). Last update [5c0cfc2...75f15ff](https://codecov.io/gh/huggingface/transformers/pull/4807?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Awesome!" ]
1,591
1,591
1,591
COLLABORATOR
null
This is a follow-up to #4722 and removes the deprecated arguments in the tests.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4807/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4807/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4807", "html_url": "https://github.com/huggingface/transformers/pull/4807", "diff_url": "https://github.com/huggingface/transformers/pull/4807.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4807.patch", "merged_at": 1591389707000 }
https://api.github.com/repos/huggingface/transformers/issues/4806
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4806/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4806/comments
https://api.github.com/repos/huggingface/transformers/issues/4806/events
https://github.com/huggingface/transformers/issues/4806
631,880,557
MDU6SXNzdWU2MzE4ODA1NTc=
4,806
Albert pretrained weights change across runs.
{ "login": "CVxTz", "id": 13545260, "node_id": "MDQ6VXNlcjEzNTQ1MjYw", "avatar_url": "https://avatars.githubusercontent.com/u/13545260?v=4", "gravatar_id": "", "url": "https://api.github.com/users/CVxTz", "html_url": "https://github.com/CVxTz", "followers_url": "https://api.github.com/users/CVxTz/followers", "following_url": "https://api.github.com/users/CVxTz/following{/other_user}", "gists_url": "https://api.github.com/users/CVxTz/gists{/gist_id}", "starred_url": "https://api.github.com/users/CVxTz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/CVxTz/subscriptions", "organizations_url": "https://api.github.com/users/CVxTz/orgs", "repos_url": "https://api.github.com/users/CVxTz/repos", "events_url": "https://api.github.com/users/CVxTz/events{/privacy}", "received_events_url": "https://api.github.com/users/CVxTz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I just did the same experiment with Roberta weights and did not have the same issue.", "Hi, I can reproduce. This is due to the archive maps not being available anymore, and therefore the wrong ALBERT models are linked.\r\n\r\nThanks for raising the issue, this is quite a bug.\r\n\r\ncc @julien-c ", "My bad! It's my fault. I added a warning to the release notes about this: https://github.com/huggingface/transformers/releases/tag/v2.11.0", "Is there a plan to fix this? Looks like the issue is that the \"real\" model we want is named `with-prefix-tf_model.h5`, which needs to be renamed to `tf_model.h5`. https://huggingface.co/albert-base-v2#list-files", "This should work now, the weights have been changed to use the `with-prefix` weights.", "Thanks !! " ]
1,591
1,592
1,592
NONE
null
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): TFAlbertModel Language I am using the model on (English, Chinese ...): English The problem arises when using: * [ ] the official example scripts: (give details below) * [X] my own modified scripts: (give details below) ``` import tensorflow as tf from transformers import AlbertTokenizer, TFAlbertModel tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2') model = TFAlbertModel.from_pretrained('albert-base-v2') model.summary() print(len(model.trainable_weights)) print(model.trainable_weights[23]) input_ids = tf.constant(tokenizer.encode("Hello, my dog is cute"))[None, :] outputs = model(input_ids) print(outputs[0].shape, outputs[1].shape, len(outputs)) last_hidden_states = outputs[0] print(last_hidden_states) ``` The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [X] my own task or dataset: (give details below) Trying to load pre-trained weights ## To reproduce Run the code above two times and you will see that the weights of the model are not the same across the two runs Steps to reproduce the behavior: 1. Run the code the first time and log the output 2. Run the code a second time and log the output 3. Check that the two logs are not the same. ## Expected behavior Since the model is loading pre-trained weights the results should be the same across runs. ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 2.11.0 - Platform: Linux-4.4.0-179-generic-x86_64-with-debian-stretch-sid - Python version: 3.6.9 - PyTorch version (GPU?): 1.4.0 (True) - Tensorflow version (GPU?): 2.0.1 (True) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No I apologize if the issue is due to me misusing your library, first time using Albert.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4806/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4806/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4805
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4805/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4805/comments
https://api.github.com/repos/huggingface/transformers/issues/4805/events
https://github.com/huggingface/transformers/issues/4805
631,876,991
MDU6SXNzdWU2MzE4NzY5OTE=
4,805
Invalid Argument for Onnxruntime Inference on GPT2
{ "login": "mihail911", "id": 2789441, "node_id": "MDQ6VXNlcjI3ODk0NDE=", "avatar_url": "https://avatars.githubusercontent.com/u/2789441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mihail911", "html_url": "https://github.com/mihail911", "followers_url": "https://api.github.com/users/mihail911/followers", "following_url": "https://api.github.com/users/mihail911/following{/other_user}", "gists_url": "https://api.github.com/users/mihail911/gists{/gist_id}", "starred_url": "https://api.github.com/users/mihail911/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mihail911/subscriptions", "organizations_url": "https://api.github.com/users/mihail911/orgs", "repos_url": "https://api.github.com/users/mihail911/repos", "events_url": "https://api.github.com/users/mihail911/events{/privacy}", "received_events_url": "https://api.github.com/users/mihail911/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "mfuntowicz", "id": 2241520, "node_id": "MDQ6VXNlcjIyNDE1MjA=", "avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mfuntowicz", "html_url": "https://github.com/mfuntowicz", "followers_url": "https://api.github.com/users/mfuntowicz/followers", "following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}", "gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}", "starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions", "organizations_url": "https://api.github.com/users/mfuntowicz/orgs", "repos_url": "https://api.github.com/users/mfuntowicz/repos", "events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}", "received_events_url": "https://api.github.com/users/mfuntowicz/received_events", "type": "User", "site_admin": false }
[ { "login": "mfuntowicz", "id": 2241520, "node_id": "MDQ6VXNlcjIyNDE1MjA=", "avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mfuntowicz", "html_url": "https://github.com/mfuntowicz", "followers_url": "https://api.github.com/users/mfuntowicz/followers", "following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}", "gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}", "starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions", "organizations_url": "https://api.github.com/users/mfuntowicz/orgs", "repos_url": "https://api.github.com/users/mfuntowicz/repos", "events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}", "received_events_url": "https://api.github.com/users/mfuntowicz/received_events", "type": "User", "site_admin": false } ]
[ "Assigning @mfuntowicz, the king of the onnx conversion!", "@mihail911, do you need attention_mask and token_type_ids in input? If not, you can inference the exported model like the following:\r\n model.run(None, {\"input_ids\": np.array([blah])})\r\n\r\nGPT-2 attention is unidirectional (right attends to left). User need not provide attention mask (at least for batch_size=1) and token_type_ids (Assume that all words have token type id=0).\r\n\r\nFor GPT-2, it is recommended to export model with past to get better performance. Currently, convert_graph_to_onnx.py cannot export past. You can use a custom script to do that. Here is an [example]( https://github.com/microsoft/onnxruntime/blob/7c8e1580a13ce333e47a41146bccfc90b3a70db5/onnxruntime/python/tools/transformers/benchmark_gpt2.py#L246). Note that optimization for past state is ongoing, and it will be available in onnxruntime nightly build sometime next week.", "thanks for the prompt response @tianleiwu! \r\n\r\nI agree that gpt2 doesn't strictly require the other parameters, but if I have a model that was trained using the token_type_id params because of having particularly formatted inputs, then not providing them at inference time may lead to decreased performance. \r\n\r\nIs there a way to provide them anyway?", "@mihail911, here is example script to export model with token_type_ids (but without past input):\r\n```\r\nimport torch\r\nfrom transformers import (GPT2Config, GPT2Model, GPT2Tokenizer)\r\n\r\n# use_cache is True by default in GPT2Model. Here we wrap a class to disable past state output.\r\nclass GPT2ModelNoPastState(GPT2Model):\r\n def __init__(self, config):\r\n super().__init__(config)\r\n\r\n def forward(self, input_ids, attention_mask, token_type_ids):\r\n return super().forward(input_ids, past=None, attention_mask=attention_mask, token_type_ids=token_type_ids, use_cache=False)\r\n\r\nmodel_name=\"gpt2\"\r\nconfig = GPT2Config.from_pretrained(model_name)\r\ntokenizer = GPT2Tokenizer.from_pretrained(model_name)\r\nmodel = GPT2ModelNoPastState.from_pretrained(model_name)\r\n\r\nexample_inputs = tokenizer.encode_plus(\"This is a sample input\", return_tensors=\"pt\")\r\nexample_outputs = model(**example_inputs)\r\n\r\ninput_names = ['input_ids', 'attention_mask', 'token_type_ids']\r\noutput_names=[\"output_1\"]\r\ndynamic_axes={'input_ids': {0: 'batch_size', 1: 'seq_len'}, 'attention_mask': {0: 'batch_size', 1: 'seq_len'}, 'token_type_ids': {0: 'batch_size', 1: 'seq_len'}, 'output_1': {0: 'batch_size', 1: 'seq_len'}}\r\noutput_path=\"gpt2.onnx\"\r\ntorch.onnx.export(model=model,\r\n args=(example_inputs[input_names[0]], example_inputs[input_names[1]], example_inputs[input_names[2]]),\r\n f=output_path,\r\n input_names=input_names,\r\n output_names=output_names,\r\n example_outputs=example_outputs,\r\n dynamic_axes=dynamic_axes,\r\n do_constant_folding=True,\r\n opset_version=11,\r\n use_external_data_format=False)\r\n```\r\n\r\nBTW, I noticed that the token type use same embedding table as word embedding:\r\nhttps://github.com/huggingface/transformers/blob/c58e6c129a153ca1a5021e5d7e642d00bf011e20/src/transformers/modeling_gpt2.py#L465-L469\r\nThis looks like a bug. You might try to fix this if you want to get benefit from token_type_ids input.", "Thanks for the detailed follow-up @tianleiwu! \r\n\r\nI tried executing your code and I found that the dimensions of the output seemed incorrect. 
The output ended up being `(batch_size, seq_length, hidden_dim)` rather than the dimensions of the prediction scores when you run the forward pass of the GPT2 model (`(batch_size, seq_length, config.vocab_size)`).\r\n\r\nThis was the case even if I didn't explicitly provide the `output_names` or the dimensions in the `dynamic_axes` (i.e. I set `output_names=None`)\r\n\r\nDo you happen to know why that's the case?", "@mihail911, it is expected that last dimension of first output (last_hidden_state) is hidden size as documented in code:\r\nhttps://github.com/huggingface/transformers/blob/a139d1a1602ee72ca98d5e0412efbd68f746d2c8/src/transformers/modeling_gpt2.py#L383\r\n\r\nIf you want prediction scores, you can try export GPT2LMHeadModel instead of GPT2Model.\r\n", "@tianleiwu You are absolutely right. I accidentally missed that. \r\n\r\nThis works now -- thanks for all your help!" ]
1,591
1,596
1,591
NONE
null
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): GPT2 Language I am using the model on (English, Chinese ...): English The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: I've been following the ipython notebook provided [here](https://github.com/huggingface/transformers/blob/master/notebooks/04-onnx-export.ipynb) 1. Take an off-the-shelf pretrained `gpt` model and export to onnx format using the following invocation: ``` python convert_graph_to_onnx.py --framework pt --model gpt2 gpt2.onnx ``` 2. Run inference on the exported onnx model, following the steps [here](https://github.com/huggingface/transformers/blob/master/notebooks/04-onnx-export.ipynb). After invoking the appropriate provider, run inference using something like the following ``` model.run(None, {"input_ids": np.array([blah]), "token_type_ids": np.array([blah]), "attention_mask": np.array([blah]) ``` Note above, `blah` is replaced with actual data. After invoking the above, I get the error: ``` onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Invalid Feed Input Name:token_type_ids ``` ## Expected behavior I would expect this to work successfully. My hypothesis is that the `convert_graph_to_onnx.py` is not exporting all the inputs from the `gpt2` model. In particular in line 43-48: ``` for arg_name in model_args_name[1:]: # start at index 1 to skip "self" argument if arg_name in input_names: ordered_input_names.append(arg_name) model_args.append(tokens[arg_name]) else: break ``` `model_args` is only populated with `input_ids` because the order of arguments in the `forward` method of `gpt2` is `input_ids, past, attention_mask, token_type_ids` so the for loop breaks early. ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: Commit 0e1869cc286d607f1598506be7bd1312b76ca82c - Onnxruntime: 1.3.0 - Python version: 3.6.10 - PyTorch version (GPU?): 1.5.0+cu101 - Using GPU in script?: Yes Thanks for your help! @mfuntowicz @tianleiwu
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4805/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4805/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4804
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4804/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4804/comments
https://api.github.com/repos/huggingface/transformers/issues/4804/events
https://github.com/huggingface/transformers/pull/4804
631,869,488
MDExOlB1bGxSZXF1ZXN0NDI4NjgxNDY3
4,804
Add link to community models
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks for the review @clmnt, doing god's work" ]
1,591
1,591
1,591
COLLABORATOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4804/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4804/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4804", "html_url": "https://github.com/huggingface/transformers/pull/4804", "diff_url": "https://github.com/huggingface/transformers/pull/4804.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4804.patch", "merged_at": 1591385360000 }
https://api.github.com/repos/huggingface/transformers/issues/4803
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4803/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4803/comments
https://api.github.com/repos/huggingface/transformers/issues/4803/events
https://github.com/huggingface/transformers/pull/4803
631,864,842
MDExOlB1bGxSZXF1ZXN0NDI4Njc3NDky
4,803
[WIP] Blenderbot
{ "login": "mariamabarham", "id": 38249783, "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariamabarham", "html_url": "https://github.com/mariamabarham", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "repos_url": "https://api.github.com/users/mariamabarham/repos", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false } ]
[ "Awesome, looks like the model is soon complete :-) \r\n\r\nA couple of things I think that could be improved a bit:\r\n\r\n1. More consistent naming with other models. For me personally, I try to write the code as similar as possible to the `bert_modeling.py` code. `input_tensor` => `hidden_states`, `incremental_state` => `past_key_value_state`, ...\r\n2. More modularization. IMO, it's always good to have many independent classes in a model. `BlenderbotEmbeddings`, `BlenderbotEncoder`, `BlenderbotPoolingLayer`, ...Even if the forward function of these classes only has a couple of lines, it's more readable for the user and also gives you much more flexibility when you want to apply changes to the model later. I would also take a look at the BertModel for this and try to make it as similar as possible\r\n3. Usually, the config is just passed down to the layers instead of writing out all the needed params. This has 2 advantages. 1) Less code 2) No need to set the params to default parameters in the funciton arguments that could confuse the user\r\n4. Make the model as minimal as possible, especially as possible. What I mean by this is that the forward passes of the model should only do what there are supposed to do and we should try to avoid adding any functions that do things under the hood. For example cutting the hidden_states to its last state when using the cache (I have done the same thing in GPT2 and after discussing with @thomwolf it's quite clear now that this can lead to problems as shown in this issue: https://github.com/huggingface/transformers/issues/4368#issuecomment-630244541). This also concers any special function (-inf setting of certain tokens), which should be handled by `generate()` as it's done for Bart. \r\n\r\nOverall, I think the design as it is now fits well with the EncoderDecoder design! It looks to be very similar to Bart (pining @sshleifer here, maybe you can take a look as well). So I think you should just focus on one single forward() pass here and the incremental generation will be handled by the `generate()` method. ", "Just a note from my side.\r\nI'm 100% on board with inheriting from module classes, like `BartEncoder` and `BartDecoder` if they are 1-to-1 the same and only the naming has to be changed. \r\n\r\nOn the other hand, I'm not 100% on board with inheriting from a class and then overwriting specific functionality that is different. I guess in the case of `RobertaTokenizer`, with this tokenizer inheriting from `GPT2Tokenizer`, it's alright because it follows already existing logic in the library, but I'm not really a fan of it. IMO, inheritance should only be done if the functionality is _1-to-1 the same_ and not if only parts of the functionality are the same. I very much like the \"Composition over inheritance\" principle: https://en.wikipedia.org/wiki/Composition_over_inheritance . Also, I don't really mind copy-pasting code to some degree if it gives a clear gain in flexibility, which for me is probably the most important factor to consider in fast-changing research code.\r\n\r\nFor this model, I think it's great if we can reuse `BartEncoder` and `BartDecoder`, but should not abstract at a too high level if the models are just not the same (as was done in Longformer and which I want to change soon). \r\n\r\n\r\n@sshleifer I guess we have very different opinions on this case :D \r\n\r\nAlso, pinging @thomwolf and @LysandreJik to hear their opinion on this ", "@patrickvonplaten I think we are completely aligned on the specifics. 
We can talk about the principles at a bar someday, but your 100% rule would have us delete `PretrainedModel` :)", "I just pushed a not particularly clean but working version for blenderbot-90M as shown by `test_samgen`. \r\n\r\nThere are still a few known issues, most importantly:\r\n- The model does not generate EOS Token at the end of generations. \r\n- I haven't tested the 3B model. Test is very slow (like 10 mins) on CPU. It should only run on GPU.\r\n- The tokenizers don't work (at least on my machine)\r\n- Our length_penalty implem is [different](https://github.com/facebookresearch/ParlAI/blob/22d75cbfdcf4c093b2e2c660656b65aba77bd802/parlai/core/torch_generator_agent.py#L1474) than blenderbot. We need to do the math to figure out the right number. \r\n- Bart Change: If we decide to use `config.variant` to decide the layernorm order, a practice that I took from [`parlai`](https://github.com/facebookresearch/ParlAI/blob/a20ea268f9b5ef930b97ba5c608b050f7ee63627/parlai/agents/transformer/modules.py#L445) I need to update configs and raise a DeprecationWarning for `config.normalize_before`. I can also check/fix the configs on the model zoo. My opinion are that both ways of supporting such a small difference between the variants are annoying, but this is the least annoying way to support the different order of layernorm operations for variant=='xlm' (which blenderbot-90B uses), and we already had to do it with `mbart/config.normalize_before`. I'd be happy to write a doc explaining the settings. We can also delete `aiayn` which we don't use. I also just copied the naming. It should probably be changed.\r\n\r\nCan test 3B tomorrow. I'm fine with whatever other people want to do stylistically.", "> I just pushed a not particularly clean but working version for blenderbot-90M as shown by `test_samgen`.\r\n> \r\n> There are still a few known issues, most importantly:\r\n> \r\n> * The model does not generate EOS Token at the end of generations.\r\n> * I haven't tested the 3B model. Test is very slow (like 10 mins) on CPU. It should only run on GPU.\r\n> * The tokenizers don't work (at least on my machine)\r\n> * Our length_penalty implem is [different](https://github.com/facebookresearch/ParlAI/blob/22d75cbfdcf4c093b2e2c660656b65aba77bd802/parlai/core/torch_generator_agent.py#L1474) than blenderbot. We need to do the math to figure out the right number.\r\n> * Bart Change: If we decide to use `config.variant` to decide the layernorm order, a practice that I took from [`parlai`](https://github.com/facebookresearch/ParlAI/blob/a20ea268f9b5ef930b97ba5c608b050f7ee63627/parlai/agents/transformer/modules.py#L445) I need to update configs and raise a DeprecationWarning for `config.normalize_before`. I can also check/fix the configs on the model zoo. My opinion are that both ways of supporting such a small difference between the variants are annoying, but this is the least annoying way to support the different order of layernorm operations for variant=='xlm' (which blenderbot-90B uses), and we already had to do it with `mbart/config.normalize_before`. I'd be happy to write a doc explaining the settings. We can also delete `aiayn` which we don't use. I also just copied the naming. It should probably be changed.\r\n> \r\n> Can test 3B tomorrow. 
I'm fine with whatever other people want to do stylistically.\r\n\r\nFor the 90M tokenizer I pushed a working test here: https://github.com/huggingface/transformers/pull/4803/commits/724dc8798187801f382082bf32ba6025d15426de", "Updates:\r\n\r\n- both tokenizers (for 3B and 90M model)\r\n- `modeling_bart` when `variant==prelayernorm`\r\n- `special_tokens_map.json` file to replace `\"sep_token\": \"</s>\"` by `\"sep_token\": \"__end__\"\r\n- `pytorch_model.bin` \r\n\r\n1. 3B model is working perfectly and generation output is the same as parlai\r\n2. 90M model is working in some case but sometime it's generation output is a bit different to parlai for example:\r\n\r\n- parlai output: `__start__ i ' m not sure . i just feel like i ' m going to throw up .`\r\n- blenderbot output: `__start__ i don ' t know . i just feel like i ' m going to throw up .`\r\n\r\nNot solved yet:\r\n\r\n- Both models do not generate `eos_token`", "### LayerNorm Variant Problem\r\nThe problem with `layernorm_variant` is that the two blenderbot checkpoints have layernorm in **different** places. `bbot-90m.config.layernorm_variant='xlm'`, whereas `bbot-3b.config.layernorm_variant=prelayernorm` so one way I can think to do it without an if statement is separate `Blenderbot90Model` and `Blenderbot3BModel` which is inconsistent with the rest of the repo, where individual checkpoints do not have separate model classes. We would then also need two configs and two model types and probably two of some other things.\r\n\r\n### Possible Solutions\r\n+ Write a doc and markdown table containing: what is each layernorm variant+which models use it, the link to that doc both in the config/code and also from model cards.\r\n+ Don't port bbot-90m. If we don't port bbot-90m, we don't need to add config.layernorm_variant -- bbot3b layers are identical to mbart layers. The issue here is that bbot-3b **barely** runs inference on 1 GPU with bs=1.\r\n+ separate `Blenderbot90Model` and `Blenderbot3BModel`.\r\n+ There are also solutions where we parametrize out EncoderLayer/DecoderLayer, but these seem more confusing/harder to understand/less consistent to me. \r\n\r\nI am very open to suggestions, and if I don't get any I will keep working on trying to get the forward pass into one file, as @thomwolf wrote in slack today.\r\n\r\n", "Moving here for cleaner history: https://github.com/huggingface/transformers/pull/7418" ]
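Editor's note: the `layernorm_variant` debate above is about where `LayerNorm` sits relative to the residual branch in each encoder/decoder layer. A minimal sketch of the parametrized approach, assuming a hypothetical config flag — the class name, the flag, and the variant mapping below are illustrative assumptions, not the actual Bart/Blenderbot code:

```python
import torch.nn as nn

class EncoderLayerSketch(nn.Module):
    """Illustrative only: one layer whose layernorm placement is config-driven."""

    def __init__(self, d_model: int, layernorm_variant: str = "aiayn"):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, num_heads=8)
        self.norm = nn.LayerNorm(d_model)
        # Assumed mapping: 'prelayernorm' normalizes before attention, the
        # classic 'aiayn' order normalizes after. The real 'xlm' variant places
        # norms differently again (per the thread, the two blenderbot
        # checkpoints differ), so a full version would branch further.
        self.normalize_before = layernorm_variant == "prelayernorm"

    def forward(self, hidden_states):
        residual = hidden_states
        if self.normalize_before:
            hidden_states = self.norm(hidden_states)
        hidden_states, _ = self.self_attn(hidden_states, hidden_states, hidden_states)
        hidden_states = residual + hidden_states
        if not self.normalize_before:
            hidden_states = self.norm(hidden_states)
        return hidden_states
```

A single flag on one shared layer class is the alternative the thread weighs against having separate `Blenderbot90Model`/`Blenderbot3BModel` classes.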
1,591
1,651
1,601
NONE
null
**UPDATES - 14 AUGUST 2020** - Blenderbot-3B is working exactly the same way as parlai and they have the same generation outputs - Blenderbot-90M also generates the same output as parlai in most cases, but it can sometimes generate a sequence with a small difference. For example: Parlai: `i' m not sure . i just feel like i ' m going to throw up . ` hf: `i don ' t know . i just feel like i ' m going to throw up .` The discrepancy could be from the length penalty or some other beam search param. **Update Sep 17** @sshleifer taking over ### TODO: - check eos generation - test distilled 2b model - `AutoTokenizer`/`AutoModelForSeq2SeqLM` coverage - debug failing 3b integration test - implement backwards compatibility for the variant change by checking `config.model_type` - document `layernorm_variant` nicely or pursue an alternative solution.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4803/reactions", "total_count": 12, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 3, "rocket": 5, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4803/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4803", "html_url": "https://github.com/huggingface/transformers/pull/4803", "diff_url": "https://github.com/huggingface/transformers/pull/4803.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4803.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/4802
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4802/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4802/comments
https://api.github.com/repos/huggingface/transformers/issues/4802/events
https://github.com/huggingface/transformers/pull/4802
631,843,391
MDExOlB1bGxSZXF1ZXN0NDI4NjU5MTU3
4,802
[cleanup] MarianTokenizer: delete unused constants
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4802?src=pr&el=h1) Report\n> Merging [#4802](https://codecov.io/gh/huggingface/transformers/pull/4802?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/acaa2e6267ebfda9814795fa00b6ad86c35ea5d6&el=desc) will **increase** coverage by `1.69%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4802/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4802?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4802 +/- ##\n==========================================\n+ Coverage 74.59% 76.28% +1.69% \n==========================================\n Files 128 128 \n Lines 21500 21495 -5 \n==========================================\n+ Hits 16037 16397 +360 \n+ Misses 5463 5098 -365 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4802?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/4802/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `92.79% <ø> (-0.32%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4802/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.15% <0.00%> (+0.12%)` | :arrow_up: |\n| [src/transformers/benchmark/benchmark\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4802/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3V0aWxzLnB5) | `43.57% <0.00%> (+0.35%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4802/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.20% <0.00%> (+0.94%)` | :arrow_up: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4802/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.62% <0.00%> (+1.17%)` | :arrow_up: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/4802/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `75.69% <0.00%> (+2.29%)` | :arrow_up: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/4802/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `98.64% <0.00%> (+2.71%)` | :arrow_up: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/4802/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `40.67% <0.00%> (+3.87%)` | :arrow_up: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4802/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `95.69% <0.00%> (+10.04%)` | :arrow_up: |\n| [src/transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/4802/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `86.39% <0.00%> (+55.06%)` | :arrow_up: |\n| ... 
and [1 more](https://codecov.io/gh/huggingface/transformers/pull/4802/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4802?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4802?src=pr&el=footer). Last update [acaa2e6...3f826a4](https://codecov.io/gh/huggingface/transformers/pull/4802?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Can anyone help with this issue: #5040 ?", "on it!" ]
1,591
1,592
1,591
CONTRIBUTOR
null
slow tests pass. The only needed constant is `vocab_files_names`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4802/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4802/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4802", "html_url": "https://github.com/huggingface/transformers/pull/4802", "diff_url": "https://github.com/huggingface/transformers/pull/4802.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4802.patch", "merged_at": 1591383444000 }
https://api.github.com/repos/huggingface/transformers/issues/4801
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4801/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4801/comments
https://api.github.com/repos/huggingface/transformers/issues/4801/events
https://github.com/huggingface/transformers/issues/4801
631,820,649
MDU6SXNzdWU2MzE4MjA2NDk=
4,801
pip install -e does not always install the correct isort version
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@LysandreJik @julien-c any ideas?", "Works for me, no issues. But `isort==4.3.21` is not precise enough, you need to have the actual precise commit.\r\n\r\nCan you try `pip uninstall isort && pip install git+git://github.com/timothycrosley/isort.git@e63ae06ec7d70b06df9e528357650281a3d3ec22#egg=isort`\r\n?", "That worked! \r\nbut my pip freeze still says \r\n```\r\nisort==4.3.21\r\n```\r\n\r\nwhich seems like the reason that \r\n```bash\r\npip install -e .[\"dev\"]\r\n```\r\ndidn't work.", "pip is confusing, but you technically have version \"4.3.21\" of isort if you install from the specified commit – but not **the** version \"4.3.21\".\r\n\r\ni.e. the version number in the setup.py of the package that you install from git is still the string \"4.3.21\".\r\n\r\nDo you see what I mean?", "Yes I think I do, rephrase: there are multiple different versions of isort called 4.3.21 and pip install -e . will be satisfied if you have any of them, so if you have the wrong one you have to manually run\r\n```bash\r\npip uninstall isort\r\npip install git+git://github.com/timothycrosley/isort.git@e63ae06ec7d70b06df9e528357650281a3d3ec22#egg=isort\r\n```", "Yep" ]
1,591
1,591
1,591
CONTRIBUTOR
null
```bash pip install -e .["dev"] make quality ``` ### Output ```bash black --check --line-length 119 --target-version py35 examples templates tests src utils All done! ✨ 🍰 ✨ 306 files would be left unchanged. isort --check-only --recursive examples templates tests src utils ERROR: /Users/shleifer/transformers_fork/examples/benchmarking/plot_csv_file.py Imports are incorrectly sorted. ERROR: /Users/shleifer/transformers_fork/templates/adding_a_new_example_script/run_xxx.py Imports are incorrectly sorted. ERROR: /Users/shleifer/transformers_fork/templates/adding_a_new_example_script/utils_xxx.py Imports are incorrectly sorted. ERROR: /Users/shleifer/transformers_fork/src/transformers/__init__.py Imports are incorrectly sorted. make: *** [quality] Error 1 ``` relevant packages: ```python flake8==3.8.1 isort==4.3.21 black==19.10b0 ``` Env: ``` - `transformers` version: 2.11.0 - Platform: Darwin-19.4.0-x86_64-i386-64bit - Python version: 3.7.5 - PyTorch version (GPU?): 1.5.0 (False) - Tensorflow version (GPU?): 2.2.0 (False) ``` Would also be good to add more verbose error messages if possible
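A note on why `pip install -e .["dev"]` is satisfied by the wrong build here: the git install and the PyPI release share the version string `4.3.21`, which is all pip compares. One way to make the requirement unambiguous is a PEP 508 direct reference pinned to the exact commit. A sketch, assuming a `setup.py`-style extras dict (the layout and surrounding pins are illustrative; the commit hash is the one from the thread):

```python
# Hypothetical extras_require snippet; only the isort entry matters here.
extras_require = {
    "dev": [
        "black==19.10b0",
        "flake8==3.8.1",
        # PEP 508 direct reference: a PyPI build that merely shares the
        # "4.3.21" version string can no longer satisfy this requirement.
        "isort @ git+https://github.com/timothycrosley/isort.git"
        "@e63ae06ec7d70b06df9e528357650281a3d3ec22",
    ]
}
```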
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4801/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4801/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4800
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4800/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4800/comments
https://api.github.com/repos/huggingface/transformers/issues/4800/events
https://github.com/huggingface/transformers/pull/4800
631,801,234
MDExOlB1bGxSZXF1ZXN0NDI4NjIyMzE1
4,800
[isort] add matplotlib to known 3rd party dependencies
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4800?src=pr&el=h1) Report\n> Merging [#4800](https://codecov.io/gh/huggingface/transformers/pull/4800?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/acaa2e6267ebfda9814795fa00b6ad86c35ea5d6&el=desc) will **increase** coverage by `2.09%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4800/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4800?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4800 +/- ##\n==========================================\n+ Coverage 74.59% 76.68% +2.09% \n==========================================\n Files 128 128 \n Lines 21500 21500 \n==========================================\n+ Hits 16037 16488 +451 \n+ Misses 5463 5012 -451 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4800?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4800/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.85% <0.00%> (-0.42%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4800/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.15% <0.00%> (+0.12%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4800/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.41% <0.00%> (+0.15%)` | :arrow_up: |\n| [src/transformers/benchmark/benchmark\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4800/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3V0aWxzLnB5) | `43.57% <0.00%> (+0.35%)` | :arrow_up: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4800/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.62% <0.00%> (+1.17%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4800/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.03% <0.00%> (+1.35%)` | :arrow_up: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/4800/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `75.69% <0.00%> (+2.29%)` | :arrow_up: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/4800/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `98.64% <0.00%> (+2.71%)` | :arrow_up: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/4800/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `40.67% <0.00%> (+3.87%)` | :arrow_up: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4800/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `95.69% <0.00%> (+10.04%)` | :arrow_up: |\n| ... 
and [3 more](https://codecov.io/gh/huggingface/transformers/pull/4800/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4800?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4800?src=pr&el=footer). Last update [acaa2e6...650208b](https://codecov.io/gh/huggingface/transformers/pull/4800?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Awesome, thanks @sshleifer - I should have added this when adding the benchmarks!" ]
1,591
1,591
1,591
CONTRIBUTOR
null
Many people (like me) have it installed locally, so this will synchronize local isort and circleci.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4800/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4800/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4800", "html_url": "https://github.com/huggingface/transformers/pull/4800", "diff_url": "https://github.com/huggingface/transformers/pull/4800.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4800.patch", "merged_at": 1591392452000 }
https://api.github.com/repos/huggingface/transformers/issues/4799
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4799/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4799/comments
https://api.github.com/repos/huggingface/transformers/issues/4799/events
https://github.com/huggingface/transformers/pull/4799
631,798,464
MDExOlB1bGxSZXF1ZXN0NDI4NjE5OTA2
4,799
[cleanup] consolidate some prune_heads logic
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4799?src=pr&el=h1) Report\n> Merging [#4799](https://codecov.io/gh/huggingface/transformers/pull/4799?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/56d5d160cdd177ae6e644506535b56e79feccf68&el=desc) will **decrease** coverage by `0.77%`.\n> The diff coverage is `91.30%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4799/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4799?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4799 +/- ##\n==========================================\n- Coverage 76.15% 75.38% -0.78% \n==========================================\n Files 128 128 \n Lines 21497 21464 -33 \n==========================================\n- Hits 16371 16180 -191 \n- Misses 5126 5284 +158 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4799?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/4799/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `31.93% <50.00%> (-53.83%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/4799/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `84.26% <50.00%> (+0.99%)` | :arrow_up: |\n| [src/transformers/modeling\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/4799/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `76.33% <100.00%> (-0.33%)` | :arrow_down: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4799/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `86.27% <100.00%> (-0.16%)` | :arrow_down: |\n| [src/transformers/modeling\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/4799/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `97.52% <100.00%> (-0.05%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4799/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.96% <100.00%> (+0.96%)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4799/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.02% <100.00%> (-0.14%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/4799/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG0ucHk=) | `89.27% <100.00%> (-0.16%)` | :arrow_down: |\n| ... and [2 more](https://codecov.io/gh/huggingface/transformers/pull/4799/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4799?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4799?src=pr&el=footer). Last update [56d5d16...bfb3251](https://codecov.io/gh/huggingface/transformers/pull/4799?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Nice! LGTM" ]
1,591
1,591
1,591
CONTRIBUTOR
null
factors out 5 repetitions of the following logic ```python def find_pruneable_heads_and_indices( heads: List, n_heads: int, head_size: int, already_pruned_heads: set ) -> Tuple[set, "torch.LongTensor"]: mask = torch.ones(n_heads, head_size) heads = set(heads) - already_pruned_heads # Convert to set and remove already pruned heads for head in heads: # Compute how many pruned heads are before the head and move the index accordingly head = head - sum(1 if h < head else 0 for h in already_pruned_heads) mask[head] = 0 mask = mask.view(-1).contiguous().eq(1) index: torch.LongTensor = torch.arange(len(mask))[mask].long() return heads, index ```
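A quick usage sketch for the helper quoted above, assuming it is in scope (as quoted, the snippet also needs `import torch` and `from typing import List, Tuple`). Here the attention block started with 12 heads of size 64 and head 2 was pruned earlier, so the current head count is 11:

```python
heads, index = find_pruneable_heads_and_indices(
    heads=[0, 3], n_heads=11, head_size=64, already_pruned_heads={2}
)
# `heads` is {0, 3}: the new heads to prune. `index` holds the
# (11 - 2) * 64 = 576 flat positions in the current 11-head weight
# matrices that survive, with indices shifted past the already-pruned head 2.
assert heads == {0, 3} and index.numel() == 576
```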
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4799/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4799/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4799", "html_url": "https://github.com/huggingface/transformers/pull/4799", "diff_url": "https://github.com/huggingface/transformers/pull/4799.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4799.patch", "merged_at": 1591650485000 }
https://api.github.com/repos/huggingface/transformers/issues/4798
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4798/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4798/comments
https://api.github.com/repos/huggingface/transformers/issues/4798/events
https://github.com/huggingface/transformers/issues/4798
631,786,373
MDU6SXNzdWU2MzE3ODYzNzM=
4,798
[ctrl] has broken code for pruning that is not tested
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false } ]
[ "Yeah change \r\n```python\r\nself.h[layer].attn.prune_heads(heads)\r\n``` \r\nto \r\n```python\r\nself.h[layer].multi_head_attention.prune_heads(heads)\r\n``` \r\nin modeling_ctrl.py\r\n\r\nand set `test_pruning=True` in `CTRLModelTest`", "@sshleifer \r\nI did the above-mentioned changes however the testing of pruning still failed. There is no `function` called `prune_heads` implemented in the `MultiHeadAttention` class.\r\nIn `modelling_xlm.py` I do observe the implementation of `prune_heads`. \r\nShould I just raise a PR with the above-mentioned changes or look into implementing `prune_heads` for `ctrl`(Would require some help there 😓)?\r\n![Annotation 2020-06-10 120040](https://user-images.githubusercontent.com/18247856/84234521-0b907900-ab12-11ea-8cec-3f798866feac.png)\r\n", "You could try to get that test passing or \r\nwork on https://github.com/huggingface/transformers/issues/4902, which is easier in my opinion.\r\n" ]
1,591
1,591
1,591
CONTRIBUTOR
null
references an `attn` parameter that should be `multiheaded_attn`. Easy-ish fix.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4798/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4798/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4797
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4797/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4797/comments
https://api.github.com/repos/huggingface/transformers/issues/4797/events
https://github.com/huggingface/transformers/issues/4797
631,780,461
MDU6SXNzdWU2MzE3ODA0NjE=
4,797
Write With Transformer Request:
{ "login": "BigSalmon2", "id": 61605789, "node_id": "MDQ6VXNlcjYxNjA1Nzg5", "avatar_url": "https://avatars.githubusercontent.com/u/61605789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BigSalmon2", "html_url": "https://github.com/BigSalmon2", "followers_url": "https://api.github.com/users/BigSalmon2/followers", "following_url": "https://api.github.com/users/BigSalmon2/following{/other_user}", "gists_url": "https://api.github.com/users/BigSalmon2/gists{/gist_id}", "starred_url": "https://api.github.com/users/BigSalmon2/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BigSalmon2/subscriptions", "organizations_url": "https://api.github.com/users/BigSalmon2/orgs", "repos_url": "https://api.github.com/users/BigSalmon2/repos", "events_url": "https://api.github.com/users/BigSalmon2/events{/privacy}", "received_events_url": "https://api.github.com/users/BigSalmon2/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null }, { "id": 1565794707, "node_id": "MDU6TGFiZWwxNTY1Nzk0NzA3", "url": "https://api.github.com/repos/huggingface/transformers/labels/Write%20With%20Transformer", "name": "Write With Transformer", "color": "a84bf4", "default": false, "description": "" } ]
closed
false
null
[]
[ "This wouldn't change the unit economics of providing this service, but I'm curious, why don't you press tab twice instead?", "The previous suggestions disappear after I press tab. \r\n\r\nIt is easier to pick an autocompletion that is suitable, the more options I have to compare. ", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,591
1,597
1,597
NONE
null
# 🚀 Feature request I understand if this is unattainable for cost reasons, but I was wondering if you could replace the three autocomplete suggestions with five.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4797/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4797/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4796
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4796/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4796/comments
https://api.github.com/repos/huggingface/transformers/issues/4796/events
https://github.com/huggingface/transformers/pull/4796
631,775,506
MDExOlB1bGxSZXF1ZXN0NDI4NjAwMDA3
4,796
Ignore simlink
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4796?src=pr&el=h1) Report\n> Merging [#4796](https://codecov.io/gh/huggingface/transformers/pull/4796?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/acaa2e6267ebfda9814795fa00b6ad86c35ea5d6&el=desc) will **increase** coverage by `1.47%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4796/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4796?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4796 +/- ##\n==========================================\n+ Coverage 74.59% 76.06% +1.47% \n==========================================\n Files 128 128 \n Lines 21500 21500 \n==========================================\n+ Hits 16037 16355 +318 \n+ Misses 5463 5145 -318 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4796?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4796/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.85% <0.00%> (-0.42%)` | :arrow_down: |\n| [src/transformers/benchmark/benchmark\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4796/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3V0aWxzLnB5) | `43.57% <0.00%> (+0.35%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4796/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.36% <0.00%> (+1.10%)` | :arrow_up: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4796/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.62% <0.00%> (+1.17%)` | :arrow_up: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/4796/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `75.69% <0.00%> (+2.29%)` | :arrow_up: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/4796/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `98.64% <0.00%> (+2.71%)` | :arrow_up: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/4796/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `40.67% <0.00%> (+3.87%)` | :arrow_up: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4796/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `95.69% <0.00%> (+10.04%)` | :arrow_up: |\n| [src/transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/4796/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `71.83% <0.00%> (+40.50%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4796/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `89.93% <0.00%> (+81.20%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4796?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4796?src=pr&el=footer). Last update [acaa2e6...9ce1f67](https://codecov.io/gh/huggingface/transformers/pull/4796?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "I let @LysandreJik check here, he knows better how to deal with the docs :-) ", "Yes this sounds like a Windows-specific issue :)", "Well having the symlink in the repo would certainly cause issues on Windows (since of course Linux symlinks are incompatible with Windows ones), but it's not linked to Windows, just to the doc-building setup :-p . ", "I think this symbolic link is redundant with the `docs/source/examples.md@`. Since we committed the symbolic link, it would probably just be better to remove the instruction from the doc installation?", "Eh, since I don't know how to read, I did not run the command in the right folder, hence the untracked file.\r\nSo yes, since the symbolic link is in the repo, there is no need to do anything! Closing this." ]
1,591
1,591
1,591
COLLABORATOR
null
Didn't get an answer to my question on #4774 so asking again in the form of a PR ;-) Currently, building the docs requires making a symlink to the examples README (as per the [instructions](https://github.com/huggingface/transformers/tree/master/docs#building-the-documentation)) and that file then becomes untracked by git. We should either ignore it (as proposed in this PR) or add it once and for all (might be OS-dependent though). Happy to amend this PR to the second solution, I just don't like untracked files. :-)
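For reference, the symlink the docs instructions create amounts to the following, assuming the repository root as the working directory (the exact paths are an assumption; on Windows, creating symlinks additionally requires developer mode or elevated privileges, which is part of the OS-dependence mentioned above):

```python
import os

# docs/source/examples.md -> ../../examples/README.md; the target is
# resolved relative to the link's own directory (docs/source/).
os.symlink(os.path.join("..", "..", "examples", "README.md"),
           os.path.join("docs", "source", "examples.md"))
```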
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4796/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4796/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4796", "html_url": "https://github.com/huggingface/transformers/pull/4796", "diff_url": "https://github.com/huggingface/transformers/pull/4796.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4796.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/4795
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4795/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4795/comments
https://api.github.com/repos/huggingface/transformers/issues/4795/events
https://github.com/huggingface/transformers/pull/4795
631,771,117
MDExOlB1bGxSZXF1ZXN0NDI4NTk2MzU4
4,795
Explain how to preview the docs in a PR
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4795?src=pr&el=h1) Report\n> Merging [#4795](https://codecov.io/gh/huggingface/transformers/pull/4795?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/acaa2e6267ebfda9814795fa00b6ad86c35ea5d6&el=desc) will **decrease** coverage by `0.63%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4795/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4795?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4795 +/- ##\n==========================================\n- Coverage 74.59% 73.95% -0.64% \n==========================================\n Files 128 128 \n Lines 21500 21500 \n==========================================\n- Hits 16037 15900 -137 \n- Misses 5463 5600 +137 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4795?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4795/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `17.54% <0.00%> (-75.97%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/4795/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `28.15% <0.00%> (-63.03%)` | :arrow_down: |\n| [src/transformers/configuration\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4795/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `75.00% <0.00%> (-25.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/4795/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `76.96% <0.00%> (-6.70%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4795/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.85% <0.00%> (-0.42%)` | :arrow_down: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4795/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `86.97% <0.00%> (-0.19%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4795/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.15% <0.00%> (+0.12%)` | :arrow_up: |\n| [src/transformers/benchmark/benchmark\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4795/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3V0aWxzLnB5) | `43.57% <0.00%> (+0.35%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4795/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.04% <0.00%> (+0.78%)` | :arrow_up: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4795/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.62% <0.00%> (+1.17%)` | :arrow_up: |\n| ... 
and [7 more](https://codecov.io/gh/huggingface/transformers/pull/4795/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4795?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4795?src=pr&el=footer). Last update [acaa2e6...b41e2cf](https://codecov.io/gh/huggingface/transformers/pull/4795?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,591
1,591
1,591
COLLABORATOR
null
As discussed offline, this adds instructions on how to preview the docs.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4795/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4795/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4795", "html_url": "https://github.com/huggingface/transformers/pull/4795", "diff_url": "https://github.com/huggingface/transformers/pull/4795.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4795.patch", "merged_at": 1591404423000 }
https://api.github.com/repos/huggingface/transformers/issues/4794
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4794/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4794/comments
https://api.github.com/repos/huggingface/transformers/issues/4794/events
https://github.com/huggingface/transformers/pull/4794
631,768,615
MDExOlB1bGxSZXF1ZXN0NDI4NTk0MjA5
4,794
enable multiprocessing in glue dataset
{ "login": "zrxbeijing", "id": 38594797, "node_id": "MDQ6VXNlcjM4NTk0Nzk3", "avatar_url": "https://avatars.githubusercontent.com/u/38594797?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zrxbeijing", "html_url": "https://github.com/zrxbeijing", "followers_url": "https://api.github.com/users/zrxbeijing/followers", "following_url": "https://api.github.com/users/zrxbeijing/following{/other_user}", "gists_url": "https://api.github.com/users/zrxbeijing/gists{/gist_id}", "starred_url": "https://api.github.com/users/zrxbeijing/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zrxbeijing/subscriptions", "organizations_url": "https://api.github.com/users/zrxbeijing/orgs", "repos_url": "https://api.github.com/users/zrxbeijing/repos", "events_url": "https://api.github.com/users/zrxbeijing/events{/privacy}", "received_events_url": "https://api.github.com/users/zrxbeijing/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,591
1,591
1,591
NONE
null
enable multiprocessing when converting examples to features, utilizing multiple CPU cores. N times faster...
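The diff itself isn't shown here, but the pattern the PR describes — fanning the example-to-feature conversion across CPU cores — looks roughly like this sketch (`tokenize_example` is a hypothetical picklable per-example worker, not a name from the PR):

```python
from multiprocessing import Pool, cpu_count

def convert_examples_parallel(examples, tokenize_example, processes=None):
    """Map a picklable converter over all examples across CPU cores."""
    with Pool(processes or cpu_count()) as pool:
        # A generous chunksize amortizes the per-task IPC overhead.
        return pool.map(tokenize_example, examples, chunksize=256)
```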
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4794/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4794/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4794", "html_url": "https://github.com/huggingface/transformers/pull/4794", "diff_url": "https://github.com/huggingface/transformers/pull/4794.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4794.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/4793
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4793/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4793/comments
https://api.github.com/repos/huggingface/transformers/issues/4793/events
https://github.com/huggingface/transformers/issues/4793
631,760,669
MDU6SXNzdWU2MzE3NjA2Njk=
4,793
🐛 run_ner.py runtime error linked to TPU training
{ "login": "vinmorel", "id": 15064465, "node_id": "MDQ6VXNlcjE1MDY0NDY1", "avatar_url": "https://avatars.githubusercontent.com/u/15064465?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vinmorel", "html_url": "https://github.com/vinmorel", "followers_url": "https://api.github.com/users/vinmorel/followers", "following_url": "https://api.github.com/users/vinmorel/following{/other_user}", "gists_url": "https://api.github.com/users/vinmorel/gists{/gist_id}", "starred_url": "https://api.github.com/users/vinmorel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vinmorel/subscriptions", "organizations_url": "https://api.github.com/users/vinmorel/orgs", "repos_url": "https://api.github.com/users/vinmorel/repos", "events_url": "https://api.github.com/users/vinmorel/events{/privacy}", "received_events_url": "https://api.github.com/users/vinmorel/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi! did you solve your issue?" ]
1,591
1,591
1,591
NONE
null
# 🐛 Bug ## Information Model I am using **Longformer For Token Classification** Language I am using the model on **German**: The problem arises when using: * [x] the official example scripts: T**he problem arises when trying to run run_ner.py on google colab in TPU fp16 mode.** The tasks I am working on is: * [x] an official GLUE/SQUaD task: **CoNLL NER** ## To reproduce I have a colab up if you want to see exactly what I did. You just need to upload data files in a new folder in /Content/<YOUR_FOLDER> to use in training and make sure to modify the run_ner paths correspondingly. [Google Colab](https://github.com/vinmorel/transformers/blob/master/run_ner_TPU.ipynb) <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> Runtime error after running the following : ``` !python3 "/content/transformers/examples/token-classification/run_ner.py" --data_dir "/content/CoNLL/" \ --labels "/content/CoNLL/labels.txt" \ --model_name_or_path "allenai/longformer-base-4096" \ --output_dir "xlnet-base-cased" \ --max_seq_length 200 \ --num_train_epochs 2 \ --per_device_train_batch_size 1 \ --save_steps 750 \ --seed 1 \ --do_train \ --do_eval \ --do_predict \ --fp16 ``` ``` Traceback (most recent call last): File "/content/transformers/examples/token-classification/run_ner.py", line 303, in <module> main() File "/content/transformers/examples/token-classification/run_ner.py", line 228, in main model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 390, in train model, optimizer = amp.initialize(model, optimizer, opt_level=self.args.fp16_opt_level) File "/usr/local/lib/python3.6/dist-packages/apex/amp/frontend.py", line 358, in initialize return _initialize(models, optimizers, _amp_state.opt_properties, num_losses, cast_model_outputs) File "/usr/local/lib/python3.6/dist-packages/apex/amp/_initialize.py", line 171, in _initialize check_params_fp32(models) File "/usr/local/lib/python3.6/dist-packages/apex/amp/_initialize.py", line 93, in check_params_fp32 name, param.type())) File "/usr/local/lib/python3.6/dist-packages/apex/amp/_amp_state.py", line 32, in warn_or_err raise RuntimeError(msg) RuntimeError: Found param longformer.embeddings.word_embeddings.weight with type torch.FloatTensor, expected torch.cuda.FloatTensor. When using amp.initialize, you need to provide a model with parameters located on a CUDA device before passing it no matter what optimization level you chose. Use model.to('cuda') to use the default device. ``` ## Expected behavior Would expect the code to run and start training on TPU without the runtime error. ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 2.11.0 - Platform: Linux-4.19.104+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.6.0.dev20200528 (False) - Tensorflow version (GPU?): 2.2.0 (False) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No
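The traceback above is apex enforcing a precondition rather than a transformers-specific failure: `amp.initialize` only accepts CUDA-resident parameters, so apex-style `--fp16` cannot apply to a TPU run at all. A minimal sketch of the constraint on a GPU machine (placeholder model; apex must be installed):

```python
import torch
from torch import nn
from apex import amp  # NVIDIA apex is CUDA-only; it is not a TPU backend

model = nn.Linear(10, 2).to("cuda")  # parameters must be on CUDA *before* amp.initialize
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model, optimizer = amp.initialize(model, optimizer, opt_level="O1")
```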
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4793/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4793/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4792
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4792/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4792/comments
https://api.github.com/repos/huggingface/transformers/issues/4792/events
https://github.com/huggingface/transformers/pull/4792
631,760,558
MDExOlB1bGxSZXF1ZXN0NDI4NTg3MjQ1
4,792
Fix argument label
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4792?src=pr&el=h1) Report\n> Merging [#4792](https://codecov.io/gh/huggingface/transformers/pull/4792?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/acaa2e6267ebfda9814795fa00b6ad86c35ea5d6&el=desc) will **increase** coverage by `1.47%`.\n> The diff coverage is `100.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4792/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4792?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4792 +/- ##\n==========================================\n+ Coverage 74.59% 76.06% +1.47% \n==========================================\n Files 128 128 \n Lines 21500 21500 \n==========================================\n+ Hits 16037 16354 +317 \n+ Misses 5463 5146 -317 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4792?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/4792/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `89.23% <100.00%> (ø)` | |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4792/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.85% <0.00%> (-0.42%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4792/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `89.91% <0.00%> (-0.13%)` | :arrow_down: |\n| [src/transformers/benchmark/benchmark\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4792/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3V0aWxzLnB5) | `43.57% <0.00%> (+0.35%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4792/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.36% <0.00%> (+1.10%)` | :arrow_up: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4792/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.62% <0.00%> (+1.17%)` | :arrow_up: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/4792/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `75.69% <0.00%> (+2.29%)` | :arrow_up: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/4792/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `98.64% <0.00%> (+2.71%)` | :arrow_up: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/4792/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `40.67% <0.00%> (+3.87%)` | :arrow_up: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4792/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `95.69% <0.00%> (+10.04%)` | :arrow_up: |\n| ... 
and [2 more](https://codecov.io/gh/huggingface/transformers/pull/4792/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4792?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4792?src=pr&el=footer). Last update [acaa2e6...7725750](https://codecov.io/gh/huggingface/transformers/pull/4792?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "True, it seems like we don't test the `DataCollatorForLangugeModeling` in combination with the trainer. I think we should add a test that runs one training step on trainer with each available DataCollator. What do you think @julien-c ?", "Indeed the `Trainer` will need more thorough testing (which will probably be done in the next few weeks).\r\n\r\nThis wouldn't have been caught by tests though, since `masked_lm_labels` is deprecated but does not raise an error, right?", "Ah yes, should have issued a warning but not an error, right." ]
1,591
1,591
1,591
COLLABORATOR
null
After #4722 the labels are called just `labels` now, not `masked_lm_labels`. The fact it wasn't caught by the tests probably means we have some test missing...
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4792/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4792/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4792", "html_url": "https://github.com/huggingface/transformers/pull/4792", "diff_url": "https://github.com/huggingface/transformers/pull/4792.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4792.patch", "merged_at": 1591384830000 }
https://api.github.com/repos/huggingface/transformers/issues/4791
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4791/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4791/comments
https://api.github.com/repos/huggingface/transformers/issues/4791/events
https://github.com/huggingface/transformers/pull/4791
631,737,437
MDExOlB1bGxSZXF1ZXN0NDI4NTY3NDMw
4,791
parse arguments from dict
{ "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @patil-suraj :-),\r\n\r\nHmm, not sure we would definitely need that...let's say you have a dict of arguments that you want to parse it into a dataclass like `TrainingArguments`, you could just do \r\n\r\n```python\r\ntraining_args = TrainingArguments(**your_dict)\r\n```\r\n\r\nlike it is done in the Reformer Colab for example: https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/PyTorch_Reformer.ipynb\r\n\r\nCan you give me an example, where going over the `HfArgumentParser` instead of directly instantiating the dataclass makes more sense? ", "@patrickvonplaten \r\nwouldn't this \r\n```\r\ntrain_args, model_args, data_args = parser.parse_dict(your_dict)\r\n```\r\nbe better than this \r\n```\r\ntraining_args = TrainingArguments(**your_dict)\r\nmodel_args = ModelArguments(**your_dict)\r\ndata_args = DataArguments(**your_dict)\r\n```\r\n\r\nAnyway, its just a small utility, so if not needed by lot of people we can close this.", "I'm ok with merging this!\r\n\r\nAlways nice to add a unit test though:)", "closing this, accidentally merged upstream into this. Will open a new one" ]
1,591
1,591
1,591
MEMBER
null
This PR adds a `parse_dict` method to `HfArgumentParser` to allow parsing arguments from a `dict`. I find this necessary for notebook workflows where I'm not using `Trainer` from the command line. Otherwise I need to write the arguments to a JSON file and use that path with `parse_json_file`, or pass a list of strings to `parse_args_into_dataclasses`. @julien-c @patrickvonplaten
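To make the proposed API concrete, a hypothetical notebook usage might look like the sketch below; `ModelArguments` is a stand-in for a user-defined dataclass, and the dict keys are invented for illustration.

```python
from dataclasses import dataclass, field
from transformers import HfArgumentParser, TrainingArguments

@dataclass
class ModelArguments:
    model_name_or_path: str = field(default="bert-base-uncased")

# One flat dict feeds every dataclass; parse_dict routes each key to the
# dataclass that declares a field with that name.
args_dict = {"model_name_or_path": "roberta-base", "output_dir": "./out", "num_train_epochs": 3}
parser = HfArgumentParser((ModelArguments, TrainingArguments))
model_args, training_args = parser.parse_dict(args_dict)
```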
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4791/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4791/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4791", "html_url": "https://github.com/huggingface/transformers/pull/4791", "diff_url": "https://github.com/huggingface/transformers/pull/4791.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4791.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/4790
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4790/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4790/comments
https://api.github.com/repos/huggingface/transformers/issues/4790/events
https://github.com/huggingface/transformers/pull/4790
631,734,747
MDExOlB1bGxSZXF1ZXN0NDI4NTY1MTc2
4,790
Clean-up code
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4790?src=pr&el=h1) Report\n> Merging [#4790](https://codecov.io/gh/huggingface/transformers/pull/4790?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/fa661ce749b0d14ae1999d1b097866248624a842&el=desc) will **increase** coverage by `0.01%`.\n> The diff coverage is `100.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4790/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4790?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4790 +/- ##\n==========================================\n+ Coverage 76.28% 76.29% +0.01% \n==========================================\n Files 128 128 \n Lines 21500 21500 \n==========================================\n+ Hits 16401 16404 +3 \n+ Misses 5099 5096 -3 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4790?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/4790/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3hsbmV0LnB5) | `94.00% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4790/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.51% <0.00%> (+0.31%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4790/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `74.27% <0.00%> (+0.41%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4790?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4790?src=pr&el=footer). Last update [fa661ce...a3f8a97](https://codecov.io/gh/huggingface/transformers/pull/4790?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "👍 " ]
1,591
1,591
1,591
COLLABORATOR
null
Looks like #4747 introduced some bad formatting, fixing so CI is happy again.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4790/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4790/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4790", "html_url": "https://github.com/huggingface/transformers/pull/4790", "diff_url": "https://github.com/huggingface/transformers/pull/4790.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4790.patch", "merged_at": 1591374983000 }
https://api.github.com/repos/huggingface/transformers/issues/4789
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4789/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4789/comments
https://api.github.com/repos/huggingface/transformers/issues/4789/events
https://github.com/huggingface/transformers/pull/4789
631,644,318
MDExOlB1bGxSZXF1ZXN0NDI4NDkzNDA0
4,789
Add model summary
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4789?src=pr&el=h1) Report\n> Merging [#4789](https://codecov.io/gh/huggingface/transformers/pull/4789?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b9109f2de1c4f52967976dc840074a9d62713498&el=desc) will **increase** coverage by `0.40%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4789/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4789?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4789 +/- ##\n==========================================\n+ Coverage 76.06% 76.46% +0.40% \n==========================================\n Files 128 128 \n Lines 21498 21498 \n==========================================\n+ Hits 16353 16439 +86 \n+ Misses 5145 5059 -86 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4789?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4789/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.85% <0.00%> (-0.42%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4789/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.20% <0.00%> (-0.16%)` | :arrow_down: |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/4789/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `56.68% <0.00%> (+28.02%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4789?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4789?src=pr&el=footer). Last update [b9109f2...0e3789d](https://codecov.io/gh/huggingface/transformers/pull/4789?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Looks great :-) ", "Eh, forgot to finish my sentence linking to the pretrained models doc page at the beginning. No sure if I can link to part of the tables however, does restructured text can support that?", "I doubt it. One link is OK I think" ]
1,591
1,591
1,591
COLLABORATOR
null
This PR adds a high-level summary of all the models in the documentation.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4789/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4789/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4789", "html_url": "https://github.com/huggingface/transformers/pull/4789", "diff_url": "https://github.com/huggingface/transformers/pull/4789.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4789.patch", "merged_at": 1591374170000 }
https://api.github.com/repos/huggingface/transformers/issues/4788
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4788/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4788/comments
https://api.github.com/repos/huggingface/transformers/issues/4788/events
https://github.com/huggingface/transformers/issues/4788
631,632,895
MDU6SXNzdWU2MzE2MzI4OTU=
4,788
Onnx conversion for bert models with classification layers
{ "login": "hrsmanian", "id": 9534168, "node_id": "MDQ6VXNlcjk1MzQxNjg=", "avatar_url": "https://avatars.githubusercontent.com/u/9534168?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hrsmanian", "html_url": "https://github.com/hrsmanian", "followers_url": "https://api.github.com/users/hrsmanian/followers", "following_url": "https://api.github.com/users/hrsmanian/following{/other_user}", "gists_url": "https://api.github.com/users/hrsmanian/gists{/gist_id}", "starred_url": "https://api.github.com/users/hrsmanian/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hrsmanian/subscriptions", "organizations_url": "https://api.github.com/users/hrsmanian/orgs", "repos_url": "https://api.github.com/users/hrsmanian/repos", "events_url": "https://api.github.com/users/hrsmanian/events{/privacy}", "received_events_url": "https://api.github.com/users/hrsmanian/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Might be of interest to @mfuntowicz ", "I think the issue could be due to pipeline which extracts the model\r\n# Allocate tokenizer and model\r\nreturn pipeline(\"feature-extraction\", model=model, tokenizer=tokenizer, framework=framework)\r\n\r\nSeems like default is 'feature-extraction'\r\nI could get this to work by changing it to 'ner'. Also passed in a config option\r\nreturn pipeline(\"ner\", model=model, config=config, tokenizer=tokenizer, framework=framework)\r\n\r\nNot sure if this is the correct solution but it seems to work for me", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,591
1,597
1,597
NONE
null
Hi, I am trying to convert a NER model trained with BertForTokenClassification to ONNX format. I am able to convert it using the convert_graph_to_onnx.py script with the following params convert(framework="pt", model="bert_small_ner", output="bert_small_onnx/bert_small_ner.onnx", opset=11) There are no errors thrown either. However, when I run inference, I see only the Bert layer outputs but not the output of the Linear classification layer that BertForTokenClassification adds. Am I missing something? Or is this a known issue with some workaround? Kindly let me know. Thanks in advance
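One way to check that the classification head survives export, independent of the conversion script's default `feature-extraction` pipeline, is to export the task model directly with `torch.onnx.export`. A rough sketch follows; the `bert_small_ner` path refers to the reporter's local checkpoint, and the snippet assumes a 2.x release where the model returns a tuple with logits first.

```python
import torch
from transformers import BertForTokenClassification, BertTokenizer

model = BertForTokenClassification.from_pretrained("bert_small_ner")
tokenizer = BertTokenizer.from_pretrained("bert_small_ner")
model.eval()

enc = tokenizer.encode_plus("Hugging Face is based in New York", return_tensors="pt")
torch.onnx.export(
    model,
    (enc["input_ids"], enc["attention_mask"]),
    "bert_small_onnx/bert_small_ner.onnx",
    input_names=["input_ids", "attention_mask"],
    output_names=["logits"],  # per-token class scores from the Linear head
    dynamic_axes={
        "input_ids": {0: "batch", 1: "sequence"},
        "attention_mask": {0: "batch", 1: "sequence"},
        "logits": {0: "batch", 1: "sequence"},
    },
    opset_version=11,
)
```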
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4788/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4788/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4787
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4787/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4787/comments
https://api.github.com/repos/huggingface/transformers/issues/4787/events
https://github.com/huggingface/transformers/issues/4787
631,594,992
MDU6SXNzdWU2MzE1OTQ5OTI=
4,787
🚀 [Feature Request] Add self-contained browsable examples/notebooks in the docs
{ "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "Good point! I think we could definitely give more links to the community notebooks and the docs in general to people so that users check the resources first before opening an issue...what are your thoughts on this @sgugger @julien-c @thomwolf ?", "I agree we could add links to example notebooks in the docs, either community or from the notebooks folder in the repo. The same way there is a tips section, there could be a Getting started section with notebooks that illustrate one or several tasks the model is good at. ", "Let me add something from a transformers-beginner's point of view: I read the example scripts and notebooks, and what makes them harder to understand, is that most of them use some kind of downloaded pre-made datasets like GLUE, with custom dataloaders and everything. Unless I'm missing something, it makes questions like \"How to prepare my data for sequence classification fine-tuning with transformers.Trainer ?\" hard to answer, and that might prevent many people from training their own models ", "@klasocki,\r\n\r\nThanks, that's very valuable feedback! We are actually planning to replace all those custom data preprocessing steps with the `nlp` library which should be easier to understand. Here is an example how it can be used:\r\n\r\nhttps://github.com/patrickvonplaten/notebooks/blob/master/How_to_evaluate_Longformer_on_TriviaQA_using_NLP.ipynb\r\n\r\nRegarding the specific task of sequence classification, it would definitely be nice to have that in the examples as well! Also cc @julien-c and @thomwolf here ", "@patrickvonplaten \r\nThank you! But I think you forgot to attach the example, at least I can't see it 😄 ", "> @patrickvonplaten\r\n> Thank you! But I think you forgot to attach the example, at least I can't see it\r\n\r\nTrue, sorry :D Editing the comment above...", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "Side note since this pops up in my notifications: all the tutorials in the doc now have associated notebooks." ]
1,591
1,597
1,597
MEMBER
null
# 🚀 Feature request ## Motivation and feature request Recently I'm seeing lots of beginner-level issues asking for help on how to fine-tune a particular model for a particular task, how to use an already trained model from the hub, how to do inference for a certain task, what some parameters mean, etc. See #4744, #4406, #4677, #4639 While the examples in the /examples directory are really awesome, they seem a bit hard to understand from a beginner's perspective. Also, even though there is a notebooks section, some people still don't seem to find it. And as the library is getting very popular, lots of students/beginners are starting their NLP journey with Transformers. So IMO it would be really awesome if we had self-contained, end-to-end browsable notebook examples with clear task and model descriptions in the docs themselves (like the ones in the PyTorch and Keras docs). ## Your contribution I have a few notebooks in the community notebooks section and would be happy to contribute more examples with clear descriptions for individual tasks, with both fine-tuning and inference details. @patrickvonplaten @julien-c
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4787/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4787/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4786
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4786/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4786/comments
https://api.github.com/repos/huggingface/transformers/issues/4786/events
https://github.com/huggingface/transformers/issues/4786
631,593,262
MDU6SXNzdWU2MzE1OTMyNjI=
4,786
Usage of Ġ in BPE tokenizer
{ "login": "maschasap", "id": 39886191, "node_id": "MDQ6VXNlcjM5ODg2MTkx", "avatar_url": "https://avatars.githubusercontent.com/u/39886191?v=4", "gravatar_id": "", "url": "https://api.github.com/users/maschasap", "html_url": "https://github.com/maschasap", "followers_url": "https://api.github.com/users/maschasap/followers", "following_url": "https://api.github.com/users/maschasap/following{/other_user}", "gists_url": "https://api.github.com/users/maschasap/gists{/gist_id}", "starred_url": "https://api.github.com/users/maschasap/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/maschasap/subscriptions", "organizations_url": "https://api.github.com/users/maschasap/orgs", "repos_url": "https://api.github.com/users/maschasap/repos", "events_url": "https://api.github.com/users/maschasap/events{/privacy}", "received_events_url": "https://api.github.com/users/maschasap/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "@maschasap \r\nAFAIK, you won't need to add `Ġ ` when adding a new token. \r\nAnd you can use the `convert_tokens_to_string` method to convert these tokens to their respective strings.\r\n\r\ntagging @mfuntowicz for more info", "@patil-suraj thanks for your response! But still why doesn't the tokenizer add `Ġ` symbol when returning the tokenized sentence in the example? Does it mean it still needs to be tuned or is it OK?", "@patil-suraj btw using the method `convert_tokens_to_string` the whitespace between the words **love** and **Salah** really disappears :(\r\n`tokenizer.convert_tokens_to_string(tokenizer.tokenize('I love Salah and salad'))` outputs `'I loveSalah and salad`", "@LysandreJik @mfuntowicz may I ask you for help?", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "Can someone help me understand the symbol Ġ ? I am running into the same thing , want to make sure I am not breaking things down the line", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "There is also a difference here between the fast and slow tokenisers:\r\n```python\r\nimport transformers\r\ntokeniser1 = transformers.RobertaTokenizer.from_pretrained('roberta-base')\r\ntokeniser2 = transformers.RobertaTokenizerFast.from_pretrained('roberta-base')\r\ntokeniser1.add_tokens(['Salah'])\r\ntokeniser2.add_tokens(['Salah'])\r\nprint(tokeniser1.tokenize('I love Salah and salad'))\r\nprint(tokeniser2.tokenize('I love Salah and salad'))\r\n```\r\nOutputs:\r\n```\r\n['I', 'Ġlove', 'Salah', 'and', 'Ġsalad']\r\n['I', 'Ġlove', 'Ġ', 'Salah', 'Ġand', 'Ġsalad']\r\n```", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "same here", "(in case someone comes across this issue, have a look at [this post in our forum](https://discuss.huggingface.co/t/bpe-tokenizers-and-spaces-before-words/475/2?u=joaogante))" ]
1,591
1,663
1,608
NONE
null
Hello, I want to add new words to my BPE tokenizer. I know the symbol Ġ encodes a preceding space, marking a token that starts a new word, and that the majority of tokens in the vocabs of pre-trained tokenizers start with Ġ. Assume I want to add the word **Salah** to my tokenizer. I tried to add both a **Salah** token and **ĠSalah**: `tokenizer.add_tokens(['Salah', 'ĠSalah'])` # they get ids 50265 and 50266 respectively. However, when I tokenize a sentence where **Salah** appears, the tokenizer never returns the second id (neither with `.tokenize` nor `.encode`), for instance: `tokenizer.tokenize('I love Salah and salad')` returns `['I', 'Ġlove', 'Salah', 'Ġand', 'Ġsalad']`. The question is: should I use the symbol `Ġ` when adding new tokens, or does the tokenizer handle it itself? Or must it be specified manually? Thanks in advance!
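A small sketch (assuming a `roberta-base` checkpoint, as one example of a byte-level BPE tokenizer) showing why the `Ġ`-prefixed variant never matches: added tokens are cut out of the raw text before byte-level BPE runs, and `Ġ` exists only inside BPE's own alphabet, never in raw input.

```python
from transformers import RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
tokenizer.add_tokens(["Salah"])  # matched literally against the raw text

# "Salah" is split out before BPE ever sees it, so it carries no Ġ marker;
# the surrounding words still go through BPE and pick up theirs.
print(tokenizer.tokenize("I love Salah and salad"))
# ['I', 'Ġlove', 'Salah', 'Ġand', 'Ġsalad']
```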
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4786/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4786/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4785
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4785/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4785/comments
https://api.github.com/repos/huggingface/transformers/issues/4785/events
https://github.com/huggingface/transformers/issues/4785
631,575,399
MDU6SXNzdWU2MzE1NzUzOTk=
4,785
Question Answering Pipeline with big texts.
{ "login": "thiagomoeng", "id": 64150563, "node_id": "MDQ6VXNlcjY0MTUwNTYz", "avatar_url": "https://avatars.githubusercontent.com/u/64150563?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thiagomoeng", "html_url": "https://github.com/thiagomoeng", "followers_url": "https://api.github.com/users/thiagomoeng/followers", "following_url": "https://api.github.com/users/thiagomoeng/following{/other_user}", "gists_url": "https://api.github.com/users/thiagomoeng/gists{/gist_id}", "starred_url": "https://api.github.com/users/thiagomoeng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thiagomoeng/subscriptions", "organizations_url": "https://api.github.com/users/thiagomoeng/orgs", "repos_url": "https://api.github.com/users/thiagomoeng/repos", "events_url": "https://api.github.com/users/thiagomoeng/events{/privacy}", "received_events_url": "https://api.github.com/users/thiagomoeng/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Longformer would be a good choice, but it is currently not implemented in the pipelines. This issue might help :-) : https://github.com/huggingface/transformers/issues/4762", "and #4615", "This notebook might help :-) \r\nhttps://colab.research.google.com/drive/1m7eTGlPmLRgoPkkA7rkhQdZ9ydpmsdLE?usp=sharing" ]
1,591
1,591
1,591
NONE
null
# ❓ Questions & Help I am searching for some ideas of how to use QA pipeline with big data texts and get a good time response. (I am already using GPU)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4785/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4785/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4784
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4784/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4784/comments
https://api.github.com/repos/huggingface/transformers/issues/4784/events
https://github.com/huggingface/transformers/issues/4784
631,458,611
MDU6SXNzdWU2MzE0NTg2MTE=
4,784
Can I train question-answering on TPU using Huggingface
{ "login": "thak123", "id": 3891859, "node_id": "MDQ6VXNlcjM4OTE4NTk=", "avatar_url": "https://avatars.githubusercontent.com/u/3891859?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thak123", "html_url": "https://github.com/thak123", "followers_url": "https://api.github.com/users/thak123/followers", "following_url": "https://api.github.com/users/thak123/following{/other_user}", "gists_url": "https://api.github.com/users/thak123/gists{/gist_id}", "starred_url": "https://api.github.com/users/thak123/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thak123/subscriptions", "organizations_url": "https://api.github.com/users/thak123/orgs", "repos_url": "https://api.github.com/users/thak123/repos", "events_url": "https://api.github.com/users/thak123/events{/privacy}", "received_events_url": "https://api.github.com/users/thak123/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "as you can see in https://github.com/huggingface/transformers/tree/master/examples#the-big-table-of-tasks `question-answering` is not implemented using Trainer, yet – so doesn't have TPU support. \r\n\r\nThis is on our todo-list but might take a few weeks. Feel free to open a PR though – or use TFTrainer is TF is an option", "I think I can live with TFTrainer ... Thanks for the reply", "how do i use TFTrainer with TPU", "Hi! The [running on TPU](https://github.com/huggingface/transformers/tree/master/examples#running-on-tpus) section of the examples covers that.\r\n\r\nBasically, if on a correctly setup TPU, the TF trainer will automatically launch on TPU." ]
1,591
1,591
1,591
NONE
null
# ❓ Questions & Help I am trying to run run_squad from question answering on TPU but I get the following error. ## Details export SQUAD_DIR=/path/to/SQUAD python examples/xla_spawn.py --num_cores 8 \ run_squad.py \ --model_type bert \ --model_name_or_path bert-base-uncased \ --do_train \ --do_eval \ --do_lower_case \ --train_file $SQUAD_DIR/train-v1.1.json \ --predict_file $SQUAD_DIR/dev-v1.1.json \ --per_gpu_train_batch_size 12 \ --learning_rate 3e-5 \ --num_train_epochs 2.0 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir /tmp/debug_squad/ **run_squad.py: error: unrecognized arguments: --tpu_num_cores 1**
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4784/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4784/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4783
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4783/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4783/comments
https://api.github.com/repos/huggingface/transformers/issues/4783/events
https://github.com/huggingface/transformers/issues/4783
631,417,088
MDU6SXNzdWU2MzE0MTcwODg=
4,783
question about tokenizer
{ "login": "yuimo", "id": 22741826, "node_id": "MDQ6VXNlcjIyNzQxODI2", "avatar_url": "https://avatars.githubusercontent.com/u/22741826?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yuimo", "html_url": "https://github.com/yuimo", "followers_url": "https://api.github.com/users/yuimo/followers", "following_url": "https://api.github.com/users/yuimo/following{/other_user}", "gists_url": "https://api.github.com/users/yuimo/gists{/gist_id}", "starred_url": "https://api.github.com/users/yuimo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yuimo/subscriptions", "organizations_url": "https://api.github.com/users/yuimo/orgs", "repos_url": "https://api.github.com/users/yuimo/repos", "events_url": "https://api.github.com/users/yuimo/events{/privacy}", "received_events_url": "https://api.github.com/users/yuimo/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This notebook might help you, it shows how you can train a LM from scratch on a new language.\r\nhttps://github.com/huggingface/blog/blob/master/notebooks/01_how_to_train.ipynb", "@patil-suraj thanks for your reply. i have read this notebook. but i want to re-use the functions in the transfomers such as functions like encode_plus, batch_encode_plus\r\nso my question is :\r\nafter train a tokenizer by myself, how could i integrate it to the transformers's tokenizer to use functions in it?\r\nthanks again", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,591
1,597
1,597
NONE
null
Hi, is there any difference between initializing BertTokenizer directly and loading it using from_pretrained, such as: 1. tokenizer = BertTokenizer('bert-base-uncased-vocab') 2. tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') I want to pretrain for a new language myself, so I need to create the vocabulary. How can I integrate it into the transformers tokenizer? Thanks a lot!
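For illustration, a sketch of the two construction paths; the `my_lang/` directory and file names are hypothetical. Constructing the tokenizer directly only needs a vocab file, while `from_pretrained` additionally loads any saved tokenizer config and special-tokens files from the same location.

```python
from transformers import BertTokenizer

# Direct construction: just a WordPiece vocab file, one token per line.
tok_a = BertTokenizer(vocab_file="my_lang/vocab.txt", do_lower_case=False)

# from_pretrained on a directory: reads vocab.txt plus tokenizer_config.json /
# special_tokens_map.json if they were saved there (e.g. via save_pretrained).
tok_b = BertTokenizer.from_pretrained("my_lang")

print(tok_a.encode("an example sentence"))
```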
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4783/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4783/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4782
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4782/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4782/comments
https://api.github.com/repos/huggingface/transformers/issues/4782/events
https://github.com/huggingface/transformers/issues/4782
631,412,439
MDU6SXNzdWU2MzE0MTI0Mzk=
4,782
❓ How to use Gradient Accumulator in TF_Trainer ?
{ "login": "astariul", "id": 43774355, "node_id": "MDQ6VXNlcjQzNzc0MzU1", "avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4", "gravatar_id": "", "url": "https://api.github.com/users/astariul", "html_url": "https://github.com/astariul", "followers_url": "https://api.github.com/users/astariul/followers", "following_url": "https://api.github.com/users/astariul/following{/other_user}", "gists_url": "https://api.github.com/users/astariul/gists{/gist_id}", "starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/astariul/subscriptions", "organizations_url": "https://api.github.com/users/astariul/orgs", "repos_url": "https://api.github.com/users/astariul/repos", "events_url": "https://api.github.com/users/astariul/events{/privacy}", "received_events_url": "https://api.github.com/users/astariul/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello!\r\n\r\nThe usage of the gradient accumulator is fully integrated since the beginning in the TF Trainer, the default value of accumulation is 1, if you want to change it you have to fill the `--gradient_accumulation_steps` parameter.", "That's awesome ! Thank you very much for your answer" ]
1,591
1,591
1,591
CONTRIBUTOR
null
# ❓ Questions & Help According to the documentation of Gradient Accumulator : > the accumulator should be called in a replica context. Gradients will be accumulated locally on each replica and without synchronization. Users should then call ``.gradients``, scale the gradients if required, and pass the result to ``apply_gradients``. The default optimizer for TF_Trainer does not provide gradient accumulation. **Is there any example available showing how to use Gradient Accumulator with TF_Trainer ?**
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4782/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4782/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4781
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4781/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4781/comments
https://api.github.com/repos/huggingface/transformers/issues/4781/events
https://github.com/huggingface/transformers/issues/4781
631,375,493
MDU6SXNzdWU2MzEzNzU0OTM=
4,781
Regarding generate method used in BART
{ "login": "kunalpagarey", "id": 38290549, "node_id": "MDQ6VXNlcjM4MjkwNTQ5", "avatar_url": "https://avatars.githubusercontent.com/u/38290549?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kunalpagarey", "html_url": "https://github.com/kunalpagarey", "followers_url": "https://api.github.com/users/kunalpagarey/followers", "following_url": "https://api.github.com/users/kunalpagarey/following{/other_user}", "gists_url": "https://api.github.com/users/kunalpagarey/gists{/gist_id}", "starred_url": "https://api.github.com/users/kunalpagarey/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kunalpagarey/subscriptions", "organizations_url": "https://api.github.com/users/kunalpagarey/orgs", "repos_url": "https://api.github.com/users/kunalpagarey/repos", "events_url": "https://api.github.com/users/kunalpagarey/events{/privacy}", "received_events_url": "https://api.github.com/users/kunalpagarey/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @kunalpagarey, \r\n\r\nThe `.generate()` method cannot be used for training. It can be used for validation and testing. \r\nTo create a good model for summarization the following components are important:\r\n\r\n**Train**:\r\n1. What model do you want to use? => Bart for example\r\n2. What loss function do you want to use? => Usually people do pretraining and then finetuning on summarization using standard maximum likelihood. For some detail on how to fine-tune Bart with `transformers` check out: https://github.com/ohmeow/ohmeow_website/blob/master/_notebooks/2020-05-23-text-generation-with-blurr.ipynb\r\n\r\n** Decoding method** \r\nAfter having fine-tuned a model on summarization, the decoding method to apply is another whole question and is often done independently of training. Now, you can decide how to use the `generate()` method. You could for example try out a bunch of different hyperparameters for `.generate()` on your validation set and then decide on one setting for want to use for your test set. \r\nFor more details on how to choose hyperparameters for `.generate()` check out: \r\nhttps://huggingface.co/blog/how-to-generate\r\n\r\n" ]
1,591
1,591
1,591
NONE
null
# ❓ Questions & Help <!-- The GitHub issue tracker is primarily intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiasts can help you out. Make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers If your question wasn't answered after a period of time on Stack Overflow, you can always open a question on GitHub. You should then link to the SO question that you posted. --> ## Details <!-- Description of your issue --> <!-- You should first ask your question on SO, and only if you didn't get an answer ask it here on GitHub. --> I am new to NLP and text generation so please help me understand some basic things. I fine-tuned BART on CNN/DM dataset using provide scripts in the examples section and it works fine. I have some understanding of how model.generate() method works. But need to clarify some basic questions. Is the generate() method used at the time of training/validation? If yes, then how loss is computed if the beam size is greater than 1? Or do we even use the parameters like in generate() method during training? Please help. Thank you.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4781/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4781/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4780
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4780/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4780/comments
https://api.github.com/repos/huggingface/transformers/issues/4780/events
https://github.com/huggingface/transformers/issues/4780
631,277,543
MDU6SXNzdWU2MzEyNzc1NDM=
4,780
Reformer hidden_size of output is doubled.
{ "login": "h324yang", "id": 6326212, "node_id": "MDQ6VXNlcjYzMjYyMTI=", "avatar_url": "https://avatars.githubusercontent.com/u/6326212?v=4", "gravatar_id": "", "url": "https://api.github.com/users/h324yang", "html_url": "https://github.com/h324yang", "followers_url": "https://api.github.com/users/h324yang/followers", "following_url": "https://api.github.com/users/h324yang/following{/other_user}", "gists_url": "https://api.github.com/users/h324yang/gists{/gist_id}", "starred_url": "https://api.github.com/users/h324yang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/h324yang/subscriptions", "organizations_url": "https://api.github.com/users/h324yang/orgs", "repos_url": "https://api.github.com/users/h324yang/repos", "events_url": "https://api.github.com/users/h324yang/events{/privacy}", "received_events_url": "https://api.github.com/users/h324yang/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "Hey @h324yang, \r\n\r\nGood observation! The reason for this is that Reformer uses Reversible Residual Layers and thus always has two input streams (two hidden states inputs) instead of one. Both of these streams have to be concatenated after running through the model which leads to double the size of `hidden_states`. I will soon ~1 week publish a notebook explaining this in more detail :-) " ]
1,591
1,591
1,591
NONE
null
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. Make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers If your question wasn't answered after a period of time on Stack Overflow, you can always open a question on GitHub. You should then link to the SO question that you posted. --> ## Details <!-- Description of your issue --> The default hidden_size is 256 but the output dim is 512. Similarly, when I change the config and set hidden_size to 512, the output dim is 1024. ```python enc_config = { "attention_head_size": 64, "attn_layers": ["local", "lsh", "local", "lsh", "local", "lsh"], "axial_pos_embds": True, "sinusoidal_pos_embds": False, "axial_pos_embds_dim": [256, 256], "axial_pos_shape": [64, 64], "lsh_attn_chunk_length": 64, "local_attn_chunk_length": 64, "feed_forward_size": 256, "hidden_act": "relu", "hidden_size": 512, "is_decoder": False, "max_position_embeddings": 4096, "num_attention_heads": 12, "num_buckets": [64, 64], "num_hashes": 4, "lsh_attention_probs_dropout_prob": 0.0, "lsh_num_chunks_before": 1, "lsh_num_chunks_after": 0, "local_num_chunks_before": 1, "local_num_chunks_after": 0, "local_attention_probs_dropout_prob": 0.025, "hidden_dropout_prob": 0.025, "pad_token_id": tokenizer.pad_token_id, "eos_token_id": tokenizer.eos_token_id, "vocab_size": tokenizer.vocab_size, } reformer = ReformerModel(ReformerConfig(**enc_config)) input_ids = tokenizer.encode(doc, max_length=4096, pad_to_max_length=True, return_tensors='pt') out = reformer.forward(input_ids) out[0].shape ``` ``` torch.Size([1, 4096, 1024]) ``` <!-- You should first ask your question on SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on Stack Overflow**:
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4780/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4780/timeline
completed
null
null
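A minimal sketch of the behaviour described in the answer above, assuming the stock `ReformerConfig` defaults (`hidden_size=256`, `axial_pos_shape=[64, 64]`, hence an expected sequence length of 4096; these default values are an assumption about the library, not something stated in the thread):

```python
import torch
from transformers import ReformerConfig, ReformerModel

config = ReformerConfig()             # assumed defaults: hidden_size=256, axial_pos_shape=[64, 64]
model = ReformerModel(config).eval()

# Axial position embeddings of shape [64, 64] expect sequences of length 64 * 64 = 4096.
input_ids = torch.randint(0, config.vocab_size, (1, 4096))
with torch.no_grad():
    hidden_states = model(input_ids)[0]

# The two reversible residual streams are concatenated on output,
# so the last dimension is 2 * hidden_size.
print(hidden_states.shape)            # torch.Size([1, 4096, 512])
```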
https://api.github.com/repos/huggingface/transformers/issues/4779
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4779/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4779/comments
https://api.github.com/repos/huggingface/transformers/issues/4779/events
https://github.com/huggingface/transformers/issues/4779
631,242,893
MDU6SXNzdWU2MzEyNDI4OTM=
4,779
🐛 [BART] Pipeline OOM
{ "login": "astariul", "id": 43774355, "node_id": "MDQ6VXNlcjQzNzc0MzU1", "avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4", "gravatar_id": "", "url": "https://api.github.com/users/astariul", "html_url": "https://github.com/astariul", "followers_url": "https://api.github.com/users/astariul/followers", "following_url": "https://api.github.com/users/astariul/following{/other_user}", "gists_url": "https://api.github.com/users/astariul/gists{/gist_id}", "starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/astariul/subscriptions", "organizations_url": "https://api.github.com/users/astariul/orgs", "repos_url": "https://api.github.com/users/astariul/repos", "events_url": "https://api.github.com/users/astariul/events{/privacy}", "received_events_url": "https://api.github.com/users/astariul/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false } ]
[ "@sshleifer Can you reproduce on your side or is it just me ?", "Yes I can replicate, sorry for the slow response. I am still trying to figure out why this is happening.", "OK I figured out the problem\r\nLong articles are not getting truncated anymore by pipeline.\r\nWill have a look.\r\nIf you look at the second val.source example it's 1583 tokens, and pipeline does not truncated it, whereas `Huggingface` does.\r\n\r\nRelated: #4236 ", "May be related #5398", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,591
1,598
1,598
CONTRIBUTOR
null
# 🐛 Bug

I tried running the BART model myself versus running it through `pipeline`. Running the BART model myself is fine, but I get OOM on my GPU if I run the same model through `pipeline`.

Please see the following code: https://gist.github.com/Colanim/4fae6ab52c05716062a0f20c4a6b9737

_(It assumes you have a file `cnndm/test.source` with one article per line)_

Run with:

`python pipeline_oom.py --model HuggingFace --batch-size 32` (should **not** produce OOM on an 11GB GPU)

and

`python pipeline_oom.py --model Pipeline --batch-size 32` (should produce OOM on an 11GB GPU)

---

**Why does the pipeline use more memory?** @sshleifer
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4779/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4779/timeline
completed
null
null
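For readers hitting the same OOM: the root cause identified in the comments is missing truncation. Below is a hedged sketch of the truncating, direct-model path that stays within memory, assuming the `facebook/bart-large-cnn` checkpoint and a transformers version with the callable-tokenizer API; the file path comes from the gist, while `num_beams` and `max_length` are illustrative choices, not values from the thread:

```python
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn").eval()

with open("cnndm/test.source") as f:
    article = f.readline()

# Truncating to the encoder's maximum length is what keeps memory bounded;
# at the time of this issue the pipeline skipped this step for long articles.
inputs = tokenizer(article, truncation=True, max_length=1024, return_tensors="pt")
with torch.no_grad():
    summary_ids = model.generate(inputs["input_ids"], num_beams=4, max_length=142)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```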
https://api.github.com/repos/huggingface/transformers/issues/4778
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4778/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4778/comments
https://api.github.com/repos/huggingface/transformers/issues/4778/events
https://github.com/huggingface/transformers/pull/4778
631,220,395
MDExOlB1bGxSZXF1ZXN0NDI4MTUwMzc2
4,778
Updated path "cd examples/text-generation/pplm"
{ "login": "Mr-Ruben", "id": 37179353, "node_id": "MDQ6VXNlcjM3MTc5MzUz", "avatar_url": "https://avatars.githubusercontent.com/u/37179353?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Mr-Ruben", "html_url": "https://github.com/Mr-Ruben", "followers_url": "https://api.github.com/users/Mr-Ruben/followers", "following_url": "https://api.github.com/users/Mr-Ruben/following{/other_user}", "gists_url": "https://api.github.com/users/Mr-Ruben/gists{/gist_id}", "starred_url": "https://api.github.com/users/Mr-Ruben/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Mr-Ruben/subscriptions", "organizations_url": "https://api.github.com/users/Mr-Ruben/orgs", "repos_url": "https://api.github.com/users/Mr-Ruben/repos", "events_url": "https://api.github.com/users/Mr-Ruben/events{/privacy}", "received_events_url": "https://api.github.com/users/Mr-Ruben/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4778?src=pr&el=h1) Report\n> Merging [#4778](https://codecov.io/gh/huggingface/transformers/pull/4778?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f9414f7553d3f1872b372990ef03205c0d1141df&el=desc) will **increase** coverage by `1.01%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4778/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4778?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4778 +/- ##\n==========================================\n+ Coverage 76.06% 77.08% +1.01% \n==========================================\n Files 128 128 \n Lines 21498 21498 \n==========================================\n+ Hits 16352 16571 +219 \n+ Misses 5146 4927 -219 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4778?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4778/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.04% <0.00%> (-0.32%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4778/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.39% <0.00%> (+0.48%)` | :arrow_up: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4778/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `89.17% <0.00%> (+2.01%)` | :arrow_up: |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/4778/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `78.40% <0.00%> (+4.80%)` | :arrow_up: |\n| [src/transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/4778/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `86.39% <0.00%> (+14.55%)` | :arrow_up: |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/4778/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `56.68% <0.00%> (+28.02%)` | :arrow_up: |\n| [src/transformers/configuration\\_encoder\\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/4778/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2VuY29kZXJfZGVjb2Rlci5weQ==) | `100.00% <0.00%> (+61.53%)` | :arrow_up: |\n| [src/transformers/modeling\\_encoder\\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/4778/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbmNvZGVyX2RlY29kZXIucHk=) | `92.20% <0.00%> (+64.93%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4778?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4778?src=pr&el=footer). Last update [f9414f7...dd5e08e](https://codecov.io/gh/huggingface/transformers/pull/4778?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,591
1,591
1,591
CONTRIBUTOR
null
https://github.com/huggingface/transformers/issues/4776
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4778/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4778/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4778", "html_url": "https://github.com/huggingface/transformers/pull/4778", "diff_url": "https://github.com/huggingface/transformers/pull/4778.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4778.patch", "merged_at": 1591406209000 }
https://api.github.com/repos/huggingface/transformers/issues/4777
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4777/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4777/comments
https://api.github.com/repos/huggingface/transformers/issues/4777/events
https://github.com/huggingface/transformers/issues/4777
631,199,208
MDU6SXNzdWU2MzExOTkyMDg=
4,777
The purpose of files merges.txt, special_tokens_map.json, training_args.bin and add_tokens.json
{ "login": "Aktsvigun", "id": 36672861, "node_id": "MDQ6VXNlcjM2NjcyODYx", "avatar_url": "https://avatars.githubusercontent.com/u/36672861?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Aktsvigun", "html_url": "https://github.com/Aktsvigun", "followers_url": "https://api.github.com/users/Aktsvigun/followers", "following_url": "https://api.github.com/users/Aktsvigun/following{/other_user}", "gists_url": "https://api.github.com/users/Aktsvigun/gists{/gist_id}", "starred_url": "https://api.github.com/users/Aktsvigun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Aktsvigun/subscriptions", "organizations_url": "https://api.github.com/users/Aktsvigun/orgs", "repos_url": "https://api.github.com/users/Aktsvigun/repos", "events_url": "https://api.github.com/users/Aktsvigun/events{/privacy}", "received_events_url": "https://api.github.com/users/Aktsvigun/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Hi. \r\n\r\nYou will get an explanation about `merges.txt` in this [post](https://github.com/huggingface/transformers/issues/1083#issuecomment-524303077).", "@piegu , thanks for you answer! I have already read this post, though still did not quite understand, does it contain all the possible tokens? If so, what is the purpose of it if we can simply take the keys from `vocab.json`? Thanks!", "My understanding is that the file `merges.txt` is build during the training of the BBPE (Byte Level BPE) tokenizer on the corpus: it gets a new entry (line) at each iteration of the tokenizer to find the byte pairs most frequent.\r\n\r\nFor example, the first line can be `Ġ d`. Why? Because at the first iteration, the token most frequent is ` d` (with a space in front of d) and the character `Ġ` means space.\r\n\r\nWhat is the consequence in the vocabulary? The token `Ġd` is listed.\r\n\r\nHope I'm right. If not, please give me your explanation as I have not found any online.", "@piegu thank you! So you mean this is the vocabulary sorted by the frequency on the training data, right? \r\nAnd what about these lines (which are 3rd - 7th for RoBERTa-base, for instance): \r\n```\r\nh e\r\ni n\r\nr e\r\no n\r\n```\r\nI clearly see these are popular words if we stack them but why are they divided?", "First of all, like for GPT2, the Hugging Face (HF) tokenizer of RoBERTa is a [Byte-level Byte-Pair-Encoding](https://arxiv.org/pdf/1909.03341.pdf) (BBPE) as written in the [documentation](https://huggingface.co/transformers/_modules/transformers/tokenization_roberta.html).\r\n\r\nThen, we can check in this page that in the attribute `vocab_files_names`, there are 2 files \r\n```\r\nVOCAB_FILES_NAMES = {\r\n \"vocab_file\": \"vocab.json\",\r\n \"merges_file\": \"merges.txt\",\r\n}\r\n```\r\nLet's open [merges.txt](https://s3.amazonaws.com/models.huggingface.co/bert/roberta-base-merges.txt) of RoBERTa-base, for instance. The file starts like this:\r\n\r\n```\r\n#version: 0.2\r\nÄ t\r\nÄ a\r\nh e\r\ni n\r\nr e\r\no n\r\nÄ t he\r\ne r\r\nÄ s\r\na t\r\nÄ w\r\nÄ o\r\n...\r\n```\r\n\r\n_Note: In this [Roberta Tokenizer merge file](https://s3.amazonaws.com/models.huggingface.co/bert/roberta-base-merges.txt), the special character `Ä` is used for encoding space instead of `Ġ` that is used by GPT2 Tokenizer ([explanation 1](https://github.com/openai/gpt-2/issues/80) and [explanation 2](https://github.com/pytorch/fairseq/issues/1716)) but in the corresponding [RoBERTa vocab file](https://s3.amazonaws.com/models.huggingface.co/bert/roberta-base-vocab.json), the character `Ġ` is used. I do not know why._\r\n\r\nThe merge file shows what tokens will be merged at each iteration (thats' why there is a space between tokens in the merge file). \r\n\r\nAbout your example: It means that at the third iteration, the tokens pair `he` formed by the 2 tokens `h` and `e` is the most frequent in the corpus (token `he` without space before the token `h`).\r\n\r\nIf at the end of iterations, there is at least one pair `he` left (not merged with other tokens), it will appear in the [vocab file](https://s3.amazonaws.com/models.huggingface.co/bert/roberta-base-vocab.json) (depends as well of the `min_freq` rules and number of tokens in vocab). Here, the id of `he` in the vocab file is 700.\r\n\r\nHope it helps but that would be great to get the point of view of someone from Hugging Face like @sshleifer or @sgugger.", "This issue has been automatically marked as stale because it has not had recent activity. 
It will be closed if no further activity occurs. Thank you for your contributions.\n", "@piegu appreciate your clear explanation!i am still confused about the defination of \"iteration\" " ]
1,591
1,680
1,598
CONTRIBUTOR
null
Good evening! After pre-training my RoBERTa model, I get the following files: `merges.txt`, `special_tokens_map.json`, `training_args.bin`. I have also seen that if you add extra tokens to the tokenizer, the file `add_tokens.json` appears. Could you clarify the meaning of the first three files - how they are used and what they contain? And also, how can I add extra tokens when pre-training RoBERTa or any BERT-type model? A million thanks in advance! Be safe, Akim
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4777/reactions", "total_count": 9, "+1": 9, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4777/timeline
completed
null
null
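On the extra-tokens part of the question above, here is a short sketch using the standard `transformers` API (the token strings and output directory are invented for illustration). Saving then writes `added_tokens.json` alongside the other tokenizer files:

```python
from transformers import RobertaForMaskedLM, RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = RobertaForMaskedLM.from_pretrained("roberta-base")

# Register new tokens, then grow the embedding matrix to cover their ids.
num_added = tokenizer.add_tokens(["<new_token_1>", "<new_token_2>"])
model.resize_token_embeddings(len(tokenizer))

# save_pretrained writes vocab.json, merges.txt, special_tokens_map.json
# and (because tokens were added) added_tokens.json.
tokenizer.save_pretrained("./my-roberta")
model.save_pretrained("./my-roberta")
print(f"added {num_added} tokens")
```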
https://api.github.com/repos/huggingface/transformers/issues/4776
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4776/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4776/comments
https://api.github.com/repos/huggingface/transformers/issues/4776/events
https://github.com/huggingface/transformers/issues/4776
631,195,335
MDU6SXNzdWU2MzExOTUzMzU=
4,776
Correcting path to pplm examples
{ "login": "Mr-Ruben", "id": 37179353, "node_id": "MDQ6VXNlcjM3MTc5MzUz", "avatar_url": "https://avatars.githubusercontent.com/u/37179353?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Mr-Ruben", "html_url": "https://github.com/Mr-Ruben", "followers_url": "https://api.github.com/users/Mr-Ruben/followers", "following_url": "https://api.github.com/users/Mr-Ruben/following{/other_user}", "gists_url": "https://api.github.com/users/Mr-Ruben/gists{/gist_id}", "starred_url": "https://api.github.com/users/Mr-Ruben/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Mr-Ruben/subscriptions", "organizations_url": "https://api.github.com/users/Mr-Ruben/orgs", "repos_url": "https://api.github.com/users/Mr-Ruben/repos", "events_url": "https://api.github.com/users/Mr-Ruben/events{/privacy}", "received_events_url": "https://api.github.com/users/Mr-Ruben/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks for the report – care to open a PR?", "Done\r\nhttps://github.com/huggingface/transformers/pull/4778\r\n\r\nI hope it is done well." ]
1,591
1,591
1,591
CONTRIBUTOR
null
# 🐛 Bug

On https://github.com/huggingface/transformers/tree/master/examples/text-generation/pplm#setup it says:

```
git clone https://github.com/huggingface/transformers && cd transformers
pip install .
pip install nltk torchtext # additional requirements.
cd examples/pplm
```

and, as you can guess from the URL, the correct path is:

```
git clone https://github.com/huggingface/transformers && cd transformers
pip install .
pip install nltk torchtext # additional requirements.
cd examples/text-generation/pplm
```

cd examples/**text-generation**/pplm
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4776/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4776/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4775
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4775/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4775/comments
https://api.github.com/repos/huggingface/transformers/issues/4775/events
https://github.com/huggingface/transformers/pull/4775
631,157,949
MDExOlB1bGxSZXF1ZXN0NDI4MDk5MzYy
4,775
Create model card for tblard/allocine
{ "login": "TheophileBlard", "id": 37028092, "node_id": "MDQ6VXNlcjM3MDI4MDky", "avatar_url": "https://avatars.githubusercontent.com/u/37028092?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TheophileBlard", "html_url": "https://github.com/TheophileBlard", "followers_url": "https://api.github.com/users/TheophileBlard/followers", "following_url": "https://api.github.com/users/TheophileBlard/following{/other_user}", "gists_url": "https://api.github.com/users/TheophileBlard/gists{/gist_id}", "starred_url": "https://api.github.com/users/TheophileBlard/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TheophileBlard/subscriptions", "organizations_url": "https://api.github.com/users/TheophileBlard/orgs", "repos_url": "https://api.github.com/users/TheophileBlard/repos", "events_url": "https://api.github.com/users/TheophileBlard/events{/privacy}", "received_events_url": "https://api.github.com/users/TheophileBlard/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4775?src=pr&el=h1) Report\n> Merging [#4775](https://codecov.io/gh/huggingface/transformers/pull/4775?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/17a88d31925a9308e4d7275420033f07a20cd680&el=desc) will **increase** coverage by `0.01%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4775/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4775?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4775 +/- ##\n==========================================\n+ Coverage 77.08% 77.10% +0.01% \n==========================================\n Files 128 128 \n Lines 21059 21059 \n==========================================\n+ Hits 16234 16237 +3 \n+ Misses 4825 4822 -3 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4775?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4775/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.03% <0.00%> (+0.12%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4775/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.96% <0.00%> (+0.16%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4775/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `74.27% <0.00%> (+0.41%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4775?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4775?src=pr&el=footer). Last update [17a88d3...b1ef414](https://codecov.io/gh/huggingface/transformers/pull/4775?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Thanks for sharing, and great dataset. (consider uploading it to [`nlp`](https://github.com/huggingface/nlp)?)", "> Thanks for sharing, and great dataset. (consider uploading it to [`nlp`](https://github.com/huggingface/nlp)?)\r\n\r\nThanks for merging. Sure, will do asap !" ]
1,591
1,591
1,591
CONTRIBUTOR
null
Model card for: https://huggingface.co/tblard/tf-allocine

This is a French sentiment analysis model, trained from camembert-base and fine-tuned on Allociné.fr data.

Original repo: https://github.com/TheophileBlard/french-sentiment-analysis-with-bert
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4775/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4775/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4775", "html_url": "https://github.com/huggingface/transformers/pull/4775", "diff_url": "https://github.com/huggingface/transformers/pull/4775.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4775.patch", "merged_at": 1591312508000 }
https://api.github.com/repos/huggingface/transformers/issues/4774
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4774/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4774/comments
https://api.github.com/repos/huggingface/transformers/issues/4774/events
https://github.com/huggingface/transformers/pull/4774
631,119,191
MDExOlB1bGxSZXF1ZXN0NDI4MDY2NDkw
4,774
Add .vs to gitignore
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4774?src=pr&el=h1) Report\n> Merging [#4774](https://codecov.io/gh/huggingface/transformers/pull/4774?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/cd4e07a85e6161111016ca6d811d97e59368971a&el=desc) will **increase** coverage by `0.22%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4774/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4774?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4774 +/- ##\n==========================================\n+ Coverage 77.09% 77.32% +0.22% \n==========================================\n Files 128 128 \n Lines 21059 21059 \n==========================================\n+ Hits 16235 16283 +48 \n+ Misses 4824 4776 -48 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4774?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4774/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.15% <0.00%> (+0.12%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4774/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.96% <0.00%> (+0.16%)` | :arrow_up: |\n| [src/transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/4774/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `86.39% <0.00%> (+14.55%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4774?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4774?src=pr&el=footer). Last update [cd4e07a...e51cd58](https://codecov.io/gh/huggingface/transformers/pull/4774?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,591
1,591
1,591
COLLABORATOR
null
My VS Code writes files there, so I added it to the .gitignore. I also have other untracked files:
- in tests/fixtures/ I have some cached_lm_\*Tokenizer_\*.txt and .txt.lock files after running the tests
- in docs/ I have the symlink to examples.md as indicated [here](https://github.com/huggingface/transformers/tree/master/docs#building-the-documentation)

Should I add entries to .gitignore to ignore those as well?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4774/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4774/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4774", "html_url": "https://github.com/huggingface/transformers/pull/4774", "diff_url": "https://github.com/huggingface/transformers/pull/4774.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4774.patch", "merged_at": 1591358172000 }
https://api.github.com/repos/huggingface/transformers/issues/4773
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4773/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4773/comments
https://api.github.com/repos/huggingface/transformers/issues/4773/events
https://github.com/huggingface/transformers/pull/4773
631,047,879
MDExOlB1bGxSZXF1ZXN0NDI4MDA4MzYx
4,773
Don't access pad_token_id if there is no pad_token
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4773?src=pr&el=h1) Report\n> Merging [#4773](https://codecov.io/gh/huggingface/transformers/pull/4773?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/cd4e07a85e6161111016ca6d811d97e59368971a&el=desc) will **increase** coverage by `0.34%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4773/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4773?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4773 +/- ##\n==========================================\n+ Coverage 77.09% 77.43% +0.34% \n==========================================\n Files 128 128 \n Lines 21059 21059 \n==========================================\n+ Hits 16235 16308 +73 \n+ Misses 4824 4751 -73 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4773?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4773/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.91% <ø> (+0.11%)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4773/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.15% <0.00%> (+0.12%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4773/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `74.27% <0.00%> (+0.41%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4773/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.03% <0.00%> (+1.35%)` | :arrow_up: |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/4773/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `35.03% <0.00%> (+6.36%)` | :arrow_up: |\n| [src/transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/4773/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `86.39% <0.00%> (+14.55%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4773?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4773?src=pr&el=footer). Last update [cd4e07a...734e4af](https://codecov.io/gh/huggingface/transformers/pull/4773?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "LGTM, also pinging @mfuntowicz for interface of Rust tokenizer and transformers tokenizer" ]
1,591
1,591
1,591
COLLABORATOR
null
When using the `encode` method of a fast tokenizer, we end up here, and accessing `pad_token_id` may log an error even when no padding is done. This PR corrects that. This fixes #4764.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4773/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4773/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4773", "html_url": "https://github.com/huggingface/transformers/pull/4773", "diff_url": "https://github.com/huggingface/transformers/pull/4773.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4773.patch", "merged_at": 1591307825000 }
https://api.github.com/repos/huggingface/transformers/issues/4772
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4772/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4772/comments
https://api.github.com/repos/huggingface/transformers/issues/4772/events
https://github.com/huggingface/transformers/pull/4772
631,035,228
MDExOlB1bGxSZXF1ZXN0NDI3OTk3Mzk4
4,772
Fix the __getattr__ method in BatchEncoding
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4772?src=pr&el=h1) Report\n> Merging [#4772](https://codecov.io/gh/huggingface/transformers/pull/4772?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5856999a9f2926923f037ecd8d27b8058bcf9dae&el=desc) will **decrease** coverage by `0.02%`.\n> The diff coverage is `100.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4772/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4772?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4772 +/- ##\n==========================================\n- Coverage 77.98% 77.96% -0.03% \n==========================================\n Files 123 123 \n Lines 20436 20437 +1 \n==========================================\n- Hits 15938 15933 -5 \n- Misses 4498 4504 +6 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4772?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4772/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.47% <100.00%> (+0.01%)` | :arrow_up: |\n| [src/transformers/hf\\_api.py](https://codecov.io/gh/huggingface/transformers/pull/4772/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9oZl9hcGkucHk=) | `93.06% <0.00%> (-4.96%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4772/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.50% <0.00%> (-0.17%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4772?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4772?src=pr&el=footer). Last update [5856999...fa0abc6](https://codecov.io/gh/huggingface/transformers/pull/4772?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,591
1,591
1,591
CONTRIBUTOR
null
Fix the issue where the `__getattr__` method in `BatchEncoding` was raising a `KeyError` instead of an `AttributeError` when the attribute was accessed with `getattr()`.

Example:
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
features = tokenizer.encode_plus("Hello here")
getattr(features, "attr", False)
```

Previous output:
```
/home/jplu/transformers/src/transformers/tokenization_utils.py:204 __getattr__
    return self.data[item]
KeyError: 'attr'
```

New output:
```
False
```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4772/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4772/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4772", "html_url": "https://github.com/huggingface/transformers/pull/4772", "diff_url": "https://github.com/huggingface/transformers/pull/4772.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4772.patch", "merged_at": 1591688640000 }
https://api.github.com/repos/huggingface/transformers/issues/4771
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4771/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4771/comments
https://api.github.com/repos/huggingface/transformers/issues/4771/events
https://github.com/huggingface/transformers/pull/4771
630,978,768
MDExOlB1bGxSZXF1ZXN0NDI3OTU1NDIz
4,771
Remove unnecessary model_type arg in example
{ "login": "zphang", "id": 1668462, "node_id": "MDQ6VXNlcjE2Njg0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/1668462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zphang", "html_url": "https://github.com/zphang", "followers_url": "https://api.github.com/users/zphang/followers", "following_url": "https://api.github.com/users/zphang/following{/other_user}", "gists_url": "https://api.github.com/users/zphang/gists{/gist_id}", "starred_url": "https://api.github.com/users/zphang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zphang/subscriptions", "organizations_url": "https://api.github.com/users/zphang/orgs", "repos_url": "https://api.github.com/users/zphang/repos", "events_url": "https://api.github.com/users/zphang/events{/privacy}", "received_events_url": "https://api.github.com/users/zphang/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4771?src=pr&el=h1) Report\n> Merging [#4771](https://codecov.io/gh/huggingface/transformers/pull/4771?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e645b9ab9407e1c1b2c168317dc79fe13fc6e0b4&el=desc) will **decrease** coverage by `0.81%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4771/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4771?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4771 +/- ##\n==========================================\n- Coverage 77.31% 76.49% -0.82% \n==========================================\n Files 128 128 \n Lines 21059 21059 \n==========================================\n- Hits 16281 16110 -171 \n- Misses 4778 4949 +171 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4771?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/4771/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `31.32% <0.00%> (-55.07%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4771/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `89.91% <0.00%> (-0.25%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4771/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `74.27% <0.00%> (+0.41%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4771/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.03% <0.00%> (+1.35%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4771?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4771?src=pr&el=footer). Last update [e645b9a...7ad3a66](https://codecov.io/gh/huggingface/transformers/pull/4771?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,591
1,591
1,591
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4771/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4771/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4771", "html_url": "https://github.com/huggingface/transformers/pull/4771", "diff_url": "https://github.com/huggingface/transformers/pull/4771.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4771.patch", "merged_at": 1591292485000 }
https://api.github.com/repos/huggingface/transformers/issues/4770
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4770/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4770/comments
https://api.github.com/repos/huggingface/transformers/issues/4770/events
https://github.com/huggingface/transformers/pull/4770
630,970,463
MDExOlB1bGxSZXF1ZXN0NDI3OTQ5Mjg5
4,770
Add note about doc generation
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4770?src=pr&el=h1) Report\n> Merging [#4770](https://codecov.io/gh/huggingface/transformers/pull/4770?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e645b9ab9407e1c1b2c168317dc79fe13fc6e0b4&el=desc) will **decrease** coverage by `0.04%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4770/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4770?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4770 +/- ##\n==========================================\n- Coverage 77.31% 77.26% -0.05% \n==========================================\n Files 128 128 \n Lines 21059 21059 \n==========================================\n- Hits 16281 16271 -10 \n- Misses 4778 4788 +10 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4770?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/4770/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `36.80% <0.00%> (-3.88%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4770/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.79% <0.00%> (+0.16%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4770/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `74.27% <0.00%> (+0.41%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4770/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.03% <0.00%> (+1.35%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4770?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4770?src=pr&el=footer). Last update [e645b9a...39dc245](https://codecov.io/gh/huggingface/transformers/pull/4770?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,591
1,591
1,591
COLLABORATOR
null
Just make it explicit that doc generation is only for local inspection.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4770/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4770/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4770", "html_url": "https://github.com/huggingface/transformers/pull/4770", "diff_url": "https://github.com/huggingface/transformers/pull/4770.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4770.patch", "merged_at": 1591292595000 }
https://api.github.com/repos/huggingface/transformers/issues/4769
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4769/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4769/comments
https://api.github.com/repos/huggingface/transformers/issues/4769/events
https://github.com/huggingface/transformers/issues/4769
630,938,385
MDU6SXNzdWU2MzA5MzgzODU=
4,769
Bert (sentence classification) output is non-deterministic for PyTorch (not for TF)
{ "login": "lutz-100worte", "id": 38904541, "node_id": "MDQ6VXNlcjM4OTA0NTQx", "avatar_url": "https://avatars.githubusercontent.com/u/38904541?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lutz-100worte", "html_url": "https://github.com/lutz-100worte", "followers_url": "https://api.github.com/users/lutz-100worte/followers", "following_url": "https://api.github.com/users/lutz-100worte/following{/other_user}", "gists_url": "https://api.github.com/users/lutz-100worte/gists{/gist_id}", "starred_url": "https://api.github.com/users/lutz-100worte/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lutz-100worte/subscriptions", "organizations_url": "https://api.github.com/users/lutz-100worte/orgs", "repos_url": "https://api.github.com/users/lutz-100worte/repos", "events_url": "https://api.github.com/users/lutz-100worte/events{/privacy}", "received_events_url": "https://api.github.com/users/lutz-100worte/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi, as a quick note, all the instructions you've shown could be resumed in the simpler `BertForSequenceClassification.from_pretrained`.\r\n\r\nWhich checkpoint are you trying to load? How did you obtain it?", "Hi, thanks for the tip.\r\n\r\nThe checkpoint is from an own finetuned model. But would that matter? I would expect that the model behaves deterministically, even if I put random tensors with the correct shape into the `state_dict`.", "Well, it depends. A few things may be responsible here: \r\n\r\n- Your model is not in eval mode (`model.eval()`), resulting in dropout layers affecting your results\r\n- Your fine-tuned model is lacking some layers, which are therefore initialized randomly.\r\n\r\nCan you check the logs by putting the following two lines above your model load?\r\n\r\n```py\r\nimport logging\r\n\r\nlogging.basicConfig(level=logging.INFO)\r\n```\r\n\r\nCan you also try by using the `from_pretrained` method (given that your model filename is `pytorch_model.bin`)?\r\n\r\n```py\r\nconfig = BertConfig.from_json_file(config_filename) \r\n\r\nmodel = BertForSequenceClassification.from_pretrained(model_dir, config=config)\r\n```\r\n\r\nOr, simpler, if the configuration is in the same folder as your model filename:\r\n\r\n```py\r\nmodel = BertForSequenceClassification.from_pretrained(model_dir)\r\n```", "Thanks, @LysandreJik , you were exactly right: After setting `model.eval()`, the PyTorch model also behaves deterministically. Rookie mistake :smile: \r\n\r\nSince you provided the alternative methods, I checked them, too. The logging does not tell me whether or not the model is in eval mode. It just lists some hyperparameters of the model. At least I can see there that there seem to be at least two (`\"attention_probs_dropout_prob\"` and `\"hidden_dropout_prob\"`) that make a difference between train and eval mode.\r\n\r\nAnd finally I tried the loading from one line. That also serves to resolve the issue: If you load like that it seems to be set to eval mode automatically. So not the PyTorch variant was to blame, but the mode of loading (and not explicitly setting to eval mode afterwards). Or ultimately me :wink: \r\n\r\nThanks for the quick and competent response!", "The logging is useful when you're loading using `from_pretrained` as it tells you which layers were not initialized with the model. For example if your checkpoint is a base BERT model that you try to load in the sequence classification model, it will load it but the classifier layer would be randomly initialized. The logging would have told you :smile:.\r\n\r\nGlad we could resolve your problem!" ]
1,591
1,591
1,591
NONE
null
# 🐛 Bug

## Information

Model I am using (Bert, XLNet ...): Bert

Language I am using the model on (English, Chinese ...): German

The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)

The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)

## To reproduce

Steps to reproduce the behavior:

1. Load model:
```python
config = BertConfig.from_json_file(config_filename)
model = BertForSequenceClassification(config)
state_dict = torch.load(model_filename)
model.load_state_dict(state_dict)
```
2. Do inference twice on the same input + compare results.
3. Alternatively, save the first output, load the model from scratch, and run the same inference. Even in this case, the first output will not be the same as the next time.

## Expected behavior

The prediction value should be deterministic. Note that it *is* deterministic when the model parameters are loaded from a TensorFlow file (with `from_tf=True`).

## Environment info

- `transformers` version: 2.10.0
- Platform: Linux-5.3.0-55-generic-x86_64-with-Ubuntu-19.10-eoan
- Python version: 3.7.5
- PyTorch version (GPU?): 1.5.0 (False)
- Tensorflow version (GPU?): 2.0.0 (False)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4769/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4769/timeline
completed
null
null
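A sketch of the resolution reached in the thread above: after a manual `load_state_dict`, call `model.eval()` explicitly (the file paths are illustrative; `from_pretrained` puts the model in eval mode for you):

```python
import torch
from transformers import BertConfig, BertForSequenceClassification

config = BertConfig.from_json_file("config.json")        # illustrative path
model = BertForSequenceClassification(config)
model.load_state_dict(torch.load("pytorch_model.bin"))   # illustrative path
model.eval()  # disables dropout; without this, identical inputs can score differently

input_ids = torch.tensor([[101, 2023, 2003, 1037, 3231, 102]])
with torch.no_grad():
    first = model(input_ids)[0]
    second = model(input_ids)[0]
assert torch.equal(first, second)  # deterministic once in eval mode
```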
https://api.github.com/repos/huggingface/transformers/issues/4768
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4768/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4768/comments
https://api.github.com/repos/huggingface/transformers/issues/4768/events
https://github.com/huggingface/transformers/pull/4768
630,917,725
MDExOlB1bGxSZXF1ZXN0NDI3OTA5OTUz
4,768
Codecov setup
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4768?src=pr&el=h1) Report\n> Merging [#4768](https://codecov.io/gh/huggingface/transformers/pull/4768?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/2b8b6c929e282958a920ba2aa26ee59106986ec3&el=desc) will **increase** coverage by `0.01%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4768/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4768?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4768 +/- ##\n==========================================\n+ Coverage 77.31% 77.33% +0.01% \n==========================================\n Files 128 128 \n Lines 21059 21059 \n==========================================\n+ Hits 16282 16285 +3 \n+ Misses 4777 4774 -3 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4768?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4768/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.85% <0.00%> (-0.42%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4768/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.03% <0.00%> (+1.35%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4768?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4768?src=pr&el=footer). Last update [2b8b6c9...fc6d7f5](https://codecov.io/gh/huggingface/transformers/pull/4768?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,591
1,591
1,591
MEMBER
null
Setup codecov
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4768/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4768/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4768", "html_url": "https://github.com/huggingface/transformers/pull/4768", "diff_url": "https://github.com/huggingface/transformers/pull/4768.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4768.patch", "merged_at": 1591285479000 }
https://api.github.com/repos/huggingface/transformers/issues/4767
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4767/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4767/comments
https://api.github.com/repos/huggingface/transformers/issues/4767/events
https://github.com/huggingface/transformers/issues/4767
630,917,597
MDU6SXNzdWU2MzA5MTc1OTc=
4,767
Model is running on special characters and word pieces for token classification
{ "login": "andrster", "id": 22357321, "node_id": "MDQ6VXNlcjIyMzU3MzIx", "avatar_url": "https://avatars.githubusercontent.com/u/22357321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/andrster", "html_url": "https://github.com/andrster", "followers_url": "https://api.github.com/users/andrster/followers", "following_url": "https://api.github.com/users/andrster/following{/other_user}", "gists_url": "https://api.github.com/users/andrster/gists{/gist_id}", "starred_url": "https://api.github.com/users/andrster/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/andrster/subscriptions", "organizations_url": "https://api.github.com/users/andrster/orgs", "repos_url": "https://api.github.com/users/andrster/repos", "events_url": "https://api.github.com/users/andrster/events{/privacy}", "received_events_url": "https://api.github.com/users/andrster/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Hi @andrster , it's not clear what you mean here. Can you please provide more explanation", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,591
1,597
1,597
NONE
null
['last', 'completed', 'interactions', 'on', 'wed', '##nes', '##day'] and it will return 9 labels, but should really only return 5
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4767/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4767/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4766
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4766/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4766/comments
https://api.github.com/repos/huggingface/transformers/issues/4766/events
https://github.com/huggingface/transformers/issues/4766
630,850,819
MDU6SXNzdWU2MzA4NTA4MTk=
4,766
Issue with HANS evaluation
{ "login": "prajjwal1", "id": 24690051, "node_id": "MDQ6VXNlcjI0NjkwMDUx", "avatar_url": "https://avatars.githubusercontent.com/u/24690051?v=4", "gravatar_id": "", "url": "https://api.github.com/users/prajjwal1", "html_url": "https://github.com/prajjwal1", "followers_url": "https://api.github.com/users/prajjwal1/followers", "following_url": "https://api.github.com/users/prajjwal1/following{/other_user}", "gists_url": "https://api.github.com/users/prajjwal1/gists{/gist_id}", "starred_url": "https://api.github.com/users/prajjwal1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/prajjwal1/subscriptions", "organizations_url": "https://api.github.com/users/prajjwal1/orgs", "repos_url": "https://api.github.com/users/prajjwal1/repos", "events_url": "https://api.github.com/users/prajjwal1/events{/privacy}", "received_events_url": "https://api.github.com/users/prajjwal1/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Could you please provide some possible areas to look at ? I can look at it and send a PR.", "@sgugger @julien-c Thank you for the [PR](https://github.com/huggingface/transformers/issues/4742). There is a problem associated with this PR, that is, if you run `python3 evaluate_heur_output.py /path_to_hans_predictions.txt`, I'm getting an error:\r\n```\r\nguess = guess_dict[key]\r\nKeyerror: 'ex0'\r\n```\r\nwhich suggests that a key is missing, indicating that `hans_predictions.txt` is not being generated in the expected manner as [HANS repo](https://github.com/huggingface/transformers/issues/4742) indicates. \r\nFor fixing, it would great if evaluation is carried within transformers just like GLUE tasks and that user doesn't have to rely on external [repo](https://github.com/huggingface/transformers/issues/4742) to get predictions. \r\nIt would be good if you can check this, as this error makes the example unusable.", "I forgot the header on that file, #5082 should fix.", "Evaluation seems to be working fine now. Feel free to close this issue @sgugger . Thanks again. Would be great if evaluation method can be integrated.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,591
1,597
1,597
CONTRIBUTOR
null
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): Albert, Bert Language I am using the model on (English, Chinese ...): GLUE (MNLI), HANS The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [x] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce I have tried to evaluate ALBERT and BERT (pretrained, trained on MNLI) on HANS. I'm using the official example. Everytime I run evaluation, I'm getting ``` Heuristic entailed results: lexical_overlap: 0.0 subsequence: 0.0 constituent: 0.0 Heuristic non-entailed results: lexical_overlap: 1.0 subsequence: 1.0 constituent: 1.0 ``` Results can be reproduced by: ``` python3 hans/test_hans.py --task_name hans --model_type bert/albert --do_eval --data_dir $HANS_DIR --model_name_or_path $MODEL_PATH --max_seq_length 128 --output_dir $MODEL_PATH --per_gpu_eval_batch_size 1024 --overwrite_cache ``` ## Expected behavior It shouldn't be exactly 0 and 1 always and entailment score should be higher than non entailment. <!-- A clear and concise description of what you would expect to happen. --> ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 2.11.0 - Platform: Linux 5.4.0-29-generic - Python version: 3.8.2 - PyTorch version (GPU?): 1.5.0+cu101 - Tensorflow version (GPU?): NA - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4766/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4766/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4765
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4765/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4765/comments
https://api.github.com/repos/huggingface/transformers/issues/4765/events
https://github.com/huggingface/transformers/issues/4765
630,849,167
MDU6SXNzdWU2MzA4NDkxNjc=
4,765
Cannot load pretrained model from repo.
{ "login": "jordaniac89", "id": 8006059, "node_id": "MDQ6VXNlcjgwMDYwNTk=", "avatar_url": "https://avatars.githubusercontent.com/u/8006059?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jordaniac89", "html_url": "https://github.com/jordaniac89", "followers_url": "https://api.github.com/users/jordaniac89/followers", "following_url": "https://api.github.com/users/jordaniac89/following{/other_user}", "gists_url": "https://api.github.com/users/jordaniac89/gists{/gist_id}", "starred_url": "https://api.github.com/users/jordaniac89/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jordaniac89/subscriptions", "organizations_url": "https://api.github.com/users/jordaniac89/orgs", "repos_url": "https://api.github.com/users/jordaniac89/repos", "events_url": "https://api.github.com/users/jordaniac89/events{/privacy}", "received_events_url": "https://api.github.com/users/jordaniac89/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello @jordaniac89 I just tried this, and it worked. Can you try this again ?", "Yeah, nothing. My assumption was that from_pretrained() should reach out to s3 and download the model if it can't find it?", "Figured it out. Just an fyi that `pip install transformers` installs v. 2.2.0. I needed to specify 2.5.1 with `pip install transformers==2.5.1`", "Glad you could solve the issue!" ]
1,591
1,591
1,591
NONE
null
Hi, I'm trying to load a model from the repository: ``` tokenizer = AutoTokenizer.from_pretrained("NeuML/bert-small-cord19-squad2") model = AutoModelForQuestionAnswering.from_pretrained("NeuML/bert-small-cord19-squad2") ``` but I'm receiving the error: `OSError: Model name 'NeuML/bert-small-cord19-squad2' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased). We assumed 'NeuML/bert-small-cord19-squad2' was a path or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url. ` I've tried a few other models with the same result. The vocab.txt file is there in each of them. Am I missing something? EDIT: Current libraries - beautifulsoup4==4.9.1 boto3==1.10.30 botocore==1.13.30 certifi==2019.11.28 chardet==3.0.4 Click==7.0 docutils==0.15.2 fsspec==0.6.1 future==0.18.2 html2text==2020.1.16 idna==2.8 jmespath==0.9.4 joblib==0.14.0 nltk==3.4.5 numpy==1.17.4 pandas==0.25.3 Pillow==7.1.2 python-dateutil==2.8.0 pytz==2019.3 regex==2019.11.1 requests==2.22.0 s3fs==0.4.0 s3transfer==0.2.1 sacremoses==0.0.35 scikit-learn==0.21.3 scipy==1.3.2 sentencepiece==0.1.83 six==1.13.0 soupsieve==2.0.1 torch==1.5.0 torchvision==0.6.0 tqdm==4.40.0 transformers==2.2.0 urllib3==1.25.7 wikipedia==1.4.0
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4765/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4765/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4764
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4764/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4764/comments
https://api.github.com/repos/huggingface/transformers/issues/4764/events
https://github.com/huggingface/transformers/issues/4764
630,789,576
MDU6SXNzdWU2MzA3ODk1NzY=
4,764
GPT2TokenizerFast raises pad_token error even if not used
{ "login": "tomhosking", "id": 9419158, "node_id": "MDQ6VXNlcjk0MTkxNTg=", "avatar_url": "https://avatars.githubusercontent.com/u/9419158?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tomhosking", "html_url": "https://github.com/tomhosking", "followers_url": "https://api.github.com/users/tomhosking/followers", "following_url": "https://api.github.com/users/tomhosking/following{/other_user}", "gists_url": "https://api.github.com/users/tomhosking/gists{/gist_id}", "starred_url": "https://api.github.com/users/tomhosking/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tomhosking/subscriptions", "organizations_url": "https://api.github.com/users/tomhosking/orgs", "repos_url": "https://api.github.com/users/tomhosking/repos", "events_url": "https://api.github.com/users/tomhosking/events{/privacy}", "received_events_url": "https://api.github.com/users/tomhosking/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I can reproduce and think I've found the cause, working on this." ]
1,591
1,591
1,591
CONTRIBUTOR
null
# 🐛 Bug ## Information Model: GPT2 Language: English Encoding with the `GPT2TokenizerFast` causes a `pad_token` error to be sent to stderr, despite not attempting to access that property. ## To reproduce ```import transformers from transformers import GPT2TokenizerFast tokenizer = GPT2TokenizerFast.from_pretrained('gpt2') res = tokenizer.encode("This is a sentence") print(transformers.__version__) ``` Output: ``` Using pad_token, but it is not set yet. 2.11.0 ``` ## Expected behavior I'm aware that GPT-2 doesn't include a pad token (#2630) - I haven't tried to use it. I would expect no error to be displayed until I try to access that property. ## Environment info - `transformers` version: 2.11.0 - Platform: Ubuntu - Python version: 3.6.9 - PyTorch version (GPU?): 1.5.0 with CUDA - Tensorflow version (GPU?): n/a - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4764/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4764/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4763
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4763/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4763/comments
https://api.github.com/repos/huggingface/transformers/issues/4763/events
https://github.com/huggingface/transformers/pull/4763
630,749,871
MDExOlB1bGxSZXF1ZXN0NDI3Nzc3NTYw
4,763
Model Card for RoBERTa trained on Sanskrit
{ "login": "parmarsuraj99", "id": 9317265, "node_id": "MDQ6VXNlcjkzMTcyNjU=", "avatar_url": "https://avatars.githubusercontent.com/u/9317265?v=4", "gravatar_id": "", "url": "https://api.github.com/users/parmarsuraj99", "html_url": "https://github.com/parmarsuraj99", "followers_url": "https://api.github.com/users/parmarsuraj99/followers", "following_url": "https://api.github.com/users/parmarsuraj99/following{/other_user}", "gists_url": "https://api.github.com/users/parmarsuraj99/gists{/gist_id}", "starred_url": "https://api.github.com/users/parmarsuraj99/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/parmarsuraj99/subscriptions", "organizations_url": "https://api.github.com/users/parmarsuraj99/orgs", "repos_url": "https://api.github.com/users/parmarsuraj99/repos", "events_url": "https://api.github.com/users/parmarsuraj99/events{/privacy}", "received_events_url": "https://api.github.com/users/parmarsuraj99/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4763?src=pr&el=h1) Report\n> Merging [#4763](https://codecov.io/gh/huggingface/transformers/pull/4763?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5bf9afbf351f9419505eb1c9e0c5ab78883c3caf&el=desc) will **decrease** coverage by `0.31%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4763/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4763?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4763 +/- ##\n==========================================\n- Coverage 77.41% 77.09% -0.32% \n==========================================\n Files 128 128 \n Lines 21059 21059 \n==========================================\n- Hits 16302 16236 -66 \n- Misses 4757 4823 +66 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4763?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/4763/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `71.83% <0.00%> (-14.56%)` | :arrow_down: |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/4763/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.66% <0.00%> (-6.37%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4763/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.03% <0.00%> (-0.13%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4763/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.80% <0.00%> (-0.12%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4763/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.79% <0.00%> (+0.16%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4763/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `74.27% <0.00%> (+0.41%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4763?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4763?src=pr&el=footer). Last update [5bf9afb...30af45b](https://codecov.io/gh/huggingface/transformers/pull/4763?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "How can I solve this codecov/project checks?", "It was a transient CI error. Thank you!" ]
1,591
1,591
1,591
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4763/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4763/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4763", "html_url": "https://github.com/huggingface/transformers/pull/4763", "diff_url": "https://github.com/huggingface/transformers/pull/4763.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4763.patch", "merged_at": 1591304321000 }
https://api.github.com/repos/huggingface/transformers/issues/4762
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4762/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4762/comments
https://api.github.com/repos/huggingface/transformers/issues/4762/events
https://github.com/huggingface/transformers/issues/4762
630,738,469
MDU6SXNzdWU2MzA3Mzg0Njk=
4,762
KeyError in Pipeline Question Answering with LongFormer
{ "login": "avacaondata", "id": 35173563, "node_id": "MDQ6VXNlcjM1MTczNTYz", "avatar_url": "https://avatars.githubusercontent.com/u/35173563?v=4", "gravatar_id": "", "url": "https://api.github.com/users/avacaondata", "html_url": "https://github.com/avacaondata", "followers_url": "https://api.github.com/users/avacaondata/followers", "following_url": "https://api.github.com/users/avacaondata/following{/other_user}", "gists_url": "https://api.github.com/users/avacaondata/gists{/gist_id}", "starred_url": "https://api.github.com/users/avacaondata/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/avacaondata/subscriptions", "organizations_url": "https://api.github.com/users/avacaondata/orgs", "repos_url": "https://api.github.com/users/avacaondata/repos", "events_url": "https://api.github.com/users/avacaondata/events{/privacy}", "received_events_url": "https://api.github.com/users/avacaondata/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "It seems that I have the same (or at least very similar) issue but using `ner` pipeline.\r\nMy model is a fine-tuned RoBERTa (`xlm-roberta-base`).\r\nI can produce different predictions with different inputs, but all are way outside the range of the actual label IDs.\r\n\r\nThe error shows where the predicted label ID can't be found in the `id2label` map in the model config:\r\n\r\n```\r\n~/projects/env/lib/python3.7/site-packages/transformers/pipelines.py in __call__(self, *args, **kwargs)\r\n 920 filtered_labels_idx = [\r\n 921 (idx, label_idx)\r\n--> 922 for idx, label_idx in enumerate(labels_idx)\r\n 923 if self.model.config.id2label[label_idx] not in self.ignore_labels\r\n 924 ]\r\n\r\n~/projects/env/lib/python3.7/site-packages/transformers/pipelines.py in <listcomp>(.0)\r\n 921 (idx, label_idx)\r\n 922 for idx, label_idx in enumerate(labels_idx)\r\n--> 923 if self.model.config.id2label[label_idx] not in self.ignore_labels\r\n 924 ]\r\n 925\r\n\r\nKeyError: 741\r\n```", "Longformer isn't yet supported in the pipeline. For now you'll need to do this manually as given in the example or doc.\r\n\r\n@patrickvonplaten ", "That's correct, adding Longformer to the QA pipeline is on the ToDo List :-) ", "Actually LongFormer isn't the only model that fails inside the Pipeline. I'm trying to use now 'ktrapeznikov/biobert_v1.1_pubmed_squad_v2' and it throws the same error: KeyError. ", "Anyone has an example of how to do QA without the Pipeline? That'd be really helpful for checking whether the models work or not, regardless of them having been added to the pipeline or not. ", "@alexvaca0 \r\n\r\nPlease check which architecture you are using, and then go to the docs and find the doc for QA model, it contains the example on how to use it without pipeline. So if your architecture is BERT then there will be a model BertForQuestionAnswering. You'll find the example in the model's doc. Basically what you'll need to do is this\r\n\r\n```python3\r\n# import your model class, you can also use AutoModelForQuestionAnswering and AutoTokenizer\r\nfrom transformers import BertTokenizer, BertForQuestionAnswering\r\nimport torch\r\n\r\n# load the model and tokenizer\r\ntokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\r\nmodel = BertForQuestionAnswering.from_pretrained('bert-large-uncased-whole-word-masking-finetuned-squad')\r\n\r\n# encode the question and text\r\nquestion, text = \"Who was Jim Henson?\", \"Jim Henson was a nice puppet\"\r\nencoding = tokenizer.encode_plus(question, text)\r\ninput_ids, token_type_ids = encoding[\"input_ids\"], encoding[\"token_type_ids\"]\r\n\r\n# do the forward pass, each qa model returns start_scores, end_scores\r\nstart_scores, end_scores = model(torch.tensor([input_ids]), token_type_ids=torch.tensor([token_type_ids]))\r\n\r\n# extract the span\r\nall_tokens = tokenizer.convert_ids_to_tokens(input_ids)\r\nanswer = ' '.join(all_tokens[torch.argmax(start_scores) : torch.argmax(end_scores)+1])\r\n\r\nassert answer == \"a nice puppet\"\r\n```\r\n\r\nHope this helps you.", "Also https://huggingface.co/transformers/usage.html#extractive-question-answering", "> Actually LongFormer isn't the only model that fails inside the Pipeline. I'm trying to use now 'ktrapeznikov/biobert_v1.1_pubmed_squad_v2' and it throws the same error: KeyError.\r\n\r\nFeel free to open a separate issue on this so that we can investigate more :-)", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. 
Thank you for your contributions.\n" ]
1,591
1,597
1,597
NONE
null
I'm trying to do QA with LongFormer in a Pipeline. First of all, I generate the pipeline: ` MODEL_STR = "mrm8488/longformer-base-4096-finetuned-squadv2" tokenizer = AutoTokenizer.from_pretrained(MODEL_STR) model = AutoModelForQuestionAnswering.from_pretrained(MODEL_STR) QA = pipeline('question-answering', model=model, tokenizer=tokenizer) ` Then, I get the paper text from which I want the answer to come from, named my_article, that's a string containing the full body of the article (around 3000 words). Then, I try: ` with torch.no_grad(): answer = QA(question=question, context=articles_abstract.body_text.iloc[0]) ` And it throws the following error: ` KeyError Traceback (most recent call last) <ipython-input-53-b5f8dc0503c8> in <module> 1 with torch.no_grad(): ----> 2 answer = QA(question=question, context=articles_abstract.body_text.iloc[0]) ~/miniconda/envs/transformers_env/lib/python3.7/site-packages/transformers/pipelines.py in __call__(self, *args, **kwargs) 1225 ), 1226 } -> 1227 for s, e, score in zip(starts, ends, scores) 1228 ] 1229 ~/miniconda/envs/transformers_env/lib/python3.7/site-packages/transformers/pipelines.py in <listcomp>(.0) 1225 ), 1226 } -> 1227 for s, e, score in zip(starts, ends, scores) 1228 ] 1229 KeyError: 382 ` How can I solve this issue? More importantly, what do you think is causing the issue? Thanks in advance! :)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4762/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4762/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4761
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4761/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4761/comments
https://api.github.com/repos/huggingface/transformers/issues/4761/events
https://github.com/huggingface/transformers/pull/4761
630,734,690
MDExOlB1bGxSZXF1ZXN0NDI3NzY1NTU4
4,761
[cleanup] PretrainedModel.generate: remove unused kwargs
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4761?src=pr&el=h1) Report\n> Merging [#4761](https://codecov.io/gh/huggingface/transformers/pull/4761?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5bf9afbf351f9419505eb1c9e0c5ab78883c3caf&el=desc) will **decrease** coverage by `0.09%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4761/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4761?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4761 +/- ##\n==========================================\n- Coverage 77.41% 77.31% -0.10% \n==========================================\n Files 128 128 \n Lines 21059 21059 \n==========================================\n- Hits 16302 16282 -20 \n- Misses 4757 4777 +20 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4761?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4761/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.15% <ø> (ø)` | |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/4761/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.66% <0.00%> (-6.37%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4761/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.80% <0.00%> (-0.12%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4761/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.79% <0.00%> (+0.16%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4761?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4761?src=pr&el=footer). Last update [5bf9afb...95a6ef1](https://codecov.io/gh/huggingface/transformers/pull/4761?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,591
1,591
1,591
CONTRIBUTOR
null
`_generate_beam_search` and `_generate_no_beam_search` do not use `bos_token_id` or `decoder_start_token_id`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4761/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4761/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4761", "html_url": "https://github.com/huggingface/transformers/pull/4761", "diff_url": "https://github.com/huggingface/transformers/pull/4761.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4761.patch", "merged_at": 1591272833000 }
https://api.github.com/repos/huggingface/transformers/issues/4760
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4760/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4760/comments
https://api.github.com/repos/huggingface/transformers/issues/4760/events
https://github.com/huggingface/transformers/issues/4760
630,726,327
MDU6SXNzdWU2MzA3MjYzMjc=
4,760
Fine-tuning of RoBERTa
{ "login": "Aktsvigun", "id": 36672861, "node_id": "MDQ6VXNlcjM2NjcyODYx", "avatar_url": "https://avatars.githubusercontent.com/u/36672861?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Aktsvigun", "html_url": "https://github.com/Aktsvigun", "followers_url": "https://api.github.com/users/Aktsvigun/followers", "following_url": "https://api.github.com/users/Aktsvigun/following{/other_user}", "gists_url": "https://api.github.com/users/Aktsvigun/gists{/gist_id}", "starred_url": "https://api.github.com/users/Aktsvigun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Aktsvigun/subscriptions", "organizations_url": "https://api.github.com/users/Aktsvigun/orgs", "repos_url": "https://api.github.com/users/Aktsvigun/repos", "events_url": "https://api.github.com/users/Aktsvigun/events{/privacy}", "received_events_url": "https://api.github.com/users/Aktsvigun/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello @Aktsvigun \r\nTo run the examples you'll need to clone the transformer repo. All examples can be found in examples directory. You can find `run_lm_finetuning` here.\r\nrun_language_modeling.py\r\nhttps://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_language_modeling.py", "@patil-suraj thank you for your response,\r\nfeel a bit dummy, is the function name changed to run_language_modeling? 😹\r\nCannot find the name `run_lm_finetuning` on the page itself.\r\n<img width=\"1228\" alt=\"Снимок экрана 2020-06-04 в 16 57 00\" src=\"https://user-images.githubusercontent.com/36672861/83765910-6b0b0680-a684-11ea-8ff6-6ba29f833870.png\">\r\n\r\nBe safe,\r\nAkim\r\n", "yes, the filename `run_lm_finetuning` is changed to `run_language_modeling.py`", "Hi @Aktsvigun, the documentation you're linking is for `transformers` v1.2.0. If you want to run `v1.2.0` scripts, you should look at the [tag v1.2.0](https://github.com/huggingface/transformers/tree/1.2.0/examples).\r\n\r\nPlease note that the latest scripts are more stable. As @patil-suraj said, the script you're looking for was renamed, as you can see in the [current documentation](https://huggingface.co/transformers/examples.html)." ]
1,591
1,591
1,591
CONTRIBUTOR
null
Good afternoon, I'm trying to fine-tune RoBERTa on my own dataset, following the instruction provided here [https://huggingface.co/transformers/v1.2.0/examples.html](url). However, I cannot find the file `run_lm_finetuning.py` - could you please clarify, is the instruction valid or the file has really been deleted? Thanks in advance! Be safe, Akim
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4760/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4760/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4759
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4759/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4759/comments
https://api.github.com/repos/huggingface/transformers/issues/4759/events
https://github.com/huggingface/transformers/pull/4759
630,717,276
MDExOlB1bGxSZXF1ZXN0NDI3NzUxMDg2
4,759
Fix resize_token_embeddings for Transformer-XL
{ "login": "RafaelWO", "id": 38643099, "node_id": "MDQ6VXNlcjM4NjQzMDk5", "avatar_url": "https://avatars.githubusercontent.com/u/38643099?v=4", "gravatar_id": "", "url": "https://api.github.com/users/RafaelWO", "html_url": "https://github.com/RafaelWO", "followers_url": "https://api.github.com/users/RafaelWO/followers", "following_url": "https://api.github.com/users/RafaelWO/following{/other_user}", "gists_url": "https://api.github.com/users/RafaelWO/gists{/gist_id}", "starred_url": "https://api.github.com/users/RafaelWO/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RafaelWO/subscriptions", "organizations_url": "https://api.github.com/users/RafaelWO/orgs", "repos_url": "https://api.github.com/users/RafaelWO/repos", "events_url": "https://api.github.com/users/RafaelWO/events{/privacy}", "received_events_url": "https://api.github.com/users/RafaelWO/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4759?src=pr&el=h1) Report\n> Merging [#4759](https://codecov.io/gh/huggingface/transformers/pull/4759?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5bf9afbf351f9419505eb1c9e0c5ab78883c3caf&el=desc) will **decrease** coverage by `0.05%`.\n> The diff coverage is `95.65%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4759/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4759?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4759 +/- ##\n==========================================\n- Coverage 77.41% 77.35% -0.06% \n==========================================\n Files 128 128 \n Lines 21059 21105 +46 \n==========================================\n+ Hits 16302 16325 +23 \n- Misses 4757 4780 +23 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4759?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/4759/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `78.65% <95.65%> (+1.69%)` | :arrow_up: |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/4759/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.66% <0.00%> (-6.37%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4759/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.80% <0.00%> (-0.12%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4759?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4759?src=pr&el=footer). Last update [5bf9afb...e856841](https://codecov.io/gh/huggingface/transformers/pull/4759?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Thanks!\r\nYes sure, I was already thinking that a test could be useful here. I will try my best and add a test in [test_modeling_transfo_xl.py](https://github.com/huggingface/transformers/blob/master/tests/test_modeling_transfo_xl.py), correct? Do I have to add a separate class for it or include it in [TransfoXLModelTester](https://github.com/huggingface/transformers/blob/2b8b6c929e282958a920ba2aa26ee59106986ec3/tests/test_modeling_transfo_xl.py#L42) ?\r\n\r\nIs there any proper way to run/debug only one test so I can try out my test-code easily?", "I was thinking of putting it in `TransfoXLModelTest` under the name `test_resize_tokens_embeddings` so that it overrides the parent class' method. Alongside the method `test_model_from_pretrained`, do you see what I mean?\r\n\r\nYou can run the test suite for only this file using the following command (requires `pytest` and `pytest-cov` installed, which gets you the best stacktrace):\r\n\r\n```\r\npython -m pytest -sv ./tests/*modeling_transfo* --cov \r\n```\r\n\r\nLet me know if you need any help!", "Ok I added a test for the new `resize_token_embeddings` method.\r\n\r\nP.S. 
running `isort --recursive examples templates tests src utils` reformats the file `examples\\benchmarking\\plot_csv_file.py` for me. Should I add this change too or ignore it?", "@LysandreJik I'm happy to contribute :)\r\nI have another question: Since my implementation supports resizing embedding layers other than the first, the added tokens have to be moved in the tokenizer as well. I also have a solution for this which I added into the `TransfoXLTokenizer`. Should I open a separate issue and PR for this (or just PR) or add it here, as it's somehow related?", "@patrickvonplaten, since you've worked a bit with TransformerXL in the past, do you want to take a look before we merge?", "I like it - looks very clean to me!" ]
1,591
1,593
1,591
CONTRIBUTOR
null
Fixes #3554 As discussed in the issue above, the fix ensures that per default the last layer of the `AdaptiveEmbedding` is resized. Otherwise the target layer can be passed to the `resize_token_embeddings()` method as the parameter `layer`. After the resizing is done, the cutoffs are adjusted accordingly.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4759/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4759/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4759", "html_url": "https://github.com/huggingface/transformers/pull/4759", "diff_url": "https://github.com/huggingface/transformers/pull/4759.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4759.patch", "merged_at": 1591830187000 }
https://api.github.com/repos/huggingface/transformers/issues/4758
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4758/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4758/comments
https://api.github.com/repos/huggingface/transformers/issues/4758/events
https://github.com/huggingface/transformers/issues/4758
630,697,907
MDU6SXNzdWU2MzA2OTc5MDc=
4,758
run_tf_ner.py output_dir/saved_model empty
{ "login": "jx669", "id": 12667589, "node_id": "MDQ6VXNlcjEyNjY3NTg5", "avatar_url": "https://avatars.githubusercontent.com/u/12667589?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jx669", "html_url": "https://github.com/jx669", "followers_url": "https://api.github.com/users/jx669/followers", "following_url": "https://api.github.com/users/jx669/following{/other_user}", "gists_url": "https://api.github.com/users/jx669/gists{/gist_id}", "starred_url": "https://api.github.com/users/jx669/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jx669/subscriptions", "organizations_url": "https://api.github.com/users/jx669/orgs", "repos_url": "https://api.github.com/users/jx669/repos", "events_url": "https://api.github.com/users/jx669/events{/privacy}", "received_events_url": "https://api.github.com/users/jx669/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I got my answers here: https://github.com/huggingface/transformers/issues/3246\r\nso I am closing this issue." ]
1,591
1,591
1,591
NONE
null
# ❓ Questions & Help https://github.com/huggingface/transformers/tree/master/examples/token-classification I followed the example here and saw the logging message: ` Saving model in model/saved_model` Then I went to the folder saved_model and found it is empty. Is this expected? <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. Make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers If your question wasn't answered after a period of time on Stack Overflow, you can always open a question on GitHub. You should then link to the SO question that you posted. --> ## Details <!-- Description of your issue --> <!-- You should first ask your question on SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on Stack Overflow**:
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4758/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4758/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4757
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4757/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4757/comments
https://api.github.com/repos/huggingface/transformers/issues/4757/events
https://github.com/huggingface/transformers/pull/4757
630,522,943
MDExOlB1bGxSZXF1ZXN0NDI3NTk4MTc4
4,757
Add drop_last arg for data loader
{ "login": "setu4993", "id": 1833708, "node_id": "MDQ6VXNlcjE4MzM3MDg=", "avatar_url": "https://avatars.githubusercontent.com/u/1833708?v=4", "gravatar_id": "", "url": "https://api.github.com/users/setu4993", "html_url": "https://github.com/setu4993", "followers_url": "https://api.github.com/users/setu4993/followers", "following_url": "https://api.github.com/users/setu4993/following{/other_user}", "gists_url": "https://api.github.com/users/setu4993/gists{/gist_id}", "starred_url": "https://api.github.com/users/setu4993/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/setu4993/subscriptions", "organizations_url": "https://api.github.com/users/setu4993/orgs", "repos_url": "https://api.github.com/users/setu4993/repos", "events_url": "https://api.github.com/users/setu4993/events{/privacy}", "received_events_url": "https://api.github.com/users/setu4993/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi! This looks like a cool feature to add, indeed. I'm curious, what was the error you obtained because of an incomplete batch? It shouldn't raise an error if a batch is smaller than what it has previously seen.\r\n\r\nI could see it being useful when the framework needs to trace with a given input size though, like with TPUs or with JAX.", "Hey @LysandreJik, thanks for taking a look!\r\n\r\nThis error occurred on the last step of the epoch:\r\n`RuntimeError: Gather got an input of invalid size: got [2, 1, 20, 256, 64], but expected [2, 2, 20, 256, 64] (gather at /AWS-PyTorch/torch/csrc/cuda/comm.cpp:231)`\r\n\r\nBecause of the nature of the error and it occurring on the last step, my suspicion was it was because of `drop_last`. I implemented a workaround for it and that stopped the error from re-appearing.", "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4757?src=pr&el=h1) Report\n> Merging [#4757](https://codecov.io/gh/huggingface/transformers/pull/4757?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5bf9afbf351f9419505eb1c9e0c5ab78883c3caf&el=desc) will **decrease** coverage by `0.07%`.\n> The diff coverage is `100.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4757/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4757?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4757 +/- ##\n==========================================\n- Coverage 77.41% 77.34% -0.08% \n==========================================\n Files 128 128 \n Lines 21059 21060 +1 \n==========================================\n- Hits 16302 16288 -14 \n- Misses 4757 4772 +15 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4757?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4757/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.62% <ø> (ø)` | |\n| [src/transformers/training\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/4757/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `76.66% <100.00%> (+0.26%)` | :arrow_up: |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/4757/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.66% <0.00%> (-6.37%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4757/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.80% <0.00%> (-0.12%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4757/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.96% <0.00%> (+0.32%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4757/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.03% <0.00%> (+1.35%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4757?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4757?src=pr&el=footer). Last update [5bf9afb...9a8fb7a](https://codecov.io/gh/huggingface/transformers/pull/4757?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "I'm assuming you use GPU distribution – what method do you use for distribution? `nn.DataParallel`? ", "@julien-c: Correct, it is using `nn.DataParallel` under the hood.", "It should work out of the box with torch.distributed instead of nn.DataParallel.\r\n\r\nI have no objection to merging this though :)", "Hmmm, this was on AWS SageMaker, so I'll double-check how it is implemented there.\r\n\r\nGood recommendation on the change. Also, another question: Should the arg be separate for train and eval data loaders? I assumed not, but just wanted to confirm :).", "No I can't think of a scenario where one would want to drop_last in train and not in eval (or inversely)\r\n\r\nThank you, merging" ]
1,591
1,591
1,591
CONTRIBUTOR
null
Add an extra argument to `TrainingArguments` that would be passed on to `Trainer` for use in DataLoader. I ran into a problem while using the `Trainer` this week and the GPU expecting the full batch size of vector inputs, and put a workaround in place in the dataset class I was using, but would be useful to have this as an optional argument.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4757/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4757/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4757", "html_url": "https://github.com/huggingface/transformers/pull/4757", "diff_url": "https://github.com/huggingface/transformers/pull/4757.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4757.patch", "merged_at": 1591309832000 }
https://api.github.com/repos/huggingface/transformers/issues/4756
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4756/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4756/comments
https://api.github.com/repos/huggingface/transformers/issues/4756/events
https://github.com/huggingface/transformers/pull/4756
630,443,053
MDExOlB1bGxSZXF1ZXN0NDI3NTM3Mzcx
4,756
[WIP] feat(wandb): add logging to TFTrainer
{ "login": "borisdayma", "id": 715491, "node_id": "MDQ6VXNlcjcxNTQ5MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/715491?v=4", "gravatar_id": "", "url": "https://api.github.com/users/borisdayma", "html_url": "https://github.com/borisdayma", "followers_url": "https://api.github.com/users/borisdayma/followers", "following_url": "https://api.github.com/users/borisdayma/following{/other_user}", "gists_url": "https://api.github.com/users/borisdayma/gists{/gist_id}", "starred_url": "https://api.github.com/users/borisdayma/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/borisdayma/subscriptions", "organizations_url": "https://api.github.com/users/borisdayma/orgs", "repos_url": "https://api.github.com/users/borisdayma/repos", "events_url": "https://api.github.com/users/borisdayma/events{/privacy}", "received_events_url": "https://api.github.com/users/borisdayma/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Note that there are still a few differences.\r\n\r\nFor example `TFTrainer` uses `args.eval_steps`.\r\nIt could make sense to refactor training args so both classes share the same ones when possible.", "Here is an example run with `TFTrainer` on MRPC.\r\n\r\n![image](https://user-images.githubusercontent.com/715491/83705691-3db64e00-a5db-11ea-93d1-c51a83559997.png)\r\n\r\n[Link to W&B run](https://app.wandb.ai/borisd13/huggingface/runs/1zngxzw0?workspace=user-borisd13)", "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4756?src=pr&el=h1) Report\n> Merging [#4756](https://codecov.io/gh/huggingface/transformers/pull/4756?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e80d6c689bd62f805a5c8d77ec0cc3b09f240d14&el=desc) will **decrease** coverage by `0.00%`.\n> The diff coverage is `39.24%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4756/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4756?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4756 +/- ##\n==========================================\n- Coverage 77.10% 77.09% -0.01% \n==========================================\n Files 128 128 \n Lines 21723 21734 +11 \n==========================================\n+ Hits 16749 16756 +7 \n- Misses 4974 4978 +4 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4756?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/4756/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `19.06% <9.67%> (+0.02%)` | :arrow_up: |\n| [src/transformers/trainer\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4756/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `69.76% <56.66%> (-30.24%)` | :arrow_down: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4756/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.10% <61.11%> (-0.15%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4756/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.38% <0.00%> (-0.41%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4756?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4756?src=pr&el=footer). Last update [e80d6c6...28342e9](https://codecov.io/gh/huggingface/transformers/pull/4756?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "@jplu while I'm doing it, is the `step` variable from `_prediction_loop` used or can I remove it?", "It is used for logging.", "Ping @julien-c as there are some changes in the PT trainer.", "> It is used for logging.\r\n\r\nOk, I don't see it being used anywhere in that function…", "Hummm indeed... Can you still keep it, I will review this later to better check that part :)", "I should have addressed your comments. 
I moved tensorboard specific logging directly within specific trainers and call it from shared `log_metrics`.\r\nIt could have also been the opposite with a public `trainer.log` method that calls `log_metrics` for everything non-tensorboard (wandb and stdout) but I thought this separation would be less obvious to users.\r\n\r\nLet me know if you have any other suggestions.\r\n\r\nThe main differences now between the 2 trainers are the use of `args.debug` and `args.eval_steps` in `TFTrainer`.", "Let me know if anything else is needed", "The last CI error seems unrelated to this PR.", "Ok it is fine for me for the TF part. I let @julien-c to review.", "Can I have any feedback on any possible changes still required?\r\nThis is a pretty large refactor of logging so it's hard to keep up to date with the repo ;)", "I merged master. Main change is logging for wandb & Tensorboard is applied only for world master now.\r\nNeed to figure out if the same should apply for `TFTrainer`. I left a comment on the code.\r\n\r\nOnce we finalize this, I'll run again logging with both `Trainer` and `TFTrainer` to make sure everything works and later I'll work on tests in a follow-up PR.", "Thanks! Looks ok, about the world master there is no need for the TF part.", "Based on above comments, I can just put all the wandb logic separately in their respective Trainer's (and not use `trainer_utils` anymore).\r\n\r\nCould you confirm this is the way to go @jplu @julien-c ", "I made a new PR as I understood you prefer not to refactor logging into a single file like I did here.\r\nFeel free to close this one if my understanding was correct.", "Obsolete PR" ]
1,591
1,592
1,592
CONTRIBUTOR
null
Bring logging feature parity from `Trainer` to `TFTrainer`. Code has been refactored to share logging utilities.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4756/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4756/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4756", "html_url": "https://github.com/huggingface/transformers/pull/4756", "diff_url": "https://github.com/huggingface/transformers/pull/4756.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4756.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/4755
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4755/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4755/comments
https://api.github.com/repos/huggingface/transformers/issues/4755/events
https://github.com/huggingface/transformers/issues/4755
630,428,606
MDU6SXNzdWU2MzA0Mjg2MDY=
4,755
run_ner.py crashes with RoBERTa because of incorrect sequence length
{ "login": "oadams", "id": 1115622, "node_id": "MDQ6VXNlcjExMTU2MjI=", "avatar_url": "https://avatars.githubusercontent.com/u/1115622?v=4", "gravatar_id": "", "url": "https://api.github.com/users/oadams", "html_url": "https://github.com/oadams", "followers_url": "https://api.github.com/users/oadams/followers", "following_url": "https://api.github.com/users/oadams/following{/other_user}", "gists_url": "https://api.github.com/users/oadams/gists{/gist_id}", "starred_url": "https://api.github.com/users/oadams/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/oadams/subscriptions", "organizations_url": "https://api.github.com/users/oadams/orgs", "repos_url": "https://api.github.com/users/oadams/repos", "events_url": "https://api.github.com/users/oadams/events{/privacy}", "received_events_url": "https://api.github.com/users/oadams/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "stefan-it", "id": 20651387, "node_id": "MDQ6VXNlcjIwNjUxMzg3", "avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stefan-it", "html_url": "https://github.com/stefan-it", "followers_url": "https://api.github.com/users/stefan-it/followers", "following_url": "https://api.github.com/users/stefan-it/following{/other_user}", "gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}", "starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions", "organizations_url": "https://api.github.com/users/stefan-it/orgs", "repos_url": "https://api.github.com/users/stefan-it/repos", "events_url": "https://api.github.com/users/stefan-it/events{/privacy}", "received_events_url": "https://api.github.com/users/stefan-it/received_events", "type": "User", "site_admin": false }
[ { "login": "stefan-it", "id": 20651387, "node_id": "MDQ6VXNlcjIwNjUxMzg3", "avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stefan-it", "html_url": "https://github.com/stefan-it", "followers_url": "https://api.github.com/users/stefan-it/followers", "following_url": "https://api.github.com/users/stefan-it/following{/other_user}", "gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}", "starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions", "organizations_url": "https://api.github.com/users/stefan-it/orgs", "repos_url": "https://api.github.com/users/stefan-it/repos", "events_url": "https://api.github.com/users/stefan-it/events{/privacy}", "received_events_url": "https://api.github.com/users/stefan-it/received_events", "type": "User", "site_admin": false } ]
[ "getting same error. any solution yet", "The quick temp fix would be to change the num_special_tokens_to_add() to be 3 instead OR not write the extra sep tag since it's single sequence tagging.\r\n\r\nI'm guessing the second option is more appropriate because the code that adds the extra sep tag is in utils_ner.py, and as far as I'm aware NER should never involve more than one sep tag.", "I will prepare a fix for that soon!" ]
1,591
1,592
1,592
NONE
null
# 🐛 Bug ## Information I'm running `examples/token-classification/run_ner.py` with RoBERTa. An assert statement fails: ``` assert len(input_ids) == max_seq_length AssertionError ``` Looks like the cause is a mismatch between the value of roberta's `tokenizer.num_special_tokens_to_add()` and the number of special tokens that is actually added to the sequence in `utils_ner.py::convert_examples_to_features()`. Specifically, `tokenizer.num_special_tokens_to_add()` is 2 (presumably for `<s>` and `</s>`). However, `convert_examples_to_features()` adds an extra `</s>` token at line 331, in addition to the `<s>` token and first `</s>` token. So the result is that there are three special tokens, and the sequence ends with `</s> </s>`. `convert_examples_to_features()` relies on `num_special_tokens_to_add()` to determine how many content tokens from the sequence to use, but because of the mismatch above, you can end up with a sequence length of 129 even when the sequence length was set to a max of 128. To reproduce this, just follow the instructions at `examples/token-classification/README.md` except use the flag `--model_name_or_path roberta-base`. The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) ## Environment info - `transformers` version: 2.11.0 - Platform: Linux-4.15.0-101-generic-x86_64-with-debian-buster-sid - Python version: 3.6.10 - PyTorch version (GPU?): 1.5.0 (True) - Tensorflow version (GPU?): 2.2.0 (False) - Using GPU in script?: fails in both cases - Using distributed or parallel set-up in script?: no
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4755/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4755/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4754
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4754/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4754/comments
https://api.github.com/repos/huggingface/transformers/issues/4754/events
https://github.com/huggingface/transformers/issues/4754
630,416,550
MDU6SXNzdWU2MzA0MTY1NTA=
4,754
bart-large-cnn model weights updated?
{ "login": "lzmax888", "id": 55983394, "node_id": "MDQ6VXNlcjU1OTgzMzk0", "avatar_url": "https://avatars.githubusercontent.com/u/55983394?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lzmax888", "html_url": "https://github.com/lzmax888", "followers_url": "https://api.github.com/users/lzmax888/followers", "following_url": "https://api.github.com/users/lzmax888/following{/other_user}", "gists_url": "https://api.github.com/users/lzmax888/gists{/gist_id}", "starred_url": "https://api.github.com/users/lzmax888/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lzmax888/subscriptions", "organizations_url": "https://api.github.com/users/lzmax888/orgs", "repos_url": "https://api.github.com/users/lzmax888/repos", "events_url": "https://api.github.com/users/lzmax888/events{/privacy}", "received_events_url": "https://api.github.com/users/lzmax888/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello! The weights have not been updated. The path has been changed in the last version, as you can see in the [release notes](https://github.com/huggingface/transformers/releases/tag/v2.11.0) (cc @julien-c), but no change was done to these weights.\r\n" ]
1,591
1,591
1,591
NONE
null
Hi, Today, I noticed that the bart model path has been changed to 'facebook/bart-large-cnn' rather than 'bart-large-cnn'. And I run the demo below, but the result changed since then. (seems worse) https://colab.research.google.com/drive/11hKBPfsfBXPKo-dK_gHsPklF4PcNflQZ#scrollTo=dyTJ_ZavDp1q So, is that weights updated? Thanks, Max
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4754/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4754/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4753
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4753/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4753/comments
https://api.github.com/repos/huggingface/transformers/issues/4753/events
https://github.com/huggingface/transformers/issues/4753
630,401,864
MDU6SXNzdWU2MzA0MDE4NjQ=
4,753
Can I use TorchText Iterator output as the input_ids for Hugging Face Transformer?
{ "login": "h56cho", "id": 52889259, "node_id": "MDQ6VXNlcjUyODg5MjU5", "avatar_url": "https://avatars.githubusercontent.com/u/52889259?v=4", "gravatar_id": "", "url": "https://api.github.com/users/h56cho", "html_url": "https://github.com/h56cho", "followers_url": "https://api.github.com/users/h56cho/followers", "following_url": "https://api.github.com/users/h56cho/following{/other_user}", "gists_url": "https://api.github.com/users/h56cho/gists{/gist_id}", "starred_url": "https://api.github.com/users/h56cho/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/h56cho/subscriptions", "organizations_url": "https://api.github.com/users/h56cho/orgs", "repos_url": "https://api.github.com/users/h56cho/repos", "events_url": "https://api.github.com/users/h56cho/events{/privacy}", "received_events_url": "https://api.github.com/users/h56cho/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,591
1,597
1,597
NONE
null
Hello, For a given text, `TorchText` iterator, such as the `BPTTIterator`, returns the string of the text after each token is converted to their respective integer ID. So for example, if the integer ids are assigned as the following manner: "I" = 53, "like"=753, "dogs" = 2 Then for the string "I like dogs", a `TorchText` iterator would return `[53, 753, 2]`. Is it okay to use this type of TorchText Iterator output directly as an `input_ids` for Hugging Face Transformers, providing that the Transformer models I use is not the Hugging Face pre-trained ones? Thank you,
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4753/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4753/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4752
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4752/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4752/comments
https://api.github.com/repos/huggingface/transformers/issues/4752/events
https://github.com/huggingface/transformers/issues/4752
630,350,336
MDU6SXNzdWU2MzAzNTAzMzY=
4,752
Batching not speeding up Transformer-XL
{ "login": "tommccoy1", "id": 19821261, "node_id": "MDQ6VXNlcjE5ODIxMjYx", "avatar_url": "https://avatars.githubusercontent.com/u/19821261?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tommccoy1", "html_url": "https://github.com/tommccoy1", "followers_url": "https://api.github.com/users/tommccoy1/followers", "following_url": "https://api.github.com/users/tommccoy1/following{/other_user}", "gists_url": "https://api.github.com/users/tommccoy1/gists{/gist_id}", "starred_url": "https://api.github.com/users/tommccoy1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tommccoy1/subscriptions", "organizations_url": "https://api.github.com/users/tommccoy1/orgs", "repos_url": "https://api.github.com/users/tommccoy1/repos", "events_url": "https://api.github.com/users/tommccoy1/events{/privacy}", "received_events_url": "https://api.github.com/users/tommccoy1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey! I've also observed this with the CMU Transformer-XL codebase. The main difference with other Transformers is the adaptive softmax, so that's the first I'd look at; does an XL model with a normal projection layer also have problems with batching ?\r\n\r\nI was actually planning to investigate a suspected bug in HF's Transformer-XL training performance tomorrow morning, so if it's not urgent I can also take a look at that at the same time.", "It's encouraging to hear that someone else has observed this! Thanks for the suggestion - I just tried turning off the adaptive softmax (by changing the line `model = model_class.from_pretrained(args.model_name_or_path)` to `model = model_class.from_pretrained(args.model_name_or_path, adaptive=False)`), but that did not change the runtimes.\r\n\r\nIt's not urgent, so it would be much appreciated if you can take a look!", "So here are my observations for now, running on my laptop's RTX 2070 (transformers 2.11.0, torch 1.5.0, python 3.6.9, CUDA 10.2, no mixed precision) at training time for that other bug hunt:\r\n\r\n - passing `adaptive=False` does not actually do anything as far as I can tell, the `adaptive` attribute of `config` isn't used anywhere\r\n - at training time, the XL model with adaptive softmax seems to be both quicker and more batch-friendly than GPT 2 and an XL model with a normal Linear projection layer.\r\n\r\n| batch size | Adaptive XL | Linear XL | GPT-2 |\r\n| --- | --- | --- | --- |\r\n| 1 | 33.27 it/s | 29.16 it/s | 35.06 it/s |\r\n| 2 | 31.06 it/s | 19.93 it/s | 24.86 it/s |\r\n| 4 | 29.30 it/s | 13.63 it/s | 14.87 it/s |\r\n| 8 | 23.03 it/s | 7.85 it/s | 8.49 it/s |\r\n\r\nSo that's pretty strange. What is your version of transformers ? I'll be looking at inference time now, as it may be different from training to inference. EDIT: also the case for me at inference time\r\n| batch size | Adaptive XL | Linear XL | GPT-2 |\r\n| --- | --- | --- | --- |\r\n| 1 | 286.92 it/s | 197.25 it/s | 216.45 it/s |\r\n| 2 | 264.54 it/s | 102.02 it/s | 109.74 it/s |\r\n| 4 | 214.71 it/s | 56.27 it/s | 59.91 it/s |\r\n| 8 | 148.69 it/s | 30.35 it/s | 31.97 it/s |\r\n\r\nAnother lead is the einsum function; it's used in transformer-XL but doesn't look like it is used in GPT-2, and I know that it can behave poorly sometimes especially in mixed-precision settings. Are you using apex?", "Interesting! \r\n\r\nI'm using transformers 2.10.0, and am not using apex.\r\n\r\nIf you're able to share the code you were using for inference time, that would be helpful, so I can try it & see if it's my code or my environment that's giving us different results.", "I cleaned up the code a bit and uploaded it on [Google Drive](https://drive.google.com/file/d/1dpHwVdAcchb87ZOXoAi_qP5wmqTNEHtS/view?usp=sharing). It uses lightning and operates on real wt103 data (included in the zip) so it's not quite minimal though.\r\n\r\nAnother (more remote) possibility is an issue in batching, after looking again at my dataloader code it was a bit more complex than usual to support transfoXL memories.", "Thanks for the code! The main difference I see between your code and mine is that I am using the ```generate``` function, whereas you don't. 
After looking into the ```generate``` function for Transformer-XL, I believe I have found a bug.\r\n\r\nHere is code that uses greedy generation without the ```generate``` function:\r\n```\r\nfrom transformers import TransfoXLLMHeadModel, TransfoXLTokenizer\r\nimport torch\r\n\r\ntokenizer = TransfoXLTokenizer.from_pretrained(\"transfo-xl-wt103\")\r\nmodel = TransfoXLLMHeadModel.from_pretrained('transfo-xl-wt103')\r\n\r\ngenerated = tokenizer.encode(\"The Manhattan Bridge\")\r\ncontext = torch.tensor([generated])\r\nmems = None\r\n\r\nfor i in range(100):\r\n print(i)\r\n output, mems = model(context, mems=mems)[:2]\r\n token = torch.argmax(output[..., -1, :])\r\n\r\n generated += [token.tolist()]\r\n context = token.unsqueeze(0).unsqueeze(0)\r\n\r\nsequence = tokenizer.decode(generated)\r\n\r\nprint(sequence)\r\n```\r\n\r\nThis generates the following text:\r\n\r\n> The Manhattan Bridge, <eos> <eos> = = = = The Bridge = = = = <eos> <eos> The bridge over the Delaware River was built in the late 19th century by the Delaware and Hudson Canal Company. The bridge was built in the style of a drawbridge, with a single span of 1 @,@ 200 feet ( 370 m ). The bridge was designed by John Roebling, who also designed the Delaware River Bridge. The bridge was built in the style of a drawbridge, with a single span of 1 @,@ 200 feet ( 370 m\r\n\r\nThe code below should also generate the same text, just using the ```generate``` function:\r\n\r\n```\r\nfrom transformers import TransfoXLLMHeadModel, TransfoXLTokenizer\r\nimport torch\r\n\r\ntokenizer = TransfoXLTokenizer.from_pretrained(\"transfo-xl-wt103\")\r\nmodel = TransfoXLLMHeadModel.from_pretrained('transfo-xl-wt103')\r\nmodel.to(\"cuda\")\r\n\r\ngenerated = tokenizer.encode(\"The Manhattan Bridge\")\r\ncontext = torch.tensor([generated]).to(\"cuda\")\r\nmems = None\r\n\r\nprint(context)\r\n\r\noutput_sequences = model.generate(\r\n input_ids=context,\r\n max_length=100 + len(generated),\r\n min_length=100 + len(generated),\r\n eos_token_id=267734,\r\n #temperature=1.0,\r\n #top_k=1,\r\n #top_p=1.0,\r\n #do_sample=True,\r\n #num_return_sequences=1,\r\n)\r\n\r\nsequence = tokenizer.decode(output_sequences[0])\r\n\r\nprint(sequence)\r\n```\r\nHowever, it does not give the same output; instead, it generates:\r\n> The Manhattan Bridge, <eos> the Brooklyn Bridge, <eos> the Brooklyn Bridge, <eos> the Brooklyn Bridge, <eos> the Brooklyn Bridge, <eos> the Brooklyn Bridge, <eos> the Brooklyn Bridge, <eos> the Brooklyn Bridge, <eos> the Brooklyn Bridge, <eos> <eos> = = = The Manhattan Bridge, <eos> <eos> =, \" the <eos> <eos> <eos> the <eos> the the.. the <eos>, <eos>, The, <eos> The The, <eos> The New York Bridge, <eos> is a double @-@ A @-@ A @-@ The Manhattan Bridge, <eos> the Brooklyn Bridge,\r\n\r\nI was able to fix the discrepancy by changing the ```prepare_inputs_for_generation``` function of Transformer-XL to the code below (similar to the code used for that function in GPT-2):\r\n\r\n```\r\n def prepare_inputs_for_generation(self, input_ids, past, **model_kwargs):\r\n inputs = {}\r\n\r\n # if past is defined in model kwargs then use it for faster decoding\r\n if past:\r\n inputs[\"mems\"] = past\r\n inputs[\"input_ids\"] = input_ids[:, -1].unsqueeze(-1)\r\n else:\r\n inputs[\"input_ids\"] = input_ids\r\n\r\n return inputs\r\n```\r\n\r\nWith this code, the ```generate``` function gives the same output as a for-loop. 
In addition, this also speeds up generation substantially: My use case is generating 500-token text from 512-token prompts, and that now takes about 30 seconds per prompt, while previously it was 3 minutes per prompt. Batching also is now more helpful than before - still not as helpful as I would expect, but that doesn't matter because it's now fast enough to be perfectly useful for me.\r\n\r\nI've made a draft pull request here: https://github.com/huggingface/transformers/pull/4826. But I'm not sure if it's ready to be submitted (I've never submitted a pull request before): some of the tests in ```make test``` fail, and I'm not sure what is required for step 5 of the pull request checklist (\"Add high-coverage tests.\").\r\n\r\n\r\n\r\n \r\n", "Fixed by #4826" ]
1,591
1,593
1,593
CONTRIBUTOR
null
I have modified the example `[run_generation.py](https://github.com/huggingface/transformers/blob/master/examples/text-generation/run_generation.py)` so that it can use batches. My code (pared down for the example) is below, called `batch_gen.py`: ``` #!/usr/bin/env python3 # coding=utf-8 import argparse import logging import numpy as np import torch from transformers import ( GPT2LMHeadModel, GPT2Tokenizer, TransfoXLLMHeadModel, TransfoXLTokenizer, ) logging.basicConfig( format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", datefmt="%m/%d/%Y %H:%M:%S", level=logging.INFO, ) logger = logging.getLogger(__name__) MAX_LENGTH = int(10000) # Hardcoded max length to avoid infinite loop MODEL_CLASSES = { "gpt2": (GPT2LMHeadModel, GPT2Tokenizer), "transfo-xl": (TransfoXLLMHeadModel, TransfoXLTokenizer), } # Convert a list of prompts (strings) into batches (lists of strings, # where each list is of size batch_size). The final batch might be # smaller than batch_size def batchify_prompts(prompt_list, batch_size): batches = [] this_batch = [] for prompt in prompt_list: this_batch.append(prompt) if len(this_batch) == batch_size: batches.append(this_batch[:]) this_batch = [] if len(this_batch) > 0: batches.append(this_batch) return batches parser = argparse.ArgumentParser() parser.add_argument("--model_type",default=None,type=str,required=True,help="Model type selected in the list: " + ", ".join(MODEL_CLASSES.keys()),) parser.add_argument("--model_name_or_path",default=None,type=str,required=True,help="Path to pre-trained model or shortcut name selected in the list: " + ", ".join(MODEL_CLASSES.keys()),) parser.add_argument("--length", type=int, default=20) parser.add_argument("--prompt_file", type=str, default=None, help="File of prompts, 1 prompt per line.") parser.add_argument("--batch_size", type=int, default=10, help="Number of prompts to include in a batch.") args = parser.parse_args() args.device = torch.device("cuda" if torch.cuda.is_available() else "cpu") args.n_gpu = torch.cuda.device_count() # Create file to print to output_filename = "_".join([str(x) for x in [args.model_type, args.prompt_file.split("/")[-1]]]) + ".generated" fo = open(output_filename, "w", encoding="utf-8") args.model_type = args.model_type.lower() model_class, tokenizer_class = MODEL_CLASSES[args.model_type] tokenizer = tokenizer_class.from_pretrained(args.model_name_or_path) model = model_class.from_pretrained(args.model_name_or_path) model.to(args.device) # Read in prompts from file prompt_file = open(args.prompt_file, "r", encoding="utf-8") prompt_list = [] for prompt_line in prompt_file: prompt_list.append(prompt_line); prompt_batches = batchify_prompts(prompt_list, args.batch_size) # Generate text for each prompt for prompt_batch in prompt_batches: tokenizer.pad_token = "<PADDINGTOKEN>" tokenizer.padding_side = "left" encoding = tokenizer.batch_encode_plus(prompt_batch, max_length=seq_len, pad_to_max_length=True) is replaced below with the actual code from the report: encodings_dict = tokenizer.batch_encode_plus(prompt_batch, add_special_tokens=False, return_tensors="pt", pad_to_max_length=True, add_space_before_punct_symbol=True) encoded_prompt = encoding["input_ids"] # Attention mask is not automatically returned by batch_encode_plus, so here we generate it manually attention_mask = 1 - (encoded_prompt == tokenizer.pad_token_id).type(torch.LongTensor) encoded_prompt = encoded_prompt.to(args.device) if encoded_prompt.size()[-1] == 0: input_ids = None else: input_ids = encoded_prompt output_sequences = model.generate( input_ids=input_ids, max_length=50 + len(encoded_prompt[0]), min_length=50 + len(encoded_prompt[0]), temperature=1.0, top_k=40, top_p=1, repetition_penalty=1.0, do_sample=True, num_return_sequences=1, attention_mask=attn_mask, ) # Write the generations to the output file for generated_sequence_idx, generated_sequence in enumerate(output_sequences): fo.write("=== PROMPT ===\n") generated_sequence = generated_sequence.tolist() # Decode text text = tokenizer.decode(generated_sequence, clean_up_tokenization_spaces=True) # Add the prompt at the beginning of the sequence. Remove the excess text that was used for pre-processing generated_sequence = ( text[len(tokenizer.decode(encoded_prompt[0], clean_up_tokenization_spaces=True)) :] ) fo.write(prompt_batch[generated_sequence_idx] + "\n=== GENERATED ===\n") fo.write(generated_sequence + "\n\n") ``` To test the speedup provided by batching, I use a text file called `prompts.txt` with the following prompts: ``` The accompanying music video , directed by Vaughan Arnell , Inspired by the Beach Boys , cult surfing films , Premiering worldwide on Vevo on 7 January 2013 , the The video features scenes reminiscent of the films South Pacific The music video garnered 10 @.@ 4 million views in Despite a 34 % gain in weekly activity to their 191 @,@ 000 Twitter followers added contributed to their overall Rebecca <unk> of E ! Online praised its " intentionally Molly Chance , writing for Zap2it , was convinced that Mikael Wood , the critic for Los Angeles Times , It is said that when he died in Osaka during A variety of styles have been used in efforts to As Burton Watson remarks in The Selected Poems of Du The translators have had to contend with bringing out One extreme on each issue is represented by Kenneth Rexroth His are free translations , which seek to conceal the <unk> Other translators have placed much greater weight on trying to Vikram Seth in Three Chinese Poets uses English @-@ style In The Selected Poems of Du Fu , Burton Watson follows the Traditional Chinese literary criticism emphasized the life of the author Since many of Du Fu 's poems feature morality and Another reason , identified by the Chinese historian William Hung For modern Western readers , " The less accurately we Stephen Owen suggests a third factor particular to Du Fu Most of what is known of Du Fu 's life His paternal grandfather was Du <unk> , a noted politician Du Fu was born in 712 ; the exact birthplace In later life , he considered himself to belong to He also had three half brothers and one half sister The son of a minor scholar @-@ official , his ``` The following command is used to run the code with GPT-2: ``` python batch_gen.py --model_type=gpt2 --model_name_or_path=gpt2 --prompt_file prompts.txt --batch_size 10 ``` With GPT-2, batching speeds up the runtime as expected: Each batch takes approximately 1 second, regardless of whether the batch size is 1, 5, or 10. However, with Transformer-XL, this is not the case. Here is the command to run with Transformer-XL: ``` python batch_gen.py --model_type=transfo-xl --model_name_or_path=transfo-xl-wt103 --prompt_file prompts.txt --batch_size 1 ``` With a batch size of 1, each batch takes 3 seconds. With a batch size of 5, each batch takes 12 seconds. With a batch size of 10, each batch takes 21 seconds. Thus, batching is not providing much of a speedup compared to generating examples serially. (You can see the amount of time each batch takes by looking at the time stamps on the log messages that are printed out). Therefore, I am wondering if there is a bug in the batching for Transformer-XL? Or is there some reason why the architecture cannot support efficient batching? I am running this code on a p100 GPU through Ubuntu version 18.04 with PyTorch version 1.5.0 and Python version 3.7.7. Thank you!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4752/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4752/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4751
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4751/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4751/comments
https://api.github.com/repos/huggingface/transformers/issues/4751/events
https://github.com/huggingface/transformers/pull/4751
630,305,514
MDExOlB1bGxSZXF1ZXN0NDI3NDM2NzI2
4,751
Update encode documentation
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,591
1,591
1,591
MEMBER
null
closes #4750
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4751/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4751/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4751", "html_url": "https://github.com/huggingface/transformers/pull/4751", "diff_url": "https://github.com/huggingface/transformers/pull/4751.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4751.patch", "merged_at": 1591216260000 }
https://api.github.com/repos/huggingface/transformers/issues/4750
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4750/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4750/comments
https://api.github.com/repos/huggingface/transformers/issues/4750/events
https://github.com/huggingface/transformers/issues/4750
630,291,101
MDU6SXNzdWU2MzAyOTExMDE=
4,750
Tokenizer.encode documentation not correct
{ "login": "lubok-dot", "id": 25525113, "node_id": "MDQ6VXNlcjI1NTI1MTEz", "avatar_url": "https://avatars.githubusercontent.com/u/25525113?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lubok-dot", "html_url": "https://github.com/lubok-dot", "followers_url": "https://api.github.com/users/lubok-dot/followers", "following_url": "https://api.github.com/users/lubok-dot/following{/other_user}", "gists_url": "https://api.github.com/users/lubok-dot/gists{/gist_id}", "starred_url": "https://api.github.com/users/lubok-dot/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lubok-dot/subscriptions", "organizations_url": "https://api.github.com/users/lubok-dot/orgs", "repos_url": "https://api.github.com/users/lubok-dot/repos", "events_url": "https://api.github.com/users/lubok-dot/events{/privacy}", "received_events_url": "https://api.github.com/users/lubok-dot/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "You're right! The documentation should be updated.", "Done! Thanks for raising an issue :)", "You are welcome. Thank you for the transformers package -- great work!", "Hi, I was checking the documentation page however, could not find the documentation for encode and encode_plus. \r\n\r\n[Documentation Link](https://huggingface.co/transformers/main_classes/tokenizer.html#transformers.PreTrainedTokenizer.encode)\r\n\r\nCan someone point me to the right documentation for these methods?", "Hi, I think, in the newer version of the transformer package, the encode_plus method has been consumed by the __call__ method which yields (as batch_encode_plus, and encode_plus) a BatchEncoding object. The preamble of the Tokenizer documentation contains this information. ", "@ank-shukla the response given by @lubok-dot is correct, the `__call__` method should be used instead of `encode` and `encode_plus` in newer versions. If you're on an older version or would still like to use said methods, please check an older version of the documentation, for example [v2.11.0](https://huggingface.co/transformers/v2.11.0/main_classes/tokenizer.html#transformers.PreTrainedTokenizer.encode_plus)", "@lubok-dot @LysandreJik Thanks for the clarification. I am on the new version and the said methods encode and encode_plus still work as expected. I understand now that __call__ method can carry out the same process based on the parameters." ]
1,591
1,598
1,591
NONE
null
# 🐛 Bug ## Information Model I am using Bert (bert-base-german-cased): Language I am using the model on German: The problem arises when using: * tokenizer of this model The tasks I am working on is: * encoding ## To reproduce Steps to reproduce the behavior: 1. lang_model = 'bert-base-german-cased' 2. tokenizer = BertTokenizer.from_pretrained(lang_model) 3. test_sentence = 'Das war gut' 4. tokenizer.encode(test_sentence) output: [3, 295, 185, 1522, 4] 5. tokenizer.convert_tokens_to_ids(tokenizer.tokenize(test_sentence)) output: [295, 185, 1522] ## Expected behavior According to the documentation of the encoding method in https://huggingface.co/transformers/main_classes/tokenizer.html these two outputs should be the same. The problem is, that the _encode_ method adds special tokens [CLS] and [SEP] at the beginning and the end. However, the transformation in line 5. does not. This is not a problem at all but one should consider correcting the online-documentation. That is, the hint **Same as doing self.convert_tokens_to_ids(self.tokenize(text))** in the encode method is misleading. Instead, one could also add, that these two commands are the same if _add_special_tokens_ is set _False_ in the _encode_ method. ## Environment info - `transformers` version: 2.8.0 - Platform: Windows 10 - Python version: 3.8.3 - PyTorch version (GPU?): 1.5.0+cpu - Tensorflow version (GPU?): None - Using GPU in script?:No - Using distributed or parallel set-up in script?:No
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4750/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4750/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4749
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4749/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4749/comments
https://api.github.com/repos/huggingface/transformers/issues/4749/events
https://github.com/huggingface/transformers/issues/4749
630,283,884
MDU6SXNzdWU2MzAyODM4ODQ=
4,749
Hugging Face GPT-2 Tokenizer
{ "login": "h56cho", "id": 52889259, "node_id": "MDQ6VXNlcjUyODg5MjU5", "avatar_url": "https://avatars.githubusercontent.com/u/52889259?v=4", "gravatar_id": "", "url": "https://api.github.com/users/h56cho", "html_url": "https://github.com/h56cho", "followers_url": "https://api.github.com/users/h56cho/followers", "following_url": "https://api.github.com/users/h56cho/following{/other_user}", "gists_url": "https://api.github.com/users/h56cho/gists{/gist_id}", "starred_url": "https://api.github.com/users/h56cho/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/h56cho/subscriptions", "organizations_url": "https://api.github.com/users/h56cho/orgs", "repos_url": "https://api.github.com/users/h56cho/repos", "events_url": "https://api.github.com/users/h56cho/events{/privacy}", "received_events_url": "https://api.github.com/users/h56cho/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "You would need to fine-tune your GPT-2 model on a dataset containing the word, yes. The reason being that your model needs to understand in which context is the word used, what it means, etc.", "Hello,\r\n\r\nThank you for your reply.\r\nSo I have a set of multiple choice questions, and when I use the `add_tokens` function to add whichever the tokens from the dataset that are not originally included in the GPT-2 tokenizer, the length of my GPT-2 tokenizer jumps up by ~3000 (so my dataset contains 3000 new tokens)\r\n\r\neven if I fine tune the pre-trained GPT-2 model on a portion of my dataset (say), I won't be able to train the pre-trained model on all of the new tokens. So I am not sure how I will go about this.\r\n\r\nWhat I am trying to do is though, I want the pre-trained `GPT2DoubleHeadsModel` to solve a set of multiple-choice questions, and I want to compare the error rates generated by the hidden outputs of each of the 12 layers when they are fed directly into the multiple-choice head of the model. That is, my goal is not to minimize the overall error rate of the GPT-2 model, my goal is to simply compare the error rates generated by the different layers of the model. Given this information, do I still need to fine-tune my GPT-2 model on all of the new tokens that I am adding?\r\n\r\nThank you,", "If you don't fine-tune your model on the new tokens you're adding, then when the model sees it at inference it will be a completely unknown token, and the model probably won't handle it correctly.\r\n\r\nIf you don't have any training data to fine-tune the model, why don't you keep the tokens as they are? The GPT-2 tokenizer should be able to correctly tokenize them, as it's a byte level BPE. ", "Hello,\r\n\r\nThank you again for your reply.\r\nI think I want to add the word as a new token mainly because I do not want the word to be treated as the mere `<unk>` token.\r\n\r\nI am not sure what byte level BPE means, but if I do not add the new words as extra tokens, would it really work just because the tokenizer is a byte level BPE?\r\n\r\nThank you :s", "Byte level BPEs should be able to tokenize everything. The GPT-2 tokenizer has no unknown token for that reason.\r\n\r\nYou should try to tokenize your tokens to see if some come back as unknown, but it shouldn't with GPT-2 (and RoBERTa for that matter)!", "Thank you! I was able to confirm that what you mentioned in your previous post also works for my case." ]
1,591
1,591
1,591
NONE
null
Hello, I know that if I choose to add any new "special token" onto the pre-made GPT-2 tokenizer, and if I want to use the pre-trained GPT-2 model for my analysis, I will need to re-train the pre-trained GPT-2 to make the model learn that new special token. But what if I just add an extra non-special token? for example, a word "paradox" is not included in the existing GPT-2 tokenizer, so say I add the word "paradox" to the existing set of GPT-2 vocabulary, like below: ```python # load the pre-trained GPT2-tokenizer gpt2_tokenizer = GPT2Tokenizer.from_pretrained('gpt2') # adding a new word (not special token) to the existing vocabulary, # but I am not making any changes to the pre-assigned special tokens gpt2_tokenizer.add_tokens("paradox") # get the pre-trained HuggingFace GPT2DoubleHeadsModel model_gpt2DoubleHeadsModel = GPT2DoubleHeadsModel.from_pretrained('gpt2', output_hidden_states = True) # resize the token embeddings # (not sure what this function does) model_gpt2DoubleHeadsModel.resize_token_embeddings(len(gpt2_tokenizer)) ``` Given that I didn't make any changes to the special tokens in the GPT-2-tokenizer, do I still need to train the already pre-trained `GPT2DoubleHeadsModel` before I start using it, just because I added a new word to the set of vocabulary? Thank you,
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4749/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4749/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4748
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4748/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4748/comments
https://api.github.com/repos/huggingface/transformers/issues/4748/events
https://github.com/huggingface/transformers/issues/4748
630,263,567
MDU6SXNzdWU2MzAyNjM1Njc=
4,748
QuestionAnsweringPipeline query performance
{ "login": "davidmezzetti", "id": 561939, "node_id": "MDQ6VXNlcjU2MTkzOQ==", "avatar_url": "https://avatars.githubusercontent.com/u/561939?v=4", "gravatar_id": "", "url": "https://api.github.com/users/davidmezzetti", "html_url": "https://github.com/davidmezzetti", "followers_url": "https://api.github.com/users/davidmezzetti/followers", "following_url": "https://api.github.com/users/davidmezzetti/following{/other_user}", "gists_url": "https://api.github.com/users/davidmezzetti/gists{/gist_id}", "starred_url": "https://api.github.com/users/davidmezzetti/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/davidmezzetti/subscriptions", "organizations_url": "https://api.github.com/users/davidmezzetti/orgs", "repos_url": "https://api.github.com/users/davidmezzetti/repos", "events_url": "https://api.github.com/users/davidmezzetti/events{/privacy}", "received_events_url": "https://api.github.com/users/davidmezzetti/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[ { "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false } ]
[ "Hi! Thanks for the detailed report. Indeed, it would be nice to keep the performance high, especially if it's due to something annex than pure inference. I'm looking into it.", "Great, thank you for the quick response!", "After looking into it, it seems that the threading is only part of the problem. Removing it results in 24 seconds instead of 36 seconds, which is still 10x slower than pure inference.\r\n\r\nI believe this is mostly due to the `squad_convert_example_to_features`, which is made to be very robust. By doing so, it slows things down by quite a big factor.\r\n\r\nThere's probably a few things that are overkill for the pipeline when compared to a SQuAD training.", "Thanks once again for the quick response. I did notice that the tokenizer in squad_convert_example_to_features was also padding to the max sequence length, which makes sense for batch inputs. My guess is that the value add was in how the squad processor can robustly extract answers. It's tricky to find the match in the original text when all you have are model tokens.\r\n\r\nThe custom example referenced above builds a regular expression joining the tokens on \\s? and handles BERT subwords but I'm not sure how that would work for all models.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "Hi @davidmezzetti, just to let you know we're working towards a bigger pipeline refactor, with a strong focus on performance. Let's keep this issue open while it's still in the works in case more is to be said on the matter.", "Thank you for following up, sounds great, thank you.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "@LysandreJik has there been any update in the library with respect to this issue ?", "I know this is an old issue but just to close the loop - v4.0.0 improved pipeline qa performance on par with the methods referenced above. Thank you!", "Glad to hear it!" ]
1,591
1,607
1,602
CONTRIBUTOR
null
This is my first issue posted here, so first off thank you for building this library, it's really pushing NLP forward. The current [QuestionAnsweringPipeline](https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines.py#L1187) relies on the method [squad_convert_examples_to_features](https://github.com/huggingface/transformers/blob/ed4df85572924871758ca32133b46116121c706f/src/transformers/data/processors/squad.py#L269) to convert question/context pairs to SquadFeatures. In reviewing this method, it looks like it spawns a process for each example. This is causing performance issues when looking to support near real-time queries or bulk queries. As a workaround, I can directly issue the queries against the model but the pipeline has a lot of nice logic to help format answers properly and pulling the best answer vs start/end argmax. Please see the results of a rudimentary performance test to demonstrate: ```python import time from transformers import pipeline context = r""" The extractive question answering process took an average of 36.555 seconds using pipelines and about 2 seconds when queried directly using the models. """ question = "How long did the process take?" nlp = pipeline("question-answering", model="distilbert-base-cased-distilled-squad", tokenizer="distilbert-base-cased-distilled-squad") start = time.time() for x in range(100): answer = nlp(question=question, context=context) print("Answer", answer) print("Time", time.time() - start, "s") ``` ``` Answer {'score': 0.8029816785368773, 'start': 62, 'end': 76, 'answer': '36.555 seconds'} Time 36.703474044799805 s ``` ```python import torch from transformers import pipeline, AutoModelForQuestionAnswering, AutoTokenizer model = AutoModelForQuestionAnswering.from_pretrained("distilbert-base-cased-distilled-squad") tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased-distilled-squad") start = time.time() for x in range(100): inputs = tokenizer.encode_plus(question, context, add_special_tokens=True, return_tensors="pt") input_ids = inputs["input_ids"].tolist()[0] text_tokens = tokenizer.convert_ids_to_tokens(input_ids) answer_start_scores, answer_end_scores = model(**inputs) answer_start = torch.argmax( answer_start_scores ) # Get the most likely beginning of answer with the argmax of the score answer_end = torch.argmax(answer_end_scores) + 1 # Get the most likely end of answer with the argmax of the score answer = tokenizer.convert_tokens_to_string(tokenizer.convert_ids_to_tokens(input_ids[answer_start:answer_end])) print("Answer", answer) print("Time", time.time() - start, "s") ``` ``` Answer 36 . 555 seconds Time 2.1718859672546387 s ``` I believe the 10x slowdown is that the first example had to spawn 100 processes. I also tried passing a list of 100 question/context pairs to see if that was better and that took ~28s. But for this use case, all 100 questions wouldn't be available at once. The additional logic for answer extraction doesn't come for free but it doesn't add much overhead. The third test below uses a [custom pipeline component](https://github.com/neuml/cord19q/blob/master/src/python/cord19q/pipeline.py) to demonstrate. ```python from cord19q.pipeline import Pipeline pipeline = Pipeline("distilbert-base-cased-distilled-squad", False) start = time.time() for x in range(100): answer = pipeline([question], [context]) print("\nAnswer", answer) print("Time", time.time() - start, "s") ``` ``` Answer [{'answer': '36.555 seconds', 'score': 0.8029860216482803}] Time 2.219379186630249 s ``` It would be great if the QuestionAnsweringPipeline could either not use the squad processor or the processor is changed to have an argument to not spawn processes.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4748/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4748/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4747
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4747/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4747/comments
https://api.github.com/repos/huggingface/transformers/issues/4747/events
https://github.com/huggingface/transformers/pull/4747
630,180,688
MDExOlB1bGxSZXF1ZXN0NDI3MzQ3ODUx
4,747
No silent error when d_head already in the configuration
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4747?src=pr&el=h1) Report\n> Merging [#4747](https://codecov.io/gh/huggingface/transformers/pull/4747?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ed4df85572924871758ca32133b46116121c706f&el=desc) will **increase** coverage by `0.09%`.\n> The diff coverage is `100.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4747/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4747?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4747 +/- ##\n==========================================\n+ Coverage 77.12% 77.22% +0.09% \n==========================================\n Files 128 128 \n Lines 21061 21063 +2 \n==========================================\n+ Hits 16243 16265 +22 \n+ Misses 4818 4798 -20 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4747?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/4747/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3hsbmV0LnB5) | `94.00% <100.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4747/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.85% <0.00%> (-0.42%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4747/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.59% <0.00%> (+0.11%)` | :arrow_up: |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/4747/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `35.03% <0.00%> (+6.36%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4747?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4747?src=pr&el=footer). Last update [ed4df85...fe85f3e](https://codecov.io/gh/huggingface/transformers/pull/4747?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,591
1,591
1,591
MEMBER
null
closes #4696
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4747/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4747/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4747", "html_url": "https://github.com/huggingface/transformers/pull/4747", "diff_url": "https://github.com/huggingface/transformers/pull/4747.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4747.patch", "merged_at": 1591372903000 }
https://api.github.com/repos/huggingface/transformers/issues/4746
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4746/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4746/comments
https://api.github.com/repos/huggingface/transformers/issues/4746/events
https://github.com/huggingface/transformers/issues/4746
630,149,228
MDU6SXNzdWU2MzAxNDkyMjg=
4,746
Why can't I generate phrases in batches if I include an attention mask? (GPT2)
{ "login": "Barbara931120", "id": 62270260, "node_id": "MDQ6VXNlcjYyMjcwMjYw", "avatar_url": "https://avatars.githubusercontent.com/u/62270260?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Barbara931120", "html_url": "https://github.com/Barbara931120", "followers_url": "https://api.github.com/users/Barbara931120/followers", "following_url": "https://api.github.com/users/Barbara931120/following{/other_user}", "gists_url": "https://api.github.com/users/Barbara931120/gists{/gist_id}", "starred_url": "https://api.github.com/users/Barbara931120/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Barbara931120/subscriptions", "organizations_url": "https://api.github.com/users/Barbara931120/orgs", "repos_url": "https://api.github.com/users/Barbara931120/repos", "events_url": "https://api.github.com/users/Barbara931120/events{/privacy}", "received_events_url": "https://api.github.com/users/Barbara931120/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Probably of interest to @patrickvonplaten ", "Hi @Barbara931120,\r\n\r\nBatch generation is sadly currently not implemented in the `.generate()` method. Also, see https://github.com/huggingface/transformers/issues/3021 for reasons why. It's on our roadmap to implement this functionality soon :-) " ]
1,591
1,591
1,591
NONE
null
Assuming these are my input phrases and model: ``` from transformers import GPT2LMHeadModel, GPT2Tokenizer import torch model = GPT2LMHeadModel.from_pretrained('gpt2') tokenizer = GPT2Tokenizer.from_pretrained('gpt2', pad_token='<PAD>') prompt_text = [ "are there any good coaching institutes for civil services preparations in bangalore? ->"] ``` If I try to generate phrases in batches with the corresponding attention mask it doesn't work. It outputs the input phrase without any new words on it: ``` # encode plus batch handles multiple batches and automatically creates attention_masks seq_len = 100 encodings_dict = tokenizer.batch_encode_plus(prompt_text, max_length=seq_len, pad_to_max_length=True) input_ids = torch.tensor(encodings_dict['input_ids']) attn_mask = torch.tensor(encodings_dict['attention_mask']) encoded_result = model.generate(input_ids, attention_mask=attn_mask, eos_token_id=tokenizer.eos_token_id, pad_token_id=tokenizer.eos_token_id, num_return_sequences=10, top_k=50, top_p=0.95, do_sample=True, max_length=100) for er in encoded_result: print(tokenizer.decode(er, skip_special_tokens=True)) ``` However, if I generate phrases one by one (without batches) then it works: ``` encoded_text = tokenizer.encode(prompt_text[0], return_tensors='pt') encoded_result = model.generate(encoded_text,eos_token_id=tokenizer.eos_token_id, pad_token_id=tokenizer.eos_token_id, num_return_sequences=10, top_k=50, top_p=0.95, do_sample=True, max_length=100) print(tokenizer.decode(encoded_result[0], skip_special_tokens=True)) ``` ## Details Any ideas what could be causing this problem? Thanks!!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4746/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4746/timeline
completed
null
null
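The record above reports that `generate()` produced no new tokens for padded batches at the time. A minimal sketch of the per-prompt workaround the thread confirms works, assuming `model`, `tokenizer`, and `prompt_text` are defined exactly as in the issue body:

```python
# Hedged sketch: loop over prompts one at a time (which the issue reports
# working) instead of batch_encode_plus with an attention mask.
results = []
for prompt in prompt_text:
    encoded = tokenizer.encode(prompt, return_tensors="pt")
    generated = model.generate(
        encoded,
        eos_token_id=tokenizer.eos_token_id,
        pad_token_id=tokenizer.eos_token_id,
        do_sample=True,
        top_k=50,
        top_p=0.95,
        max_length=100,
    )
    results.append(tokenizer.decode(generated[0], skip_special_tokens=True))
```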
https://api.github.com/repos/huggingface/transformers/issues/4745
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4745/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4745/comments
https://api.github.com/repos/huggingface/transformers/issues/4745/events
https://github.com/huggingface/transformers/pull/4745
630,097,575
MDExOlB1bGxSZXF1ZXN0NDI3MjgzMjEw
4,745
[Generation Beam Search] Fix bug when changing the <EOS> token for generate
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4745?src=pr&el=h1) Report\n> Merging [#4745](https://codecov.io/gh/huggingface/transformers/pull/4745?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/47a551d17b6ed2eaf03301f049006d559fca5cf3&el=desc) will **increase** coverage by `0.22%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4745/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4745?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4745 +/- ##\n==========================================\n+ Coverage 77.14% 77.36% +0.22% \n==========================================\n Files 128 128 \n Lines 21073 21073 \n==========================================\n+ Hits 16256 16304 +48 \n+ Misses 4817 4769 -48 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4745?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4745/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.66% <ø> (ø)` | |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4745/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.29% <ø> (+0.35%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4745/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.85% <0.00%> (+0.41%)` | :arrow_up: |\n| [src/transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/4745/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `86.21% <0.00%> (+14.10%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4745?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4745?src=pr&el=footer). Last update [47a551d...491f4e2](https://codecov.io/gh/huggingface/transformers/pull/4745?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,591
1,591
1,591
MEMBER
null
This PR fixes https://github.com/huggingface/transformers/issues/4121. When comparing ints, `!=` should be used here instead of `is not`.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4745/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4745/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4745", "html_url": "https://github.com/huggingface/transformers/pull/4745", "diff_url": "https://github.com/huggingface/transformers/pull/4745.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4745.patch", "merged_at": 1591203204000 }
https://api.github.com/repos/huggingface/transformers/issues/4744
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4744/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4744/comments
https://api.github.com/repos/huggingface/transformers/issues/4744/events
https://github.com/huggingface/transformers/issues/4744
630,091,692
MDU6SXNzdWU2MzAwOTE2OTI=
4,744
How to use pretrained model for inference?
{ "login": "andrster", "id": 22357321, "node_id": "MDQ6VXNlcjIyMzU3MzIx", "avatar_url": "https://avatars.githubusercontent.com/u/22357321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/andrster", "html_url": "https://github.com/andrster", "followers_url": "https://api.github.com/users/andrster/followers", "following_url": "https://api.github.com/users/andrster/following{/other_user}", "gists_url": "https://api.github.com/users/andrster/gists{/gist_id}", "starred_url": "https://api.github.com/users/andrster/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/andrster/subscriptions", "organizations_url": "https://api.github.com/users/andrster/orgs", "repos_url": "https://api.github.com/users/andrster/repos", "events_url": "https://api.github.com/users/andrster/events{/privacy}", "received_events_url": "https://api.github.com/users/andrster/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Hello @andrster \r\nWhich model are you trying ? for ex, for QA, token classification, sentence classification etc.\r\nPlease elaborate more. ", "Sorry, token classification ", "@andrster have you checked out the pipelines section of the README?", "doesn't have one on NER", "@andrster \r\nner pipeline is available in Transformers. Check here https://huggingface.co/transformers/usage.html#named-entity-recognition", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,591
1,597
1,597
NONE
null
I have a BERT model fine-tuned for PyTorch and have trouble actually using it. How can I get model("<some sentence>") to output results for the tokens?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4744/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4744/timeline
completed
null
null
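Following the `ner` pipeline pointer given in the comments above, a hedged sketch of running token-classification inference with a fine-tuned model; `path/to/finetuned-model` is a placeholder for the directory written by `save_pretrained`:

```python
from transformers import pipeline

# Placeholder path: wherever model.save_pretrained(...) and
# tokenizer.save_pretrained(...) stored the fine-tuned files.
ner = pipeline("ner", model="path/to/finetuned-model", tokenizer="path/to/finetuned-model")
print(ner("Hugging Face is based in New York City"))
```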
https://api.github.com/repos/huggingface/transformers/issues/4743
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4743/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4743/comments
https://api.github.com/repos/huggingface/transformers/issues/4743/events
https://github.com/huggingface/transformers/pull/4743
630,085,645
MDExOlB1bGxSZXF1ZXN0NDI3MjczODUz
4,743
Create README.md
{ "login": "orena1", "id": 8983713, "node_id": "MDQ6VXNlcjg5ODM3MTM=", "avatar_url": "https://avatars.githubusercontent.com/u/8983713?v=4", "gravatar_id": "", "url": "https://api.github.com/users/orena1", "html_url": "https://github.com/orena1", "followers_url": "https://api.github.com/users/orena1/followers", "following_url": "https://api.github.com/users/orena1/following{/other_user}", "gists_url": "https://api.github.com/users/orena1/gists{/gist_id}", "starred_url": "https://api.github.com/users/orena1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/orena1/subscriptions", "organizations_url": "https://api.github.com/users/orena1/orgs", "repos_url": "https://api.github.com/users/orena1/repos", "events_url": "https://api.github.com/users/orena1/events{/privacy}", "received_events_url": "https://api.github.com/users/orena1/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4743?src=pr&el=h1) Report\n> Merging [#4743](https://codecov.io/gh/huggingface/transformers/pull/4743?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1b5820a56540a2096daeb43a0cd8247c8c94a719&el=desc) will **increase** coverage by `0.22%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4743/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4743?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4743 +/- ##\n==========================================\n+ Coverage 77.11% 77.34% +0.22% \n==========================================\n Files 128 128 \n Lines 21061 21061 \n==========================================\n+ Hits 16242 16290 +48 \n+ Misses 4819 4771 -48 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4743?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4743/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.63% <0.00%> (-0.17%)` | :arrow_down: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4743/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.62% <0.00%> (+0.23%)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4743/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.15% <0.00%> (+0.24%)` | :arrow_up: |\n| [src/transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/4743/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `86.39% <0.00%> (+14.55%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4743?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4743?src=pr&el=footer). Last update [1b5820a...24a7fe1](https://codecov.io/gh/huggingface/transformers/pull/4743?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,591
1,591
1,591
CONTRIBUTOR
null
The main change is to refer to the fairseq website.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4743/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4743/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4743", "html_url": "https://github.com/huggingface/transformers/pull/4743", "diff_url": "https://github.com/huggingface/transformers/pull/4743.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4743.patch", "merged_at": 1591304377000 }
https://api.github.com/repos/huggingface/transformers/issues/4742
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4742/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4742/comments
https://api.github.com/repos/huggingface/transformers/issues/4742/events
https://github.com/huggingface/transformers/issues/4742
630,077,647
MDU6SXNzdWU2MzAwNzc2NDc=
4,742
Perform evaluation on HANS with Trainer (like GLUE example)
{ "login": "prajjwal1", "id": 24690051, "node_id": "MDQ6VXNlcjI0NjkwMDUx", "avatar_url": "https://avatars.githubusercontent.com/u/24690051?v=4", "gravatar_id": "", "url": "https://api.github.com/users/prajjwal1", "html_url": "https://github.com/prajjwal1", "followers_url": "https://api.github.com/users/prajjwal1/followers", "following_url": "https://api.github.com/users/prajjwal1/following{/other_user}", "gists_url": "https://api.github.com/users/prajjwal1/gists{/gist_id}", "starred_url": "https://api.github.com/users/prajjwal1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/prajjwal1/subscriptions", "organizations_url": "https://api.github.com/users/prajjwal1/orgs", "repos_url": "https://api.github.com/users/prajjwal1/repos", "events_url": "https://api.github.com/users/prajjwal1/events{/privacy}", "received_events_url": "https://api.github.com/users/prajjwal1/received_events", "type": "User", "site_admin": false }
[ { "id": 1990918270, "node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw", "url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue", "name": "Good First Issue", "color": "bbf794", "default": false, "description": "" } ]
closed
false
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[ { "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false } ]
[ "Maybe @sgugger would be interested in taking a look at this", "There is an evaluation issue that I faced with HANS evaluation. Issue [here](https://github.com/huggingface/transformers/issues/4766). I tried to run in the exact same manner as listed in `examples/adversarial`. But the results obtained seem skewed for some reason. If @sgugger can point some possible reason out, I can work on it and send a PR. Thanks." ]
1,591
1,592
1,592
CONTRIBUTOR
null
The current [HANS](https://github.com/huggingface/transformers/tree/master/examples/adversarial) evaluation is implemented the old way. It would be good to do it in the same manner as the other examples, which are now implemented with the Trainer class.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4742/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4742/timeline
completed
null
null
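A rough sketch of what the requested Trainer-based HANS evaluation could look like; `model` and `hans_eval_dataset` (a torch Dataset of tokenized HANS examples) are assumed to exist, and this is not the implementation that eventually landed:

```python
from transformers import Trainer, TrainingArguments

training_args = TrainingArguments(output_dir="./hans_eval")
trainer = Trainer(model=model, args=training_args)

# predict() returns logits that still have to be mapped to the
# entailment / non-entailment labels HANS is scored on.
predictions = trainer.predict(hans_eval_dataset)
print(predictions.predictions.shape)
```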
https://api.github.com/repos/huggingface/transformers/issues/4741
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4741/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4741/comments
https://api.github.com/repos/huggingface/transformers/issues/4741/events
https://github.com/huggingface/transformers/pull/4741
630,013,434
MDExOlB1bGxSZXF1ZXN0NDI3MjE2NzM3
4,741
Implemented resizing of token embeddings for TensorFlow models
{ "login": "RobMcH", "id": 7346905, "node_id": "MDQ6VXNlcjczNDY5MDU=", "avatar_url": "https://avatars.githubusercontent.com/u/7346905?v=4", "gravatar_id": "", "url": "https://api.github.com/users/RobMcH", "html_url": "https://github.com/RobMcH", "followers_url": "https://api.github.com/users/RobMcH/followers", "following_url": "https://api.github.com/users/RobMcH/following{/other_user}", "gists_url": "https://api.github.com/users/RobMcH/gists{/gist_id}", "starred_url": "https://api.github.com/users/RobMcH/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RobMcH/subscriptions", "organizations_url": "https://api.github.com/users/RobMcH/orgs", "repos_url": "https://api.github.com/users/RobMcH/repos", "events_url": "https://api.github.com/users/RobMcH/events{/privacy}", "received_events_url": "https://api.github.com/users/RobMcH/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello !\r\n\r\nThanks a lot for this PR!! Can you rebase on master and push force in order to be able to review :)", "Hi,\r\n\r\nI've rebased on master. Are you able to review the PR like this?", "Hello! Thanks a lot for your contribution, unfortunately we've decided to move on with https://github.com/huggingface/transformers/pull/4351 that was contributed earlier. \r\n\r\nYour contribution is still valuable, and we would have went with it had another PR not done it already. We look forward to your future PRs!" ]
1,591
1,592
1,592
NONE
null
As mentioned in [this issue](https://github.com/huggingface/transformers/issues/1838) transformers currently does not support resizing token embeddings with TensorFlow. I have implemented this functionality for ALBERT, BERT, DistilBERT, and GPT2. **Note** All of the respective TF[...]MainLayer[s] inherit from a utility class (TFLayerUtilsMixin; similar to the already existing TFModelUtilsMixin; both of them live in `modeling_tf_utils.py`) to avoid code duplication. This has to be done because TensorFlow models are structured differently than the corresponding PyTorch models - i.e., all the TF{ModelName} classes have a corresponding TF{ClassName}MainLayer which itself inherits from tf.keras.layers.Layer, whereas the PyTorch {ModelName} classes implement all the functionality themselves. **Usage** The usage is exactly the same as for the PyTorch models. ``` import transformers bert = transformers.TFAutoModelForSequenceClassification.from_pretrained("bert-base-uncased") tokenizer = transformers.AutoTokenizer.from_pretrained("bert-base-uncased") tokenizer.add_tokens("[E1]"); tokenizer.add_tokens("[/E1]"); bert.resize_token_embeddings(len(tokenizer)) ``` **Tests** There are no tests in the existing transformers code for the `resize_token_embeddings` methods of the PyTorch models (as far as I can tell). This might be a separate issue.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4741/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4741/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4741", "html_url": "https://github.com/huggingface/transformers/pull/4741", "diff_url": "https://github.com/huggingface/transformers/pull/4741.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4741.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/4740
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4740/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4740/comments
https://api.github.com/repos/huggingface/transformers/issues/4740/events
https://github.com/huggingface/transformers/issues/4740
629,983,164
MDU6SXNzdWU2Mjk5ODMxNjQ=
4,740
Can't find config.json
{ "login": "mustafameruyert", "id": 43790316, "node_id": "MDQ6VXNlcjQzNzkwMzE2", "avatar_url": "https://avatars.githubusercontent.com/u/43790316?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mustafameruyert", "html_url": "https://github.com/mustafameruyert", "followers_url": "https://api.github.com/users/mustafameruyert/followers", "following_url": "https://api.github.com/users/mustafameruyert/following{/other_user}", "gists_url": "https://api.github.com/users/mustafameruyert/gists{/gist_id}", "starred_url": "https://api.github.com/users/mustafameruyert/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mustafameruyert/subscriptions", "organizations_url": "https://api.github.com/users/mustafameruyert/orgs", "repos_url": "https://api.github.com/users/mustafameruyert/repos", "events_url": "https://api.github.com/users/mustafameruyert/events{/privacy}", "received_events_url": "https://api.github.com/users/mustafameruyert/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @mustafameruyert, please poste a codesnippet so that we can reproduce the error", "> Hi @mustafameruyert, please poste a codesnippet so that we can reproduce the error\r\n![1](https://user-images.githubusercontent.com/43790316/83646412-6afffd80-a5d5-11ea-89d2-0ea80b14adff.png)\r\n![1](https://user-images.githubusercontent.com/43790316/83646625-b87c6a80-a5d5-11ea-942f-399147acc1d4.png)\r\nI am using dockers and this is error", "> Hi @mustafameruyert, please poste a codesnippet so that we can reproduce the error\r\n\r\nSorry I forgot to uncomment first two lines of nevertheless it shows same error and instead of distilbert-base-cased I use bert-multi-cased-finetuned-xquadv1.I have downloaded all files using :\r\nmodel.save_pretrained(path)\r\ntokenizer.save_pretrained(path) \r\nMy code written in setup_qa.py file and all downloaded files are stored with setup_qa.py in one folder", "It would be great if you could copy-paste code like this:\r\n\r\n```python \r\nfrom transformers import pipeline\r\nanswerer = pipeline(\"question-answering\", model=\"mrm8488/bert-multi-cased-finetuned-xquadv1\", tokenizer=\"mrm8488/bert-multi-cased-finetuned-xquadv1\")\r\n\r\nanswerer(context=\"The dog is blue\", question=\"Which color is the dog?\")['answer']\r\n```\r\n\r\nso that one can just copy-paste it and does not have to type it manually from a screenshot.\r\n\r\nThe following code works as well:\r\n\r\n```python \r\nfrom transformers import pipeline, AutoModelForQuestionAnswering, AutoTokenizer\r\nanswerer = pipeline(\"question-answering\", model=AutoModelForQuestionAnswering.from_pretrained(\"mrm8488/bert-multi-cased-finetuned-xquadv1\"), tokenizer=AutoTokenizer.from_pretrained(\"mrm8488/bert-multi-cased-finetuned-xquadv1\"))\r\nanswerer(context=\"The dog is blue\", question=\"Which color is the dog?\")['answer']\r\n```\r\n\r\nLet me know if this does not fix your problem :-) \r\n", "@patrickvonplaten I tried to run your code but it also show this error\r\n![1](https://user-images.githubusercontent.com/43790316/83657571-d56b6a80-a5e2-11ea-85ab-cb71ef43fb44.png)\r\n", "It looks like the problem is that you cannot create a folder called `/.cache` , which has nothing to do with the pipeline. You should have sudo rights from your home folder. \r\n\r\nTo solve this you could:\r\n```\r\nsudo mkdir /.cache\r\n```\r\n\r\nand then make sure that `/.cache` has the correct permission rights. The error is a bit out of scope for this issue. \r\n\r\nIt would be important to make sure that you have the correct sudo rights.\r\n", "> It looks like the problem is that you cannot create a folder called `/.cache` , which has nothing to do with the pipeline. You should have sudo rights from your home folder.\r\n> \r\n> To solve this you could:\r\n> \r\n> ```\r\n> sudo mkdir /.cache\r\n> ```\r\n> \r\n> and then make sure that `/.cache` has the correct permission rights. The error is a bit out of scope for this issue.\r\n> \r\n> It would be important to make sure that you have the correct sudo rights.\r\n\r\nThank you for response I will try to fix it" ]
1,591
1,591
1,591
NONE
null
# ❓ Questions & Help Hello!When I use transformers I get this error Make sure that 'mrm8488/bert-multi-cased-finetuned-xquadv1' is the correct path to a directory containing a config.json file.How to solve it?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4740/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4740/timeline
completed
null
null
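The thread above traces the failure to an unwritable `/.cache` inside the container. Besides fixing permissions, a hedged alternative is to point `cache_dir` at a writable location (`/tmp/hf-cache` is just a placeholder):

```python
from transformers import pipeline, AutoModelForQuestionAnswering, AutoTokenizer

name = "mrm8488/bert-multi-cased-finetuned-xquadv1"
# cache_dir redirects the download cache away from the default /.cache path.
model = AutoModelForQuestionAnswering.from_pretrained(name, cache_dir="/tmp/hf-cache")
tokenizer = AutoTokenizer.from_pretrained(name, cache_dir="/tmp/hf-cache")
answerer = pipeline("question-answering", model=model, tokenizer=tokenizer)
print(answerer(context="The dog is blue", question="Which color is the dog?")["answer"])
```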
https://api.github.com/repos/huggingface/transformers/issues/4739
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4739/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4739/comments
https://api.github.com/repos/huggingface/transformers/issues/4739/events
https://github.com/huggingface/transformers/issues/4739
629,980,891
MDU6SXNzdWU2Mjk5ODA4OTE=
4,739
Extending run_language_modeling.py for XLNet
{ "login": "shngt", "id": 20009551, "node_id": "MDQ6VXNlcjIwMDA5NTUx", "avatar_url": "https://avatars.githubusercontent.com/u/20009551?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shngt", "html_url": "https://github.com/shngt", "followers_url": "https://api.github.com/users/shngt/followers", "following_url": "https://api.github.com/users/shngt/following{/other_user}", "gists_url": "https://api.github.com/users/shngt/gists{/gist_id}", "starred_url": "https://api.github.com/users/shngt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shngt/subscriptions", "organizations_url": "https://api.github.com/users/shngt/orgs", "repos_url": "https://api.github.com/users/shngt/repos", "events_url": "https://api.github.com/users/shngt/events{/privacy}", "received_events_url": "https://api.github.com/users/shngt/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "See also https://github.com/huggingface/transformers/issues/2008", "@shngt can you please share your code for adapting xlnet to the domain specific data ? ( the cli command or the code you used after the merged PR ) \r\nI was following the latest changes to transformers but I still don't understand how to continue training xlnet on the domain specific data.\r\nthanks ", "@shngt @krannnn Are there any updates on this? I'm interested as well (: Thanks! ", "@matthiaslmz @krannnn I'm afraid I no longer have the cli command I used to run the example script, but I think it was quite similar to what's given in the docs. Are you facing some specific issue?" ]
1,591
1,605
1,594
CONTRIBUTOR
null
# 🚀 Feature request The run_language_modeling.py script in examples/language-modeling/ currently works for BERT, RoBERTa, GPT and related models. It would be helpful if it also allowed XLNet. I believe this would involve writing functions to generate `perm_mask`, `target_mapping` and `labels` for the input sequences as per the paper and `https://github.com/zihangdai/xlnet/blob/master/data_utils.py`, but I'm not 100% sure about this. ## Motivation I have to adapt XLNet to a specialized domain (finance) and have yet to find a decent guide or implementation that discusses how to do this. I've decided to do this myself (with the help of this library), and would like to share my code in case others find it useful. ## Your contribution Since I will be trying to do this anyway, I would like to submit a relevant PR when it is done. I was also hoping to receive guidance or feedback from other contributors as appropriate to ensure correctness and utility.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4739/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4739/timeline
completed
null
null
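For the `perm_mask` / `target_mapping` inputs the feature request mentions, a minimal sketch adapted from the `XLNetLMHeadModel` documentation (predicting only the last position); the full pretraining-style mask generation from `data_utils.py` is more involved than this:

```python
import torch
from transformers import XLNetTokenizer, XLNetLMHeadModel

tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")
model = XLNetLMHeadModel.from_pretrained("xlnet-base-cased")

input_ids = torch.tensor(tokenizer.encode("Hello, my dog is very")).unsqueeze(0)
seq_len = input_ids.shape[1]

perm_mask = torch.zeros((1, seq_len, seq_len))
perm_mask[:, :, -1] = 1.0           # no token may attend to the last position
target_mapping = torch.zeros((1, 1, seq_len))
target_mapping[0, 0, -1] = 1.0      # predict only the last position

outputs = model(input_ids, perm_mask=perm_mask, target_mapping=target_mapping)
next_token_logits = outputs[0]      # shape (1, 1, vocab_size)
```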
https://api.github.com/repos/huggingface/transformers/issues/4738
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4738/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4738/comments
https://api.github.com/repos/huggingface/transformers/issues/4738/events
https://github.com/huggingface/transformers/issues/4738
629,962,508
MDU6SXNzdWU2Mjk5NjI1MDg=
4,738
Question Answering Modeling through Hugging Face Models
{ "login": "AishwaryaVerma", "id": 53822388, "node_id": "MDQ6VXNlcjUzODIyMzg4", "avatar_url": "https://avatars.githubusercontent.com/u/53822388?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AishwaryaVerma", "html_url": "https://github.com/AishwaryaVerma", "followers_url": "https://api.github.com/users/AishwaryaVerma/followers", "following_url": "https://api.github.com/users/AishwaryaVerma/following{/other_user}", "gists_url": "https://api.github.com/users/AishwaryaVerma/gists{/gist_id}", "starred_url": "https://api.github.com/users/AishwaryaVerma/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AishwaryaVerma/subscriptions", "organizations_url": "https://api.github.com/users/AishwaryaVerma/orgs", "repos_url": "https://api.github.com/users/AishwaryaVerma/repos", "events_url": "https://api.github.com/users/AishwaryaVerma/events{/privacy}", "received_events_url": "https://api.github.com/users/AishwaryaVerma/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "How long is your document, you may wanna try longformer model which can handle sequences upto 4096 tokens. Here's a longformer model trained for QA https://huggingface.co/valhalla/longformer-base-4096-finetuned-squadv1\r\n\r\nAlso, take a loot at this https://github.com/deepset-ai/haystack. This might help you a lot", "> How long is your document, you may wanna try longformer model which can handle sequences upto 4096 tokens. Here's a longformer model trained for QA https://huggingface.co/valhalla/longformer-base-4096-finetuned-squadv1\r\n> \r\n> Also, take a loot at this https://github.com/deepset-ai/haystack. This might help you a lot\r\n\r\nI looked into it. That is a great help. Thanks. Can we also decide the output length with these type of pretrained models?", "Theses QA models aren't generative. So there's no output length constraint", "@AishwaryaVerma - For QA the output length is usually very small (only a couple of words). It is very rare that the answer of `AutoModelForQuestionAnswering` is longer than 3,4 words.\r\n\r\nYou might also want to take a look at: https://huggingface.co/allenai/longformer-large-4096-finetuned-triviaqa", "And this notebook: https://colab.research.google.com/drive/1m7eTGlPmLRgoPkkA7rkhQdZ9ydpmsdLE?usp=sharing " ]
1,591
1,591
1,591
NONE
null
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. Make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers If your question wasn't answered after a period of time on Stack Overflow, you can always open a question on GitHub. You should then link to the SO question that you posted. --> ## Details <!-- Description of your issue --> Hello everyone, I have large number of documents and I need to extract specific information through Hugging Face Question Answering Model. First issue I faced was the document size was very large so it gave me token error and afterwards, I divided the data into small paragraphs, then I applied the given model. But this time, answer was not accurate. So, I just want to know, is there any alternative method or model to do this. <!-- You should first ask your question on SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on Stack Overflow**:
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4738/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4738/timeline
completed
null
null
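A hedged sketch of the Longformer suggestion from the comments above, which accepts contexts up to 4096 tokens instead of requiring a split into small paragraphs; `long_document` is assumed to hold the document text and the question is illustrative:

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="valhalla/longformer-base-4096-finetuned-squadv1",
    tokenizer="valhalla/longformer-base-4096-finetuned-squadv1",
)
print(qa(context=long_document, question="What does the document state?")["answer"])
```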
https://api.github.com/repos/huggingface/transformers/issues/4737
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4737/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4737/comments
https://api.github.com/repos/huggingface/transformers/issues/4737/events
https://github.com/huggingface/transformers/pull/4737
629,901,894
MDExOlB1bGxSZXF1ZXN0NDI3MTI3MTcz
4,737
Create model card for T5-base fine-tuned for Sentiment Span Extraction
{ "login": "mrm8488", "id": 3653789, "node_id": "MDQ6VXNlcjM2NTM3ODk=", "avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mrm8488", "html_url": "https://github.com/mrm8488", "followers_url": "https://api.github.com/users/mrm8488/followers", "following_url": "https://api.github.com/users/mrm8488/following{/other_user}", "gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}", "starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions", "organizations_url": "https://api.github.com/users/mrm8488/orgs", "repos_url": "https://api.github.com/users/mrm8488/repos", "events_url": "https://api.github.com/users/mrm8488/events{/privacy}", "received_events_url": "https://api.github.com/users/mrm8488/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4737?src=pr&el=h1) Report\n> Merging [#4737](https://codecov.io/gh/huggingface/transformers/pull/4737?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3e5928c57d57db3071638e6beaec9349a75b6a22&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4737/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4737?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4737 +/- ##\n=======================================\n Coverage 77.29% 77.29% \n=======================================\n Files 128 128 \n Lines 21004 21004 \n=======================================\n Hits 16234 16234 \n Misses 4770 4770 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4737?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4737/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.85% <0.00%> (-0.42%)` | :arrow_down: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4737/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.62% <0.00%> (+0.23%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4737?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4737?src=pr&el=footer). Last update [3e5928c...d41d001](https://codecov.io/gh/huggingface/transformers/pull/4737?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Thanks Manuel:)", "My pleasure. Coming soon model card for https://huggingface.co/mrm8488/t5-base-finetuned-summarize-news and RuPERTa ofc 😉" ]
1,591
1,591
1,591
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4737/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4737/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4737", "html_url": "https://github.com/huggingface/transformers/pull/4737", "diff_url": "https://github.com/huggingface/transformers/pull/4737.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4737.patch", "merged_at": 1591304397000 }
https://api.github.com/repos/huggingface/transformers/issues/4736
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4736/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4736/comments
https://api.github.com/repos/huggingface/transformers/issues/4736/events
https://github.com/huggingface/transformers/issues/4736
629,845,422
MDU6SXNzdWU2Mjk4NDU0MjI=
4,736
BertModel Inputs
{ "login": "pn12", "id": 64300791, "node_id": "MDQ6VXNlcjY0MzAwNzkx", "avatar_url": "https://avatars.githubusercontent.com/u/64300791?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pn12", "html_url": "https://github.com/pn12", "followers_url": "https://api.github.com/users/pn12/followers", "following_url": "https://api.github.com/users/pn12/following{/other_user}", "gists_url": "https://api.github.com/users/pn12/gists{/gist_id}", "starred_url": "https://api.github.com/users/pn12/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pn12/subscriptions", "organizations_url": "https://api.github.com/users/pn12/orgs", "repos_url": "https://api.github.com/users/pn12/repos", "events_url": "https://api.github.com/users/pn12/events{/privacy}", "received_events_url": "https://api.github.com/users/pn12/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Can you please link the example you mean? :-) ", "Hi , \r\n\r\nIn the example code below; I have highlighted the inputs** being provided to the model i.e. ids , attention_mask and token_type_ids.\r\n\r\nIf I add start_positions and end_positions as inputs here - I get an error saying unknown inputs : start_positions and end_positions.\r\n\r\nSo, my question is - how is the model able to train when starts and ends not being provided as inputs to the model training part?\r\n\r\nThanks \r\n\r\n```\r\nclass QAModel (transformers.BertPreTrainedModel):\r\n def __init__(self, conf):\r\n super(QAModel, self).__init__(conf)\r\n self.bert = transformers.BertModel.from_pretrained(config.BERT_PATH, config=conf)\r\n self.drop_out = nn.Dropout(0.1)\r\n self.l0 = nn.Linear(768 * 2, 2)\r\n torch.nn.init.normal_(self.l0.weight, std=0.02)\r\n \r\n def forward(self, ids, mask, token_type_ids):\r\n **_, _, out = self.bert(\r\n ids,\r\n attention_mask=mask,\r\n token_type_ids=token_type_ids**\r\n )\r\n\r\n out = torch.cat((out[-1], out[-2]), dim=-1)\r\n out = self.drop_out(out)\r\n logits = self.l0(out)\r\n\r\n start_logits, end_logits = logits.split(1, dim=-1)\r\n\r\n start_logits = start_logits.squeeze(-1)\r\n end_logits = end_logits.squeeze(-1)\r\n\r\n return start_logits, end_logits\r\n\r\n```\r\n", "Hey @pn12,\r\n\r\nI'm not sure if this answers you question: If you want to fine-tune a `Bert` model on Question Answering you have to prove both the `start_positions` and the `end_positions`, see this line: https://github.com/huggingface/transformers/blob/f9414f7553d3f1872b372990ef03205c0d1141df/src/transformers/modeling_bert.py#L1405\r\n\r\nOnly for validation and evaluation, you don't have to provide those so that the model can predict them.\r\n\r\nThanks to @patil-suraj, you can also take a look at this notebook to check out how to fine-tune a Bert-Like model on Squad: \r\nhttps://github.com/patil-suraj/Notebooks/blob/master/longformer_qa_training.ipynb\r\n\r\nCheck out his `DummyDataCollator` to see that he passes those two arguments for training." ]
1,591
1,591
1,591
NONE
null
# ❓ Questions & Help Hi, I have been using BertModel for Question and Answering . In examples - I see there are no start_position , end_position being provided to the Model . How is the Model able to train in this case using Input_ids , Mask and Attention Head ? Might be a naive question , but I have dig into the source codes and referred to runsquad.py - could not gain clarity. Can anybody suggest? Thanks <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. Make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers If your question wasn't answered after a period of time on Stack Overflow, you can always open a question on GitHub. You should then link to the SO question that you posted. --> ## Details <!-- Description of your issue --> <!-- You should first ask your question on SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on Stack Overflow**:
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4736/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4736/timeline
completed
null
null
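As the reply above explains, `start_positions` and `end_positions` are only passed at training time so the model returns a loss. A minimal sketch with the stock `BertForQuestionAnswering` head (the gold span indices here are toy values for illustration only):

```python
import torch
from transformers import BertTokenizer, BertForQuestionAnswering

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForQuestionAnswering.from_pretrained("bert-base-uncased")

inputs = tokenizer.encode_plus(
    "Which color is the dog?", "The dog is blue", return_tensors="pt"
)
start_positions = torch.tensor([10])  # toy label indices, not real annotations
end_positions = torch.tensor([10])

# Supplying the positions is what makes the first output a training loss.
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs[0]
loss.backward()
```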
https://api.github.com/repos/huggingface/transformers/issues/4735
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4735/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4735/comments
https://api.github.com/repos/huggingface/transformers/issues/4735/events
https://github.com/huggingface/transformers/issues/4735
629,780,457
MDU6SXNzdWU2Mjk3ODA0NTc=
4,735
Bart model for text infilling
{ "login": "andompesta", "id": 6725612, "node_id": "MDQ6VXNlcjY3MjU2MTI=", "avatar_url": "https://avatars.githubusercontent.com/u/6725612?v=4", "gravatar_id": "", "url": "https://api.github.com/users/andompesta", "html_url": "https://github.com/andompesta", "followers_url": "https://api.github.com/users/andompesta/followers", "following_url": "https://api.github.com/users/andompesta/following{/other_user}", "gists_url": "https://api.github.com/users/andompesta/gists{/gist_id}", "starred_url": "https://api.github.com/users/andompesta/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/andompesta/subscriptions", "organizations_url": "https://api.github.com/users/andompesta/orgs", "repos_url": "https://api.github.com/users/andompesta/repos", "events_url": "https://api.github.com/users/andompesta/events{/privacy}", "received_events_url": "https://api.github.com/users/andompesta/received_events", "type": "User", "site_admin": false }
[ { "id": 1845609017, "node_id": "MDU6TGFiZWwxODQ1NjA5MDE3", "url": "https://api.github.com/repos/huggingface/transformers/labels/seq2seq", "name": "seq2seq", "color": "fef2c0", "default": false, "description": "" } ]
closed
false
null
[]
[ "it is a hack to make `MarianMTModel`, which inherits from Bart work. For the Bart models that parameter does nothing, as you suggest. You can remove or ignore.", "Thanks got it" ]
1,591
1,591
1,591
CONTRIBUTOR
null
# ❓ Questions & Help ## Details <!-- Description of your issue --> @sshleifer I was wandering why you used a registered_buffer to define the ``final_logits_bias`` in the BartForConditionalGeneration. In my understanding a registered_buffer would have not gradient so if I fine-tune the a Bart model for generation these biases would not be updated (remain to 0). Should we use register_parameter or remove the bias completely or there is something I haven't understand ? <!-- You should first ask your question on SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on Stack Overflow**:
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4735/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4735/timeline
completed
null
null
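A toy illustration of the distinction the question above turns on: a tensor registered with `register_buffer` is saved with the module but receives no gradient, unlike an `nn.Parameter` (this is not Bart code, just the PyTorch mechanics):

```python
import torch
import torch.nn as nn

class Head(nn.Module):
    def __init__(self):
        super().__init__()
        self.register_buffer("bias_buffer", torch.zeros(3))  # untrained, stays at 0
        self.bias_param = nn.Parameter(torch.zeros(3))       # updated by the optimizer

head = Head()
print([name for name, _ in head.named_parameters()])  # only 'bias_param'
print(head.bias_buffer.requires_grad)                 # False
```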
https://api.github.com/repos/huggingface/transformers/issues/4734
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4734/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4734/comments
https://api.github.com/repos/huggingface/transformers/issues/4734/events
https://github.com/huggingface/transformers/issues/4734
629,705,895
MDU6SXNzdWU2Mjk3MDU4OTU=
4,734
TFTrainer: Checkpoints not getting saved in `output_dir` but in {cwd}/checkpoint
{ "login": "0dust", "id": 29033531, "node_id": "MDQ6VXNlcjI5MDMzNTMx", "avatar_url": "https://avatars.githubusercontent.com/u/29033531?v=4", "gravatar_id": "", "url": "https://api.github.com/users/0dust", "html_url": "https://github.com/0dust", "followers_url": "https://api.github.com/users/0dust/followers", "following_url": "https://api.github.com/users/0dust/following{/other_user}", "gists_url": "https://api.github.com/users/0dust/gists{/gist_id}", "starred_url": "https://api.github.com/users/0dust/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/0dust/subscriptions", "organizations_url": "https://api.github.com/users/0dust/orgs", "repos_url": "https://api.github.com/users/0dust/repos", "events_url": "https://api.github.com/users/0dust/events{/privacy}", "received_events_url": "https://api.github.com/users/0dust/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Pinging @jplu as it might be of interest.", "@0dust This is the intent behavior :)\r\n\r\nA solution would be to add a parameter to the arguments to select the checkpoint folder location you want", "@jplu Sorry if i am missing something but isn't 'output_dir' the folder to save the checkpoint?\r\nhttps://github.com/huggingface/transformers/blob/ed4df85572924871758ca32133b46116121c706f/src/transformers/training_args.py#L41-L43", "Not for the TF one, it is one of the few difference between the both trainers.", "Ohh, I see! Thanks for the clarification. Just a quick question before i close the issue, Is there any specific reason for this? Or it's just a matter of time before it starts to behave similar to pytorch trainer. ", "It is just matter of time :)" ]
1,591
1,591
1,591
NONE
null
I am using TFTrainer for the SQuAD task. Checkpoints are being created in cwd/checkpoint instead of output_dir. **Potential Cause:** https://github.com/huggingface/transformers/blob/9ca485734aea269961d63a040ff194365d151fd1/src/transformers/trainer_tf.py#L156 Instead of PREFIX_CHECKPOINT_DIR we need to have ```python os.path.join(self.args.output_dir, PREFIX_CHECKPOINT_DIR) ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4734/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4734/timeline
completed
null
null
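Until the fix described above lands, a hedged user-side workaround: since checkpoints go to `{cwd}/checkpoint`, change the working directory to the intended output location before training. `model`, `training_args`, and `train_dataset` are assumed to already exist:

```python
import os
from transformers import TFTrainer

os.makedirs(training_args.output_dir, exist_ok=True)
os.chdir(training_args.output_dir)  # checkpoints then land under output_dir/checkpoint

trainer = TFTrainer(model=model, args=training_args, train_dataset=train_dataset)
trainer.train()
```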
https://api.github.com/repos/huggingface/transformers/issues/4733
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4733/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4733/comments
https://api.github.com/repos/huggingface/transformers/issues/4733/events
https://github.com/huggingface/transformers/issues/4733
629,667,510
MDU6SXNzdWU2Mjk2Njc1MTA=
4,733
When I use TFBertEncoder on my laptop, I get an error. I cannot build a model. Here is a simple example.
{ "login": "shange1996", "id": 49185852, "node_id": "MDQ6VXNlcjQ5MTg1ODUy", "avatar_url": "https://avatars.githubusercontent.com/u/49185852?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shange1996", "html_url": "https://github.com/shange1996", "followers_url": "https://api.github.com/users/shange1996/followers", "following_url": "https://api.github.com/users/shange1996/following{/other_user}", "gists_url": "https://api.github.com/users/shange1996/gists{/gist_id}", "starred_url": "https://api.github.com/users/shange1996/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shange1996/subscriptions", "organizations_url": "https://api.github.com/users/shange1996/orgs", "repos_url": "https://api.github.com/users/shange1996/repos", "events_url": "https://api.github.com/users/shange1996/events{/privacy}", "received_events_url": "https://api.github.com/users/shange1996/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "I also meet this problem? Do you solve it? @shange1996 ", "> I also meet this problem? Do you solve it? @shange1996\r\n\r\nNo. The problem bothers me.", "Me too. When I change use keras not tf.keras, it has another problem...", "It bothers me, too.", "> It bothers me, too.\r\n\r\nNow I solve it. NOT import it, instead of copying the layer code in your main code.\r\nLike this:\r\n`class TFBertSelfAttention(tf.keras.layers.Layer):\r\n def __init__(self, config, **kwargs):\r\n super().__init__(**kwargs)\r\n if config.hidden_size % config.num_attention_heads != 0:\r\n raise ValueError(\r\n \"The hidden size (%d) is not a multiple of the number of attention \"\r\n \"heads (%d)\" % (config.hidden_size, config.num_attention_heads)\r\n )\r\n self.output_attentions = config.output_attentions\r\n\r\n self.num_attention_heads = config.num_attention_heads\r\n assert config.hidden_size % config.num_attention_heads == 0\r\n self.attention_head_size = int(config.hidden_size / config.num_attention_heads)\r\n self.all_head_size = self.num_attention_heads * self.attention_head_size\r\n\r\n self.query = tf.keras.layers.Dense(\r\n self.all_head_size, kernel_initializer=tf.keras.initializers.TruncatedNormal(config.initializer_range), name=\"query\"\r\n )\r\n self.key = tf.keras.layers.Dense(\r\n self.all_head_size, kernel_initializer=tf.keras.initializers.TruncatedNormal(config.initializer_range), name=\"key\"\r\n )\r\n self.value = tf.keras.layers.Dense(\r\n self.all_head_size, kernel_initializer=tf.keras.initializers.TruncatedNormal(config.initializer_range), name=\"value\"\r\n )\r\n\r\n self.dropout = tf.keras.layers.Dropout(config.attention_probs_dropout_prob)\r\n\r\n def transpose_for_scores(self, x, batch_size):\r\n x = tf.reshape(x, (batch_size, -1, self.num_attention_heads, self.attention_head_size))\r\n return tf.transpose(x, perm=[0, 2, 1, 3])\r\n\r\n def call(self, inputs, training=False):\r\n hidden_states, attention_mask, head_mask = inputs\r\n\r\n batch_size = tf.shape(hidden_states)[0]\r\n mixed_query_layer = self.query(hidden_states)\r\n mixed_key_layer = self.key(hidden_states)\r\n mixed_value_layer = self.value(hidden_states)\r\n\r\n query_layer = self.transpose_for_scores(mixed_query_layer, batch_size)\r\n key_layer = self.transpose_for_scores(mixed_key_layer, batch_size)\r\n value_layer = self.transpose_for_scores(mixed_value_layer, batch_size)\r\n\r\n # Take the dot product between \"query\" and \"key\" to get the raw attention scores.\r\n attention_scores = tf.matmul(\r\n query_layer, key_layer, transpose_b=True\r\n ) # (batch size, num_heads, seq_len_q, seq_len_k)\r\n dk = tf.cast(tf.shape(key_layer)[-1], tf.float32) # scale attention_scores\r\n attention_scores = attention_scores / tf.math.sqrt(dk)\r\n\r\n if attention_mask is not None:\r\n # Apply the attention mask is (precomputed for all layers in TFBertModel call() function)\r\n attention_scores = attention_scores + attention_mask\r\n\r\n # Normalize the attention scores to probabilities.\r\n attention_probs = tf.nn.softmax(attention_scores, axis=-1)\r\n\r\n # This is actually dropping out entire tokens to attend to, which might\r\n # seem a bit unusual, but is taken from the original Transformer paper.\r\n attention_probs = self.dropout(attention_probs, training=training)\r\n\r\n # Mask heads if we want to\r\n if head_mask is not None:\r\n attention_probs = attention_probs * head_mask\r\n\r\n context_layer = tf.matmul(attention_probs, value_layer)\r\n\r\n context_layer = tf.transpose(context_layer, perm=[0, 2, 1, 3])\r\n 
context_layer = tf.reshape(\r\n context_layer, (batch_size, -1, self.all_head_size)\r\n ) # (batch_size, seq_len_q, all_head_size)\r\n\r\n outputs = (context_layer, attention_probs) if self.output_attentions else (context_layer,)\r\n return outputs`", "@shange1996 just copy `TFBertSelfAttention` class?", "> @shange1996 just copy `TFBertSelfAttention` class?\r\n\r\nYes! Just copy, no import.", "Hey guys, \r\n\r\nI looked into the issue and I think the best solution is to use a keras layer wrapper as follows:\r\n\r\n```python\r\n#!/usr/bin/env python3\r\nimport tensorflow as tf\r\nimport numpy as np\r\nfrom transformers.modeling_tf_bert import BertConfig, TFBertEncoder\r\n\r\nprint(tf.__name__, tf.__version__)\r\n\r\nconfig = BertConfig()\r\nconfig.hidden_size = 128\r\nconfig.num_attention_heads = 4\r\n\r\n\r\nclass NewTFBertEncoder(tf.keras.layers.Layer):\r\n\r\n def __init__(self, config):\r\n super(NewTFBertEncoder, self).__init__()\r\n# self.inputs = tf.keras.layers.Input(input_shape) # not really needed here IMO.\r\n self.encoder = TFBertEncoder(config=config)\r\n self.dense = tf.keras.layers.Dense(config.hidden_size)\r\n\r\n def call(self, inputs):\r\n head_mask = [None for _ in range(config.num_hidden_layers)]\r\n output = self.encoder([inputs, None, head_mask])[0]\r\n dense_output = self.dense(output)\r\n\r\n return dense_output\r\n\r\n\r\nnew_bert_encoder = NewTFBertEncoder(config)\r\noutput = new_bert_encoder(np.ones((2, 91, 128))) # batch size , sequence length, hidden size\r\n```", "Two things:\r\n\r\n- If a customized layer is to be used with standard keras layers (as is the case here), as far as I know it is recommended to use a `keras.layers` wrapper class. Also see here: https://www.tensorflow.org/api_docs/python/tf/keras/layers/Lambda#variables_2\r\n- I don't think that the tf.keras.inputs class is needed here (or do you have a speciifc use case in mind?). Keras usually creates such an instance under the hood anyways, see: https://www.tensorflow.org/api_docs/python/tf/keras/layers/InputLayer\r\n\r\nAlso pinging @jplu to check if my code proposal is the right choice here.", "@shange1996 @etveritas - Let me know if the proposed solution works for you. If not feel free to re-open the issue :-) \r\nAlso linking this issue to: https://github.com/huggingface/transformers/issues/5046. ", "Both work for me , thanks!", "Hey ! This error appears when some variable are initialized elsewhere than in the Layer itself. The solution that @patrickvonplaten proposes is a good one!! Good job people :)" ]
1,591
1,592
1,592
NONE
null
# 🐛 Bug ## Information Model I am using TFBertEncoder: Language I am using the model on English: The problem arises when using: * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. When I use, TFBertEncoder, I get an error. Here is my code. ```py import tensorflow as tf import numpy as np from transformers.modeling_tf_bert import BertConfig, TFBertEncoder print(tf.__name__, tf.__version__) input_a = tf.keras.layers.Input(shape=(91, 128)) config = BertConfig() config.hidden_size = 128 config.num_attention_heads = 4 # config.output_attentions = False # config.output_hidden_states = False head_mask = [None for _ in range(config.num_hidden_layers)] encoder_output = TFBertEncoder(config=config)([input_a, None, head_mask])[0] print(encoder_output.shape) test_out = tf.keras.layers.Dense(128)(encoder_output) print(test_out.shape) ``` ## Expected behavior Here is the error: ``` (None, 91, 128) 2020-06-03 11:18:10.160647: W tensorflow/core/common_runtime/base_collective_executor.cc:217] BaseCollectiveExecutor::StartAbort Failed precondition: Error while reading resource variable _AnonymousVar189 from Container: localhost. This could mean that the variable was uninitialized. Not found: Resource localhost/_AnonymousVar189/class tensorflow::Var does not exist. [[{{node output_23/dense/BiasAdd/ReadVariableOp}}]] Traceback (most recent call last): File "D:/python/tx/TEST.py", line 16, in <module> a = tf.keras.layers.Dense(128)(encoder_output) File "D:\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\base_layer.py", line 720, in __call__ base_layer_utils.create_keras_history(inputs) File "D:\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\base_layer_utils.py", line 187, in create_keras_history _, created_layers = _create_keras_history_helper(tensors, set(), []) File "D:\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\base_layer_utils.py", line 249, in _create_keras_history_helper layer_inputs, processed_ops, created_layers) File "D:\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\base_layer_utils.py", line 249, in _create_keras_history_helper layer_inputs, processed_ops, created_layers) File "D:\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\base_layer_utils.py", line 249, in _create_keras_history_helper layer_inputs, processed_ops, created_layers) [Previous line repeated 5 more times] File "D:\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\base_layer_utils.py", line 247, in _create_keras_history_helper constants[i] = backend.function([], op_input)([]) File "D:\Anaconda3\lib\site-packages\tensorflow_core\python\keras\backend.py", line 3727, in __call__ outputs = self._graph_fn(*converted_inputs) File "D:\Anaconda3\lib\site-packages\tensorflow_core\python\eager\function.py", line 1551, in __call__ return self._call_impl(args, kwargs) File "D:\Anaconda3\lib\site-packages\tensorflow_core\python\eager\function.py", line 1591, in _call_impl return self._call_flat(args, self.captured_inputs, cancellation_manager) File "D:\Anaconda3\lib\site-packages\tensorflow_core\python\eager\function.py", line 1692, in _call_flat ctx, args, cancellation_manager=cancellation_manager)) File "D:\Anaconda3\lib\site-packages\tensorflow_core\python\eager\function.py", line 545, in call ctx=ctx) File "D:\Anaconda3\lib\site-packages\tensorflow_core\python\eager\execute.py", line 67, in quick_execute 
six.raise_from(core._status_to_exception(e.code, message), None) File "<string>", line 3, in raise_from tensorflow.python.framework.errors_impl.FailedPreconditionError: Error while reading resource variable _AnonymousVar189 from Container: localhost. This could mean that the variable was uninitialized. Not found: Resource localhost/_AnonymousVar189/class tensorflow::Var does not exist. [[node output_23/dense/BiasAdd/ReadVariableOp (defined at /python/tx/TEST.py:16) ]] [Op:__inference_keras_scratch_graph_5205] Function call stack: keras_scratch_graph ``` ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 2.3.0 (in conda list) - Platform: - Python version:3.7 - PyTorch version (GPU?): - Tensorflow version (GPU?):TF2.1.0(GPU) - Using GPU in script?: - Using distributed or parallel set-up in script?:No
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4733/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4733/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4732
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4732/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4732/comments
https://api.github.com/repos/huggingface/transformers/issues/4732/events
https://github.com/huggingface/transformers/pull/4732
629,627,404
MDExOlB1bGxSZXF1ZXN0NDI2OTE0MzA3
4,732
Adding notebooks for Fine Tuning [Community Notebook]
{ "login": "abhimishra91", "id": 27291199, "node_id": "MDQ6VXNlcjI3MjkxMTk5", "avatar_url": "https://avatars.githubusercontent.com/u/27291199?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abhimishra91", "html_url": "https://github.com/abhimishra91", "followers_url": "https://api.github.com/users/abhimishra91/followers", "following_url": "https://api.github.com/users/abhimishra91/following{/other_user}", "gists_url": "https://api.github.com/users/abhimishra91/gists{/gist_id}", "starred_url": "https://api.github.com/users/abhimishra91/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abhimishra91/subscriptions", "organizations_url": "https://api.github.com/users/abhimishra91/orgs", "repos_url": "https://api.github.com/users/abhimishra91/repos", "events_url": "https://api.github.com/users/abhimishra91/events{/privacy}", "received_events_url": "https://api.github.com/users/abhimishra91/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4732?src=pr&el=h1) Report\n> Merging [#4732](https://codecov.io/gh/huggingface/transformers/pull/4732?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9ca485734aea269961d63a040ff194365d151fd1&el=desc) will **increase** coverage by `1.42%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4732/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4732?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4732 +/- ##\n==========================================\n+ Coverage 75.64% 77.07% +1.42% \n==========================================\n Files 128 128 \n Lines 20996 20996 \n==========================================\n+ Hits 15883 16182 +299 \n+ Misses 5113 4814 -299 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4732?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/4732/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `72.02% <0.00%> (-14.15%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4732/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.34% <0.00%> (-1.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4732/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `89.91% <0.00%> (-0.25%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4732/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.79% <0.00%> (-0.17%)` | :arrow_down: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4732/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `86.94% <0.00%> (+0.18%)` | :arrow_up: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4732/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.62% <0.00%> (+0.23%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4732/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `74.27% <0.00%> (+0.41%)` | :arrow_up: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/4732/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.49% <0.00%> (+6.36%)` | :arrow_up: |\n| [src/transformers/configuration\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4732/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `100.00% <0.00%> (+25.00%)` | :arrow_up: |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4732/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `93.43% <0.00%> (+75.91%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4732?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4732?src=pr&el=footer). Last update [9ca4857...9d50901](https://codecov.io/gh/huggingface/transformers/pull/4732?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Hey @abhimishra91,\r\n\r\nThe notebooks are very clean and well-written! Thanks a lot! :-) Just did some renaming in the explanations." ]
1,591
1,591
1,591
CONTRIBUTOR
null
Hi @patrickvonplaten, Adding 3 documented notebooks for fine-tuning transformers on downstream NLP tasks with PyTorch: - Multi-class classification: Using DistilBert - Multi-label classification: Using Bert - Summarization: Using T5 - **Model Tracking with WandB** These notebooks are pulled from the git repo: https://github.com/abhimishra91/transformers-tutorials
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4732/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4732/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4732", "html_url": "https://github.com/huggingface/transformers/pull/4732", "diff_url": "https://github.com/huggingface/transformers/pull/4732.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4732.patch", "merged_at": 1591175247000 }
https://api.github.com/repos/huggingface/transformers/issues/4731
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4731/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4731/comments
https://api.github.com/repos/huggingface/transformers/issues/4731/events
https://github.com/huggingface/transformers/pull/4731
629,565,458
MDExOlB1bGxSZXF1ZXN0NDI2ODY4OTg5
4,731
[DO NOT MERGE] Tokenizers Shape Polymorphism - Introduce pad_to_next_multiple_of parameters
{ "login": "mfuntowicz", "id": 2241520, "node_id": "MDQ6VXNlcjIyNDE1MjA=", "avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mfuntowicz", "html_url": "https://github.com/mfuntowicz", "followers_url": "https://api.github.com/users/mfuntowicz/followers", "following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}", "gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}", "starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions", "organizations_url": "https://api.github.com/users/mfuntowicz/orgs", "repos_url": "https://api.github.com/users/mfuntowicz/repos", "events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}", "received_events_url": "https://api.github.com/users/mfuntowicz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "That's a really cool feature. Looking forward to it!", "Is the bucketization size not going to be too linear? Shouldn't we rather do `pad_to_next_power_of_two` or similar?", "@julien-c That will be linear yet, by using a `power_of` growth we might rapidly increase the number of padding tokens to add and then fall into the opposite situation where most of the computation will be wasted on padding tokens." ]
1,591
1,651
1,592
MEMBER
null
Needs a new release of `tokenizers` (cc @n1t0)
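A minimal sketch of the padding rule the PR title describes (the function name and placement are illustrative assumptions, not the actual `tokenizers` internals). Bucketing every sequence length up to the next multiple of a fixed size bounds the number of distinct input shapes while keeping growth linear, which is the trade-off discussed in the comments above:

```python
def pad_to_next_multiple_of(length: int, multiple: int) -> int:
    """Smallest multiple of `multiple` that is >= `length`.

    e.g. multiple=8: 13 -> 16, 16 -> 16, 17 -> 24. Linear bucketing wastes
    at most `multiple - 1` padding tokens per sequence, whereas
    power-of-two growth would pad a 65-token input all the way to 128.
    """
    if multiple <= 0:
        return length
    return ((length + multiple - 1) // multiple) * multiple
```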
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4731/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4731/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4731", "html_url": "https://github.com/huggingface/transformers/pull/4731", "diff_url": "https://github.com/huggingface/transformers/pull/4731.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4731.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/4730
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4730/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4730/comments
https://api.github.com/repos/huggingface/transformers/issues/4730/events
https://github.com/huggingface/transformers/pull/4730
629,525,332
MDExOlB1bGxSZXF1ZXN0NDI2ODM3NTUz
4,730
bert-small-cord19 model cards
{ "login": "davidmezzetti", "id": 561939, "node_id": "MDQ6VXNlcjU2MTkzOQ==", "avatar_url": "https://avatars.githubusercontent.com/u/561939?v=4", "gravatar_id": "", "url": "https://api.github.com/users/davidmezzetti", "html_url": "https://github.com/davidmezzetti", "followers_url": "https://api.github.com/users/davidmezzetti/followers", "following_url": "https://api.github.com/users/davidmezzetti/following{/other_user}", "gists_url": "https://api.github.com/users/davidmezzetti/gists{/gist_id}", "starred_url": "https://api.github.com/users/davidmezzetti/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/davidmezzetti/subscriptions", "organizations_url": "https://api.github.com/users/davidmezzetti/orgs", "repos_url": "https://api.github.com/users/davidmezzetti/repos", "events_url": "https://api.github.com/users/davidmezzetti/events{/privacy}", "received_events_url": "https://api.github.com/users/davidmezzetti/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4730?src=pr&el=h1) Report\n> Merging [#4730](https://codecov.io/gh/huggingface/transformers/pull/4730?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9ca485734aea269961d63a040ff194365d151fd1&el=desc) will **increase** coverage by `1.42%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4730/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4730?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4730 +/- ##\n==========================================\n+ Coverage 75.64% 77.07% +1.42% \n==========================================\n Files 128 128 \n Lines 20996 20996 \n==========================================\n+ Hits 15883 16182 +299 \n+ Misses 5113 4814 -299 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4730?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/4730/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `72.02% <0.00%> (-14.15%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4730/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.34% <0.00%> (-1.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4730/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `89.91% <0.00%> (-0.25%)` | :arrow_down: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4730/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `86.94% <0.00%> (+0.18%)` | :arrow_up: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4730/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.62% <0.00%> (+0.23%)` | :arrow_up: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/4730/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.49% <0.00%> (+6.36%)` | :arrow_up: |\n| [src/transformers/configuration\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4730/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `100.00% <0.00%> (+25.00%)` | :arrow_up: |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4730/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `93.43% <0.00%> (+75.91%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4730?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4730?src=pr&el=footer). Last update [9ca4857...c9d87f1](https://codecov.io/gh/huggingface/transformers/pull/4730?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,591
1,591
1,591
CONTRIBUTOR
null
Adds model cards for the bert-small-cord19 series of models.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4730/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4730/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4730", "html_url": "https://github.com/huggingface/transformers/pull/4730", "diff_url": "https://github.com/huggingface/transformers/pull/4730.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4730.patch", "merged_at": 1591170014000 }
https://api.github.com/repos/huggingface/transformers/issues/4729
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4729/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4729/comments
https://api.github.com/repos/huggingface/transformers/issues/4729/events
https://github.com/huggingface/transformers/issues/4729
629,523,673
MDU6SXNzdWU2Mjk1MjM2NzM=
4,729
[Feature request] Support batched conditional generation from GPT-2
{ "login": "sarahwie", "id": 8027676, "node_id": "MDQ6VXNlcjgwMjc2NzY=", "avatar_url": "https://avatars.githubusercontent.com/u/8027676?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sarahwie", "html_url": "https://github.com/sarahwie", "followers_url": "https://api.github.com/users/sarahwie/followers", "following_url": "https://api.github.com/users/sarahwie/following{/other_user}", "gists_url": "https://api.github.com/users/sarahwie/gists{/gist_id}", "starred_url": "https://api.github.com/users/sarahwie/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sarahwie/subscriptions", "organizations_url": "https://api.github.com/users/sarahwie/orgs", "repos_url": "https://api.github.com/users/sarahwie/repos", "events_url": "https://api.github.com/users/sarahwie/events{/privacy}", "received_events_url": "https://api.github.com/users/sarahwie/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "Also see: https://github.com/huggingface/transformers/issues/3021", "This is known to not work at the moment with `generate()`. I have to think a bit about the cleanest way to implement it :-) Code suggestions are very welcome! \r\n\r\n", "Very interested in this! Came here from #3021 (many hours after wondering why my batch generation was not working...)", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,591
1,602
1,602
NONE
null
# 🚀 Feature request Support batched conditional generation from GPT-2 ## Motivation Currently the [method](https://github.com/huggingface/transformers/blob/9ca485734aea269961d63a040ff194365d151fd1/src/transformers/modeling_utils.py#L802) to generate text from GPT-2 conditioned on an input sequence only supports either 1) a single input at a time, or 2) a batch of inputs where the conditioning input sequences are all the same length. It would be great (for efficiency) if this method could be updated to support a batch with conditioning inputs of varying length, by ignoring the padding in the input_ids. ## Your contribution Unlikely to have time to code this, but will submit a PR if I do.
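A common workaround for the requested behavior, sketched under the assumption that `generate()` itself does not yet handle ragged batches: left-pad the batch and pass an explicit attention mask so the model ignores the pad positions. The setup below is an illustration, not the library's supported path at the time of this issue; the batched tokenizer call assumes a v3-style API, and depending on the version, position ids may also need to be derived from the attention mask.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 ships without a pad token
tokenizer.padding_side = "left"            # pad left so generation continues the real prompt

model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompts = ["My dog is", "Conditioning inputs of varying length, for example this one"]
batch = tokenizer(prompts, return_tensors="pt", padding=True)

with torch.no_grad():
    output_ids = model.generate(
        input_ids=batch["input_ids"],
        attention_mask=batch["attention_mask"],  # zeros out the pad positions
        max_length=30,
    )
print(tokenizer.batch_decode(output_ids, skip_special_tokens=True))
```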
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4729/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4729/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4728
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4728/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4728/comments
https://api.github.com/repos/huggingface/transformers/issues/4728/events
https://github.com/huggingface/transformers/pull/4728
629,422,118
MDExOlB1bGxSZXF1ZXN0NDI2NzU4MTUz
4,728
Possible fix to make AMP work with DDP in the trainer
{ "login": "BramVanroy", "id": 2779410, "node_id": "MDQ6VXNlcjI3Nzk0MTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BramVanroy", "html_url": "https://github.com/BramVanroy", "followers_url": "https://api.github.com/users/BramVanroy/followers", "following_url": "https://api.github.com/users/BramVanroy/following{/other_user}", "gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}", "starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions", "organizations_url": "https://api.github.com/users/BramVanroy/orgs", "repos_url": "https://api.github.com/users/BramVanroy/repos", "events_url": "https://api.github.com/users/BramVanroy/events{/privacy}", "received_events_url": "https://api.github.com/users/BramVanroy/received_events", "type": "User", "site_admin": false }
[ { "id": 1834053813, "node_id": "MDU6TGFiZWwxODM0MDUzODEz", "url": "https://api.github.com/repos/huggingface/transformers/labels/PyTorch", "name": "PyTorch", "color": "a12bef", "default": false, "description": "Anything PyTorch" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4728?src=pr&el=h1) Report\n> Merging [#4728](https://codecov.io/gh/huggingface/transformers/pull/4728?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b231a413f5d58592bb4d98304c3d3b668c5d4a42&el=desc) will **decrease** coverage by `1.65%`.\n> The diff coverage is `33.33%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4728/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4728?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4728 +/- ##\n==========================================\n- Coverage 77.27% 75.62% -1.66% \n==========================================\n Files 128 128 \n Lines 20980 20982 +2 \n==========================================\n- Hits 16213 15868 -345 \n- Misses 4767 5114 +347 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4728?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/training\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/4728/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `75.82% <33.33%> (-0.59%)` | :arrow_down: |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4728/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `17.51% <0.00%> (-75.92%)` | :arrow_down: |\n| [src/transformers/configuration\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4728/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `75.00% <0.00%> (-25.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/4728/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `77.13% <0.00%> (-6.37%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4728/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.63% <0.00%> (-0.33%)` | :arrow_down: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4728/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `86.75% <0.00%> (-0.19%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4728/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.15% <0.00%> (-0.13%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4728/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `74.27% <0.00%> (+0.41%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4728/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.72% <0.00%> (+1.37%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4728?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4728?src=pr&el=footer). Last update [b231a41...8af2fd8](https://codecov.io/gh/huggingface/transformers/pull/4728?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "LGTM, thanks for the detailed write-up and research @BramVanroy \r\n\r\nI think this was here before but I removed it when refactoring, assuming that it was redundant – looks like it wasn't really:)\r\n\r\nAs for which examples do NOT use the new trainer, you should refer to the table at https://github.com/huggingface/transformers/tree/master/examples – we expect all of them to use Trainer/TFTrainer eventually.\r\n\r\nThank you!" ]
1,591
1,592
1,592
COLLABORATOR
null
closes https://github.com/huggingface/transformers/issues/4657 Using multiple GPUs in PyTorch (with DistributedDataParallel) drastically speeds up training. To get even more speed out of it, the example scripts often - if not always - allow the use of apex for automatic mixed precision. This is great. However, some issues can arise, particularly the infamous "illegal memory access" error, which seems to have an easy, one-line solution. Currently, we assume that by using things like `.to(args.device)` we solve any and all issues of where our data or model should go. However, the author of AMP @mcarilli seems to [suggest](https://github.com/NVIDIA/apex/issues/319#issuecomment-503372924) that it is recommended to always set the current process's default device, too, to ensure no further issues. This suggestion also [helped](https://github.com/huggingface/transformers/issues/4657#issuecomment-637703146) with the aforementioned issue, so it seems a good idea to implement it here as well. In fact, some examples such as HANS already do this. https://github.com/huggingface/transformers/blob/b231a413f5d58592bb4d98304c3d3b668c5d4a42/examples/adversarial/test_hans.py#L518 To avoid DRY issues, I suspect that the trainer_args file is the best place to do this _only once_, but other suggestions are welcome (this is different from what I suggested in the linked issue, though I think `trainer_args` is the better place). I am not sure which examples do not use trainer_args, but those would need to be checked and updated as well. If anyone can give a quick rundown of which examples do NOT use the new trainer, I can have a look quickly. Otherwise I'll have to go over the examples another time. **Side note**: I am not sure how and if this works with TPUs, so to be sure that this only involves CUDA devices, I first check whether the device is a CUDA device.
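For reference, a minimal sketch of the one-line fix being discussed (the helper name and the `-1` convention for non-distributed runs are illustrative):

```python
import torch

def setup_cuda_device(local_rank: int) -> torch.device:
    """Pin the current process to its GPU before any CUDA allocation.

    `local_rank` is the per-node process index passed by
    torch.distributed.launch; -1 means a non-distributed run. Calling
    torch.cuda.set_device early keeps apex/AMP from allocating state on
    GPU 0 from every process, which is what triggers the
    "illegal memory access" error.
    """
    if local_rank == -1 or not torch.cuda.is_available():
        return torch.device("cuda" if torch.cuda.is_available() else "cpu")
    torch.cuda.set_device(local_rank)  # the recommended extra line
    return torch.device("cuda", local_rank)
```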
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4728/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4728/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4728", "html_url": "https://github.com/huggingface/transformers/pull/4728", "diff_url": "https://github.com/huggingface/transformers/pull/4728.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4728.patch", "merged_at": 1592230226000 }
https://api.github.com/repos/huggingface/transformers/issues/4727
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4727/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4727/comments
https://api.github.com/repos/huggingface/transformers/issues/4727/events
https://github.com/huggingface/transformers/issues/4727
629,421,628
MDU6SXNzdWU2Mjk0MjE2Mjg=
4,727
Albert pretraining loss not decreasing
{ "login": "008karan", "id": 18630864, "node_id": "MDQ6VXNlcjE4NjMwODY0", "avatar_url": "https://avatars.githubusercontent.com/u/18630864?v=4", "gravatar_id": "", "url": "https://api.github.com/users/008karan", "html_url": "https://github.com/008karan", "followers_url": "https://api.github.com/users/008karan/followers", "following_url": "https://api.github.com/users/008karan/following{/other_user}", "gists_url": "https://api.github.com/users/008karan/gists{/gist_id}", "starred_url": "https://api.github.com/users/008karan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/008karan/subscriptions", "organizations_url": "https://api.github.com/users/008karan/orgs", "repos_url": "https://api.github.com/users/008karan/repos", "events_url": "https://api.github.com/users/008karan/events{/privacy}", "received_events_url": "https://api.github.com/users/008karan/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[ { "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false } ]
[ "Not certain, but looks like maybe nvidia apex was not installed correctly?\r\n\r\n\"was: ModuleNotFoundError(\"No module named 'amp_C'\",)\"", "thats warning. \r\nHave followed `pip install -v --no-cache-dir ./` for apex installation. \r\nChanging LR to 5e-5 reducing the loss.", "@008karan, setting the loss to 5e-5 led your model to convergence?", "@LysandreJik loss is decreasing as of now \r\n\r\n> Also, weird thing is while I am setting the number of epochs 3 but in training its showing 9\r\n\r\ncan you comment on this?", "I'll have a look.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,591
1,597
1,597
NONE
null
I am training Albert from scratch using run_language_modeling.py, training on 8 × V100 (p3dn8x) and launching with these parameters: ``` python transformers/examples/language-modeling/test.py --train_data_file x.txt --output_dir albert_model --model_type albert --mlm --config_name test --tokenizer_name test --do_train --line_by_line --learning_rate 0.00088 --num_train_epochs 3 --save_total_limit 50 --save_steps 5000 --per_gpu_train_batch_size 150 --seed 42 --overwrite_output_dir --max_steps 200000 --fp16 ``` ![image](https://user-images.githubusercontent.com/18630864/83558269-65041100-a530-11ea-965b-f24a76550a28.png) The loss is not decreasing. The plot above shows the training curves with and without warmup steps; the loss is stuck at `7.27` in both cases. Also, a weird thing: I set the number of epochs to 3, but training shows 9. ``` was: ModuleNotFoundError("No module named 'amp_C'",) 06/02/2020 14:45:14 - INFO - transformers.trainer - ***** Running training ***** 06/02/2020 14:45:14 - INFO - transformers.trainer - Num examples = 28236463 06/02/2020 14:45:14 - INFO - transformers.trainer - Num Epochs = 9 06/02/2020 14:45:14 - INFO - transformers.trainer - Instantaneous batch size per device = 150 06/02/2020 14:45:14 - INFO - transformers.trainer - Total train batch size (w. parallel, distributed & accumulation) = 1200 06/02/2020 14:45:14 - INFO - transformers.trainer - Gradient Accumulation steps = 1 06/02/2020 14:45:14 - INFO - transformers.trainer - Total optimization steps = 200000 ``` Can anyone suggest what could be causing this behavior?
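On the epoch count specifically: `--max_steps 200000` overrides `--num_train_epochs 3`, and the trainer derives the displayed epoch count from the step budget. A back-of-the-envelope check with the logged numbers (assuming the trainer rounds up) reproduces the 9:

```python
import math

num_examples = 28_236_463
total_batch_size = 1_200  # 150 per GPU x 8 GPUs, no gradient accumulation
max_steps = 200_000

steps_per_epoch = math.ceil(num_examples / total_batch_size)  # 23_531
epochs = math.ceil(max_steps / steps_per_epoch)               # 9
```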
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4727/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4727/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4726
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4726/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4726/comments
https://api.github.com/repos/huggingface/transformers/issues/4726/events
https://github.com/huggingface/transformers/pull/4726
629,357,322
MDExOlB1bGxSZXF1ZXN0NDI2NzA3Njg0
4,726
TFRobertaModelIntegrationTest requires tf
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4726?src=pr&el=h1) Report\n> Merging [#4726](https://codecov.io/gh/huggingface/transformers/pull/4726?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d976ef262e0b2c52363d201b2e14e5ecc42abbb3&el=desc) will **increase** coverage by `0.82%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4726/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4726?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4726 +/- ##\n==========================================\n+ Coverage 75.63% 76.46% +0.82% \n==========================================\n Files 128 128 \n Lines 20979 20979 \n==========================================\n+ Hits 15867 16041 +174 \n+ Misses 5112 4938 -174 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4726?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/4726/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `31.51% <0.00%> (-54.67%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4726/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.85% <0.00%> (-0.42%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4726/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `89.91% <0.00%> (-0.25%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4726/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.63% <0.00%> (-0.17%)` | :arrow_down: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4726/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `86.94% <0.00%> (+0.18%)` | :arrow_up: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/4726/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.49% <0.00%> (+6.36%)` | :arrow_up: |\n| [src/transformers/configuration\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4726/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `100.00% <0.00%> (+25.00%)` | :arrow_up: |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4726/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `93.43% <0.00%> (+75.91%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4726?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4726?src=pr&el=footer). Last update [d976ef2...49ce6ad](https://codecov.io/gh/huggingface/transformers/pull/4726?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,591
1,591
1,591
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4726/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4726/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4726", "html_url": "https://github.com/huggingface/transformers/pull/4726", "diff_url": "https://github.com/huggingface/transformers/pull/4726.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4726.patch", "merged_at": 1591117141000 }
https://api.github.com/repos/huggingface/transformers/issues/4725
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4725/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4725/comments
https://api.github.com/repos/huggingface/transformers/issues/4725/events
https://github.com/huggingface/transformers/issues/4725
629,342,164
MDU6SXNzdWU2MjkzNDIxNjQ=
4,725
Save & load sparse models from the models database
{ "login": "borisdayma", "id": 715491, "node_id": "MDQ6VXNlcjcxNTQ5MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/715491?v=4", "gravatar_id": "", "url": "https://api.github.com/users/borisdayma", "html_url": "https://github.com/borisdayma", "followers_url": "https://api.github.com/users/borisdayma/followers", "following_url": "https://api.github.com/users/borisdayma/following{/other_user}", "gists_url": "https://api.github.com/users/borisdayma/gists{/gist_id}", "starred_url": "https://api.github.com/users/borisdayma/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/borisdayma/subscriptions", "organizations_url": "https://api.github.com/users/borisdayma/orgs", "repos_url": "https://api.github.com/users/borisdayma/repos", "events_url": "https://api.github.com/users/borisdayma/events{/privacy}", "received_events_url": "https://api.github.com/users/borisdayma/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,591
1,596
1,596
CONTRIBUTOR
null
# 🚀 Feature request <!-- A clear and concise description of the feature proposal. Please provide a link to the paper and code in case they exist. --> Handle the saving and loading pipeline for sparse models such as PruneBERT. ## Motivation <!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too. --> New sparse models will be extremely useful if we can use them by downloading the compressed version with the `.from_pretrained` functions. ## Your contribution <!-- Is there any way that you could help, e.g. by submitting a PR? Make sure to read the CONTRIBUTING.MD readme: https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md --> An example is already part of the current repo, so I can try to create a PR later: https://github.com/huggingface/transformers/blob/master/examples/movement-pruning/Saving_PruneBERT.ipynb
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4725/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4725/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4724
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4724/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4724/comments
https://api.github.com/repos/huggingface/transformers/issues/4724/events
https://github.com/huggingface/transformers/pull/4724
629,244,524
MDExOlB1bGxSZXF1ZXN0NDI2NjE4NTQ3
4,724
Fix CI after killing archive maps
{ "login": "julien-c", "id": 326577, "node_id": "MDQ6VXNlcjMyNjU3Nw==", "avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4", "gravatar_id": "", "url": "https://api.github.com/users/julien-c", "html_url": "https://github.com/julien-c", "followers_url": "https://api.github.com/users/julien-c/followers", "following_url": "https://api.github.com/users/julien-c/following{/other_user}", "gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}", "starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/julien-c/subscriptions", "organizations_url": "https://api.github.com/users/julien-c/orgs", "repos_url": "https://api.github.com/users/julien-c/repos", "events_url": "https://api.github.com/users/julien-c/events{/privacy}", "received_events_url": "https://api.github.com/users/julien-c/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,591
1,591
1,591
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4724/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4724/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4724", "html_url": "https://github.com/huggingface/transformers/pull/4724", "diff_url": "https://github.com/huggingface/transformers/pull/4724.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4724.patch", "merged_at": 1591107669000 }
https://api.github.com/repos/huggingface/transformers/issues/4723
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4723/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4723/comments
https://api.github.com/repos/huggingface/transformers/issues/4723/events
https://github.com/huggingface/transformers/pull/4723
629,170,366
MDExOlB1bGxSZXF1ZXN0NDI2NTU3OTU2
4,723
never_split on slow tokenizers should not split
{ "login": "mfuntowicz", "id": 2241520, "node_id": "MDQ6VXNlcjIyNDE1MjA=", "avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mfuntowicz", "html_url": "https://github.com/mfuntowicz", "followers_url": "https://api.github.com/users/mfuntowicz/followers", "following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}", "gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}", "starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions", "organizations_url": "https://api.github.com/users/mfuntowicz/orgs", "repos_url": "https://api.github.com/users/mfuntowicz/repos", "events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}", "received_events_url": "https://api.github.com/users/mfuntowicz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4723?src=pr&el=h1) Report\n> Merging [#4723](https://codecov.io/gh/huggingface/transformers/pull/4723?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/76779363160a598f130433209a77f8a747351b61&el=desc) will **increase** coverage by `0.36%`.\n> The diff coverage is `80.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4723/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4723?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4723 +/- ##\n==========================================\n+ Coverage 77.38% 77.74% +0.36% \n==========================================\n Files 128 128 \n Lines 21071 21071 \n==========================================\n+ Hits 16305 16381 +76 \n+ Misses 4766 4690 -76 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4723?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4723/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.25% <80.00%> (-3.75%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4723/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.41% <0.00%> (-1.38%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4723/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.44% <0.00%> (-0.42%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4723/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.83% <0.00%> (+0.32%)` | :arrow_up: |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/4723/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `56.68% <0.00%> (+28.02%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4723?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4723?src=pr&el=footer). Last update [7677936...b0fd2b3](https://codecov.io/gh/huggingface/transformers/pull/4723?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,591
1,591
1,591
MEMBER
null
I'm actually not sure if it's the right behavior, but when using `do_basic_tokenize` on `BertTokenizer`, the parameter `never_split` is not used to determine whether a token should be sent to the wordpiece tokenizer. This PR checks, for each token returned by the `basic_tokenizer`, whether the token is in the `never_split` set before sending it to wordpiece. If the token is found in `never_split`, it is added as-is to the returned list of tokens. Updated `never_split: List` -> `never_split: Set`, as we only ever test for membership in the collection and never index into it. [Sets are roughly 10x faster than lists for membership operations](https://stackoverflow.com/a/17945009) (a minimal sketch of the check follows this record). **Before:** ```python tokenizer = BertTokenizer.from_pretrained( "bert-base-cased", use_fast=False, never_split=['lol'], do_basic_tokenize=True ) tokenizer.tokenize("lol") Out[4]: ['lo', '##l'] ``` **After:** ```python tokenizer = BertTokenizer.from_pretrained( "bert-base-cased", use_fast=False, never_split=['lol'], do_basic_tokenize=True ) tokenizer.tokenize("lol") Out[5]: ['lol'] ``` Related to #3518
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4723/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 2, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4723/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4723", "html_url": "https://github.com/huggingface/transformers/pull/4723", "diff_url": "https://github.com/huggingface/transformers/pull/4723.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4723.patch", "merged_at": 1591217309000 }
https://api.github.com/repos/huggingface/transformers/issues/4722
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4722/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4722/comments
https://api.github.com/repos/huggingface/transformers/issues/4722/events
https://github.com/huggingface/transformers/pull/4722
629,128,371
MDExOlB1bGxSZXF1ZXN0NDI2NTI1MjE3
4,722
Unify label args
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Like it!", "Added a tentative documentation for the kwargs, not sure if we want it or not.", "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4722?src=pr&el=h1) Report\n> Merging [#4722](https://codecov.io/gh/huggingface/transformers/pull/4722?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/47a551d17b6ed2eaf03301f049006d559fca5cf3&el=desc) will **increase** coverage by `0.05%`.\n> The diff coverage is `95.34%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4722/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4722?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4722 +/- ##\n==========================================\n+ Coverage 77.14% 77.19% +0.05% \n==========================================\n Files 128 128 \n Lines 21073 21130 +57 \n==========================================\n+ Hits 16256 16311 +55 \n- Misses 4817 4819 +2 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4722?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/4722/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `98.64% <ø> (ø)` | |\n| [src/transformers/modeling\\_encoder\\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/4722/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbmNvZGVyX2RlY29kZXIucHk=) | `27.27% <ø> (ø)` | |\n| [src/transformers/modeling\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/4722/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `87.94% <ø> (ø)` | |\n| [src/transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/4722/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `71.92% <57.14%> (-0.20%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/4722/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.69% <90.90%> (+0.16%)` | :arrow_up: |\n| [src/transformers/modeling\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/4722/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `77.67% <100.00%> (+0.46%)` | :arrow_up: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/4722/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `96.81% <100.00%> (+0.02%)` | :arrow_up: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4722/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `87.17% <100.00%> (+0.21%)` | :arrow_up: |\n| [src/transformers/modeling\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/4722/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `98.18% <100.00%> (+0.02%)` | :arrow_up: |\n| [src/transformers/modeling\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/4722/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbGVjdHJhLnB5) | `76.89% <100.00%> (+0.46%)` | :arrow_up: |\n| ... 
and [4 more](https://codecov.io/gh/huggingface/transformers/pull/4722/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4722?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4722?src=pr&el=footer). Last update [47a551d...68fddbd](https://codecov.io/gh/huggingface/transformers/pull/4722?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Made the same for all models since @julien-c liked it. A few comments as I was reading and deprecating.\r\n- I found a few more wrong docstrings (mentioning `lm_label` when the arg was called `labels`) so fixed them.\r\n- As @patrickvonplaten mentioned on #4711, `BertForMaskedLM` should be split in two (and add a `BertWithLMHead`) to remove the `lm_labels` argument. I made this a TODO to avoid this PR become too big.\r\n- The GPT2 and openai models also have a version with two labels (`GPT2DoubleHeadsModel` and `OpenAIDoubleHeadsModel`), I renamed `lm_labels` to `labels` there but there may still be a need for a second label. Can revert the change on those models if we want each labels arg to have a useful name.\r\n- In `LongformerModel`, the `label` argument is not used, should it be dropped?\r\n\r\nAlso, I note that quite a few docstrings have example that don't match the model they document (for instance an electra model was using Bert as an example, but there are a few instances)." ]
1,591
1,591
1,591
COLLABORATOR
null
Following up on #4711, this is a proposal to deprecate the model-specific label arguments (like `masked_lm_labels`, `lm_labels`, etc.) in favor of a single `labels` argument. I've only done one model for now to get feedback on the design; once we have something you like, I can do them all (or open separate PRs if you think that's best). A sketch of one possible deprecation pattern appears after this record.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4722/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4722/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4722", "html_url": "https://github.com/huggingface/transformers/pull/4722", "diff_url": "https://github.com/huggingface/transformers/pull/4722.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4722.patch", "merged_at": 1591191387000 }
https://api.github.com/repos/huggingface/transformers/issues/4721
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4721/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4721/comments
https://api.github.com/repos/huggingface/transformers/issues/4721/events
https://github.com/huggingface/transformers/pull/4721
629,125,433
MDExOlB1bGxSZXF1ZXN0NDI2NTIzMDAx
4,721
Faster bert basic tokenizer
{ "login": "GuillemGSubies", "id": 37592763, "node_id": "MDQ6VXNlcjM3NTkyNzYz", "avatar_url": "https://avatars.githubusercontent.com/u/37592763?v=4", "gravatar_id": "", "url": "https://api.github.com/users/GuillemGSubies", "html_url": "https://github.com/GuillemGSubies", "followers_url": "https://api.github.com/users/GuillemGSubies/followers", "following_url": "https://api.github.com/users/GuillemGSubies/following{/other_user}", "gists_url": "https://api.github.com/users/GuillemGSubies/gists{/gist_id}", "starred_url": "https://api.github.com/users/GuillemGSubies/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/GuillemGSubies/subscriptions", "organizations_url": "https://api.github.com/users/GuillemGSubies/orgs", "repos_url": "https://api.github.com/users/GuillemGSubies/repos", "events_url": "https://api.github.com/users/GuillemGSubies/events{/privacy}", "received_events_url": "https://api.github.com/users/GuillemGSubies/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4721?src=pr&el=h1) Report\n> Merging [#4721](https://codecov.io/gh/huggingface/transformers/pull/4721?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/47a551d17b6ed2eaf03301f049006d559fca5cf3&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `100.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4721/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4721?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4721 +/- ##\n=======================================\n Coverage 77.14% 77.14% \n=======================================\n Files 128 128 \n Lines 21073 21072 -1 \n=======================================\n Hits 16256 16256 \n+ Misses 4817 4816 -1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4721?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4721/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `94.97% <100.00%> (-0.03%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4721/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.83% <0.00%> (+0.16%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4721?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4721?src=pr&el=footer). Last update [47a551d...6f7ff85](https://codecov.io/gh/huggingface/transformers/pull/4721?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Rather than making them regular functions I'd make them into static methods since they're still very much related to the class. You just don't need `self`.", "> Rather than making them regular functions I'd make them into static methods since they're still very much related to the class. You just don't need `self`.\r\n\r\nDone", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,591
1,597
1,597
CONTRIBUTOR
null
In this PR I tried two things: * First, I replaced comparisons of the form `a == sth or a == otherthing` with `a in {sth, otherthing}`. It is faster and more readable (see the sketch after this record). * I noticed that some methods could actually be functions, because they did not use anything from the class. I am a bit new to this, so any feedback is welcome.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4721/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4721/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4721", "html_url": "https://github.com/huggingface/transformers/pull/4721", "diff_url": "https://github.com/huggingface/transformers/pull/4721.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4721.patch", "merged_at": null }