url
stringlengths 62
66
| repository_url
stringclasses 1
value | labels_url
stringlengths 76
80
| comments_url
stringlengths 71
75
| events_url
stringlengths 69
73
| html_url
stringlengths 50
56
| id
int64 377M
2.15B
| node_id
stringlengths 18
32
| number
int64 1
29.2k
| title
stringlengths 1
487
| user
dict | labels
list | state
stringclasses 2
values | locked
bool 2
classes | assignee
dict | assignees
list | comments
sequence | created_at
int64 1.54k
1.71k
| updated_at
int64 1.54k
1.71k
| closed_at
int64 1.54k
1.71k
β | author_association
stringclasses 4
values | active_lock_reason
stringclasses 2
values | body
stringlengths 0
234k
β | reactions
dict | timeline_url
stringlengths 71
75
| state_reason
stringclasses 3
values | draft
bool 2
classes | pull_request
dict |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/2712 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2712/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2712/comments | https://api.github.com/repos/huggingface/transformers/issues/2712/events | https://github.com/huggingface/transformers/issues/2712 | 558,705,269 | MDU6SXNzdWU1NTg3MDUyNjk= | 2,712 | a problem occur when I train Chinese distilgpt2 model | {
"login": "ScottishFold007",
"id": 36957508,
"node_id": "MDQ6VXNlcjM2OTU3NTA4",
"avatar_url": "https://avatars.githubusercontent.com/u/36957508?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ScottishFold007",
"html_url": "https://github.com/ScottishFold007",
"followers_url": "https://api.github.com/users/ScottishFold007/followers",
"following_url": "https://api.github.com/users/ScottishFold007/following{/other_user}",
"gists_url": "https://api.github.com/users/ScottishFold007/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ScottishFold007/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ScottishFold007/subscriptions",
"organizations_url": "https://api.github.com/users/ScottishFold007/orgs",
"repos_url": "https://api.github.com/users/ScottishFold007/repos",
"events_url": "https://api.github.com/users/ScottishFold007/events{/privacy}",
"received_events_url": "https://api.github.com/users/ScottishFold007/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,580 | 1,586 | 1,586 | CONTRIBUTOR | null | ### When I was training a new model from zero to one, the following questions appeared, please help me answer them, thank you very much!


C:\Users\gaochangkuan\Desktop\transformers-master\examples\distillation>python train.py --student_type gpt2 --student_config training_configs/distilgpt2.json --teacher_type gpt2 --teacher_name distilgpt2 --alpha_ce 5.0 --alpha_mlm 2.0 --alpha_cos 1.0 --alpha_clm 0.0 --mlm --freeze_pos_embs --data_file data/binarized_text.bert-base-chinese.pickle --token_counts data/token_counts.bert-base-chinese.pickle --dump_path model --force
02/02/2020 22:26:54 - INFO - transformers.file_utils - PID: 27864 - PyTorch version 1.4.0+cpu available.
02/02/2020 22:27:02 - INFO - utils - PID: 27864 - Experiment will be dumped and logged in model
02/02/2020 22:27:02 - INFO - utils - PID: 27864 - Param: Namespace(adam_epsilon=1e-06, alpha_ce=5.0, alpha_clm=0.0, alpha_cos=1.0, alpha_mlm=2.0, alpha_mse=0.0, batch_size=5, checkpoint_interval=4000, data_file='data/binarized_text.bert-base-chinese.pickle', dump_path='model', force=True, fp16=False, fp16_opt_level='O1', freeze_pos_embs=True, freeze_token_type_embds=False, gradient_accumulation_steps=50, group_by_size=True, initializer_range=0.02, is_master=True, learning_rate=0.0005, local_rank=0, log_interval=500, master_port=-1, max_grad_norm=5.0, mlm=True, mlm_mask_prop=0.15, mlm_smoothing=0.7, multi_gpu=False, n_epoch=3, n_gpu=0, restrict_ce_to_mask=False, seed=56, student_config='training_configs/distilgpt2.json', student_pretrained_weights=None, student_type='gpt2', teacher_name='distilgpt2', teacher_type='gpt2', temperature=2.0, token_counts='data/token_counts.bert-base-chinese.pickle', warmup_prop=0.05, weight_decay=0.0, word_keep=0.1, word_mask=0.8, word_rand=0.1)
Using cache found in C:\Users\gaochangkuan/.cache\torch\hub\huggingface_pytorch-pretrained-BERT_master
02/02/2020 22:27:12 - INFO - transformers.configuration_utils - PID: 27864 - loading configuration file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-chinese-config.json from cache at C:\Users\gaochangkuan\.cache\torch\transformers\8a3b1cfe5da58286e12a0f5d7d182b8d6eca88c08e26c332ee3817548cf7e60a.3767c74c8ed285531d04153fe84a0791672aff52f7249b27df341dbce09b8305
02/02/2020 22:27:12 - INFO - transformers.configuration_utils - PID: 27864 - Model config BertConfig {
"architectures": [
"BertForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"bos_token_id": 0,
"directionality": "bidi",
"do_sample": false,
"eos_token_ids": 0,
"finetuning_task": null,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1"
},
"initializer_range": 0.02,
"intermediate_size": 3072,
"is_decoder": false,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1
},
"layer_norm_eps": 1e-12,
"length_penalty": 1.0,
"max_length": 20,
"max_position_embeddings": 512,
"model_type": "bert",
"num_attention_heads": 12,
"num_beams": 1,
"num_hidden_layers": 12,
"num_labels": 2,
"num_return_sequences": 1,
"output_attentions": false,
"output_hidden_states": false,
"output_past": true,
"pad_token_id": 0,
"pooler_fc_size": 768,
"pooler_num_attention_heads": 12,
"pooler_num_fc_layers": 3,
"pooler_size_per_head": 128,
"pooler_type": "first_token_transform",
"pruned_heads": {},
"repetition_penalty": 1.0,
"temperature": 1.0,
"top_k": 50,
"top_p": 1.0,
"torchscript": false,
"type_vocab_size": 2,
"use_bfloat16": false,
"vocab_size": 21128
}
02/02/2020 22:27:22 - INFO - transformers.tokenization_utils - PID: 27864 - loading file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-chinese-vocab.txt from cache at C:\Users\gaochangkuan\.cache\torch\transformers\8a0c070123c1f794c42a29c6904beb7c1b8715741e235bee04aca2c7636fc83f.9b42061518a39ca00b8b52059fd2bede8daa613f8a8671500e518a8c29de8c00
02/02/2020 22:27:22 - INFO - utils - PID: 27864 - Special tokens {'unk_token': 100, 'sep_token': 102, 'pad_token': 0, 'cls_token': 101, 'mask_token': 103}
02/02/2020 22:27:22 - INFO - utils - PID: 27864 - Loading data from data/binarized_text.bert-base-chinese.pickle
02/02/2020 22:27:22 - INFO - utils - PID: 27864 - Loading token counts from data/token_counts.bert-base-chinese.pickle (already pre-computed)
02/02/2020 22:27:22 - INFO - utils - PID: 27864 - Splitting 124 too long sequences.
02/02/2020 22:27:22 - INFO - utils - PID: 27864 - Remove 2840 too short (<=11 tokens) sequences.
02/02/2020 22:27:22 - INFO - utils - PID: 27864 - Remove 0 sequences with a high level of unknown tokens (50%).
02/02/2020 22:27:22 - INFO - utils - PID: 27864 - 30807 sequences
02/02/2020 22:27:22 - INFO - utils - PID: 27864 - Data loader created.
02/02/2020 22:27:22 - INFO - utils - PID: 27864 - Loading student config from training_configs/distilgpt2.json
02/02/2020 22:27:22 - INFO - transformers.configuration_utils - PID: 27864 - loading configuration file training_configs/distilgpt2.json
02/02/2020 22:27:22 - INFO - transformers.configuration_utils - PID: 27864 - Model config GPT2Config {
"architectures": null,
"attn_pdrop": 0.1,
"bos_token_id": 0,
"do_sample": false,
"embd_pdrop": 0.1,
"eos_token_ids": 0,
"finetuning_task": null,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1"
},
"initializer_range": 0.02,
"is_decoder": false,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1
},
"layer_norm_epsilon": 1e-05,
"length_penalty": 1.0,
"max_length": 20,
"model_type": "gpt2",
"n_ctx": 1024,
"n_embd": 768,
"n_head": 12,
"n_layer": 6,
"n_positions": 1024,
"num_beams": 1,
"num_labels": 2,
"num_return_sequences": 1,
"output_attentions": false,
"output_hidden_states": false,
"output_past": true,
"pad_token_id": 0,
"pruned_heads": {},
"repetition_penalty": 1.0,
"resid_pdrop": 0.1,
"summary_activation": null,
"summary_first_dropout": 0.1,
"summary_proj_to_labels": true,
"summary_type": "cls_index",
"summary_use_proj": true,
"temperature": 1.0,
"top_k": 50,
"top_p": 1.0,
"torchscript": false,
"use_bfloat16": false,
"vocab_size": 21128
}
02/02/2020 22:27:24 - INFO - utils - PID: 27864 - Student loaded.
02/02/2020 22:27:24 - INFO - transformers.configuration_utils - PID: 27864 - loading configuration file E:\GPT2_Text_generation\GPT2-Chinese-master\GPT2Model\config.json
02/02/2020 22:27:24 - INFO - transformers.configuration_utils - PID: 27864 - Model config GPT2Config {
"architectures": null,
"attn_pdrop": 0.1,
"bos_token_id": 0,
"do_sample": false,
"embd_pdrop": 0.1,
"eos_token_ids": 0,
"finetuning_task": null,
"id2label": {
"0": "LABEL_0"
},
"initializer_range": 0.02,
"is_decoder": false,
"label2id": {
"LABEL_0": 0
},
"layer_norm_epsilon": 1e-05,
"length_penalty": 1.0,
"max_length": 20,
"model_type": "gpt2",
"n_ctx": 1024,
"n_embd": 768,
"n_head": 12,
"n_layer": 10,
"n_positions": 1024,
"num_beams": 1,
"num_labels": 1,
"num_return_sequences": 1,
"output_attentions": false,
"output_hidden_states": true,
"output_past": true,
"pad_token_id": 0,
"pruned_heads": {},
"repetition_penalty": 1.0,
"resid_pdrop": 0.1,
"summary_activation": null,
"summary_first_dropout": 0.1,
"summary_proj_to_labels": true,
"summary_type": "cls_index",
"summary_use_proj": true,
"temperature": 1.0,
"top_k": 50,
"top_p": 1.0,
"torchscript": false,
"use_bfloat16": false,
"vocab_size": 21128
}
02/02/2020 22:27:24 - INFO - transformers.modeling_utils - PID: 27864 - loading weights file E:\GPT2_Text_generation\GPT2-Chinese-master\GPT2Model\pytorch_model.bin
02/02/2020 22:27:27 - INFO - utils - PID: 27864 - Teacher loaded from distilgpt2.
21128 21128
768 768
1024 1024
02/02/2020 22:27:27 - INFO - utils - PID: 27864 - Initializing Distiller
02/02/2020 22:27:27 - INFO - utils - PID: 27864 - Using [0, 3, 7, 11, 15, 19, 23, 27, 31, 35, 39, 43, 47, 51, 55, 59, 63, 67, 71, 75, 79, 83, 87, 91, 95, 99, 103, 107, 111, 115, 119, 123, 127, 131, 135, 139, 143, 147, 151, 155, 159, 163, 167, 171, 175, 179, 183, 187, 191, 195, 199, 203, 207, 211, 215, 219, 223, 227, 231, 235, 239, 243, 247, 251, 255, 259, 263, 267, 271, 275, 279, 283, 287, 291, 295, 299, 303, 307, 311, 315, 319, 323, 327, 331, 335, 339, 343, 347, 351, 355, 359, 363, 367, 371, 375, 379, 383, 387, 391, 395, 399, 403, 407, 411, 415, 419, 423, 427, 431, 435, 439, 443, 447, 451, 455, 459, 463, 467, 471, 475, 479, 483, 487, 491, 495, 499, 503, 507, 511, inf] as bins for aspect lengths quantization
02/02/2020 22:27:27 - INFO - utils - PID: 27864 - Count of instances per bin: [1267 1907 1866 1702 1584 1483 1380 1237 1205 1101 1047 974 882 854
758 672 598 583 593 519 492 453 444 414 371 338 352 305
290 298 260 250 260 214 210 200 189 190 149 153 121 125
124 106 116 105 87 100 78 103 73 70 74 78 65 70
52 43 46 51 48 38 49 28 32 41 34 27 29 31
28 39 28 23 25 26 17 25 23 12 20 17 17 20
8 12 15 16 8 11 11 10 13 11 3 9 8 5
9 5 6 6 6 4 10 4 6 3 3 3 2 4
3 6 4 3 7 2 6 9 1 2 6 2 3 134]
02/02/2020 22:27:27 - INFO - utils - PID: 27864 - Using MLM loss for LM step.
02/02/2020 22:27:27 - INFO - utils - PID: 27864 - --- Initializing model optimizer
02/02/2020 22:27:27 - INFO - utils - PID: 27864 - ------ Number of trainable parameters (student): 58755072
02/02/2020 22:27:27 - INFO - utils - PID: 27864 - ------ Number of parameters (student): 59541504
02/02/2020 22:27:27 - INFO - utils - PID: 27864 - --- Initializing Tensorboard
02/02/2020 22:27:27 - INFO - utils - PID: 27864 - Starting training
02/02/2020 22:27:27 - INFO - utils - PID: 27864 - --- Starting epoch 0/2
-Iter: 0%| | 0/6162 [00:00<?, ?it/s]Traceback (most recent call last):
File "train.py", line 329, in <module>
main()
File "train.py", line 324, in main
distiller.train()
File "C:\Users\gaochangkuan\Desktop\transformers-master\examples\distillation\distiller.py", line 355, in train
self.step(input_ids=token_ids, attention_mask=attn_mask, lm_labels=lm_labels)
File "C:\Users\gaochangkuan\Desktop\transformers-master\examples\distillation\distiller.py", line 385, in step
input_ids=input_ids, attention_mask=attention_mask
**ValueError: too many values to unpack (expected 2)** | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2712/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2712/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2711 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2711/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2711/comments | https://api.github.com/repos/huggingface/transformers/issues/2711/events | https://github.com/huggingface/transformers/issues/2711 | 558,698,789 | MDU6SXNzdWU1NTg2OTg3ODk= | 2,711 | TypeError: apply_gradients() missing 1 required positional argument: 'clip_norm' | {
"login": "dimitreOliveira",
"id": 16668746,
"node_id": "MDQ6VXNlcjE2NjY4NzQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/16668746?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dimitreOliveira",
"html_url": "https://github.com/dimitreOliveira",
"followers_url": "https://api.github.com/users/dimitreOliveira/followers",
"following_url": "https://api.github.com/users/dimitreOliveira/following{/other_user}",
"gists_url": "https://api.github.com/users/dimitreOliveira/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dimitreOliveira/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dimitreOliveira/subscriptions",
"organizations_url": "https://api.github.com/users/dimitreOliveira/orgs",
"repos_url": "https://api.github.com/users/dimitreOliveira/repos",
"events_url": "https://api.github.com/users/dimitreOliveira/events{/privacy}",
"received_events_url": "https://api.github.com/users/dimitreOliveira/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, indeed this optimizer `AdamWeightDecay` requires an additional argument for truncating the gradient norm.\r\n\r\nIt essentially feeds the `clip_norm` argument (which is the second required argument in `apply_gradients`) to [tf.clip_by_global_norm](https://www.tensorflow.org/api_docs/python/tf/clip_by_global_norm).\r\n\r\nYou can see a usage example in our [run_tf_ner.py example](https://github.com/huggingface/transformers/blob/master/examples/run_tf_ner.py#L203)",
"I wasn't able to implement this fix on my problem but I think this answer closes the issue, thank!",
"This problem occurs if you don't specify `clip_norm` when calling `apply_gradients`.\r\nIf using a custom training loop, the fix is easy :)\r\nIf you are using `keras.model.fit`, you can do it the following way:\r\n\r\n```\r\nfrom functools import partialmethod\r\n\r\nAdamWeightDecay.apply_gradients = partialmethod(AdamWeightDecay.apply_gradients, clip_norm=1.0)\r\noptimizer = create_optimizer(p.learning_rate, num_train_steps=total_steps, num_warmup_steps=warmup_steps)\r\n```"
] | 1,580 | 1,588 | 1,581 | CONTRIBUTOR | null | # π Bug
## Information
Model I am using (TFBertModel):
Language I am using the model on (English):
Also I'm using `tensorflow==2.1.0` and `transformers==2.3.0`
The problem arises when using:
* [x] the official example scripts: (give details below)
I'm trying to use the `optimization_tf.create_optimizer` from the source code.
The tasks I am working on is:
* [x] my own task or dataset: (give details below)
Just a text classification task.
## To reproduce
Steps to reproduce the behavior:
1. Try to use the model the regular way
2. When running the model with "optimization_tf.create_optimizer"
## Environment info
```
/opt/conda/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_eager.py in _process_single_batch(model, inputs, targets, output_loss_metrics, sample_weights, training)
271 loss_scale_optimizer.LossScaleOptimizer):
272 grads = model.optimizer.get_unscaled_gradients(grads)
--> 273 model.optimizer.apply_gradients(zip(grads, trainable_weights))
274 else:
275 logging.warning('The list of trainable weights is empty. Make sure that'
TypeError: apply_gradients() missing 1 required positional argument: 'clip_norm'
```
## How I am able to run
On the class `AdamWeightDecay` and the method `apply_gradients` I just call the supper function like this:
```
def apply_gradients(self, grads_and_vars, name=None):
return super().apply_gradients(grads_and_vars)
```
but as you can see I'm not using the `clip_norm` as the source example uses.
Is there a way to use the original source function as described in the source code? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2711/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2711/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2710 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2710/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2710/comments | https://api.github.com/repos/huggingface/transformers/issues/2710/events | https://github.com/huggingface/transformers/pull/2710 | 558,667,128 | MDExOlB1bGxSZXF1ZXN0MzY5OTg5ODQ3 | 2,710 | Removed unused fields in DistilBert TransformerBlock | {
"login": "guillaume-be",
"id": 27071604,
"node_id": "MDQ6VXNlcjI3MDcxNjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/27071604?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/guillaume-be",
"html_url": "https://github.com/guillaume-be",
"followers_url": "https://api.github.com/users/guillaume-be/followers",
"following_url": "https://api.github.com/users/guillaume-be/following{/other_user}",
"gists_url": "https://api.github.com/users/guillaume-be/gists{/gist_id}",
"starred_url": "https://api.github.com/users/guillaume-be/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/guillaume-be/subscriptions",
"organizations_url": "https://api.github.com/users/guillaume-be/orgs",
"repos_url": "https://api.github.com/users/guillaume-be/repos",
"events_url": "https://api.github.com/users/guillaume-be/events{/privacy}",
"received_events_url": "https://api.github.com/users/guillaume-be/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2710?src=pr&el=h1) Report\n> Merging [#2710](https://codecov.io/gh/huggingface/transformers/pull/2710?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/2ba147ecffa28e5a4f96eebd09dcd642117dedae?src=pr&el=desc) will **decrease** coverage by `0.27%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2710?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2710 +/- ##\n==========================================\n- Coverage 74.09% 73.81% -0.28% \n==========================================\n Files 93 93 \n Lines 15248 15243 -5 \n==========================================\n- Hits 11298 11252 -46 \n- Misses 3950 3991 +41\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2710?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/modeling\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/2710/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `95.79% <ΓΈ> (-0.07%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/2710/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `52.94% <0%> (-21.57%)` | :arrow_down: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/2710/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `68.79% <0%> (-3.33%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2710/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `84.87% <0%> (-0.82%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2710/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.3% <0%> (-0.52%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2710?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2710?src=pr&el=footer). Last update [2ba147e...d40db22](https://codecov.io/gh/huggingface/transformers/pull/2710?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,580 | 1,582 | 1,582 | CONTRIBUTOR | null | A few fields in the TransformerBlock are unused - this small PR cleans it up.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2710/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2710/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2710",
"html_url": "https://github.com/huggingface/transformers/pull/2710",
"diff_url": "https://github.com/huggingface/transformers/pull/2710.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2710.patch",
"merged_at": 1582232902000
} |
https://api.github.com/repos/huggingface/transformers/issues/2709 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2709/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2709/comments | https://api.github.com/repos/huggingface/transformers/issues/2709/events | https://github.com/huggingface/transformers/issues/2709 | 558,655,284 | MDU6SXNzdWU1NTg2NTUyODQ= | 2,709 | DistributedDataParallel for multi-gpu single-node runs in run_lm_finetuning.py | {
"login": "Genius1237",
"id": 15867363,
"node_id": "MDQ6VXNlcjE1ODY3MzYz",
"avatar_url": "https://avatars.githubusercontent.com/u/15867363?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Genius1237",
"html_url": "https://github.com/Genius1237",
"followers_url": "https://api.github.com/users/Genius1237/followers",
"following_url": "https://api.github.com/users/Genius1237/following{/other_user}",
"gists_url": "https://api.github.com/users/Genius1237/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Genius1237/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Genius1237/subscriptions",
"organizations_url": "https://api.github.com/users/Genius1237/orgs",
"repos_url": "https://api.github.com/users/Genius1237/repos",
"events_url": "https://api.github.com/users/Genius1237/events{/privacy}",
"received_events_url": "https://api.github.com/users/Genius1237/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"As far as I can see, the script fully supports DDP:\r\n\r\nhttps://github.com/huggingface/transformers/blob/2ba147ecffa28e5a4f96eebd09dcd642117dedae/examples/run_lm_finetuning.py#L282-L286\r\n\r\nI haven't run the script myself, but looking at the source this should work with the [torch launch](https://pytorch.org/docs/stable/distributed.html#launch-utility) utility. Your command would then look like this when using a single node with four GPUs.\r\n\r\n```bash\r\npython -m torch.distributed.launch --nproc_per_node 4 run_lm_finetuning.py [arguments]\r\n```\r\n",
"Ah ok, I wasn't aware that it had to be launched this way. I was looking at the code and thought it DDP would happen only when the process was launched across multiple nodes.\r\n\r\nThanks for the help @BramVanroy "
] | 1,580 | 1,580 | 1,580 | CONTRIBUTOR | null | # π Feature request
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
Modify `run_lm_finetuning.py` with DDP for multi-gpu single-node jobs.
## Motivation
<!-- Please outline the motivation for the proposal. Is your feature request
related to a problem? e.g., I'm always frustrated when [...]. If this is related
to another GitHub issue, please link here too. -->
In it's current state, `run_lm_finetuning.py` does not run in DDP for multi-gpu single-node training jobs. This results in all but the first GPU having very low utilization (as low as 50%, when the first one is in the high 80%) due to the way simple DP works. Once implemented, the load would be more evenly balanced across all the GPUs.
## Your contribution
<!-- Is there any way that you could help, e.g. by submitting a PR?
Make sure to read the CONTRIBUTING.MD readme:
https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
I can help implementing this feature, but would need guidance on what should/shouldn't be modified to get this working properly. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2709/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2709/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2708 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2708/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2708/comments | https://api.github.com/repos/huggingface/transformers/issues/2708/events | https://github.com/huggingface/transformers/issues/2708 | 558,610,431 | MDU6SXNzdWU1NTg2MTA0MzE= | 2,708 | Can't pickle local object using the finetuning example. | {
"login": "Normand-1024",
"id": 17085216,
"node_id": "MDQ6VXNlcjE3MDg1MjE2",
"avatar_url": "https://avatars.githubusercontent.com/u/17085216?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Normand-1024",
"html_url": "https://github.com/Normand-1024",
"followers_url": "https://api.github.com/users/Normand-1024/followers",
"following_url": "https://api.github.com/users/Normand-1024/following{/other_user}",
"gists_url": "https://api.github.com/users/Normand-1024/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Normand-1024/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Normand-1024/subscriptions",
"organizations_url": "https://api.github.com/users/Normand-1024/orgs",
"repos_url": "https://api.github.com/users/Normand-1024/repos",
"events_url": "https://api.github.com/users/Normand-1024/events{/privacy}",
"received_events_url": "https://api.github.com/users/Normand-1024/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Do you mind specifying which versions of everything you're using, as detailed in the [bug report issue template](https://github.com/huggingface/transformers/issues/new/choose)?",
"Hi @Normand-1024 \r\n\r\nwere you able to fix this error?\r\nas i am getting the same error while trying to run glue task (QQP) but works fine when i run MRPC.",
"Hi,\r\n\r\nI was able to get rid of this error by upgrading the torch version.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Still having this issue with transformers `2.11.0`.",
"@lucadiliello did you maange to fix it?",
"Not yet. A solution would be to use `dill` instead of `pickle`... but I'm not sure how to do it.",
"Getting same error,\r\nNot sure how to fix this error.",
"same error, with all newest version",
"I solved by reimplementing all the schedulers without lambda functions. [here](https://github.com/iKernels/transformers-lightning) I published many schedulers.",
"Same error with all newest version too.\r\n\r\n\r\n",
"UP. \r\nIs it a package version related issue?",
"Having the same issue: \r\n`Can't pickle local object 'get_linear_schedule_with_warmup.<locals>.lr_lambda'`",
"For running the example scripts passing `--no_multi_process` solved it for me.\r\n\r\nI haven't looked into the huggingface code yet but I could imagine that [this](https://stackoverflow.com/questions/52265120/python-multiprocessing-pool-attributeerror) is the bug here. I think it only shows up when `spawn` instead of `fork` is used to create new processes, which is why the developers might have missed it.",
"I set the `gpus=1`, and it works. ",
"Well, this seems that it is a local object that can not be forked, you may define it at each forked process. This may work well. However, somebody should fix it."
] | 1,580 | 1,647 | 1,587 | NONE | null | I was testing out the finetuning example from the repo:
`python run_lm_finetuning.py --train_data_file="finetune-output/KantText.txt" --output_dir="finetune-output/hugkant" --model_type=gpt2 --model_name_or_path=gpt2 --do_train --block_size=128`
While saving the checkpoint, it gives the following error:
```
Traceback (most recent call last):
File "run_lm_finetuning.py", line 790, in <module>
main()
File "run_lm_finetuning.py", line 740, in main
global_step, tr_loss = train(args, train_dataset, model, tokenizer)
File "run_lm_finetuning.py", line 398, in train
torch.save(scheduler.state_dict(), os.path.join(output_dir, "scheduler.pt"))
File "D:\Software\Python\lib\site-packages\torch\serialization.py", line 209, in save
return _with_file_like(f, "wb", lambda f: _save(obj, f, pickle_module, pickle_protocol))
File "D:\Software\Python\lib\site-packages\torch\serialization.py", line 134, in _with_file_like
return body(f)
File "D:\Software\Python\lib\site-packages\torch\serialization.py", line 209, in <lambda>
return _with_file_like(f, "wb", lambda f: _save(obj, f, pickle_module, pickle_protocol))
File "D:\Software\Python\lib\site-packages\torch\serialization.py", line 282, in _save
pickler.dump(obj)
AttributeError: Can't pickle local object 'get_linear_schedule_with_warmup.<locals>.lr_lambda'
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2708/reactions",
"total_count": 12,
"+1": 12,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2708/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2707 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2707/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2707/comments | https://api.github.com/repos/huggingface/transformers/issues/2707/events | https://github.com/huggingface/transformers/pull/2707 | 558,559,779 | MDExOlB1bGxSZXF1ZXN0MzY5OTEyMjM4 | 2,707 | Fix typo in examples/utils_ner.py | {
"login": "falcaopetri",
"id": 8387736,
"node_id": "MDQ6VXNlcjgzODc3MzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8387736?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/falcaopetri",
"html_url": "https://github.com/falcaopetri",
"followers_url": "https://api.github.com/users/falcaopetri/followers",
"following_url": "https://api.github.com/users/falcaopetri/following{/other_user}",
"gists_url": "https://api.github.com/users/falcaopetri/gists{/gist_id}",
"starred_url": "https://api.github.com/users/falcaopetri/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/falcaopetri/subscriptions",
"organizations_url": "https://api.github.com/users/falcaopetri/orgs",
"repos_url": "https://api.github.com/users/falcaopetri/repos",
"events_url": "https://api.github.com/users/falcaopetri/events{/privacy}",
"received_events_url": "https://api.github.com/users/falcaopetri/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2707?src=pr&el=h1) Report\n> Merging [#2707](https://codecov.io/gh/huggingface/transformers/pull/2707?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ddb6f9476b58ed9bf4433622ca9aa49932929bc0?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2707?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2707 +/- ##\n=======================================\n Coverage 74.25% 74.25% \n=======================================\n Files 92 92 \n Lines 15216 15216 \n=======================================\n Hits 11298 11298 \n Misses 3918 3918\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2707?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2707?src=pr&el=footer). Last update [ddb6f94...dd19c80](https://codecov.io/gh/huggingface/transformers/pull/2707?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Good catch, thanks"
] | 1,580 | 1,580 | 1,580 | CONTRIBUTOR | null | `"%s-%d".format()` -> `"{}-{}".format()` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2707/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2707/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2707",
"html_url": "https://github.com/huggingface/transformers/pull/2707",
"diff_url": "https://github.com/huggingface/transformers/pull/2707.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2707.patch",
"merged_at": 1580573458000
} |
https://api.github.com/repos/huggingface/transformers/issues/2706 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2706/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2706/comments | https://api.github.com/repos/huggingface/transformers/issues/2706/events | https://github.com/huggingface/transformers/issues/2706 | 558,555,100 | MDU6SXNzdWU1NTg1NTUxMDA= | 2,706 | Load from tf2.0 checkpoint fail | {
"login": "stevewyl",
"id": 12755003,
"node_id": "MDQ6VXNlcjEyNzU1MDAz",
"avatar_url": "https://avatars.githubusercontent.com/u/12755003?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevewyl",
"html_url": "https://github.com/stevewyl",
"followers_url": "https://api.github.com/users/stevewyl/followers",
"following_url": "https://api.github.com/users/stevewyl/following{/other_user}",
"gists_url": "https://api.github.com/users/stevewyl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevewyl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevewyl/subscriptions",
"organizations_url": "https://api.github.com/users/stevewyl/orgs",
"repos_url": "https://api.github.com/users/stevewyl/repos",
"events_url": "https://api.github.com/users/stevewyl/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevewyl/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi, in order to convert an official checkpoint to a checkpoint readable by `transformers`, you need to use the script `convert_bert_original_tf_checkpoint_to_pytorch`. You can then load it in a `BertModel` (PyTorch) or a `TFBertModel` (TensorFlow), by specifying the argument `from_pt=True` in your `from_pretrained` method.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,580 | 1,586 | 1,586 | NONE | null | # π Bug
## Information
Model I am using (Bert, XLNet ...): Bert
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Download tf2.0 checkpoint from https://storage.googleapis.com/cloud-tpu-checkpoints/bert/keras_bert/uncased_L-12_H-768_A-12.tar.gz
2. unpack the model tar.gz to `bert_models` folder
3. start an iPython console and type following codes:
```python
import tensorflow as tf
from transformers import TFBertModel, BertConfig
config = BertConfig.from_json_file("./bert_models/uncased_L-12_H-768_A-12/bert_config.json")
model = TFBertModel.from_pretrained("./bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt.index", config=config)
```
4. I check the original code from tf2.0 and found they didn't implement model.load_weights when by_name is True. Error is following:
NotImplementedError: Weights may only be loaded based on topology into Models when loading TensorFlow-formatted weights (got by_name=True to load_weights)
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
## Environment
* OS: CentOS Linux release 7.4.1708 (Core)
* Python version: 3.7.6
* PyTorch version: 1.3.1
* `transformers` version (or branch):
* Using GPU ? Yes
* Distributed or parallel setup ? No
* Any other relevant information:
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2706/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2706/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2705 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2705/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2705/comments | https://api.github.com/repos/huggingface/transformers/issues/2705/events | https://github.com/huggingface/transformers/issues/2705 | 558,519,030 | MDU6SXNzdWU1NTg1MTkwMzA= | 2,705 | What is the input for TFBertForSequenceClassification? | {
"login": "sainimohit23",
"id": 26195811,
"node_id": "MDQ6VXNlcjI2MTk1ODEx",
"avatar_url": "https://avatars.githubusercontent.com/u/26195811?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sainimohit23",
"html_url": "https://github.com/sainimohit23",
"followers_url": "https://api.github.com/users/sainimohit23/followers",
"following_url": "https://api.github.com/users/sainimohit23/following{/other_user}",
"gists_url": "https://api.github.com/users/sainimohit23/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sainimohit23/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sainimohit23/subscriptions",
"organizations_url": "https://api.github.com/users/sainimohit23/orgs",
"repos_url": "https://api.github.com/users/sainimohit23/repos",
"events_url": "https://api.github.com/users/sainimohit23/events{/privacy}",
"received_events_url": "https://api.github.com/users/sainimohit23/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Have a look at an example, for instance [`run_tf_glue.py`](https://github.com/huggingface/transformers/blob/master/examples/run_tf_glue.py). To better understand all the arguments, I advise you to read the [documentation](https://huggingface.co/transformers/model_doc/bert.html#bertmodel). You'll find that token_type_ids are\r\n\r\n> Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token\r\n\r\nSo they're only practically useful if your input contains two sequences (for instance if you wish to model some relationship between sentence A and sentence B). In your case, it's probably not needed.",
"Hi @BramVanroy as you said I tried to run the code from run_tf_glue.py. Yesterday it was working fine on google colab. But today when I tried to rerun the script. I am getting following error:\r\n\r\n```\r\nImportError Traceback (most recent call last)\r\n<ipython-input-7-63fb7d040ab0> in <module>()\r\n 4 import tensorflow_datasets\r\n 5 \r\n----> 6 from transformers import (\r\n 7 BertConfig,\r\n 8 BertForSequenceClassification,\r\n\r\nImportError: cannot import name 'TFBertForSequenceClassification'\r\n```",
"Looks like there was some issue in colab session. So, closing this.",
"@sainimohit23 Getting similar issue in local Jupyter notebook. \r\n\"AttributeError: module 'transformers' has no attribute 'TFBertForSequenceClassification' \" \r\nLooks like there is some changes in transformers package.\r\n\r\nlet me know if this is fixed..\r\n",
"> @sainimohit23 Getting similar issue in local Jupyter notebook.\r\n> \"AttributeError: module 'transformers' has no attribute 'TFBertForSequenceClassification' \"\r\n> Looks like there is some changes in transformers package.\r\n> \r\n> let me know if this is fixed..\r\n\r\nAre you using the latest version of transformers? Try updating, because it is right there in the source code:\r\n\r\nhttps://github.com/huggingface/transformers/blob/5c3d441ee1dc9150ccaf1075eb0168bbfe28c7f9/src/transformers/modeling_tf_bert.py#L875",
"@BramVanroy Using latest version of transformers. Double checked. \r\nLet me know if there is any other issue.\r\n\r\nPlease find below details useful. \r\n```\r\n`AttributeError Traceback (most recent call last)\r\n<ipython-input-6-5c0ab52ed729> in <module>\r\n----> 1 model = BertForSequenceClassification.from_pretrained('sentiment_model/',from_tf=True) # re-load\r\n 2 tokenizer = BertTokenizer.from_pretrained('sentiment_model/')\r\n\r\n~\\AppData\\Roaming\\Python\\Python37\\site-packages\\transformers\\modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)\r\n 485 from transformers import load_tf2_checkpoint_in_pytorch_model\r\n 486 \r\n--> 487 model = load_tf2_checkpoint_in_pytorch_model(model, resolved_archive_file, allow_missing_keys=True)\r\n 488 except ImportError:\r\n 489 logger.error(\r\n\r\n~\\AppData\\Roaming\\Python\\Python37\\site-packages\\transformers\\modeling_tf_pytorch_utils.py in load_tf2_checkpoint_in_pytorch_model(pt_model, tf_checkpoint_path, tf_inputs, allow_missing_keys)\r\n 223 # Instantiate and load the associated TF 2.0 model\r\n 224 tf_model_class_name = \"TF\" + pt_model.__class__.__name__ # Add \"TF\" at the beggining\r\n--> 225 tf_model_class = getattr(transformers, tf_model_class_name)\r\n 226 tf_model = tf_model_class(pt_model.config)\r\n 227 \r\n\r\nAttributeError: module 'transformers' has no attribute 'TFBertForSequenceClassification'`\r\n```",
"I went through the source code, and this should work _unless_ Tensorflow is not installed in your environment. In such a case, the Tensorflow models are not imported in __init__. Make sure that Tensorflow is installed.\r\n\r\nhttps://github.com/huggingface/transformers/blob/0dbddba6d2c5b2c6fc08866358c1994a00d6a1ff/src/transformers/__init__.py#L287-L313",
"Hi @BramVanroy, Thanks for the help there was a miss match of tensorflow version, but it looks like the issue is something different. \r\n`RuntimeError: storage has wrong size: expected -273778883 got 768`\r\n\r\nEither fine tuned model is corrupted or other issue. .\r\n\r\nThanks",
"Can you post the full trace?",
"@BramVanroy Please find the below details useful. \r\nLet me know what can be the issue. \r\n```\r\nRuntimeError Traceback (most recent call last)\r\n<ipython-input-14-d609d3be6585> in <module>\r\n 2 # model = BertForSequenceClassification.from_pretrained('sentiment_model/')\r\n 3 \r\n----> 4 model = BertForSequenceClassification.from_pretrained(\"sentiment_model/\", num_labels=2)\r\n 5 tokenizer = BertTokenizer.from_pretrained('sentiment_model/')\r\n\r\n~\\AppData\\Roaming\\Python\\Python37\\site-packages\\pytorch_pretrained_bert\\modeling.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs)\r\n 601 if state_dict is None and not from_tf:\r\n 602 weights_path = os.path.join(serialization_dir, WEIGHTS_NAME)\r\n--> 603 state_dict = torch.load(weights_path, map_location='cpu')\r\n 604 if tempdir:\r\n 605 # Clean up temp dir\r\n\r\n~\\AppData\\Roaming\\Python\\Python37\\site-packages\\torch\\serialization.py in load(f, map_location, pickle_module, **pickle_load_args)\r\n 384 f = f.open('rb')\r\n 385 try:\r\n--> 386 return _load(f, map_location, pickle_module, **pickle_load_args)\r\n 387 finally:\r\n 388 if new_fd:\r\n\r\n~\\AppData\\Roaming\\Python\\Python37\\site-packages\\torch\\serialization.py in _load(f, map_location, pickle_module, **pickle_load_args)\r\n 578 for key in deserialized_storage_keys:\r\n 579 assert key in deserialized_objects\r\n--> 580 deserialized_objects[key]._set_from_file(f, offset, f_should_read_directly)\r\n 581 if offset is not None:\r\n 582 offset = f.tell()\r\n\r\nRuntimeError: storage has wrong size: expected -273778883 got 768\r\n```\r\n\r\n",
"@vijender412 i found this [comment](https://github.com/pytorch/pytorch/issues/12042#issuecomment-426466826) useful ",
"I don't use Tensorflow, but the documentation suggests that you should load your model like this:\r\n\r\nhttps://github.com/huggingface/transformers/blob/20fc18fbda3669c2f4a3510e0705b2acd54bff07/src/transformers/modeling_utils.py#L366-L368",
"@ArashHosseini gone through that but was not able to link my code.\r\n\r\n@BramVanroy \r\nThe fine tuned model was saved using'\r\n```\r\nmodel.save_pretrained('./sentiment_model/')\r\ntokenizer.save_pretrained('./sentiment_model/')\r\n```\r\nAnd files created were (config.json,pytorch_model.bin,special_tokens_map.json,tokenizer_config.json,vocab.txt) So no checkpoint were created wrt to tensorflow. \r\n\r\nNow as per documentation the loading should be \r\n```\r\nmodel = BertForSequenceClassification.from_pretrained(\"sentiment_model/\", num_labels=2)\r\ntokenizer = BertTokenizer.from_pretrained('sentiment_model/')\r\n```\r\nThe tokenizer is getting loaded but getting issues while loading model. \r\n\"RuntimeError: storage has wrong size: expected -273778883 got 768\"\r\n",
"Then why did you say in your original comment that you had a Tensorflow mismatch? \r\n\r\nI am not sure why this happens. Please open your own topic, and provide all necessary information from the template.",
"@BramVanroy Earlier I was getting this issue\r\n`AttributeError: module 'transformers' has no attribute 'TFBertForSequenceClassification'`\r\nwhich got resolved by changing tensorflow version to 2.0. \r\n**For current issue l will create a new issue after tracing out my code from scratch.**",
"> @ArashHosseini gone through that but was not able to link my code.\r\n> \r\n> @BramVanroy\r\n> The fine tuned model was saved using'\r\n> \r\n> ```\r\n> model.save_pretrained('./sentiment_model/')\r\n> tokenizer.save_pretrained('./sentiment_model/')\r\n> ```\r\n> \r\n> And files created were (config.json,pytorch_model.bin,special_tokens_map.json,tokenizer_config.json,vocab.txt) So no checkpoint were created wrt to tensorflow.\r\n> \r\n> Now as per documentation the loading should be\r\n> \r\n> ```\r\n> model = BertForSequenceClassification.from_pretrained(\"sentiment_model/\", num_labels=2)\r\n> tokenizer = BertTokenizer.from_pretrained('sentiment_model/')\r\n> ```\r\n> \r\n> The tokenizer is getting loaded but getting issues while loading model.\r\n> \"RuntimeError: storage has wrong size: expected -273778883 got 768\"\r\n\r\nHi, I meet the same issue(can't load state_dict after saving it), Have you solve it?"
] | 1,580 | 1,584 | 1,580 | NONE | null | # β Questions & Help
What is the input for TFBertForSequenceClassification?
## Details
I have a simple multiclass text data on which I want to train the BERT model.
From docs I have found the input format of data:
```a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])```
In my understanding:
`input_ids`- tokenized sentences, generated from BERT tokenizer.
`attention_mask`- As name suggests it is attention mask. I should use it to mask out padding tokens. Please correct me if I am wrong.
Now what is `token_type_ids'? is it necessary?
When I tried to print output_shape of the model? I got:
`AttributeError: The layer has never been called and thus has no defined output shape.`
So, let's say my dataset has 5 classes. Does this model expect one-hot encoded vector of shape [BATCH_SIZE, CLASSES] for .fit() method?
Also if I don't use .from_pretrained() method, will it load an untrained model? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2705/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2705/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2704 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2704/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2704/comments | https://api.github.com/repos/huggingface/transformers/issues/2704/events | https://github.com/huggingface/transformers/issues/2704 | 558,474,768 | MDU6SXNzdWU1NTg0NzQ3Njg= | 2,704 | How to make transformers examples use GPU? | {
"login": "abhijith-athreya",
"id": 387274,
"node_id": "MDQ6VXNlcjM4NzI3NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/387274?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhijith-athreya",
"html_url": "https://github.com/abhijith-athreya",
"followers_url": "https://api.github.com/users/abhijith-athreya/followers",
"following_url": "https://api.github.com/users/abhijith-athreya/following{/other_user}",
"gists_url": "https://api.github.com/users/abhijith-athreya/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abhijith-athreya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhijith-athreya/subscriptions",
"organizations_url": "https://api.github.com/users/abhijith-athreya/orgs",
"repos_url": "https://api.github.com/users/abhijith-athreya/repos",
"events_url": "https://api.github.com/users/abhijith-athreya/events{/privacy}",
"received_events_url": "https://api.github.com/users/abhijith-athreya/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"GPU should be used by default and can be disabled with the `no_cuda` flag. If your GPU is not being used, that means that PyTorch can't access your CUDA installation. \r\n\r\nWhat is the output of running this in your Python interpreter?\r\n\r\n```python\r\nimport torch\r\ntorch.cuda.is_available()\r\n```",
"Thanks for the response. The output is True. Looks like it is using the GPU. But the utilization never crosses 10%. ",
"And how is your CPU usage? Which GPU are you using? Which settings are you using? (Batch size, seq len...)",
"CPU Usage also is less than 10%. I'm using a Ryzen 3700X with Nvidia 2080 ti. I did not change any default settings of the batch size (4) and sequence length. ",
"@abhijith-athreya What was the issue? I am facing the same issue. I am encoding the sentences using bert model but it's quite slow and not using GPU too.\r\n",
"You need to post some sample code @monk1337, also https://discuss.huggingface.co will be more suited",
"@julien-c \r\n\r\nIt's working now.\r\n\r\nfrom transformers import BertTokenizer, BertModel, BertForMaskedLM\r\ndef assign_GPU(Tokenizer_output):\r\n \r\n tokens_tensor = Tokenizer_output['input_ids'].to('cuda:0')\r\n token_type_ids = Tokenizer_output['token_type_ids'].to('cuda:0')\r\n attention_mask = Tokenizer_output['attention_mask'].to('cuda:0')\r\n \r\n output = {'input_ids' : tokens_tensor, \r\n 'token_type_ids' : token_type_ids, \r\n 'attention_mask' : attention_mask}\r\n \r\n return output\r\n\r\n\r\n\r\n```\r\nsentence = 'Hello World!'\r\ntokenizer = BertTokenizer.from_pretrained('bert-large-uncased')\r\nmodel = BertModel.from_pretrained('bert-large-uncased')\r\n\r\ninputs = assign_GPU(tokenizer(sentence, return_tensors=\"pt\"))\r\nmodel = model.to('cuda:0')\r\noutputs = model(**inputs)\r\noutputs\r\n```",
"> @julien-c\r\n> \r\n> It's working now.\r\n> \r\n> from transformers import BertTokenizer, BertModel, BertForMaskedLM\r\n> def assign_GPU(Tokenizer_output):\r\n> \r\n> ```\r\n> tokens_tensor = Tokenizer_output['input_ids'].to('cuda:0')\r\n> token_type_ids = Tokenizer_output['token_type_ids'].to('cuda:0')\r\n> attention_mask = Tokenizer_output['attention_mask'].to('cuda:0')\r\n> \r\n> output = {'input_ids' : tokens_tensor, \r\n> 'token_type_ids' : token_type_ids, \r\n> 'attention_mask' : attention_mask}\r\n> \r\n> return output\r\n> ```\r\n> \r\n> ```\r\n> sentence = 'Hello World!'\r\n> tokenizer = BertTokenizer.from_pretrained('bert-large-uncased')\r\n> model = BertModel.from_pretrained('bert-large-uncased')\r\n> \r\n> inputs = assign_GPU(tokenizer(sentence, return_tensors=\"pt\"))\r\n> model = model.to('cuda:0')\r\n> outputs = model(**inputs)\r\n> outputs\r\n> ```\r\n\r\nHey, I just want to complement here. The current version of transformers does support the call to `to()` for the `BatchEncoding` returned by the tokenizer, making it much more cleaner:\r\n\r\n```python\r\n> device = \"cuda:0\" if torch.cuda.is_available() else \"cpu\"\r\n> sentence = 'Hello World!'\r\n> tokenizer = BertTokenizer.from_pretrained('bert-large-uncased')\r\n> model = BertModel.from_pretrained('bert-large-uncased')\r\n\r\n> inputs = tokenizer(sentence, return_tensors=\"pt\").to(device)\r\n> model = model.to(device)\r\n> outputs = model(**inputs)\r\n```",
"wanted to add that in the new version of transformers, the Pipeline instance can also be run on GPU using as in the following example:\r\n```python\r\npipeline = pipeline(TASK, \r\n model=MODEL_PATH,\r\n device=1, # to utilize GPU cuda:1\r\n device=0, # to utilize GPU cuda:0\r\n device=-1) # default value which utilize CPU\r\n```",
"> wanted to add that in the new version of transformers, the Pipeline instance can also be run on GPU using as in the following example:\r\n> \r\n> ```python\r\n> pipeline = pipeline(TASK, \r\n> model=MODEL_PATH,\r\n> device=1, # to utilize GPU cuda:1\r\n> device=0, # to utilize GPU cuda:0\r\n> device=-1) # default value which utilize CPU\r\n> ```\r\n\r\nAnd about work with multiple GPUs?"
] | 1,580 | 1,659 | 1,580 | NONE | null | # β Questions & Help
I'm training the run_lm_finetuning.py with wiki-raw dataset. The training seems to work fine, but it is not using my GPU. Is there any flag which I should set to enable GPU usage?
## Details
I'm training the run_lm_finetuning.py with wiki-raw dataset. The training seems to work fine, but it is not using my GPU. Is there any flag which I should set to enable GPU usage?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2704/reactions",
"total_count": 8,
"+1": 8,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2704/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2703 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2703/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2703/comments | https://api.github.com/repos/huggingface/transformers/issues/2703/events | https://github.com/huggingface/transformers/issues/2703 | 558,468,127 | MDU6SXNzdWU1NTg0NjgxMjc= | 2,703 | run_lm_finetuning.py on bert-base-uncased with wikitext-2-raw does not work | {
"login": "abhijith-athreya",
"id": 387274,
"node_id": "MDQ6VXNlcjM4NzI3NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/387274?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhijith-athreya",
"html_url": "https://github.com/abhijith-athreya",
"followers_url": "https://api.github.com/users/abhijith-athreya/followers",
"following_url": "https://api.github.com/users/abhijith-athreya/following{/other_user}",
"gists_url": "https://api.github.com/users/abhijith-athreya/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abhijith-athreya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhijith-athreya/subscriptions",
"organizations_url": "https://api.github.com/users/abhijith-athreya/orgs",
"repos_url": "https://api.github.com/users/abhijith-athreya/repos",
"events_url": "https://api.github.com/users/abhijith-athreya/events{/privacy}",
"received_events_url": "https://api.github.com/users/abhijith-athreya/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, did you manage to fix your issue?",
"Hi,\r\nYes, I took the latest build, and it worked without any changes. "
] | 1,580 | 1,580 | 1,580 | NONE | null | # π Bug
## Running run_lm_finetuning.py on bert-base-uncased with wikitext-2-raw does not work.
Model I am using (Bert, XLNet ...): Bert - bert-base-uncased
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [*] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [*] an official GLUE/SQUaD task: train language model on wikitext
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Installed Transformers from the source (git pull and then pip install). Downloaded Wikitext-2 raw dataset.
2. Ran this command ""python run_lm_finetuning.py --output_dir=output --model_type=bert --model_name_or_path=bert-base-uncased --do_train --train_data_file=E:\\Code\\data\\wikitext-2-raw\\wiki.train.raw --do_eval --eval_data_file=E:\\Code\\data\\wikitext-2-raw\\wiki.test.raw --mlm""
3. This fails in train() method. I haven't touched the code. Stacktraces below:
python run_lm_finetuning.py --output_dir=output --model_type=bert --model_name_or_path=bert-base-uncased --do_train --train_data_file=E:\\Code\\data\\wikitext-2-raw\\wiki.train.raw --do_eval --eval_data_file=E:\\Code\\data\\wikitext-2-raw\\wiki.test.raw --mlm
2020-01-31 21:51:38.831236: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll
01/31/2020 21:51:40 - WARNING - __main__ - Process rank: -1, device: cuda, n_gpu: 1, distributed training: False, 16-bits training: False
01/31/2020 21:51:40 - INFO - transformers.file_utils - https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-config.json not found in cache or force_download set to True, downloading to C:\Users\athre\AppData\Local\Temp\tmp91_kkef0
01/31/2020 21:51:40 - INFO - transformers.file_utils - copying C:\Users\athre\AppData\Local\Temp\tmp91_kkef0 to cache at C:\Users\athre\.cache\torch\transformers\4dad0251492946e18ac39290fcfe91b89d370fee250efe9521476438fe8ca185.8f56353af4a709bf5ff0fbc915d8f5b42bfff892cbb6ac98c3c45f481a03c685
01/31/2020 21:51:40 - INFO - transformers.file_utils - creating metadata file for C:\Users\athre\.cache\torch\transformers\4dad0251492946e18ac39290fcfe91b89d370fee250efe9521476438fe8ca185.8f56353af4a709bf5ff0fbc915d8f5b42bfff892cbb6ac98c3c45f481a03c685
01/31/2020 21:51:40 - INFO - transformers.file_utils - removing temp file C:\Users\athre\AppData\Local\Temp\tmp91_kkef0
01/31/2020 21:51:40 - INFO - transformers.configuration_utils - loading configuration file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-config.json from cache at C:\Users\athre\.cache\torch\transformers\4dad0251492946e18ac39290fcfe91b89d370fee250efe9521476438fe8ca185.8f56353af4a709bf5ff0fbc915d8f5b42bfff892cbb6ac98c3c45f481a03c685
01/31/2020 21:51:40 - INFO - transformers.configuration_utils - Model config {
"architectures": [
"BertForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"finetuning_task": null,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1"
},
"initializer_range": 0.02,
"intermediate_size": 3072,
"is_decoder": false,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1
},
"layer_norm_eps": 1e-12,
"max_position_embeddings": 512,
"num_attention_heads": 12,
"num_hidden_layers": 12,
"num_labels": 2,
"output_attentions": false,
"output_hidden_states": false,
"output_past": true,
"pruned_heads": {},
"torchscript": false,
"type_vocab_size": 2,
"use_bfloat16": false,
"vocab_size": 30522
}
01/31/2020 21:51:40 - INFO - transformers.file_utils - https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-vocab.txt not found in cache or force_download set to True, downloading to C:\Users\athre\AppData\Local\Temp\tmpx8hth4qr
01/31/2020 21:51:41 - INFO - transformers.file_utils - copying C:\Users\athre\AppData\Local\Temp\tmpx8hth4qr to cache at C:\Users\athre\.cache\torch\transformers\26bc1ad6c0ac742e9b52263248f6d0f00068293b33709fae12320c0e35ccfbbb.542ce4285a40d23a559526243235df47c5f75c197f04f37d1a0c124c32c9a084
01/31/2020 21:51:41 - INFO - transformers.file_utils - creating metadata file for C:\Users\athre\.cache\torch\transformers\26bc1ad6c0ac742e9b52263248f6d0f00068293b33709fae12320c0e35ccfbbb.542ce4285a40d23a559526243235df47c5f75c197f04f37d1a0c124c32c9a084
01/31/2020 21:51:41 - INFO - transformers.file_utils - removing temp file C:\Users\athre\AppData\Local\Temp\tmpx8hth4qr
01/31/2020 21:51:41 - INFO - transformers.tokenization_utils - loading file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-vocab.txt from cache at C:\Users\athre\.cache\torch\transformers\26bc1ad6c0ac742e9b52263248f6d0f00068293b33709fae12320c0e35ccfbbb.542ce4285a40d23a559526243235df47c5f75c197f04f37d1a0c124c32c9a084
01/31/2020 21:51:41 - INFO - transformers.file_utils - https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-pytorch_model.bin not found in cache or force_download set to True, downloading to C:\Users\athre\AppData\Local\Temp\tmpy8kf8hkd
01/31/2020 21:54:14 - INFO - transformers.file_utils - copying C:\Users\athre\AppData\Local\Temp\tmpy8kf8hkd to cache at C:\Users\athre\.cache\torch\transformers\aa1ef1aede4482d0dbcd4d52baad8ae300e60902e88fcb0bebdec09afd232066.36ca03ab34a1a5d5fa7bc3d03d55c4fa650fed07220e2eeebc06ce58d0e9a157
01/31/2020 21:54:14 - INFO - transformers.file_utils - creating metadata file for C:\Users\athre\.cache\torch\transformers\aa1ef1aede4482d0dbcd4d52baad8ae300e60902e88fcb0bebdec09afd232066.36ca03ab34a1a5d5fa7bc3d03d55c4fa650fed07220e2eeebc06ce58d0e9a157
01/31/2020 21:54:14 - INFO - transformers.file_utils - removing temp file C:\Users\athre\AppData\Local\Temp\tmpy8kf8hkd
01/31/2020 21:54:14 - INFO - transformers.modeling_utils - loading weights file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-pytorch_model.bin from cache at C:\Users\athre\.cache\torch\transformers\aa1ef1aede4482d0dbcd4d52baad8ae300e60902e88fcb0bebdec09afd232066.36ca03ab34a1a5d5fa7bc3d03d55c4fa650fed07220e2eeebc06ce58d0e9a157
01/31/2020 21:54:16 - INFO - transformers.modeling_utils - Weights from pretrained model not used in BertForMaskedLM: ['cls.seq_relationship.weight', 'cls.seq_relationship.bias']
01/31/2020 21:54:18 - INFO - __main__ - Training/evaluation parameters Namespace(adam_epsilon=1e-08, block_size=510, cache_dir=None, config_name=None, device=device(type='cuda'), do_eval=True, do_train=True, eval_all_checkpoints=False, eval_data_file='E:\\\\Code\\\\data\\\\wikitext-2-raw\\\\wiki.test.raw', evaluate_during_training=False, fp16=False, fp16_opt_level='O1', gradient_accumulation_steps=1, learning_rate=5e-05, line_by_line=False, local_rank=-1, logging_steps=500, max_grad_norm=1.0, max_steps=-1, mlm=True, mlm_probability=0.15, model_name_or_path='bert-base-uncased', model_type='bert', n_gpu=1, no_cuda=False, num_train_epochs=1.0, output_dir='output', overwrite_cache=False, overwrite_output_dir=False, per_gpu_eval_batch_size=4, per_gpu_train_batch_size=4, save_steps=500, save_total_limit=None, seed=42, server_ip='', server_port='', should_continue=False, tokenizer_name=None, train_data_file='E:\\\\Code\\\\data\\\\wikitext-2-raw\\\\wiki.train.raw', warmup_steps=0, weight_decay=0.0)
01/31/2020 21:54:18 - INFO - __main__ - Loading features from cached file E:\\Code\\data\\wikitext-2-raw\bert_cached_lm_510_wiki.train.raw
01/31/2020 21:54:18 - INFO - __main__ - ***** Running training *****
01/31/2020 21:54:18 - INFO - __main__ - Num examples = 4664
01/31/2020 21:54:18 - INFO - __main__ - Num Epochs = 1
01/31/2020 21:54:18 - INFO - __main__ - Instantaneous batch size per GPU = 4
01/31/2020 21:54:18 - INFO - __main__ - Total train batch size (w. parallel, distributed & accumulation) = 4
01/31/2020 21:54:18 - INFO - __main__ - Gradient Accumulation steps = 1
01/31/2020 21:54:18 - INFO - __main__ - Total optimization steps = 1166
Epoch: 0%| | 0/1 [00:00<?, ?it/s]C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [0,0,0] Assertion `t >= 0 && t < n_classes` failed. | 0/1166 [00:00<?, ?it/s]
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [1,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [2,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [3,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [4,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [5,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [6,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [7,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [8,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [9,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [10,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [11,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [12,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [13,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [14,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [15,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [16,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [17,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [18,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [19,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [20,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [21,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [22,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [24,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [25,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [26,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [27,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [28,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [29,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [30,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [31,0,0] Assertion `t >= 0 && t < n_classes` failed.
Traceback (most recent call last):
File "run_lm_finetuning.py", line 790, in <module>
main()
File "run_lm_finetuning.py", line 740, in main
global_step, tr_loss = train(args, train_dataset, model, tokenizer)
File "run_lm_finetuning.py", line 356, in train
loss.backward()
File "E:\Code\torch_env\lib\site-packages\torch\tensor.py", line 195, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "E:\Code\torch_env\lib\site-packages\torch\autograd\__init__.py", line 99, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: CUDA error: device-side assert triggered
Epoch: 0%| | 0/1 [00:00<?, ?it/s]
Iteration: 0%|
## Expected behavior
Training should start.
## Environment
* OS: Windows 10
* Python version: 3.7
* PyTorch version: 1.4 stable
* `transformers` version (or branch): Latest (Jan-31-2020)
* Using GPU ? Yes
* Distributed or parallel setup ? Only 1 GPU
* Any other relevant information:
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2703/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2703/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2702 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2702/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2702/comments | https://api.github.com/repos/huggingface/transformers/issues/2702/events | https://github.com/huggingface/transformers/issues/2702 | 558,450,963 | MDU6SXNzdWU1NTg0NTA5NjM= | 2,702 | DistilBERT does not support token type ids, but the tokenizers produce them | {
"login": "dirkgr",
"id": 920638,
"node_id": "MDQ6VXNlcjkyMDYzOA==",
"avatar_url": "https://avatars.githubusercontent.com/u/920638?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dirkgr",
"html_url": "https://github.com/dirkgr",
"followers_url": "https://api.github.com/users/dirkgr/followers",
"following_url": "https://api.github.com/users/dirkgr/following{/other_user}",
"gists_url": "https://api.github.com/users/dirkgr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dirkgr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dirkgr/subscriptions",
"organizations_url": "https://api.github.com/users/dirkgr/orgs",
"repos_url": "https://api.github.com/users/dirkgr/repos",
"events_url": "https://api.github.com/users/dirkgr/events{/privacy}",
"received_events_url": "https://api.github.com/users/dirkgr/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "cronoik",
"id": 18630848,
"node_id": "MDQ6VXNlcjE4NjMwODQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/18630848?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cronoik",
"html_url": "https://github.com/cronoik",
"followers_url": "https://api.github.com/users/cronoik/followers",
"following_url": "https://api.github.com/users/cronoik/following{/other_user}",
"gists_url": "https://api.github.com/users/cronoik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cronoik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cronoik/subscriptions",
"organizations_url": "https://api.github.com/users/cronoik/orgs",
"repos_url": "https://api.github.com/users/cronoik/repos",
"events_url": "https://api.github.com/users/cronoik/events{/privacy}",
"received_events_url": "https://api.github.com/users/cronoik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "cronoik",
"id": 18630848,
"node_id": "MDQ6VXNlcjE4NjMwODQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/18630848?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cronoik",
"html_url": "https://github.com/cronoik",
"followers_url": "https://api.github.com/users/cronoik/followers",
"following_url": "https://api.github.com/users/cronoik/following{/other_user}",
"gists_url": "https://api.github.com/users/cronoik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cronoik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cronoik/subscriptions",
"organizations_url": "https://api.github.com/users/cronoik/orgs",
"repos_url": "https://api.github.com/users/cronoik/repos",
"events_url": "https://api.github.com/users/cronoik/events{/privacy}",
"received_events_url": "https://api.github.com/users/cronoik/received_events",
"type": "User",
"site_admin": false
}
] | [
"RoBERTa accepts token typen ids because RoBERTa is basically the same architecture as BERT. (The \"innovation\" lies in how it's pretrained, not architectural changes.) It's literally nothing more than this:\r\n\r\nhttps://github.com/huggingface/transformers/blob/ddb6f9476b58ed9bf4433622ca9aa49932929bc0/src/transformers/modeling_roberta.py#L149-L169\r\n\r\nDistilbert's changes are more intricate.\r\n\r\nLooking at your example, I agree that it'd be nice that all forward methods have the same signature for easier use of the `AutoModel`s.\r\n\r\nT5 does something different, it just accepts `**kwargs`. That would solve the issue that there is now for Distilbert, but it has some adverse, non-pythonic side effects (imo): less readability, no IDE autocomplete, default values need to be set inside the method rather than declaration (in `pop`). I'm not a big fan of this.\r\n\r\nIt's best that the maintainers make a suggestion on how to continue with this.",
"> RoBERTa accepts token typen ids because RoBERTa is basically the same architecture as BERT. (The \"innovation\" lies in how it's pretrained, not architectural changes.)\r\n\r\nThe fairseq roBERTa doesn't accepts token typ ids and doesn't even has a layer for those:\r\n```\r\nTransformerSentenceEncoder(\r\n (embed_tokens): Embedding(50265, 768, padding_idx=1)\r\n (embed_positions): LearnedPositionalEmbedding(514, 768, padding_idx=1)\r\n```\r\nThe huggingface implementation of RoBERTa accepts token typ ids because RobertaModel inherits from BertModel and the layer is inherited by RobertaEmbeddings from BertEmbeddings:\r\n```\r\nRobertaEmbeddings(\r\n (word_embeddings): Embedding(50265, 768, padding_idx=1)\r\n (position_embeddings): Embedding(514, 768, padding_idx=1)\r\n (token_type_embeddings): Embedding(1, 768)\r\n (LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n)\r\n```\r\nThe huggingface RoBERTa It is still along with the fairseq implementation due to the dictionary size of the token_type_embeddings layer. It only accepts one value (e.g. 0) while Bert accepts two values (e.g. 0 and 1). \r\n\r\nBack to the original topic. Call [encode_plus](https://huggingface.co/transformers/main_classes/tokenizer.html#transformers.PreTrainedTokenizer.encode_plus) with return_token_type_ids=False and you won't get them.",
"Yes, I was talking about the transformers implementation where Roberta is subclassing the Bertmodel.\r\n\r\nOf course it's possible to just change the argument when encoding, but you'd want a unified approach so that you can just use automodel/autotokenizer, encode your input, and feed the encoded inputs to the forward method *for any input to automodel without having to change anything else*. In that respect this is more a usability question.\r\n\r\nAs an alternative to unifying the signature of all models, Distilbert's Tokenizer can be changed to not return the token type ids. ",
"Okay, in case the OP is looking for a generic solution I think it is cleaner to get the parameters from the model itself by calling `model.forward.__code__.co_varnames`. This will return a tuple of parameters names and can be used with a dictionary comprehension like below:\r\n```\r\nfrom transformers import DistilBertModel\r\ntokenized = {'input_ids': [101, 1045, 8823, 1037, 5119, 7483, 1012, 102, 2009, 2001, 2200, 2051, 15077, 1012, 102],\r\n 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1],\r\n 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}\r\n\r\nmodel = DistilBertModel.from_pretrained('distilbert-base-uncased-distilled-squad')\r\n\r\ntokenized = {key:value for (key,value) in tokenized.items() if key in model.forward.__code__.co_varnames}\r\n\r\nmodel(**tokenized)\r\n```",
"That's cool, but it would be a lot better if this were streamlined on the library-side rather than users having to implement this themselves. Options are, as far as I can see:\r\n\r\n- make sure the signature for all models' forward models are the same, with None-values where unexpected values occur\r\n- ensure that tokenizers only return the features that their respective models use\r\n\r\nThe second one seems like the way to go imo.",
"On the fact that our RoBERTa implem takes (inoperant by default) `token_type_ids`, maybe we should actually remove them from the implem. If you want to train some, you can always subclass RoBERTa and add them back (but I'm not 100% sure a lot of people use them). Thoughts?",
"> On the fact that our RoBERTa implem takes (inoperant by default) `token_type_ids`, maybe we should actually remove them from the implem. If you want to train some, you can always subclass RoBERTa and add them back (but I'm not 100% sure a lot of people use them). Thoughts?\r\n\r\nAgreed. Sticking close to original implementations or particularly their descriptions in paper (i.e. no token_type_ids in this case) seems a good idea. Users who read the paper or saw examples in other implementations would expect that. As you say, if required it's not that hard to add them again.\r\n\r\nOn top of that, a one-on-one relationship between the output of `encode_plus` and the input of the corresponding model seems a neat improvement, too, so that the issue of OP doesn't ever occur. What this means is that using an `AutoModel` and `AutoTokenizer` can always be used like this without running into type errors.\r\n\r\n```python\r\nencoded = tokenizer.encode_plus(...)\r\nout = model(**encoded )\r\n```\r\n\r\nAs I mentioned before, I am not a big fan of how this is done in T5, which accepts anything in its forward (`**kwargs`). It is easier to implement, but it has many drawbacks in terms of usability and perhaps maintenance.",
"I use type ids. I just recently built a model that relies on them. I even monkey-patched a bigger embedding matrix into RoBERTa to get the ability back. But maybe a cleaner implementation would be if `forward()` took another tensor of shape `(batch, tokens, hidden_size)` that just gets added to the word piece embedding.\r\n\r\nEither way though, it's more important to me that the output of the tokenizer matches the input of the model.",
"@julien-c : I'm not sure how you handle that, but I would like to work on both issues (RoBERTa token_type embedding layer and encode_plus should only output model related tokens). Can you please assign this issue to me? Should I create a separate issue regarding RoBERTa? ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This is an old issue, but I thought I'd try to ask here.\r\n\r\nAs of `transformers>=4.18`, the `**kwargs` argument was removed from the `call` methods of all models. Thus, an error occurs if you supply `token_type_ids` to `TFDistilbertForSequenceClassification.call` method.\r\n\r\nWhat is the recommended way to programmatically determine whether or not a model accepts the `token_type_id` parameter in the latest version of **transformers**?",
"You can just inspect its signature with the `inspect` module.",
"Thanks, Sylvain.\r\n\r\nFor anyone stumbling on this issue, the recommended solution continues to be to simply examine the signature of the `call` method (for TensorFlow models) or the `forward` method (in PyTorch models) with the `inspect` module or (as shown above in 2020) with something like this:\r\n```python\r\nfrom transformers import TFAutoModelForSequenceClassification\r\nmodel = TFAutoModelForSequenceClassification.from_pretrained('distilbert-base-uncased')\r\nuses_token_type_ids = (\"token_type_ids\" in model.call.__code__.co_varnames)\r\nprint(uses_token_type_ids)\r\n# prints False\r\n```"
] | 1,580 | 1,673 | 1,587 | CONTRIBUTOR | null | ```Python
>>> tokenizer = transformers.AutoTokenizer.from_pretrained("distilbert-base-uncased-distilled-squad")
>>> tokenized = tokenizer.encode_plus("I ate a clock yesterday.", "It was very time consuming.")
>>> tokenized
{'input_ids': [101, 1045, 8823, 1037, 5119, 7483, 1012, 102, 2009, 2001, 2200, 2051, 15077, 1012, 102],
'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1],
'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}
>>> model = transformers.AutoModel.from_pretrained("distilbert-base-uncased-distilled-squad")
>>> model(**tokenized)
TypeError: forward() got an unexpected keyword argument 'token_type_ids'
```
In contrast, RoBERTa also does not support token type ids, but its forward method still takes the parameter, and its tokenizer produces type ids that are all zero. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2702/reactions",
"total_count": 9,
"+1": 9,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2702/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2701 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2701/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2701/comments | https://api.github.com/repos/huggingface/transformers/issues/2701/events | https://github.com/huggingface/transformers/pull/2701 | 558,400,986 | MDExOlB1bGxSZXF1ZXN0MzY5NzkzMDE3 | 2,701 | Store Model cards in the repo | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for importing the readme files :heart: ",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2701?src=pr&el=h1) Report\n> Merging [#2701](https://codecov.io/gh/huggingface/transformers/pull/2701?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d426b58b9e32a2ffc8c8a1196143270e22054a46?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2701?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2701 +/- ##\n=======================================\n Coverage 74.25% 74.25% \n=======================================\n Files 92 92 \n Lines 15216 15216 \n=======================================\n Hits 11298 11298 \n Misses 3918 3918\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2701?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2701?src=pr&el=footer). Last update [d426b58...d126da9](https://codecov.io/gh/huggingface/transformers/pull/2701?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,580 | 1,580 | 1,580 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2701/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2701/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2701",
"html_url": "https://github.com/huggingface/transformers/pull/2701",
"diff_url": "https://github.com/huggingface/transformers/pull/2701.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2701.patch",
"merged_at": 1580513950000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/2700 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2700/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2700/comments | https://api.github.com/repos/huggingface/transformers/issues/2700/events | https://github.com/huggingface/transformers/pull/2700 | 558,248,770 | MDExOlB1bGxSZXF1ZXN0MzY5Njc0ODY5 | 2,700 | Add TF2 version of FlauBERT | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2700?src=pr&el=h1) Report\n> :exclamation: No coverage uploaded for pull request base (`master@bae644c`). [Click here to learn what that means](https://docs.codecov.io/docs/error-reference#section-missing-base-commit).\n> The diff coverage is `75%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2700?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2700 +/- ##\n=========================================\n Coverage ? 73.78% \n=========================================\n Files ? 93 \n Lines ? 15351 \n Branches ? 0 \n=========================================\n Hits ? 11326 \n Misses ? 4025 \n Partials ? 0\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2700?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/2700/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `21.32% <75%> (ΓΈ)` | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2700?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2700?src=pr&el=footer). Last update [bae644c...a39b8de](https://codecov.io/gh/huggingface/transformers/pull/2700?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Did we mean to delete camembert imports @LysandreJik . That's why the tests on HEAD are breaking afaict",
"I think they were a duplicate"
] | 1,580 | 1,589 | 1,584 | CONTRIBUTOR | null | Hello,
I worked today to add the new FlauBERT model in TF2 version. Translated models are available in:
```
jplu/tf-flaubert-base-cased
jplu/tf-flaubert-large-cased
jplu/tf-flaubert-small-cased
jplu/tf-flaubert-base-uncased
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2700/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2700/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2700",
"html_url": "https://github.com/huggingface/transformers/pull/2700",
"diff_url": "https://github.com/huggingface/transformers/pull/2700.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2700.patch",
"merged_at": 1584365362000
} |
https://api.github.com/repos/huggingface/transformers/issues/2699 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2699/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2699/comments | https://api.github.com/repos/huggingface/transformers/issues/2699/events | https://github.com/huggingface/transformers/pull/2699 | 558,243,948 | MDExOlB1bGxSZXF1ZXN0MzY5NjcxMDU2 | 2,699 | CLI script to gather environment info | {
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This is really cool!",
"> This is really cool!\r\n\r\nProps to spaCy, since I basically stole [the idea](https://github.com/explosion/spaCy/blob/master/spacy/cli/info.py) from them. ",
"LGTM but I've also pinged @mfuntowicz as he will have good insight",
"Is there a way to see which tests are run in `check_code_quality`? I'm curious as to why it fails.",
"It fails because the file `/home/circleci/transformers/src/transformers/commands/info.py` would be reformatted by black.\r\n\r\nYou can run `make style` at the root to set everything to black style.",
"> It fails because the file `/home/circleci/transformers/src/transformers/commands/info.py` would be reformatted by black.\r\n> \r\n> You can run `make style` at the root to set everything to black style.\r\n\r\nThanks. Is that different from running `black .`? I did that, and it formats all files (not only info.py). What I mean is that that suggests that all previously committed files must have also failed the test (since black changes them when I run the command) but during the test only info.py fails. Perhaps you are using a specific stylesheet.\r\n\r\n**EDIT**: never mind, found the actual command in the [Makefile](https://github.com/huggingface/transformers/blob/master/Makefile). It's the same result indeed.",
"The black command we used uses a specific line-length which is different to the default (we use a line length of 119, we like it better). We also set it to be based on Python 3.5.\r\n\r\nI believe the test now fails because of isort; that's weird, it should have been triggered by the `make style` as well and should have fixed the imports on its own.",
"> The black command we used uses a specific line-length which is different to the default (we use a line length of 119, we like it better). We also set it to be based on Python 3.5.\r\n> \r\n> I believe the test now fails because of isort; that's weird, it should have been triggered by the `make style` as well and should have fixed the imports on its own.\r\n\r\nI am now manually running black/isort (without the 'check' flag) and pushing those commits in hopes that the tests will then pass. But, correct me if I'm wrong, isn't circleCI supposed to apply these runs (black, isort etc) before running the tests?",
"Did you install isort with the exact version that's pinned in `CONTRIBUTING.md`?\r\n\r\nIf you do, both `make style` and `make quality` should reliably pass.",
"It's a very cool feature to provide through the CLI.\r\n\r\nI may suggest to rename the command from `info` to `env` as we may want to keep `info` for exposing information about models through cards / config.\r\n\r\nWhat do you think ? @BramVanroy @julien-c @LysandreJik ",
"> Did you install isort with the exact version that's pinned in `CONTRIBUTING.md`?\r\n> \r\n> If you do, both `make style` and `make quality` should reliably pass.\r\n\r\nAh, I missed the note on isort. Fixed it now.\r\n\r\n\r\n> It's a very cool feature to provide through the CLI.\r\n> \r\n> I may suggest to rename the command from `info` to `env` as we may want to keep `info` for exposing information about models through cards / config.\r\n> \r\n> What do you think ? @BramVanroy @julien-c @LysandreJik\r\n\r\nGood suggestion and future-proof! I renamed the CLI method to env, so the full command is \r\n\r\n```\r\npython transformers-cli env\r\n```\r\n\r\nIssue templates have been update and this command has been added to CONTRIBUTING.md as well.\r\n",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2699?src=pr&el=h1) Report\n> Merging [#2699](https://codecov.io/gh/huggingface/transformers/pull/2699?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/161c88f0861e71e757bd4516369e836555cd3ded?src=pr&el=desc) will **decrease** coverage by `0.15%`.\n> The diff coverage is `56.78%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2699?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2699 +/- ##\n==========================================\n- Coverage 74.24% 74.09% -0.16% \n==========================================\n Files 92 93 +1 \n Lines 15215 15247 +32 \n==========================================\n Hits 11297 11297 \n- Misses 3918 3950 +32\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2699?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/modeling\\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/2699/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jYW1lbWJlcnQucHk=) | `100% <ΓΈ> (ΓΈ)` | :arrow_up: |\n| [src/transformers/configuration\\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/2699/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2NhbWVtYmVydC5weQ==) | `100% <ΓΈ> (ΓΈ)` | :arrow_up: |\n| [src/transformers/commands/env.py](https://codecov.io/gh/huggingface/transformers/pull/2699/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy9lbnYucHk=) | `0% <0%> (ΓΈ)` | |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/2699/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `75.23% <100%> (ΓΈ)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2699/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG1fcm9iZXJ0YS5weQ==) | `100% <100%> (ΓΈ)` | :arrow_up: |\n| [src/transformers/tokenization\\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/2699/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY2FtZW1iZXJ0LnB5) | `34.56% <100%> (ΓΈ)` | :arrow_up: |\n| [src/transformers/configuration\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/2699/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2FsYmVydC5weQ==) | `100% <100%> (ΓΈ)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2699/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `85.69% <100%> (ΓΈ)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/2699/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9jYW1lbWJlcnQucHk=) | `100% <100%> (ΓΈ)` | :arrow_up: |\n| [src/transformers/configuration\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/2699/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2F1dG8ucHk=) | `100% <100%> (ΓΈ)` | :arrow_up: |\n| ... and [23 more](https://codecov.io/gh/huggingface/transformers/pull/2699/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2699?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2699?src=pr&el=footer). Last update [161c88f...da9cf7c](https://codecov.io/gh/huggingface/transformers/pull/2699?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"LGTM, thanks a lot @BramVanroy "
] | 1,580 | 1,580 | 1,580 | COLLABORATOR | null | I noticed that all too often people leave the "Environment" section in their issue empty. However, things such as the version number of PT/TF and `transformers` itself are very useful to know when trying to debug things.
This PR adds a small script to the existing CLI workflow. Running `python transformers-cli info` will output something like this:
```
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 2.4.0
- Platform: Windows-10-10.0.18362-SP0
- Python version: 3.6.8
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): 2.1.0 (True)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
Note that GPUs being available through the DL framework (`GPU?`) are retrieved automatically, but that users still have to specify whether or not they are actually using the GPU.
In addition, the relevant issue templates have been updated to direct users to the script. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2699/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2699/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2699",
"html_url": "https://github.com/huggingface/transformers/pull/2699",
"diff_url": "https://github.com/huggingface/transformers/pull/2699.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2699.patch",
"merged_at": 1580571495000
} |
https://api.github.com/repos/huggingface/transformers/issues/2698 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2698/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2698/comments | https://api.github.com/repos/huggingface/transformers/issues/2698/events | https://github.com/huggingface/transformers/pull/2698 | 558,205,362 | MDExOlB1bGxSZXF1ZXN0MzY5NjQwNjI2 | 2,698 | Typo on markdown link in README.md | {
"login": "arnaudmiribel",
"id": 7164864,
"node_id": "MDQ6VXNlcjcxNjQ4NjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/7164864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arnaudmiribel",
"html_url": "https://github.com/arnaudmiribel",
"followers_url": "https://api.github.com/users/arnaudmiribel/followers",
"following_url": "https://api.github.com/users/arnaudmiribel/following{/other_user}",
"gists_url": "https://api.github.com/users/arnaudmiribel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arnaudmiribel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arnaudmiribel/subscriptions",
"organizations_url": "https://api.github.com/users/arnaudmiribel/orgs",
"repos_url": "https://api.github.com/users/arnaudmiribel/repos",
"events_url": "https://api.github.com/users/arnaudmiribel/events{/privacy}",
"received_events_url": "https://api.github.com/users/arnaudmiribel/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2698?src=pr&el=h1) Report\n> Merging [#2698](https://codecov.io/gh/huggingface/transformers/pull/2698?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0aa40e9569a71306036de3a217eed55521699604?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2698?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2698 +/- ##\n=======================================\n Coverage 74.24% 74.24% \n=======================================\n Files 92 92 \n Lines 15215 15215 \n=======================================\n Hits 11297 11297 \n Misses 3918 3918\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2698?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2698?src=pr&el=footer). Last update [0aa40e9...9a11e6f](https://codecov.io/gh/huggingface/transformers/pull/2698?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Great, thanks!"
] | 1,580 | 1,580 | 1,580 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2698/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2698/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2698",
"html_url": "https://github.com/huggingface/transformers/pull/2698",
"diff_url": "https://github.com/huggingface/transformers/pull/2698.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2698.patch",
"merged_at": 1580486330000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/2697 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2697/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2697/comments | https://api.github.com/repos/huggingface/transformers/issues/2697/events | https://github.com/huggingface/transformers/issues/2697 | 558,183,121 | MDU6SXNzdWU1NTgxODMxMjE= | 2,697 | Albert language model fine tuning not running run_lm_finetuning.py | {
"login": "abdallah197",
"id": 28394606,
"node_id": "MDQ6VXNlcjI4Mzk0NjA2",
"avatar_url": "https://avatars.githubusercontent.com/u/28394606?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abdallah197",
"html_url": "https://github.com/abdallah197",
"followers_url": "https://api.github.com/users/abdallah197/followers",
"following_url": "https://api.github.com/users/abdallah197/following{/other_user}",
"gists_url": "https://api.github.com/users/abdallah197/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abdallah197/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abdallah197/subscriptions",
"organizations_url": "https://api.github.com/users/abdallah197/orgs",
"repos_url": "https://api.github.com/users/abdallah197/repos",
"events_url": "https://api.github.com/users/abdallah197/events{/privacy}",
"received_events_url": "https://api.github.com/users/abdallah197/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I am facing the same problem with BERT fine tuning for a masked language modeling fine tuning task. Can someone please help? I am exactly following https://github.com/huggingface/transformers/tree/master/examples/language-modeling"
] | 1,580 | 1,593 | 1,586 | NONE | null | # π Bug
## Information
Model I am using (Albert(all types)):
Language I am using the model on (English):
The problem arises when using:
* [ ] the official example scripts: (give details below)
the code returns memory allocation problems when run with any version from albert. i tried to reduce the sequence length and batch size to a minum setting, but the issue still arises. my setting and the minimized setting both run normally with bert or roberta, the issue arises only when i change the model to Albert.
an example:
`tcmalloc: large alloc 1951195136 bytes == 0x7f750f664000 @ 0x7f76efbf8887 0x7f764c2a1b79 0x7f764c29fb0f 0x7f764c29fc33 0x7f764c26a155 0x7f764c26837e 0x7f764c26bbb1 0x7f764c2606df 0x50a8af 0x50c5b9 0x509d48 0x50aa7d 0x50c5b9 0x508245 0x509642 0x595311 0x5a067e 0x50d966 0x58efc9 0x4c9546 0x5886f4 0x58892e 0x551b81 0x5aa6ec 0x50abb3 0x50c5b9 0x508245 0x50a080 0x50aa7d 0x50c5b9 0x508245`
The tasks I am working on is:
* [ ] my own task or dataset: (give details below)
language model finetuning for albert
## To reproduce
Steps to reproduce the behavior:
1. in run_lm_finetuning add:
` from transformers import (AlbertConfig,
AlbertForMaskedLM,
AlbertTokenizer,
)`
2.add to MODEL_CLASSES dictionary:
` "albert": (AlbertConfig, AlbertForMaskedLM, AlbertTokenizer),`
3. add file text.txt, a similar txt file to the wiki dataset that's mentioned in the docs.
4.run the finetuning script:
`python transformers/examples/run_lm_finetuning.py \
--output_dir=output \
--model_type=albert \
--model_name_or_path=albert-base-v1 \
--do_train \
--train_data_file test.txt \
--block_size 50 \
--per_gpu_train_batch_size 2 \
--max_steps 520000 \
--weight_decay 0.01 \
--logging_steps 5000 \
--mlm`
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
## Environment
* OS: Google colab
* Python version: 3.7
* PyTorch version: 1.3.1
* `transformers` version (or branch): latest
* Using GPU ? yes
* Distributed or parallel setup ? no
* Any other relevant information:
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2697/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2697/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2696 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2696/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2696/comments | https://api.github.com/repos/huggingface/transformers/issues/2696/events | https://github.com/huggingface/transformers/issues/2696 | 558,178,924 | MDU6SXNzdWU1NTgxNzg5MjQ= | 2,696 | Missing `do_sample` argument for run_generation example | {
"login": "jzhoubu",
"id": 20299401,
"node_id": "MDQ6VXNlcjIwMjk5NDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/20299401?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jzhoubu",
"html_url": "https://github.com/jzhoubu",
"followers_url": "https://api.github.com/users/jzhoubu/followers",
"following_url": "https://api.github.com/users/jzhoubu/following{/other_user}",
"gists_url": "https://api.github.com/users/jzhoubu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jzhoubu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jzhoubu/subscriptions",
"organizations_url": "https://api.github.com/users/jzhoubu/orgs",
"repos_url": "https://api.github.com/users/jzhoubu/repos",
"events_url": "https://api.github.com/users/jzhoubu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jzhoubu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You're absolutely correct, I pushed a fix with 7365f01!",
"Ran into this issue myself by accident.\r\n\r\nIMO, `do_sample=True` should be the default behavior for `generate()` since that's more in line with user expectations.",
"I agree with you, the default should be set to `True`. I've changed the default in 6c1b235.",
"Decision reverted in #3298 (see the PR for discussion and details).\r\nNew default to `do_sample==False`."
] | 1,580 | 1,584 | 1,581 | NONE | null | # β Questions & Help
It seems the arguments `k`, `p`, `temperature` are disabled because `do_sample` is set to False by default. Thus, [run_generation.py](https://github.com/huggingface/transformers/blob/master/examples/run_generation.py) will always use greedy decoding no matter how the `k`, `p`, `temperature` are set, which is kind of misleading.
I think `do_sample` argument should be included in the code. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2696/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2696/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2695 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2695/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2695/comments | https://api.github.com/repos/huggingface/transformers/issues/2695/events | https://github.com/huggingface/transformers/issues/2695 | 558,170,031 | MDU6SXNzdWU1NTgxNzAwMzE= | 2,695 | get_linear_schedule_with_warmup method can't be found in optimization.py | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"There clearly is:\r\n\r\nhttps://github.com/huggingface/transformers/blob/0aa40e9569a71306036de3a217eed55521699604/src/transformers/optimization.py#L47-L59\r\n\r\nPlease fill out the complete template - it's there for a reason. If you had shown us which version you're working with, we could probably tell you that your version is too late, or at least dig further. Now we can't help at all, it's just guess work.\r\n\r\nGive code, full error trace, and your PyTorch/Tensorflow version.",
"I thought my transformers was the lastest version, but I found it's not when I checked it on the anaconda cloud \"https://anaconda.org/conda-forge/transformers\". The problem has been solved after reinstalling, Thanks for your reply :)",
"In the future, please fill out the complete template. Please close this question if you don't have any more questions.",
"OK"
] | 1,580 | 1,580 | 1,580 | NONE | null | # π Bug
## Information
Model I am using (Bert, XLNet ...): BERT
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
My transformer was downloaded from anaconda cloud by using command "conda install -c conda-forge transformers", when I tried to use AdamW as what you show in the example, I found there is no **get_linear_schedule_with_warmup** in the **transformers.optimization.py** file for me to create a scheduler.
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1.
2.
3.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
## Environment
* OS:
* Python version:
* PyTorch version:
* `transformers` version (or branch):
* Using GPU ?
* Distributed or parallel setup ?
* Any other relevant information:
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2695/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2695/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2694 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2694/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2694/comments | https://api.github.com/repos/huggingface/transformers/issues/2694/events | https://github.com/huggingface/transformers/issues/2694 | 558,137,584 | MDU6SXNzdWU1NTgxMzc1ODQ= | 2,694 | AutoModel fails to load FlauBERT with `output_hidden_states` | {
"login": "LoicGrobol",
"id": 14248012,
"node_id": "MDQ6VXNlcjE0MjQ4MDEy",
"avatar_url": "https://avatars.githubusercontent.com/u/14248012?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LoicGrobol",
"html_url": "https://github.com/LoicGrobol",
"followers_url": "https://api.github.com/users/LoicGrobol/followers",
"following_url": "https://api.github.com/users/LoicGrobol/following{/other_user}",
"gists_url": "https://api.github.com/users/LoicGrobol/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LoicGrobol/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LoicGrobol/subscriptions",
"organizations_url": "https://api.github.com/users/LoicGrobol/orgs",
"repos_url": "https://api.github.com/users/LoicGrobol/repos",
"events_url": "https://api.github.com/users/LoicGrobol/events{/privacy}",
"received_events_url": "https://api.github.com/users/LoicGrobol/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi! the `output_hidden_states` should be specified in the configuration when loading from `AutoModel` classes. Doing the following is necessary to instantiate a class with hidden states:\r\n\r\n```py\r\nimport transformers\r\n\r\nconfig = transformers.AutoConfig.from_pretrained(\"flaubert-base-cased\", output_hidden_states=True)\r\nmodel = transformers.AutoModel.from_pretrained(\"flaubert-base-cased\", config=config)\r\n```\r\n\r\nHowever, your issue showed me there was a bug with the loading of FlauBERT models with AutoModels, which I patched in https://github.com/huggingface/transformers/commit/ff6f1492e8296f511682fd56fcf62be0854723a2.\r\nPlease install from source to have the fix: `pip install git+https://github.com/huggingface/transformers`, I'll push a pypi patch for this soon.",
"Oh, okay, thanks. From what I understood of [AutoModel](https://huggingface.co/transformers/model_doc/auto.html#transformers.AutoModel.from_pretrained) doc, I thought all `**kwargs` in `AutoModel.from_pretrained` were passed to the config.",
"Indeed, the documentation seems misleading in that regard. I'm updating it.",
"`AutoTokenizer` seems to have the same problem as the one you fixed in `AutoModel`\r\n\r\n```python\r\ntransformers.AutoTokenizer.from_pretrained(\"flaubert-base-uncased\")\r\n```\r\n\r\nresults in\r\n\r\n```console\r\nOSError: Model name 'flaubert-base-uncased' was not found in tokenizers model name list (xlm-mlm-en-2048, xlm-mlm-ende-1024, xlm-mlm-enfr-1024, xlm-mlm-enro-1024, xlm-mlm-tlm-xnli15-1024, xlm-mlm-xnli15-1024, xlm-clm-enfr-1024, xlm-clm-ende-1024, xlm-mlm-17-1280, xlm-mlm-100-1280)\r\n```",
"Indeed it does, thanks @Evpok !",
"Should have been patched and tested with 1e82cd8.",
"Thanks for the quick response β₯",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I think this change should be bolder in documentations somehow as I had this problem too."
] | 1,580 | 1,587 | 1,586 | NONE | null | MWE:
```python
import transformers
model = transformers.AutoModel.from_pretrained("flaubert-base-cased", output_hidden_states=True)
```
Tested on rev 5a6b138 fails with
```console
Traceback (most recent call last):
File "mwe.py", line 3, in <module>
model = transformers.AutoModel.from_pretrained("flaubert-base-cased", output_hidden_states=True)
File "<redacted>/transformers/modeling_auto.py", line 377, in from_pretrained
return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
File "<redacted>/transformers/modeling_utils.py", line 463, in from_pretrained
model = cls(config, *model_args, **model_kwargs)
TypeError: __init__() got an unexpected keyword argument 'output_hidden_states'
```
This works when loading directly from `transformers.FlaubertModel`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2694/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2694/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2693 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2693/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2693/comments | https://api.github.com/repos/huggingface/transformers/issues/2693/events | https://github.com/huggingface/transformers/issues/2693 | 558,052,909 | MDU6SXNzdWU1NTgwNTI5MDk= | 2,693 | Input file format for examples/run_lm_finetuning.py | {
"login": "nminds",
"id": 39691980,
"node_id": "MDQ6VXNlcjM5NjkxOTgw",
"avatar_url": "https://avatars.githubusercontent.com/u/39691980?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nminds",
"html_url": "https://github.com/nminds",
"followers_url": "https://api.github.com/users/nminds/followers",
"following_url": "https://api.github.com/users/nminds/following{/other_user}",
"gists_url": "https://api.github.com/users/nminds/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nminds/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nminds/subscriptions",
"organizations_url": "https://api.github.com/users/nminds/orgs",
"repos_url": "https://api.github.com/users/nminds/repos",
"events_url": "https://api.github.com/users/nminds/events{/privacy}",
"received_events_url": "https://api.github.com/users/nminds/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi, two datasets are available in `run_lm_finetuning.py`:\r\n\r\n- `TextDataset`, which just splits your data into chunks with no attention whatsoever to the line returns or separators\r\n- `LineByLineTextDataset`, which splits your data into chunks, being careful not to overstep line returns as each line is interpreted as a document.\r\n\r\nNone of those datasets, nor the run_lm_finetuning script in itself handle the next sentence prediction objective. It handles masked language modeling when the `--mlm` flag option is passed, and the causal language modeling when no `--mlm` flag option is passed.",
"@LysandreJik \r\n[this](https://github.com/huggingface/transformers/blob/33d3072e1c54bcd235447b98c6dea1b4cb71234c/examples/run_lm_finetuning.py#L135) will drop tokens beyond len of `512`?",
"> Hi, two datasets are available in `run_lm_finetuning.py`:\r\n> \r\n> * `TextDataset`, which just splits your data into chunks with no attention whatsoever to the line returns or separators\r\n> * `LineByLineTextDataset`, which splits your data into chunks, being careful not to overstep line returns as each line is interpreted as a document.\r\n> \r\n> None of those datasets, nor the run_lm_finetuning script in itself handle the next sentence prediction objective. It handles masked language modeling when the `--mlm` flag option is passed, and the causal language modeling when no `--mlm` flag option is passed.\r\n\r\nHi, Is there any particular reason to exclude the next sentence prediction objective? ",
"@nauman-chaudhary it will drop the tokens beyond the maximum input size of the model. For BERT, it is indeed 512. Feel free to implement a more complex behavior if your dataset has a lot lines that go over the 512 token limit.\r\n\r\n@fajri91 Yes, for a couple of reasons:\r\n\r\n- Having a simple MLM/CLM objective is simpler, both to understand (user) and to maintain (maintainer)\r\n- The RoBERTa paper has proven that the NSP objective was not particularly helpful\r\n- Only BERT has the class (`BertForPreTraining`) to manage the NSP objective, whereas `run_lm_finetuning` supports several models available in the library\r\n- If anyone wants to implement the NSP objective, it is very easy for them to change the dataset/training loop to do so.",
"@LysandreJik given the fact that tokenizer drops input size above 512 is it worth to prepare the input dataset by using sliding window over documents? What I mean by that is instead of dropping a lot of text, I will transform long document into i.e 4 sentences in each line, with sliding window over the whole document.",
"Yes, this is a reasonable strategy.",
"Thanks for clarification, it's super helpful to know this!",
"> Hi, two datasets are available in `run_lm_finetuning.py`:\r\n> \r\n> * `TextDataset`, which just splits your data into chunks with no attention whatsoever to the line returns or separators\r\n> * `LineByLineTextDataset`, which splits your data into chunks, being careful not to overstep line returns as each line is interpreted as a document.\r\n> \r\n> None of those datasets, nor the run_lm_finetuning script in itself handle the next sentence prediction objective. It handles masked language modeling when the `--mlm` flag option is passed, and the causal language modeling when no `--mlm` flag option is passed.\r\n\r\n\r\nhello @LysandreJik \r\n\r\nif i realized TextDataset correctly , it makes a long sentence of all the corpus and cut them 512 by 512 and gives it to the model (if the max_len is supposed to be 512 ) and then we will have no padding\r\n\r\nbut LineByLineTextDataset , pad each line to reach 512 and gives it to model \r\n\r\nby which one we will get better results in downstream tasks ?\r\n(and in downstream tasks we are unforced to do padding) \r\n\r\nthanks!",
"@marrrcin @LysandreJik thank you for the comments! I have a domain specific corpus; Geology, which is about 2GB text file. I prepared an input file where, I scanned a 4-sentences window on my raw text, and wrote each 4-sentence window onto a new line. \r\n\r\nSo, my modified input file has 4 sentences per line ending with \\n. Now I am training BERT MLM training with BertWordPieceTokenizer from scratch. run_language_model.py gets to LineByLineTextDataSet and takes almost 2 hours to process my input file. I feel this is quite slow for only 2GB file.\r\n\r\nMy input looks something like this, there is no space between lines and there are 2 millions of lines:\r\n{\r\nfirst sentence starts here. second sentence. now third sentence. and forth sentence\r\nnew line with fifth sentence. sixth sentence here, then seventh sentence. finally eight sentence\r\nnew line with night sentence, then tenth sentence ...\r\n...\r\n}\r\n\r\nis there anyway to speed this up? ",
"just upgraded HF to v.2.9 and now it took 37 minutes instead of 108 minutes. thanks!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,580 | 1,595 | 1,595 | NONE | null | # β Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
I wanted to employ the `examples/run_lm_finetuning.py` from the transformers repository on a pretrained Bert model. However, from following the documentation it is not evident how a corpus file should be structured (apart from referencing the Wiki-2 dataset). I've tried
- One document per line (multiple sentences)
- One sentence per line. Documents are separated by a blank line (this I found in some older pytorch-transformers documentation)
By looking at the code of `examples/run_lm_finetuning.py` it is not directly evident how sequence pairs for the Next Sentence Prediction objective are formed. Would the --line-by-line option help here? I'd be grateful, if someone could give me some hints how a text corpus file should look like.
Many thanks and cheers,
nminds
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**:
[SO link](https://stackoverflow.com/questions/60001698/how-exactly-should-the-input-file-be-formatted-for-the-language-model-finetuning) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2693/reactions",
"total_count": 10,
"+1": 10,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2693/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2692 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2692/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2692/comments | https://api.github.com/repos/huggingface/transformers/issues/2692/events | https://github.com/huggingface/transformers/issues/2692 | 558,027,005 | MDU6SXNzdWU1NTgwMjcwMDU= | 2,692 | Regarding distlbert uncased model's size | {
"login": "divyag11",
"id": 39218807,
"node_id": "MDQ6VXNlcjM5MjE4ODA3",
"avatar_url": "https://avatars.githubusercontent.com/u/39218807?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/divyag11",
"html_url": "https://github.com/divyag11",
"followers_url": "https://api.github.com/users/divyag11/followers",
"following_url": "https://api.github.com/users/divyag11/following{/other_user}",
"gists_url": "https://api.github.com/users/divyag11/gists{/gist_id}",
"starred_url": "https://api.github.com/users/divyag11/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/divyag11/subscriptions",
"organizations_url": "https://api.github.com/users/divyag11/orgs",
"repos_url": "https://api.github.com/users/divyag11/repos",
"events_url": "https://api.github.com/users/divyag11/events{/privacy}",
"received_events_url": "https://api.github.com/users/divyag11/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"please reply",
"You're comparing two formats that are different, so the comparison doesn't really make sense. The BERT model weighs 410MB when saved as a HDF5 file, whereas DistilBERT weighs 810MB when saved as a SavedModel, which also contains the graph and variables.\r\n\r\nSaving both files in HDF5:\r\n\r\n```py\r\nfrom transformers import TFBertModel, TFDistilBertModel\r\n\r\nbert = TFBertModel.from_pretrained(\"bert-base-uncased\")\r\ndistilbert = TFDistilBert.from_pretrained(\"distilbert-base-uncased\")\r\n\r\nbert.save_pretrained(\"bert\")\r\ndistilbert.savePretrained(\"distilbert\")\r\n```\r\n\r\n`ls` in \"bert\" -> 414MB for the `tf_model.h5`\r\n`ls` in \"distilbert\" -> 254MB for the `tf_model.h5`",
"I got your answer, that's correct.But, since I need to serve the model using tfserving,so I need a SavedModel format only.\r\n1)Or is there any way to serve .h5 models through tfserving?\r\n2)Or is there any way i can remove adam momentum and such trainig variables from my model to reduce the size in tensorflow 2.1?,because i have seen in tf 2.x freezing graph is deprecated\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,580 | 1,586 | 1,586 | NONE | null | I finetuned the distlbert uncased model, and I had a thought that since it is a lower layer model, it should have less weight. But, to my surprise, I find that the model generated after finetuning, I saved it as :
tf.saved_model.save(model, "./tempdir/distilbert/2/")
and the tf model got saved.
this model has a very high size(of 810 Mb), although it should be less .And the original bert mdoel which is a large model has a size of 410 Mb.
Please look into the matter | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2692/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2692/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2691 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2691/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2691/comments | https://api.github.com/repos/huggingface/transformers/issues/2691/events | https://github.com/huggingface/transformers/issues/2691 | 558,010,630 | MDU6SXNzdWU1NTgwMTA2MzA= | 2,691 | how can i finetune BertTokenizer? | {
"login": "raj5287",
"id": 11444890,
"node_id": "MDQ6VXNlcjExNDQ0ODkw",
"avatar_url": "https://avatars.githubusercontent.com/u/11444890?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/raj5287",
"html_url": "https://github.com/raj5287",
"followers_url": "https://api.github.com/users/raj5287/followers",
"following_url": "https://api.github.com/users/raj5287/following{/other_user}",
"gists_url": "https://api.github.com/users/raj5287/gists{/gist_id}",
"starred_url": "https://api.github.com/users/raj5287/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/raj5287/subscriptions",
"organizations_url": "https://api.github.com/users/raj5287/orgs",
"repos_url": "https://api.github.com/users/raj5287/repos",
"events_url": "https://api.github.com/users/raj5287/events{/privacy}",
"received_events_url": "https://api.github.com/users/raj5287/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"You can add new words to the tokenizer with [add_tokens](https://huggingface.co/transformers/main_classes/tokenizer.html?highlight=add_tokens#transformers.PreTrainedTokenizer.add_tokens):\r\n`tokenizer.add_tokens(['newWord', 'newWord2'])`\r\nAfter that you need to resize the dictionary size of the embedding layer with:\r\n`model.resize_token_embeddings(len(tokenizer))`\r\n",
"> You can add new words to the tokenizer with [add_tokens](https://huggingface.co/transformers/main_classes/tokenizer.html?highlight=add_tokens#transformers.PreTrainedTokenizer.add_tokens):\r\n> `tokenizer.add_tokens(['newWord', 'newWord2'])`\r\n> After that you need to resize the dictionary size of the embedding layer with:\r\n> `model.resize_token_embeddings(len(tokenizer))`\r\n\r\nNote that this simply adds a new token to the vocabulary but doesn't train its embedding (obviously). This implies that your results will be quite poor if your training data contains a lot of newly added (untrained) tokens.",
"@cronoik once the dictionary is resized don't I have to train the tokenizer model again?\r\n\r\n\r\n@BramVanroy umm.. so what could be the probable solution if I am having a custom data set? How can I can retrain this BertTokenizer Model to get new vocab.txt file?",
"What do you mean with tokenizer model? The tokenizer in simple terms is a class which splits your text in tokens from a huge dictionary. What you have to train is the embedding layer of your model because the weights of the new tokens will be random. This will happen during the training of your model (but it could be undertrained for the new tokens).\r\n\r\nIn case you have a plenty of new words (e.g. technical terms) or even a different language, it might makes sense to start from scratch (definitely for the later). Here is blogpost from huggingface which shows you how to train a tokenizer+model for Esperanto: [link](https://github.com/huggingface/blog/blob/master/how-to-train.md). It really depends on your data (e.g. number of new tokens, importance of new tokens, relation between the tokens...).",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Hi @cronoik,\r\n\r\nI tried replacing `RobertaTokenizerFast` with `DistilBertTokenizerFast`\r\n\r\n```\r\nfrom transformers import RobertaConfig\r\n\r\nconfig = RobertaConfig(\r\n vocab_size=52_000,\r\n max_position_embeddings=514,\r\n num_attention_heads=12,\r\n num_hidden_layers=6,\r\n type_vocab_size=1,\r\n)\r\n\r\nfrom transformers import RobertaTokenizerFast\r\ntokenizer = RobertaTokenizerFast.from_pretrained(\"/content/EsperBERTo\", max_len=512)\r\n```\r\n\r\nworked absolutely fine. But,\r\n\r\n```\r\nfrom transformers import DistilBertConfig\r\n\r\nconfig = DistilBertConfig(\r\n vocab_size=52_000,\r\n max_position_embeddings=514,\r\n #num_attention_heads=12,\r\n #num_hidden_layers=6,\r\n #type_vocab_size=1,\r\n)\r\n\r\nfrom transformers import DistilBertTokenizerFast\r\n\r\ntokenizer = DistilBertTokenizerFast.from_pretrained(\"/content/EsperBERTo\", max_len=512)\r\n```\r\n\r\nthrows error:\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nOSError Traceback (most recent call last)\r\n<ipython-input-17-7f80e1d47bf5> in <module>()\r\n 1 from transformers import DistilBertTokenizerFast\r\n 2 \r\n----> 3 tokenizer = DistilBertTokenizerFast.from_pretrained(\"/content/EsperBERTo\", max_len=512)\r\n\r\n/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils_base.py in from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs)\r\n 1772 f\"- or '{pretrained_model_name_or_path}' is the correct path to a directory containing relevant tokenizer files\\n\\n\"\r\n 1773 )\r\n-> 1774 raise EnvironmentError(msg)\r\n 1775 \r\n 1776 for file_id, file_path in vocab_files.items():\r\n\r\nOSError: Can't load tokenizer for '/content/EsperBERTo'. Make sure that:\r\n\r\n- '/content/EsperBERTo' is a correct model identifier listed on 'https://huggingface.co/models'\r\n\r\n- or '/content/EsperBERTo' is the correct path to a directory containing relevant tokenizer files\r\n```\r\n\r\nCan I know what is the best way to add vocabulary into **DistilBertTokenizer**?",
"@rakesh4real What is `/content/EsperBERTo`? Which files are in this directory? Please keep in mind that Roberta uses a BPE tokenizer, while Bert a WordpieceTokenizer. You can't simply use different kinds of tokenization with the same configuration files.",
"Thank you @cronoik. I did not know tokenizers needed to be changed. Are there any references where I can learn what tokenizers must be used for a given model / task.\r\n\r\nAnd I had to use different special tokens as well. Kindly let me know where to find what special tokens must be used (when and why)\r\n\r\nUsing `BertWordpieceTokenizer` the code runs just perfect. Added code [here](https://colab.research.google.com/gist/rakesh4real/9783c37f89bc599fb1bf8faf1287cceb/01_how-to-train-distilbert-lm-scratch.ipynb)",
"@rakesh4real This site [1] gives you a general overview of different tokenization approaches and the site for each model tells you which tokenization algorithm was used (e.g. [2] for BERT).\r\n\r\n[1] https://huggingface.co/transformers/tokenizer_summary.html\r\n[2] https://huggingface.co/transformers/model_doc/bert.html#berttokenizer",
"> What do you mean with tokenizer model? The tokenizer in simple terms is a class which splits your text in tokens from a huge dictionary. What you have to train is the embedding layer of your model because the weights of the new tokens will be random. This will happen during the training of your model (but it could be undertrained for the new tokens).\r\n> \r\n> In case you have a plenty of new words (e.g. technical terms) or even a different language, it might makes sense to start from scratch (definitely for the later). Here is blogpost from huggingface which shows you how to train a tokenizer+model for Esperanto: [link](https://github.com/huggingface/blog/blob/master/how-to-train.md). It really depends on your data (e.g. number of new tokens, importance of new tokens, relation between the tokens...).\r\n\r\nAs far as I understand there are two options mentioned. The first one is training from scratch using `tokenizer.train(files, trainer)`. But this method requires training the Bert model from scratch too, as mentioned in #747. And the second option is extending the vocabulary as @cronoik said, but this leads to the problem @BramVanroy mentioned. \r\n\r\nThe options are either train from scratch or, randomly initialize embeddings of the tokenizer and hope for a good performance. Isn't it possible to finetune the model to train the embeddings of these newly added tokens? Why does it have to be either using random embeddings, or training from scratch? Am I missing something?\r\n\r\nThanks in advance. ",
"> The options are either train from scratch or, randomly initialize embeddings of the tokenizer and hope for a good performance. Isn't it possible to finetune the model to train the embeddings of these newly added tokens? Why does it have to be either using random embeddings, or training from scratch? Am I missing something?\r\n\r\nresize_token_embeddings does not reset the embedding layer, it just extends it. The new tokens are randomly initialized and you need to train them:\r\n- In case you have only a few new tokens, you can do it during finetuning\r\n- In case you have a lot of new tokens, you should probably train your model with the pretraining objective that was used to train the model the first time. You might want to add an additional word_embedding layer for the new tokens and freeze all other layers to save some time.\r\n- in case you have a lot a lot new tokens (like a new language that is not related to the original language of your model), you should probably train a model from scratch.\r\n\r\n@tolgayan",
"Thank you for the nice and clear explanation! @cronoik ",
"Hello,\r\nI know it is has been a long period since the last comment in this issue but I couldn't hold it and I have ti ask @cronoik.\r\nCould you, please, explain more what do you mean by...\r\n> In case you have only a few new tokens, you can do it during finetuning\r\n\r\nby 'during finetuning', you mean the new tokens will be randomly initialised first and then the embedding with update during the model training?\r\n \r\nFor my cas I have a list of emojis (all the emojis that we have so it is a size of 3,633 emojis) and the vocab_size of my tokenizer is 32005. does this make the 'few new tokens' of not? should I consider training my model from scratch?\r\n\r\nthanks in advance!",
"I still don't know how one can finetune tokenizer - by finetuning I don't mean just adding words to the dictionary - but also updating the embedding. \r\n\r\nI am dealing with a text classification - since the text uses informal language (Arabic) e.g. `salam`, vs `saloom` or `sssaam` - a lot of vowels spell out differently. do I have to train a new language model from scrach ?! or I can use the existing model and finetune ?",
"Tokenizers are nothing but _seperators_. They split the sentences into subparts. The most common splitting method is using the whitespaces. When we say \"training a tokenizer\", it actually creates a vocabulary from a given text data. It assigns an id to each token so that you can feed these tokens as numbers to a BERT model. When you tokenize a sentence with a so-called \"pretrained\" tokenizer, it splits the sentence with its splitting algorithm, and assigns ids to each token from its vocabulary. Sometimes, it encounters with unknown words. In this case, it further splits that word further to meaningful subparts, that the subparts are in the vocabulary. They generally looks like: \"Hou## ##se\". The purpose of this \"training\" operation is to prevent the tokenizer from splitting important or domain-specific tokens so that the meaning will be kept.\r\n\r\nBack to your question. When you have some specific words that need to be in the vocabulary, you can directly add them to the vocabulary, and they will be assigned with ids, continuing from the last id in the vocabulary I guess (I would be happy if somebody verify me here.) But the main problem is your model does not know what to do with these new numbers. In this case, the embeddings will be created randomly for these tokens. Here you have three options as @cronoik suggested to train the embedding layer for these new tokens.\r\n\r\n- You can leave them, and while finetuning, the model figure out what to do with these new tokens by updating the embedding layer.\r\n- You can add a new embedding layer, and freeze all the previous layers. Then finetune the model with the same task of the base model so that the new layer will cover your new embeddings.\r\n- You can start from scratch, adding your tokens to the training corpus, initializing the tokenizer from ground, and pretrain a language model from scratch.\r\n",
"> Tokenizers are nothing but _seperators_. They split the sentences into subparts. The most common splitting method is using the whitespaces. When we say \"training a tokenizer\", it actually creates a vocabulary from a given text data. It assigns an id to each token so that you can feed these tokens as numbers to a BERT model. When you tokenize a sentence with a so-called \"pretrained\" tokenizer, it splits the sentence with its splitting algorithm, and assigns ids to each token from its vocabulary. Sometimes, it encounters with unknown words. In this case, it further splits that word further to meaningful subparts, that the subparts are in the vocabulary. They generally looks like: \"Hou## ##se\". The purpose of this \"training\" operation is to prevent the tokenizer from splitting important or domain-specific tokens so that the meaning will be kept.\r\n> \r\n> Back to your question. When you have some specific words that need to be in the vocabulary, you can directly add them to the vocabulary, and they will be assigned with ids, continuing from the last id in the vocabulary I guess (I would be happy if somebody verify me here.) But the main problem is your model does not know what to do with these new numbers. In this case, the embeddings will be created randomly for these tokens. Here you have three options as @cronoik suggested to train the embedding layer for these new tokens.\r\n> \r\n> * You can leave them, and while finetuning, the model figure out what to do with these new tokens by updating the embedding layer.\r\n> * You can add a new embedding layer, and freeze all the previous layers. Then finetune the model with the same task of the base model so that the new layer will cover your new embeddings.\r\n> * You can start from scratch, adding your tokens to the training corpus, initializing the tokenizer from ground, and pretrain a language model from scratch.\r\n\r\nThanks a lot for your explanation. I suppose, if I go for the first approach where I fine-tune my embedding layer, it would be a good idea to fine-tune the entire embedding layer not just the newly added entries that correspond to my new tokens? Or perhaps I should only allow gradients for those newly added entries?"
] | 1,580 | 1,692 | 1,587 | NONE | null | Is it possible to fine tune BertTokenizer so that the new vocab.txt file which it uses gets updated on my custom dataset? or do i need to retrain the bert model from scratch for the same? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2691/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2691/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2690 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2690/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2690/comments | https://api.github.com/repos/huggingface/transformers/issues/2690/events | https://github.com/huggingface/transformers/issues/2690 | 557,782,618 | MDU6SXNzdWU1NTc3ODI2MTg= | 2,690 | Hardware requirements for BERT QA inference | {
"login": "tothniki",
"id": 17712138,
"node_id": "MDQ6VXNlcjE3NzEyMTM4",
"avatar_url": "https://avatars.githubusercontent.com/u/17712138?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tothniki",
"html_url": "https://github.com/tothniki",
"followers_url": "https://api.github.com/users/tothniki/followers",
"following_url": "https://api.github.com/users/tothniki/following{/other_user}",
"gists_url": "https://api.github.com/users/tothniki/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tothniki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tothniki/subscriptions",
"organizations_url": "https://api.github.com/users/tothniki/orgs",
"repos_url": "https://api.github.com/users/tothniki/repos",
"events_url": "https://api.github.com/users/tothniki/events{/privacy}",
"received_events_url": "https://api.github.com/users/tothniki/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@LysandreJik wrote an article about benchmarking transformers (focusing on inference) that might be of interest to you.\r\n\r\nhttps://medium.com/huggingface/benchmarking-transformers-pytorch-and-tensorflow-e2917fb891c2",
"Thanks for the reply!\r\nI checked the benchmarks, there you tested with 16GB GPU and bert-base model. I suppose then I have to try different, smaller size of GPUs, and figure out which can handle my task.\r\nIt is really informative though, nice work."
] | 1,580 | 1,580 | 1,580 | NONE | null | Hi,
I am using **bert-large-uncased-whole-word-masking-finetuned-squad** model for QA inference.
I used my laptop's CPU to build the pipeline and try it out.
Now I want to deploy it, and so I would like to know what is the minimum hardware requirement?(If I use the same settings as in your usage example script)?
I am more interested in the **minimum size of the GPU**.
Of course I don't want it to be too slow. Is there any studies on this matter, or measurement?
Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2690/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2690/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2689 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2689/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2689/comments | https://api.github.com/repos/huggingface/transformers/issues/2689/events | https://github.com/huggingface/transformers/pull/2689 | 557,779,358 | MDExOlB1bGxSZXF1ZXN0MzY5MzA4NjYw | 2,689 | Correct PyTorch distributed training command in examples/README.md | {
"login": "jarednielsen",
"id": 4564897,
"node_id": "MDQ6VXNlcjQ1NjQ4OTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jarednielsen",
"html_url": "https://github.com/jarednielsen",
"followers_url": "https://api.github.com/users/jarednielsen/followers",
"following_url": "https://api.github.com/users/jarednielsen/following{/other_user}",
"gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions",
"organizations_url": "https://api.github.com/users/jarednielsen/orgs",
"repos_url": "https://api.github.com/users/jarednielsen/repos",
"events_url": "https://api.github.com/users/jarednielsen/events{/privacy}",
"received_events_url": "https://api.github.com/users/jarednielsen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"That's cool, thanks @jarednielsen !"
] | 1,580 | 1,580 | 1,580 | CONTRIBUTOR | null | Running the command currently detailed in the documentation yields
`58.4/70.3 EM/F1` with Ubuntu 18.04 and torch 1.4.0, not `86.9/93.1` as promised. It also looks wrong because we're using a cased model with `--do_lower_case`.
Switching it to match the PyTorch distributed training example given in the main README gives me the approximately-correct `86.68/93.03` results. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2689/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2689/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2689",
"html_url": "https://github.com/huggingface/transformers/pull/2689",
"diff_url": "https://github.com/huggingface/transformers/pull/2689.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2689.patch",
"merged_at": 1580427685000
} |
https://api.github.com/repos/huggingface/transformers/issues/2688 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2688/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2688/comments | https://api.github.com/repos/huggingface/transformers/issues/2688/events | https://github.com/huggingface/transformers/pull/2688 | 557,775,795 | MDExOlB1bGxSZXF1ZXN0MzY5MzA1NzEz | 2,688 | Config: reference array of architectures | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2688?src=pr&el=h1) Report\n> Merging [#2688](https://codecov.io/gh/huggingface/transformers/pull/2688?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b43cb09aaa6d81f4e1f4a2537764e37aa823b30b?src=pr&el=desc) will **decrease** coverage by `1.08%`.\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2688?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2688 +/- ##\n=========================================\n- Coverage 74.09% 73% -1.09% \n=========================================\n Files 92 92 \n Lines 15172 15173 +1 \n=========================================\n- Hits 11241 11077 -164 \n- Misses 3931 4096 +165\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2688?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/modeling\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/2688/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19mbGF1YmVydC5weQ==) | `28.67% <ΓΈ> (-0.53%)` | :arrow_down: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2688/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.46% <100%> (+0.03%)` | :arrow_up: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/2688/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `96.87% <100%> (ΓΈ)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2688/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `61.15% <100%> (+0.06%)` | :arrow_up: |\n| [src/transformers/configuration\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/2688/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2F1dG8ucHk=) | `100% <100%> (ΓΈ)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2688/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0%> (-81.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2688/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `55.39% <0%> (-9.86%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2688/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `70.94% <0%> (-2.28%)` | :arrow_down: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2688/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `92.07% <0%> (-2.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2688/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.06% <0%> (-1.33%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2688?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2688?src=pr&el=footer). Last update [b43cb09...1cdc6d3](https://codecov.io/gh/huggingface/transformers/pull/2688?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,580 | 1,580 | 1,580 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2688/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2688/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2688",
"html_url": "https://github.com/huggingface/transformers/pull/2688",
"diff_url": "https://github.com/huggingface/transformers/pull/2688.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2688.patch",
"merged_at": 1580430420000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/2687 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2687/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2687/comments | https://api.github.com/repos/huggingface/transformers/issues/2687/events | https://github.com/huggingface/transformers/issues/2687 | 557,634,614 | MDU6SXNzdWU1NTc2MzQ2MTQ= | 2,687 | Issue about pipeline of sentiment-analysis | {
"login": "icmpnorequest",
"id": 33535608,
"node_id": "MDQ6VXNlcjMzNTM1NjA4",
"avatar_url": "https://avatars.githubusercontent.com/u/33535608?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/icmpnorequest",
"html_url": "https://github.com/icmpnorequest",
"followers_url": "https://api.github.com/users/icmpnorequest/followers",
"following_url": "https://api.github.com/users/icmpnorequest/following{/other_user}",
"gists_url": "https://api.github.com/users/icmpnorequest/gists{/gist_id}",
"starred_url": "https://api.github.com/users/icmpnorequest/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/icmpnorequest/subscriptions",
"organizations_url": "https://api.github.com/users/icmpnorequest/orgs",
"repos_url": "https://api.github.com/users/icmpnorequest/repos",
"events_url": "https://api.github.com/users/icmpnorequest/events{/privacy}",
"received_events_url": "https://api.github.com/users/icmpnorequest/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I can't reproduce this. It works fine for me. Can you try deleting the cache directory and trying again?",
"@BramVanroy \r\nYes, I tried deleting the cached file under the `.cached` directory, but it still doesn't work for me.",
"Works for me too. Are you sure you deleted the right file?\r\n\r\n",
"@julien-c \r\nYes, I deleted all the files under the `.cached/torch` directory and run the code again. It has been downloading the file for 12+ hours but still doesn't show any results. Could you please give me some advice?",
"Finally, it works. Solved by deleting all the cached files under the `.cached/torch` directory. I guessed the reason for the failure of downloading or lasting for a long time is the network speed.π\r\nThank you so much for your guidance and patience!",
"@icmpnorequest \r\nCould you please show me where is the **.cached/torch** directory, I got the same problem and I'd like to try your solutions to delete this directory.\r\n\r\nThanks for your guidance in advance.",
"@stepbystep88 \r\nI use macOS and it is `/USERNAME/.cached/torch` (USERNAME should be replaced by your own). May it would help."
] | 1,580 | 1,619 | 1,580 | NONE | null | Hi,
I tried the `pipeline` code on the README:
```
from transformers import pipeline
nlp = pipeline('sentiment-analysis')
print(nlp('We are very happy to include pipeline into the transformers repository.'))
```
However, it shows the following error:
```
I0131 01:02:23.627610 4420611520 file_utils.py:35] PyTorch version 1.1.0.post2 available.
I0131 01:02:28.316742 4420611520 file_utils.py:362] https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-vocab.txt not found in cache or force_download set to True, downloading to /var/folders/bg/126twh7d29bdtfqxjf_s2kr40000gn/T/tmp_lgvnb3l
I0131 01:02:30.348834 4420611520 file_utils.py:377] copying /var/folders/bg/126twh7d29bdtfqxjf_s2kr40000gn/T/tmp_lgvnb3l to cache at /Users/yantong/.cache/torch/transformers/26bc1ad6c0ac742e9b52263248f6d0f00068293b33709fae12320c0e35ccfbbb.542ce4285a40d23a559526243235df47c5f75c197f04f37d1a0c124c32c9a084
I0131 01:02:30.349704 4420611520 file_utils.py:381] creating metadata file for /Users/yantong/.cache/torch/transformers/26bc1ad6c0ac742e9b52263248f6d0f00068293b33709fae12320c0e35ccfbbb.542ce4285a40d23a559526243235df47c5f75c197f04f37d1a0c124c32c9a084
I0131 01:02:30.350285 4420611520 file_utils.py:390] removing temp file /var/folders/bg/126twh7d29bdtfqxjf_s2kr40000gn/T/tmp_lgvnb3l
I0131 01:02:30.350615 4420611520 tokenization_utils.py:398] loading file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-vocab.txt from cache at /Users/yantong/.cache/torch/transformers/26bc1ad6c0ac742e9b52263248f6d0f00068293b33709fae12320c0e35ccfbbb.542ce4285a40d23a559526243235df47c5f75c197f04f37d1a0c124c32c9a084
I0131 01:02:31.742653 4420611520 file_utils.py:362] https://s3.amazonaws.com/models.huggingface.co/bert/distilbert-base-uncased-finetuned-sst-2-english-config.json not found in cache or force_download set to True, downloading to /var/folders/bg/126twh7d29bdtfqxjf_s2kr40000gn/T/tmp7wk7u8_r
I0131 01:02:32.793608 4420611520 file_utils.py:377] copying /var/folders/bg/126twh7d29bdtfqxjf_s2kr40000gn/T/tmp7wk7u8_r to cache at /Users/yantong/.cache/torch/transformers/437d6b3001e14ea1853bcee09a1b2557f230862c5a03d3ebd78a4cdb94a79020.7a412cd94061214ced4285ea8f65100868e4c9757c85781d11a83acd01fa14a4
I0131 01:02:32.794027 4420611520 file_utils.py:381] creating metadata file for /Users/yantong/.cache/torch/transformers/437d6b3001e14ea1853bcee09a1b2557f230862c5a03d3ebd78a4cdb94a79020.7a412cd94061214ced4285ea8f65100868e4c9757c85781d11a83acd01fa14a4
I0131 01:02:32.794360 4420611520 file_utils.py:390] removing temp file /var/folders/bg/126twh7d29bdtfqxjf_s2kr40000gn/T/tmp7wk7u8_r
I0131 01:02:32.794815 4420611520 configuration_utils.py:185] loading configuration file https://s3.amazonaws.com/models.huggingface.co/bert/distilbert-base-uncased-finetuned-sst-2-english-config.json from cache at /Users/yantong/.cache/torch/transformers/437d6b3001e14ea1853bcee09a1b2557f230862c5a03d3ebd78a4cdb94a79020.7a412cd94061214ced4285ea8f65100868e4c9757c85781d11a83acd01fa14a4
I0131 01:02:32.795076 4420611520 configuration_utils.py:199] Model config {
"activation": "gelu",
"attention_dropout": 0.1,
"dim": 768,
"dropout": 0.1,
"finetuning_task": "sst-2",
"hidden_dim": 3072,
"id2label": {
"0": "NEGATIVE",
"1": "POSITIVE"
},
"initializer_range": 0.02,
"is_decoder": false,
"label2id": {
"NEGATIVE": 0,
"POSITIVE": 1
},
"max_position_embeddings": 512,
"n_heads": 12,
"n_layers": 6,
"num_labels": 2,
"output_attentions": false,
"output_hidden_states": false,
"output_past": true,
"pruned_heads": {},
"qa_dropout": 0.1,
"seq_classif_dropout": 0.2,
"sinusoidal_pos_embds": false,
"tie_weights_": true,
"torchscript": false,
"use_bfloat16": false,
"vocab_size": 30522
}
I0131 01:02:33.892992 4420611520 file_utils.py:362] https://s3.amazonaws.com/models.huggingface.co/bert/distilbert-base-uncased-finetuned-sst-2-english-modelcard.json not found in cache or force_download set to True, downloading to /var/folders/bg/126twh7d29bdtfqxjf_s2kr40000gn/T/tmpgd57b64v
I0131 01:02:35.001976 4420611520 file_utils.py:377] copying /var/folders/bg/126twh7d29bdtfqxjf_s2kr40000gn/T/tmpgd57b64v to cache at /Users/yantong/.cache/torch/transformers/57ded08a298ef01c397973781194aa0abf6176e6f720f660a2b93e8199dc0bc7.455d944f3d1572ab55ed579849f751cf37f303e3388980a42d94f7cd57a4e331
I0131 01:02:35.002447 4420611520 file_utils.py:381] creating metadata file for /Users/yantong/.cache/torch/transformers/57ded08a298ef01c397973781194aa0abf6176e6f720f660a2b93e8199dc0bc7.455d944f3d1572ab55ed579849f751cf37f303e3388980a42d94f7cd57a4e331
I0131 01:02:35.002778 4420611520 file_utils.py:390] removing temp file /var/folders/bg/126twh7d29bdtfqxjf_s2kr40000gn/T/tmpgd57b64v
I0131 01:02:35.003092 4420611520 modelcard.py:154] loading model card file https://s3.amazonaws.com/models.huggingface.co/bert/distilbert-base-uncased-finetuned-sst-2-english-modelcard.json from cache at /Users/yantong/.cache/torch/transformers/57ded08a298ef01c397973781194aa0abf6176e6f720f660a2b93e8199dc0bc7.455d944f3d1572ab55ed579849f751cf37f303e3388980a42d94f7cd57a4e331
I0131 01:02:35.003378 4420611520 modelcard.py:192] Model card: {
"caveats_and_recommendations": {},
"ethical_considerations": {},
"evaluation_data": {},
"factors": {},
"intended_use": {},
"metrics": {},
"model_details": {},
"quantitative_analyses": {},
"training_data": {}
}
I0131 01:02:36.353755 4420611520 modeling_utils.py:406] loading weights file https://s3.amazonaws.com/models.huggingface.co/bert/distilbert-base-uncased-finetuned-sst-2-english-pytorch_model.bin from cache at /Users/yantong/.cache/torch/transformers/f62a0baccbff4fbb83b3b6c63168af997d5aea02fc1a8ea2ab0a26dd79ac6517.461f3160566473d3587f9f4776a5131b1ed527b0d5fccb4b5f06003f457154bc
Traceback (most recent call last):
File "/Users/yantong/Library/Python/3.7/lib/python/site-packages/transformers/modeling_utils.py", line 415, in from_pretrained
state_dict = torch.load(resolved_archive_file, map_location='cpu')
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/torch/serialization.py", line 387, in load
return _load(f, map_location, pickle_module, **pickle_load_args)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/torch/serialization.py", line 581, in _load
deserialized_objects[key]._set_from_file(f, offset, f_should_read_directly)
RuntimeError: unexpected EOF, expected 75086145 more bytes. The file might be corrupted.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/yantong/PycharmProjects/Attribute_Inference_Attack_Reviews/transformer_pipelines.py", line 5, in <module>
nlp = pipeline('sentiment-analysis')
File "/Users/yantong/Library/Python/3.7/lib/python/site-packages/transformers/pipelines.py", line 905, in pipeline
model = model_class.from_pretrained(model, config=config, **model_kwargs)
File "/Users/yantong/Library/Python/3.7/lib/python/site-packages/transformers/modeling_auto.py", line 601, in from_pretrained
return DistilBertForSequenceClassification.from_pretrained(pretrained_model_name_or_path, *model_args, **kwargs)
File "/Users/yantong/Library/Python/3.7/lib/python/site-packages/transformers/modeling_utils.py", line 417, in from_pretrained
raise OSError("Unable to load weights from pytorch checkpoint file. "
OSError: Unable to load weights from pytorch checkpoint file. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.
Process finished with exit code 134 (interrupted by signal 6: SIGABRT)
```
I tried downloading the `distilbert-base-uncased-finetuned-sst-2-english-pytorch_model.bin` and copied to the `.cache/torch/transformers` directory, but it still doesn't work.
OS: MacOS
transformers version: 2.3.0
Could somebody help me fix this issue? Thanks in advance! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2687/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2687/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2686 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2686/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2686/comments | https://api.github.com/repos/huggingface/transformers/issues/2686/events | https://github.com/huggingface/transformers/pull/2686 | 557,592,714 | MDExOlB1bGxSZXF1ZXN0MzY5MTU2MjY5 | 2,686 | Add layerdrop to Flaubert | {
"login": "formiel",
"id": 41543169,
"node_id": "MDQ6VXNlcjQxNTQzMTY5",
"avatar_url": "https://avatars.githubusercontent.com/u/41543169?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/formiel",
"html_url": "https://github.com/formiel",
"followers_url": "https://api.github.com/users/formiel/followers",
"following_url": "https://api.github.com/users/formiel/following{/other_user}",
"gists_url": "https://api.github.com/users/formiel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/formiel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/formiel/subscriptions",
"organizations_url": "https://api.github.com/users/formiel/orgs",
"repos_url": "https://api.github.com/users/formiel/repos",
"events_url": "https://api.github.com/users/formiel/events{/privacy}",
"received_events_url": "https://api.github.com/users/formiel/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2686?src=pr&el=h1) Report\n> Merging [#2686](https://codecov.io/gh/huggingface/transformers/pull/2686?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/df27648bd942d59481a13842904f8cb500136e31?src=pr&el=desc) will **decrease** coverage by `0.01%`.\n> The diff coverage is `16.66%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2686?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2686 +/- ##\n==========================================\n- Coverage 74.1% 74.09% -0.02% \n==========================================\n Files 92 92 \n Lines 15168 15172 +4 \n==========================================\n+ Hits 11240 11241 +1 \n- Misses 3928 3931 +3\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2686?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/configuration\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/2686/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2ZsYXViZXJ0LnB5) | `75% <ΓΈ> (ΓΈ)` | :arrow_up: |\n| [src/transformers/modeling\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/2686/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19mbGF1YmVydC5weQ==) | `29.19% <16.66%> (-0.13%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2686?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2686?src=pr&el=footer). Last update [df27648...15f8b5d](https://codecov.io/gh/huggingface/transformers/pull/2686?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Great, thanks!",
"Thanks so much for the quick merge, @LysandreJik!"
] | 1,580 | 1,580 | 1,580 | CONTRIBUTOR | null | This PR adds `layerdrop` to Flaubert. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2686/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2686/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2686",
"html_url": "https://github.com/huggingface/transformers/pull/2686",
"diff_url": "https://github.com/huggingface/transformers/pull/2686.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2686.patch",
"merged_at": 1580403902000
} |
https://api.github.com/repos/huggingface/transformers/issues/2685 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2685/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2685/comments | https://api.github.com/repos/huggingface/transformers/issues/2685/events | https://github.com/huggingface/transformers/issues/2685 | 557,530,160 | MDU6SXNzdWU1NTc1MzAxNjA= | 2,685 | German Bert tokenizer does not recognize (some) special characters (!,?,...) | {
"login": "andrey999333",
"id": 29929303,
"node_id": "MDQ6VXNlcjI5OTI5MzAz",
"avatar_url": "https://avatars.githubusercontent.com/u/29929303?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andrey999333",
"html_url": "https://github.com/andrey999333",
"followers_url": "https://api.github.com/users/andrey999333/followers",
"following_url": "https://api.github.com/users/andrey999333/following{/other_user}",
"gists_url": "https://api.github.com/users/andrey999333/gists{/gist_id}",
"starred_url": "https://api.github.com/users/andrey999333/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andrey999333/subscriptions",
"organizations_url": "https://api.github.com/users/andrey999333/orgs",
"repos_url": "https://api.github.com/users/andrey999333/repos",
"events_url": "https://api.github.com/users/andrey999333/events{/privacy}",
"received_events_url": "https://api.github.com/users/andrey999333/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Sounds like this was fixed by @Timoeller in #3618, @andrey999333 ",
"Thanks for referencing. This bug should be fixed with the changes we applied to the vocabulary.\r\nFind more infos in the separate issue here: deepset-ai/FARM/issues/60"
] | 1,580 | 1,586 | 1,586 | NONE | null | # π Bug
## Information
Model I am using (Bert, XLNet ...):
**Bert**
Language I am using the model on (English, Chinese ...):
**German**
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ X] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ X] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
`tokenizer = BertTokenizer.from_pretrained('bert-base-german-cased')`
`tokenizer.encode("Hallo!", add_special_tokens=False)`
returns:
`[5850, 26910, 2]`
where `2` is the index of '[UNK]' (unknown) token. Same would happen with 'Hallo?'. In the vocab.txt we have tokens '##!' and '##?' but not '?' or '!'. On the other hand some special characters are recognized, like ':' or ';'.
## Expected behavior
At least such common characters as '!' and '?' should be recognized by tokenizer
## Environment
* OS: Ubuntu
* Python version: 3.7
* `transformers` version (or branch): 2.3 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2685/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2685/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2684 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2684/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2684/comments | https://api.github.com/repos/huggingface/transformers/issues/2684/events | https://github.com/huggingface/transformers/issues/2684 | 557,525,848 | MDU6SXNzdWU1NTc1MjU4NDg= | 2,684 | distilbert_multilingual_cased model for multiple language | {
"login": "divyag11",
"id": 39218807,
"node_id": "MDQ6VXNlcjM5MjE4ODA3",
"avatar_url": "https://avatars.githubusercontent.com/u/39218807?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/divyag11",
"html_url": "https://github.com/divyag11",
"followers_url": "https://api.github.com/users/divyag11/followers",
"following_url": "https://api.github.com/users/divyag11/following{/other_user}",
"gists_url": "https://api.github.com/users/divyag11/gists{/gist_id}",
"starred_url": "https://api.github.com/users/divyag11/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/divyag11/subscriptions",
"organizations_url": "https://api.github.com/users/divyag11/orgs",
"repos_url": "https://api.github.com/users/divyag11/repos",
"events_url": "https://api.github.com/users/divyag11/events{/privacy}",
"received_events_url": "https://api.github.com/users/divyag11/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Could you try to install a newer version of the library? The `distilbert-base-multilingual-cased` checkpoint is available in recent transformers versions, as you can see in the [source code](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_distilbert.py#L42).",
"thanks a lot",
"hi,\r\nI am trying to finetune distilbert multilingual cased model, but i am getting error while training the model:\r\nerror is :\r\n```\r\nValueError: Error when checking model target: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 8 array(s), for inputs ['output_1', 'output_2', 'output_3', 'output_4', 'output_5', 'output_6', 'output_7', 'output_8'] but instead got the following list of 1 arrays: [<tf.Tensor 'ExpandDims:0' shape=(None, 1) dtype=int64>]...\r\n\r\nwhile with the same code using distilbert uncased , there is no such error.\r\nCan you please check if there is some problem with distilbert multilingual cased model?",
"please open the issue",
"Hello, please open a new issue with information that can help us help you. Your software versions as well as the situation where this error happens being the minimum for us to help you, and a reproducible code example being the most useful."
] | 1,580 | 1,580 | 1,580 | NONE | null | By when distilbert_multilingual_cased model is going to be released.because I tried to finetune the
distilbert_multilingual_cased model , but it said "OSError: file distilbert-base-multilingual-cased not found", which means the above-mentioned model is not included in the list. Or let me know if I am doing any wrong assumption? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2684/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2684/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2683 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2683/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2683/comments | https://api.github.com/repos/huggingface/transformers/issues/2683/events | https://github.com/huggingface/transformers/issues/2683 | 557,496,356 | MDU6SXNzdWU1NTc0OTYzNTY= | 2,683 | TFCamembertModel | {
"login": "jkintzinger",
"id": 47453786,
"node_id": "MDQ6VXNlcjQ3NDUzNzg2",
"avatar_url": "https://avatars.githubusercontent.com/u/47453786?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jkintzinger",
"html_url": "https://github.com/jkintzinger",
"followers_url": "https://api.github.com/users/jkintzinger/followers",
"following_url": "https://api.github.com/users/jkintzinger/following{/other_user}",
"gists_url": "https://api.github.com/users/jkintzinger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jkintzinger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jkintzinger/subscriptions",
"organizations_url": "https://api.github.com/users/jkintzinger/orgs",
"repos_url": "https://api.github.com/users/jkintzinger/repos",
"events_url": "https://api.github.com/users/jkintzinger/events{/privacy}",
"received_events_url": "https://api.github.com/users/jkintzinger/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi, the CamemBERT model for tensorflow was merged yesterday, and is therefore available from the master branch right now. You can install it with \r\n\r\n```py\r\npip install git+https://github.com/huggingface/transformers\r\n```\r\n\r\nIt will be in the next transformers release (2.4.0 or 2.3.1), which should be released later today.",
"Awesome, thank you !",
"Hi again @LysandreJik ,\r\n\r\nI have checked the new version of Transformers and an error occurred when I tried to load TFCamembertModel:\r\n\r\n`TypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType`\r\n\r\nI think I found what was missing. The pretrained models are not available, I mean the list is empty when I try to load a pretrained model.\r\n\r\n`OSError: Model name 'test_list' was not found in model name list. We assumed 'https://s3.amazonaws.com/models.huggingface.co/bert/test_list/config.json' was a path, a model identifier, or url to a configuration file named config.json or a directory containing such a file but couldn't find any such file at this path or url.`\r\n\r\nSo I checked the pretrained model archives avaible in \"modeling_tf_camembert.py\" and, indeed, the dictionnary is empty :\r\n\r\n`TF_CAMEMBERT_PRETRAINED_MODEL_ARCHIVE_MAP = {}`\r\n\r\nIs it on purpose ?\r\n\r\nThanks\r\n",
"Hi, indeed there are not checkpoints in the archive map. The contributor (@jplu) that contributed the tensorflow architecture has uploaded checkpoints that you can use: https://huggingface.co/jplu/tf-camembert-base.\r\n\r\nWe're currently working on our website so that it better reflects the following:\r\n- The CamemBERT model has official weights that are usable in PyTorch, but do not currently have any TensorFlow equivalent\r\n- There are community models (cf. jplu/tf-camembert-base) which can be used instead.\r\n\r\nWe're trying to make sure that contributing models is easy and that weights are easily identifiable for users. This is still a work in progress.\r\n\r\ncc @julien-c @joshchagani \r\n\r\nTLDR: in transformers v2.4.0, the following should work:\r\n\r\n```py\r\nfrom transformers import TFCamembertModel\r\nmodel = TFCamembertModel.from_pretrained(\"jplu/tf-camembert-base\")\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,580 | 1,586 | 1,586 | NONE | null | Hi,
I wanted to use the tensorflow version of Camembert : TFCamembertModel, but the implememtation is not available with the v2.3.0 version : https://huggingface.co/transformers/v2.3.0/model_doc/camembert.html.
But TFCamembertModel seems to be available with another version of transformers : https://huggingface.co/transformers/model_doc/camembert.html.
Is this a new or old version of the library ?
Any way, have you already succeeded importing TFCamembertModel ?
Thanks a lot !
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2683/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2683/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2682 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2682/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2682/comments | https://api.github.com/repos/huggingface/transformers/issues/2682/events | https://github.com/huggingface/transformers/issues/2682 | 557,423,583 | MDU6SXNzdWU1NTc0MjM1ODM= | 2,682 | Issue with my profile on the upload/share models webpage | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Should be fixed.",
"Nice! Thanks :)",
"By the way, you should also add a README.md to the same `pretrained_model` folders so that it's displayed on the model pages (see [this one](https://huggingface.co/dbmdz/bert-base-german-cased) for instance)\r\n\r\nI'll document this feature better today.\r\n\r\n",
"Should `jplu/camembert-base` be `jplu/tf-camembert-base`?",
"The name should be ok now, it was an issue in the naming.\r\n\r\nI will work on a README file, this is a super cool feature!!",
"(note that the READMEs will be in this repo in the future β that way we can collaborate on them/link them together/etc)\r\n\r\n(see https://github.com/huggingface/transformers/issues/2520#issuecomment-579009439 if you haven't already)\r\n\r\nThanks a lot @jplu!"
] | 1,580 | 1,580 | 1,580 | CONTRIBUTOR | null | Hello,
The URL of my [profile](https://huggingface.co/jplu) on the upload/share models webpage looks like if there was a model called `jplu`. Any idea why?
Thanks. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2682/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2682/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2681 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2681/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2681/comments | https://api.github.com/repos/huggingface/transformers/issues/2681/events | https://github.com/huggingface/transformers/issues/2681 | 557,376,935 | MDU6SXNzdWU1NTczNzY5MzU= | 2,681 | How to add a fc classification head to BertForQA to make a MTL-BertForQA model? | {
"login": "LukasMut",
"id": 25636832,
"node_id": "MDQ6VXNlcjI1NjM2ODMy",
"avatar_url": "https://avatars.githubusercontent.com/u/25636832?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LukasMut",
"html_url": "https://github.com/LukasMut",
"followers_url": "https://api.github.com/users/LukasMut/followers",
"following_url": "https://api.github.com/users/LukasMut/following{/other_user}",
"gists_url": "https://api.github.com/users/LukasMut/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LukasMut/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LukasMut/subscriptions",
"organizations_url": "https://api.github.com/users/LukasMut/orgs",
"repos_url": "https://api.github.com/users/LukasMut/repos",
"events_url": "https://api.github.com/users/LukasMut/events{/privacy}",
"received_events_url": "https://api.github.com/users/LukasMut/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I am currently trying to figure out how to add an additional classification head to the BertForQA model. I am not sure which is the best / most efficient way to do that. Should I rewrite the source code for BertForQA and inherit from BertPretrainedModel, or should I rather inherit from BertForQA and change the forward pass? The forward pass needs to be fairly similar though... Any help is appreciated!",
"This is a matter of taste. I would write my own class which inherits from BertModel but this is completely up to you.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,580 | 1,586 | 1,586 | NONE | null | # β Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**: | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2681/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2681/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2680 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2680/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2680/comments | https://api.github.com/repos/huggingface/transformers/issues/2680/events | https://github.com/huggingface/transformers/issues/2680 | 557,370,163 | MDU6SXNzdWU1NTczNzAxNjM= | 2,680 | Does loss function in the run_tf_ner.py takes logits or probabilities? | {
"login": "andrey999333",
"id": 29929303,
"node_id": "MDQ6VXNlcjI5OTI5MzAz",
"avatar_url": "https://avatars.githubusercontent.com/u/29929303?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andrey999333",
"html_url": "https://github.com/andrey999333",
"followers_url": "https://api.github.com/users/andrey999333/followers",
"following_url": "https://api.github.com/users/andrey999333/following{/other_user}",
"gists_url": "https://api.github.com/users/andrey999333/gists{/gist_id}",
"starred_url": "https://api.github.com/users/andrey999333/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andrey999333/subscriptions",
"organizations_url": "https://api.github.com/users/andrey999333/orgs",
"repos_url": "https://api.github.com/users/andrey999333/repos",
"events_url": "https://api.github.com/users/andrey999333/events{/privacy}",
"received_events_url": "https://api.github.com/users/andrey999333/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I think you are right, I also find this bug, see [πBugs in run_tf_ner.py](https://github.com/huggingface/transformers/issues/3389)",
"Hey @andrey999333! I answered to the question just here https://github.com/huggingface/transformers/issues/3389",
"Hey @jplu Thanks a lot, i have not noticed that line for some reason. I was wandering coz my code was working fine with the flag and yours without. Now the mystery is solved :)"
] | 1,580 | 1,585 | 1,585 | NONE | null | The model used for ner model is TFBertForTokenClassification in case of Tensorflow. According to documentation this model produces logits for every token. But the loss function that is used in `run_tf_ner.py` is `tf.losses.SparseCategoricalCrossentropy(reduction=tf.keras.losses.Reduction.NONE)` which has default argument `from_logits=False`. In this case the loss function expects probabilities but not the logits. So i would think that correct loss function should have additional argument `from_logits=True`. I'm building my own code using the `run_ner_tf.py` as an example, that is why i want to understand it. Am i missing something here? Because it looks like people can train the model and get some reasonable results... | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2680/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2680/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2679 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2679/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2679/comments | https://api.github.com/repos/huggingface/transformers/issues/2679/events | https://github.com/huggingface/transformers/pull/2679 | 557,334,954 | MDExOlB1bGxSZXF1ZXN0MzY4OTQyMjU1 | 2,679 | Add classifier dropout in ALBERT | {
"login": "peteriz",
"id": 232524,
"node_id": "MDQ6VXNlcjIzMjUyNA==",
"avatar_url": "https://avatars.githubusercontent.com/u/232524?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/peteriz",
"html_url": "https://github.com/peteriz",
"followers_url": "https://api.github.com/users/peteriz/followers",
"following_url": "https://api.github.com/users/peteriz/following{/other_user}",
"gists_url": "https://api.github.com/users/peteriz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/peteriz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/peteriz/subscriptions",
"organizations_url": "https://api.github.com/users/peteriz/orgs",
"repos_url": "https://api.github.com/users/peteriz/repos",
"events_url": "https://api.github.com/users/peteriz/events{/privacy}",
"received_events_url": "https://api.github.com/users/peteriz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2679?src=pr&el=h1) Report\n> Merging [#2679](https://codecov.io/gh/huggingface/transformers/pull/2679?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/83446a88d902661fab12bf8c37a1aa2845cdca5f?src=pr&el=desc) will **increase** coverage by `<.01%`.\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2679?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2679 +/- ##\n==========================================\n+ Coverage 74.59% 74.59% +<.01% \n==========================================\n Files 89 89 \n Lines 14971 14972 +1 \n==========================================\n+ Hits 11168 11169 +1 \n Misses 3803 3803\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2679?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/modeling\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/2679/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `79.14% <100%> (ΓΈ)` | :arrow_up: |\n| [src/transformers/configuration\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/2679/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2FsYmVydC5weQ==) | `100% <100%> (ΓΈ)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2679?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2679?src=pr&el=footer). Last update [83446a8...12c7809](https://codecov.io/gh/huggingface/transformers/pull/2679?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"That's fantastic, thank you for taking the time to do this @peteriz !",
"The configuration files were updated. The type of GELU activation function used was also changed to \"gelu_new\", which is the appropriate activation function that is used in the google-research repository.\r\n\r\n[Original gelu](https://github.com/google-research/ALBERT/blob/e8f8339b003cf2ddbb5ee9fc34a32651b33dd64e/modeling.py#L296-L309)\r\n\r\n[Our gelu new](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py#L138-L142)"
] | 1,580 | 1,580 | 1,580 | CONTRIBUTOR | null | As mentioned in the original [paper](https://arxiv.org/pdf/1909.11942.pdf), they separated the dropout rates of the transformer cells and the classifier, moreover, in V2 the dropouts are 0 (expect for the classifier, again).
Current implementation does not supports this and models are not training well (can't reproduce results of GLUE benchmark using V2 models). I manually updated these values and got V2 models converging.
This issue was raised in #2337 and also mentioned in https://github.com/google-research/ALBERT/issues/23
I added a separate parameter in the config file and update the sequence classification head.
Please also update the configuration of ALBERT V2 models ([base](https://huggingface.co/albert-base-v2), [large](https://huggingface.co/albert-large-v2), [xlarge](https://huggingface.co/albert-xlarge-v2)) in your repository.
More specifically, the configuration of the **attention and hidden dropout rates** of ALBERT V2 models in your repository as well (see as in https://tfhub.dev/google/albert_base/3, https://tfhub.dev/google/albert_large/3, https://tfhub.dev/google/albert_xlarge/3 and https://tfhub.dev/google/albert_xxlarge/3)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2679/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2679/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2679",
"html_url": "https://github.com/huggingface/transformers/pull/2679",
"diff_url": "https://github.com/huggingface/transformers/pull/2679.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2679.patch",
"merged_at": 1580395956000
} |
https://api.github.com/repos/huggingface/transformers/issues/2678 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2678/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2678/comments | https://api.github.com/repos/huggingface/transformers/issues/2678/events | https://github.com/huggingface/transformers/issues/2678 | 557,084,278 | MDU6SXNzdWU1NTcwODQyNzg= | 2,678 | Bug in consecutive creation of tokenizers with different parameters | {
"login": "hawkeoni",
"id": 27156990,
"node_id": "MDQ6VXNlcjI3MTU2OTkw",
"avatar_url": "https://avatars.githubusercontent.com/u/27156990?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hawkeoni",
"html_url": "https://github.com/hawkeoni",
"followers_url": "https://api.github.com/users/hawkeoni/followers",
"following_url": "https://api.github.com/users/hawkeoni/following{/other_user}",
"gists_url": "https://api.github.com/users/hawkeoni/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hawkeoni/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hawkeoni/subscriptions",
"organizations_url": "https://api.github.com/users/hawkeoni/orgs",
"repos_url": "https://api.github.com/users/hawkeoni/repos",
"events_url": "https://api.github.com/users/hawkeoni/events{/privacy}",
"received_events_url": "https://api.github.com/users/hawkeoni/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Just to be clear: I understand that specifying `do_lower_case=True` for a cased model is wrong. The point is in overwriting or somewhat caching the parameter for future calls of the class constructor.",
"Here is the full output with logger.\r\n```\r\nI0129 23:25:47.064881 140331667420992 file_utils.py:38] PyTorch version 1.2.0 available.\r\nI0129 23:25:48.127733 140331667420992 file_utils.py:54] TensorFlow version 2.0.0-rc1 available.\r\nI0129 23:25:48.948472 140331667420992 configuration_utils.py:253] loading configuration file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-config.json from cache at /home/hawkeoni/.cache/torch/transformers/b945b69218e98b3e2c95acf911789741307dec43c698d35fad11c1ae28bda352.d7a3af18ce3a2ab7c0f48f04dc8daff45ed9a3ed333b9e9a79d012a0dedf87a6\r\nI0129 23:25:48.949648 140331667420992 configuration_utils.py:289] Model config BertConfig {\r\n \"attention_probs_dropout_prob\": 0.1,\r\n \"bos_token_id\": 0,\r\n \"do_sample\": false,\r\n \"eos_token_ids\": 0,\r\n \"finetuning_task\": null,\r\n \"hidden_act\": \"gelu\",\r\n \"hidden_dropout_prob\": 0.1,\r\n \"hidden_size\": 768,\r\n \"id2label\": {\r\n \"0\": \"LABEL_0\",\r\n \"1\": \"LABEL_1\"\r\n },\r\n \"initializer_range\": 0.02,\r\n \"intermediate_size\": 3072,\r\n \"is_decoder\": false,\r\n \"label2id\": {\r\n \"LABEL_0\": 0,\r\n \"LABEL_1\": 1\r\n },\r\n \"layer_norm_eps\": 1e-12,\r\n \"length_penalty\": 1.0,\r\n \"max_length\": 20,\r\n \"max_position_embeddings\": 512,\r\n \"model_type\": \"bert\",\r\n \"num_attention_heads\": 12,\r\n \"num_beams\": 1,\r\n \"num_hidden_layers\": 12,\r\n \"num_labels\": 2,\r\n \"num_return_sequences\": 1,\r\n \"output_attentions\": false,\r\n \"output_hidden_states\": false,\r\n \"output_past\": true,\r\n \"pad_token_id\": 0,\r\n \"pruned_heads\": {},\r\n \"repetition_penalty\": 1.0,\r\n \"temperature\": 1.0,\r\n \"top_k\": 50,\r\n \"top_p\": 1.0,\r\n \"torchscript\": false,\r\n \"type_vocab_size\": 2,\r\n \"use_bfloat16\": false,\r\n \"vocab_size\": 28996\r\n}\r\n\r\nI0129 23:25:49.533885 140331667420992 tokenization_utils.py:418] loading file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-vocab.txt from cache at /home/hawkeoni/.cache/torch/transformers/5e8a2b4893d13790ed4150ca1906be5f7a03d6c4ddf62296c383f6db42814db2.e13dbb970cb325137104fb2e5f36fe865f27746c6b526f6352861b1980eb80b1\r\n['Hello', 'there', '!']\r\nI0129 23:25:50.130188 140331667420992 configuration_utils.py:253] loading configuration file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-config.json from cache at /home/hawkeoni/.cache/torch/transformers/b945b69218e98b3e2c95acf911789741307dec43c698d35fad11c1ae28bda352.d7a3af18ce3a2ab7c0f48f04dc8daff45ed9a3ed333b9e9a79d012a0dedf87a6\r\nI0129 23:25:50.130913 140331667420992 configuration_utils.py:289] Model config BertConfig {\r\n \"attention_probs_dropout_prob\": 0.1,\r\n \"bos_token_id\": 0,\r\n \"do_sample\": false,\r\n \"eos_token_ids\": 0,\r\n \"finetuning_task\": null,\r\n \"hidden_act\": \"gelu\",\r\n \"hidden_dropout_prob\": 0.1,\r\n \"hidden_size\": 768,\r\n \"id2label\": {\r\n \"0\": \"LABEL_0\",\r\n \"1\": \"LABEL_1\"\r\n },\r\n \"initializer_range\": 0.02,\r\n \"intermediate_size\": 3072,\r\n \"is_decoder\": false,\r\n \"label2id\": {\r\n \"LABEL_0\": 0,\r\n \"LABEL_1\": 1\r\n },\r\n \"layer_norm_eps\": 1e-12,\r\n \"length_penalty\": 1.0,\r\n \"max_length\": 20,\r\n \"max_position_embeddings\": 512,\r\n \"model_type\": \"bert\",\r\n \"num_attention_heads\": 12,\r\n \"num_beams\": 1,\r\n \"num_hidden_layers\": 12,\r\n \"num_labels\": 2,\r\n \"num_return_sequences\": 1,\r\n \"output_attentions\": false,\r\n \"output_hidden_states\": false,\r\n \"output_past\": true,\r\n \"pad_token_id\": 0,\r\n \"pruned_heads\": {},\r\n \"repetition_penalty\": 1.0,\r\n \"temperature\": 1.0,\r\n \"top_k\": 50,\r\n \"top_p\": 1.0,\r\n \"torchscript\": false,\r\n \"type_vocab_size\": 2,\r\n \"use_bfloat16\": false,\r\n \"vocab_size\": 28996\r\n}\r\n\r\nI0129 23:25:50.711892 140331667420992 tokenization_utils.py:418] loading file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-vocab.txt from cache at /home/hawkeoni/.cache/torch/transformers/5e8a2b4893d13790ed4150ca1906be5f7a03d6c4ddf62296c383f6db42814db2.e13dbb970cb325137104fb2e5f36fe865f27746c6b526f6352861b1980eb80b1\r\n['hello', 'there', '!']\r\nI0129 23:25:51.325906 140331667420992 configuration_utils.py:253] loading configuration file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-config.json from cache at /home/hawkeoni/.cache/torch/transformers/b945b69218e98b3e2c95acf911789741307dec43c698d35fad11c1ae28bda352.d7a3af18ce3a2ab7c0f48f04dc8daff45ed9a3ed333b9e9a79d012a0dedf87a6\r\nI0129 23:25:51.326717 140331667420992 configuration_utils.py:289] Model config BertConfig {\r\n \"attention_probs_dropout_prob\": 0.1,\r\n \"bos_token_id\": 0,\r\n \"do_sample\": false,\r\n \"eos_token_ids\": 0,\r\n \"finetuning_task\": null,\r\n \"hidden_act\": \"gelu\",\r\n \"hidden_dropout_prob\": 0.1,\r\n \"hidden_size\": 768,\r\n \"id2label\": {\r\n \"0\": \"LABEL_0\",\r\n \"1\": \"LABEL_1\"\r\n },\r\n \"initializer_range\": 0.02,\r\n \"intermediate_size\": 3072,\r\n \"is_decoder\": false,\r\n \"label2id\": {\r\n \"LABEL_0\": 0,\r\n \"LABEL_1\": 1\r\n },\r\n \"layer_norm_eps\": 1e-12,\r\n \"length_penalty\": 1.0,\r\n \"max_length\": 20,\r\n \"max_position_embeddings\": 512,\r\n \"model_type\": \"bert\",\r\n \"num_attention_heads\": 12,\r\n \"num_beams\": 1,\r\n \"num_hidden_layers\": 12,\r\n \"num_labels\": 2,\r\n \"num_return_sequences\": 1,\r\n \"output_attentions\": false,\r\n \"output_hidden_states\": false,\r\n \"output_past\": true,\r\n \"pad_token_id\": 0,\r\n \"pruned_heads\": {},\r\n \"repetition_penalty\": 1.0,\r\n \"temperature\": 1.0,\r\n \"top_k\": 50,\r\n \"top_p\": 1.0,\r\n \"torchscript\": false,\r\n \"type_vocab_size\": 2,\r\n \"use_bfloat16\": false,\r\n \"vocab_size\": 28996\r\n}\r\n\r\nI0129 23:25:51.991283 140331667420992 tokenization_utils.py:418] loading file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-vocab.txt from cache at /home/hawkeoni/.cache/torch/transformers/5e8a2b4893d13790ed4150ca1906be5f7a03d6c4ddf62296c383f6db42814db2.e13dbb970cb325137104fb2e5f36fe865f27746c6b526f6352861b1980eb80b1\r\n['hello', 'there', '!']\r\n```",
"Actually even this code has the same trouble, so the problem is probably not in the creation of a new object.\r\n```python\r\nfrom transformers import AutoTokenizer\r\n\r\n\r\ntext = \"Hello there!\"\r\n\r\ntokenizer_first = AutoTokenizer.from_pretrained(\"bert-base-cased\")\r\nprint(tokenizer_first.tokenize(text))\r\n\r\ntokenizer_forced_lowercase = AutoTokenizer.from_pretrained(\"bert-base-cased\", do_lower_case=True)\r\nprint(tokenizer_forced_lowercase.tokenize(text))\r\nprint(tokenizer_first.tokenize(text))\r\n```\r\noutputs:\r\n```\r\n['Hello', 'there', '!']\r\n['hello', 'there', '!']\r\n['hello', 'there', '!']\r\n```\r\n\r\n",
"Indeed, I could reproduce and patch. I'm adding a unit test and will push the fix in a bit.",
"Should have been patched with 2173490!",
"Thanks, this seems to resolve the issue."
] | 1,580 | 1,580 | 1,580 | CONTRIBUTOR | null | # π Bug
Creating a tokenizer with do_lower_case set to True overwrites it for the consecutive creation.
## Information
Model I am using: bert-base-cased
Language I am using the model on (English, Chinese ...):
English
## To reproduce
```python
from transformers import AutoTokenizer
text = "Hello there!"
tokenizer_first = AutoTokenizer.from_pretrained("bert-base-cased")
print(tokenizer_first.tokenize(text))
tokenizer_forced_lowercase = AutoTokenizer.from_pretrained("bert-base-cased", do_lower_case=True)
print(tokenizer_forced_lowercase.tokenize(text))
tokenizer_second = AutoTokenizer.from_pretrained("bert-base-cased")
print(tokenizer_second.tokenize(text))
```
The output on my machine ubuntu 18.04, transformers 2.3.0 installed just now from the repo:
```
['Hello', 'there', '!']
['hello', 'there', '!']
['hello', 'there', '!']
```
Steps to reproduce the behavior:
Execute code snippet from above.
## Expected behavior
Expected output:
```
['Hello', 'there', '!']
['hello', 'there', '!']
['Hello', 'there', '!']
```
## Environment
* OS: ubuntu 18.04
* Python version: Python 3.7.3
* PyTorch version: 1.2.0
* `transformers` version (or branch): 2.3.0 just installed from master
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2678/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2678/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2677 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2677/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2677/comments | https://api.github.com/repos/huggingface/transformers/issues/2677/events | https://github.com/huggingface/transformers/pull/2677 | 557,066,887 | MDExOlB1bGxSZXF1ZXN0MzY4NzI1NjYz | 2,677 | Flaubert | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2677?src=pr&el=h1) Report\n> Merging [#2677](https://codecov.io/gh/huggingface/transformers/pull/2677?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/adb8c93134f02fd0eac2b52189364af21977004c?src=pr&el=desc) will **decrease** coverage by `0.49%`.\n> The diff coverage is `36.22%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2677?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2677 +/- ##\n========================================\n- Coverage 74.59% 74.1% -0.5% \n========================================\n Files 89 92 +3 \n Lines 14971 15167 +196 \n========================================\n+ Hits 11168 11239 +71 \n- Misses 3803 3928 +125\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2677?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/2677/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `98.87% <100%> (+0.03%)` | :arrow_up: |\n| [src/transformers/configuration\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/2677/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2F1dG8ucHk=) | `100% <100%> (ΓΈ)` | :arrow_up: |\n| [src/transformers/modeling\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/2677/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19mbGF1YmVydC5weQ==) | `29.32% <29.32%> (ΓΈ)` | |\n| [src/transformers/tokenization\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/2677/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZmxhdWJlcnQucHk=) | `40.42% <40.42%> (ΓΈ)` | |\n| [src/transformers/configuration\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/2677/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2ZsYXViZXJ0LnB5) | `75% <75%> (ΓΈ)` | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2677?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2677?src=pr&el=footer). Last update [adb8c93...924cb7e](https://codecov.io/gh/huggingface/transformers/pull/2677?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Is the `layerdrop` configuration argument used anywhere? I don't see any usage in the modeling file.",
"Hi @LysandreJik, \r\n\r\nThanks a lot for working on my PR!\r\n\r\nGood catch on layerdrop! It is currently not used for inference (it might be in a future version), so I decided to remove it from the code. As it may be useful for fine-tuning, let me add it and create a new PR. Sorry for the inconvenience!",
"Alright, I've completely updated the documentation as well as the tests. I'm merging this PR, feel free to open a new one concerning the `layerdrop`.",
"I've just opened a new PR for `layerdrop`. Thank you so much for your kind support to the integration of our model into your library!",
"A pleasure!"
] | 1,580 | 1,580 | 1,580 | MEMBER | null | From PR #2632 by [formiel](https://github.com/formiel).
This PR adds [FlauBERT](https://github.com/getalp/Flaubert). Most of the code is derived from XLM (there are some new features in FlauBERT such as pre_norm and layerdrop).
The failing tests were fixed. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2677/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2677/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2677",
"html_url": "https://github.com/huggingface/transformers/pull/2677",
"diff_url": "https://github.com/huggingface/transformers/pull/2677.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2677.patch",
"merged_at": 1580396659000
} |
https://api.github.com/repos/huggingface/transformers/issues/2676 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2676/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2676/comments | https://api.github.com/repos/huggingface/transformers/issues/2676/events | https://github.com/huggingface/transformers/issues/2676 | 557,006,163 | MDU6SXNzdWU1NTcwMDYxNjM= | 2,676 | Trouble fine tuning Huggingface GPT-2 on Colab β Assertion error | {
"login": "texturejc",
"id": 24894080,
"node_id": "MDQ6VXNlcjI0ODk0MDgw",
"avatar_url": "https://avatars.githubusercontent.com/u/24894080?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/texturejc",
"html_url": "https://github.com/texturejc",
"followers_url": "https://api.github.com/users/texturejc/followers",
"following_url": "https://api.github.com/users/texturejc/following{/other_user}",
"gists_url": "https://api.github.com/users/texturejc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/texturejc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/texturejc/subscriptions",
"organizations_url": "https://api.github.com/users/texturejc/orgs",
"repos_url": "https://api.github.com/users/texturejc/repos",
"events_url": "https://api.github.com/users/texturejc/events{/privacy}",
"received_events_url": "https://api.github.com/users/texturejc/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The error raised means that it cannot find the files you gave it. Do you manage to load the files using the \r\n`with open` syntax without using the script?",
"> Do you manage to load the files using the `with open` syntax without using the script?\r\n\r\nThanks for the reply. I guess the answer is 'no', as I'm not sure what you mean. The files are in the same directory as the script. Should I interpret what you say to mean that I open the files as text and then upload the result? I.e.\r\n\r\n```\r\nopen(\"wiki.train.raw\", \"rb\") as file:\r\n data = file.read()\r\nwith open(\"wiki_train.txt\") as f:\r\n f.write(data)\r\n```\r\nThen upload wiki_train.txt to the Colab and use the CLI to access that when fine tuning?\r\n",
"LysandreJik wants to know if you can open and read the files with 'regular' python on Colab (i.e. just read the file and print some lines in another cell). The error message tells us that the script can't find the train dataset (i.e. wiki.train.raw) and this suggests somekind of a path issue. Maybe you can resolve this with by using the absolute path to the file. ",
">LysandreJik wants to know if you can open and read the files with 'regular' python on Colab (i.e. just read the file and print some lines in another cell). \r\n\r\nAh, sorry. But yes, I can do this no problem:\r\n\r\n```\r\nwith open(\"/content/wiki.test.raw\") as file:\r\n data = file.read()\r\n\r\ndata[:100]\r\n\r\n' \\n = Robert Boulter = \\n \\n Robert Boulter is an English film , television and theatre actor . He had '\r\n```\r\n\r\nI then attempted to use absolute filepaths, just to be sure, but no joy:\r\n\r\n```\r\n!export TRAIN_FILE=/content/wiki.train.raw\r\n!export TEST_FILE=/content/wiki.test.raw\r\n\r\n!python /content/transformers/examples/run_lm_finetuning.py \\\r\n --output_dir=output \\\r\n --model_type=gpt2 \\\r\n --model_name_or_path=gpt2 \\\r\n --do_train \\\r\n --train_data_file=$TRAIN_FILE \\\r\n --do_eval \\\r\n --eval_data_file=$TEST_FILE\r\n```\r\n```\r\nTraceback (most recent call last):\r\n File \"/content/transformers/examples/run_lm_finetuning.py\", line 790, in <module>\r\n main()\r\n File \"/content/transformers/examples/run_lm_finetuning.py\", line 735, in main\r\n train_dataset = load_and_cache_examples(args, tokenizer, evaluate=False)\r\n File \"/content/transformers/examples/run_lm_finetuning.py\", line 149, in load_and_cache_examples\r\n return TextDataset(tokenizer, args, file_path=file_path, block_size=args.block_size)\r\n File \"/content/transformers/examples/run_lm_finetuning.py\", line 88, in __init__\r\n assert os.path.isfile(file_path)\r\nAssertionError\r\n```\r\nI am entirely stumped by this. Any further ideas what might be happening?",
"Is there a way for you to share your colab notebook so that I can take a look?",
"> Is there a way for you to share your colab notebook so that I can take a look?\r\n\r\nAbsolutely; please use the link below. This has all the steps outlined in my previous replies. I'll keep the Colab active for as long as I can.\r\n\r\nhttps://colab.research.google.com/drive/1qx2t0KleLyY_EncLyM1leSRFz7VooP0e",
"Environment variables exported with !export are not registered by google colab in them same shell (they are registered in a sub-shell). Just set them with %env like\r\n\r\n```\r\n%env TRAIN_FILE=/content/wiki.train.raw\r\n%env TEST_FILE=/content/wiki.test.raw\r\n```\r\nor avoid them by setting them directly:\r\n```\r\n!python /content/transformers/examples/run_lm_finetuning.py \\\r\n --output_dir=output \\\r\n --model_type=gpt2 \\\r\n --model_name_or_path=gpt2 \\\r\n --do_train \\\r\n --train_data_file=/content/wiki.train.raw \\\r\n --do_eval \\\r\n --eval_data_file=/content/wiki.test.raw\r\n```",
">Environment variables exported with !export are not registered by google colab in them same shell (they are registered in a sub-shell). Just set them with %env \r\n\r\nThis is fantastic, thanks; it has completely resolved the issue with respect to the initial error. However, I'm now having a different set of issues. When I run the fine tuning script, training doesn't seem to occur; it stops before the first epoch and iteration is complete:\r\n\r\n```\r\n01/30/2020 18:25:44 - INFO - __main__ - ***** Running training *****\r\n01/30/2020 18:25:44 - INFO - __main__ - Num examples = 244\r\n01/30/2020 18:25:44 - INFO - __main__ - Num Epochs = 1\r\n01/30/2020 18:25:44 - INFO - __main__ - Instantaneous batch size per GPU = 4\r\n01/30/2020 18:25:44 - INFO - __main__ - Total train batch size (w. parallel, distributed & accumulation) = 4\r\n01/30/2020 18:25:44 - INFO - __main__ - Gradient Accumulation steps = 1\r\n01/30/2020 18:25:44 - INFO - __main__ - Total optimization steps = 61\r\nEpoch: 0% 0/1 [00:00<?, ?it/s]\r\nIteration: 0% 0/61 [00:00<?, ?it/s]^C\r\n```\r\nBut there is nevertheless a model saved: `content/gpt2_cached_lm_1024_GE_train.txt`\r\n\r\nHowever, when I run the text generation script, \r\n```\r\n!python /content/transformers/examples/run_generation.py \\\r\n --model_type=gpt2 \\\r\n --model_name_or_path=content/gpt2_cached_lm_1024_GE_train.txt\r\n```\r\nI get the following error:\r\n```\r\nTraceback (most recent call last):\r\n File \"/content/transformers/examples/run_generation.py\", line 237, in <module>\r\n main()\r\n File \"/content/transformers/examples/run_generation.py\", line 200, in main\r\n tokenizer = tokenizer_class.from_pretrained(args.model_name_or_path)\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils.py\", line 309, in from_pretrained\r\n return cls._from_pretrained(*inputs, **kwargs)\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils.py\", line 410, in _from_pretrained\r\n list(cls.vocab_files_names.values()),\r\nOSError: Model name 'content/gpt2_cached_lm_1024_GE_train.txt' was not found in tokenizers model name list (gpt2, gpt2-medium, gpt2-large, gpt2-xl, distilgpt2). We assumed 'content/gpt2_cached_lm_1024_GE_train.txt' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.\r\n```\r\nNow, I don't know if this is because the training hasn't occurred successfully, or if I'm loading the model incorrectly. Sorry for being such a pain on this, and thanks for the help so far!",
"> content/gpt2_cached_lm_1024_GE_train.txt\r\n\r\nThat is not the model, but the features generated from your training data. I assume that GE_train.txt is a file which contains your training data (environment variable $TRAIN_FILE)?\r\n\r\nDoes the script work for you with the WikiText-2 dataset? It works for me on colab with a reduced batch size (i.e. --per_gpu_train_batch_size=2).\r\n\r\n\r\n",
">Does the script work for you with the WikiText-2 dataset? It works for me on colab with a reduced batch size (i.e. --per_gpu_train_batch_size=2).\r\n\r\nYes, reducing the batch size works nicely, thanks. \r\n\r\nTwo final questions and I'll close this, if that's OK.\r\n\r\n1. So when I fine tune in the colab, I don't need to load the fine tuned model separately? As in, my local GPT-2 model _is_ the fine tuned model, and I can call it in the usual way?\r\n\r\n2. Is there any way of increasing the length of both the prompt text and the generated text from `run_generation.py`? As of now, the prompt text just gets reproduced if it's too long and the generated text is usually just one sentence length piece. I assume that this is the line of code responsible for this,\r\n`text = text[: text.find(args.stop_token) if args.stop_token else None]`\r\nbut i wonder is there any parameter I can adjust in the CLI without digging into the bowels of the script itself?",
"Glad you could get the script to work! Concerning your questions:\r\n\r\n1. You can load the fine-tuned model as you would any model, just point the `model_name_or_path` from `run_generation` to the directory containing your finetuned model.\r\n\r\n2. You can increase the length by specifying the `--length` argument to `run_generation`. Up until this morning there was an issue with the script where it wouldn't sample from the generation, instead always taking the argmax of all tokens generated. This generally results in some repetition, which might be what you were facing as you say the `prompt text just gets reproduced`.\r\n\r\nI would try to pull the repository once again, making sure you have the last version so that it samples correctly according to the `--k` and `--p` arguments, which you can modify to generate different completions.\r\n\r\nIf you don't specify a stop token, it should not stop at the end of a sentence. For example, with the following arguments for `run_generation`:\r\n```\r\n--model_type=gpt2 \r\n--model_name_or_path=gpt2 \r\n--k=50 \r\n--p=0.9 \r\n--length=200\r\n```\r\n\r\nWith the following sample text: `The horse is`, I get the following completion:\r\n\r\n```\r\nThe horse is just barely alive as he's been playing with the dog. The dog is trying to get its life back. She's been waiting for him for a long time. It's been long and hard. I have to move her to the backseat of my car.\r\n\r\nWhen we start talking, he looks over at us like it's about to move off the road. He tells me he's been trying to get a good driver since he moved in. I don't know what to think. The only way he can drive me like that is to ride his horse to get his life back.\r\n\r\nMy sister has been following my life. I am her daughter now. She has always been my mother and always has been.\r\n\r\nMy dad has been the only one in my family. He never told me. He was a nice guy, but he was very controlling. He didn't tell anyone. He always would take a picture of me with his horse.\r\n!\r\n\r\n```",
">Glad you could get the script to work! Concerning your questions:\r\n\r\nThanks so much for all the help! This is a really comprehensive answer and therefore very usefulββand not just for me either, I'll wager. Great, I'll get cracking with my project so with all these provisos duly noted!",
"Happy to help!"
] | 1,580 | 1,580 | 1,580 | NONE | null | [Cross posted from SO]
I wish to fine tune Huggingface's GPT-2 transformer model on my own text data. I want to do this on a Google Colab notebook. However, it doesn't seem to work.
I install the various bits and pieces via the Colab:
```
!git clone https://github.com/huggingface/transformers
%cd transformers
!pip install .
!pip install -r ./examples/requirements.txt
```
Following the example, I upload the suggested WikiText sample data to the for training and run the suggested CLI commands in the notebook.
```
!export TRAIN_FILE=wiki.train.raw
!export TEST_FILE=wiki.test.raw
!python run_lm_finetuning.py \
--output_dir=output \
--model_type=gpt2 \
--model_name_or_path=gpt2 \
--do_train \
--train_data_file=$TRAIN_FILE \
--do_eval \
--eval_data_file=$TEST_FILE
```
This chugs along for a bit, but then I get an assertion error:
```
Traceback (most recent call last):
File "run_lm_finetuning.py", line 790, in <module>
main()
File "run_lm_finetuning.py", line 735, in main
train_dataset = load_and_cache_examples(args, tokenizer, evaluate=False)
File "run_lm_finetuning.py", line 149, in load_and_cache_examples
return TextDataset(tokenizer, args, file_path=file_path, block_size=args.block_size)
File "run_lm_finetuning.py", line 88, in __init__
assert os.path.isfile(file_path)
AssertionError
```
When I run this script via CLI on my own machine it works fine, with the problem that it takes forever to do anything. Why does Colab present this specific problem? Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2676/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2676/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2675 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2675/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2675/comments | https://api.github.com/repos/huggingface/transformers/issues/2675/events | https://github.com/huggingface/transformers/issues/2675 | 556,992,566 | MDU6SXNzdWU1NTY5OTI1NjY= | 2,675 | Best weights/models after fine-tuning gpt2 | {
"login": "sb1992",
"id": 10261100,
"node_id": "MDQ6VXNlcjEwMjYxMTAw",
"avatar_url": "https://avatars.githubusercontent.com/u/10261100?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sb1992",
"html_url": "https://github.com/sb1992",
"followers_url": "https://api.github.com/users/sb1992/followers",
"following_url": "https://api.github.com/users/sb1992/following{/other_user}",
"gists_url": "https://api.github.com/users/sb1992/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sb1992/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sb1992/subscriptions",
"organizations_url": "https://api.github.com/users/sb1992/orgs",
"repos_url": "https://api.github.com/users/sb1992/repos",
"events_url": "https://api.github.com/users/sb1992/events{/privacy}",
"received_events_url": "https://api.github.com/users/sb1992/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! Indeed saving checkpoints after every 50 iterations is quite a lot, therefore we've upped this value to 500 yesterday in 335dd5e. Concerning your questions:\r\n\r\n1. You can use the `--evaluate_during_training` flag to evaluate the model every `--logging_step`, or you can use the `--evaluate_all_checkpoints` to evaluate all the checkpoints at the end. There is no feature to save only the best model, but you could easily do it by modifying the script to save only when the evaluation yields better results than the previous one.\r\n2. For the `run_generation.py` script, pass it the folder which contains the weights you want to use for generation. The folder must contain a `pytorch_model.bin`, a `config.json`, as well as a tokenizer object.\r\n3. Yes, just mention the checkpoint folder in `run_generation.py`. ",
"Thank you. My bad i did not pay full attention to the arguments present in the generation script. That answers my queries."
] | 1,580 | 1,580 | 1,580 | NONE | null | # β Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
I am fine-tuning gpt2 on a new dataset and it checkpoints after every 50 iterations and saves the model. It stretches my local storage to extreme and would want to delete redundant models. So my queries are following:
1) Is it possible to only save the best weights (which gave the lowest perplexity/loss on evaluation data)?
2) When we run_generation.py and passed the directory of our fine-tuned model, which model weights are actually used for generation? ( as there are so many checkpoint folders with model weights).
3) And hence, related to above 2 questions how does the model consider "the best" model/weights from the fine-tuned model directory we pass as arguments? and can we just mention the chekcpoint folder as well in run_generation.py?
Thanks
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2675/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2675/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2674 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2674/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2674/comments | https://api.github.com/repos/huggingface/transformers/issues/2674/events | https://github.com/huggingface/transformers/pull/2674 | 556,976,156 | MDExOlB1bGxSZXF1ZXN0MzY4NjUxNTg2 | 2,674 | Integrate fast tokenizers library inside transformers | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"only took a superficial look, but looks very clean π \r\n\r\nExcited to use fast tokenizers by default!",
"Current CI issues are real and \"normal\" we need to release the next version of tokenizers lib which will bring all the dependencies.",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2674?src=pr&el=h1) Report\n> Merging [#2674](https://codecov.io/gh/huggingface/transformers/pull/2674?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/20fc18fbda3669c2f4a3510e0705b2acd54bff07?src=pr&el=desc) will **increase** coverage by `0.29%`.\n> The diff coverage is `83.01%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2674?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2674 +/- ##\n=========================================\n+ Coverage 75% 75.3% +0.29% \n=========================================\n Files 94 94 \n Lines 15288 15424 +136 \n=========================================\n+ Hits 11467 11615 +148 \n+ Misses 3821 3809 -12\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2674?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/2674/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `98.87% <100%> (ΓΈ)` | :arrow_up: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2674/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `100% <100%> (ΓΈ)` | :arrow_up: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/2674/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `96.92% <100%> (+0.3%)` | :arrow_up: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/2674/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `70.88% <100%> (+0.14%)` | :arrow_up: |\n| [src/transformers/tokenization\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/2674/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZGlzdGlsYmVydC5weQ==) | `100% <100%> (ΓΈ)` | :arrow_up: |\n| [src/transformers/tokenization\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/2674/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZ3B0Mi5weQ==) | `96.85% <100%> (+0.58%)` | :arrow_up: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/2674/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `97.22% <100%> (+0.25%)` | :arrow_up: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/2674/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `37.91% <51.42%> (+5.04%)` | :arrow_up: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2674/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.27% <81.57%> (+0.46%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2674/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.08% <87.23%> (+3.98%)` | :arrow_up: |\n| ... and [30 more](https://codecov.io/gh/huggingface/transformers/pull/2674/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2674?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2674?src=pr&el=footer). Last update [20fc18f...56748e8](https://codecov.io/gh/huggingface/transformers/pull/2674?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,580 | 1,582 | 1,582 | MEMBER | null | Integrate the BPE-based tokenizers inside transformers.
- [x] Bert (100% match)
- [x] DistilBert (100% match)
- [x] OpenAI GPT (100% match)
- [x] GPT2 (100% match if no trailing \n)
- [x] Roberta (100% match if no trailing \n)
- [x] TransformerXL
- [x] CTRL (No binding will be provided).
Added priority for Tokenizer with fast implementation in `AutoTokenizer` this is done through a new mapping (name: class) -> (name: Tuple[class, class]) which represents both the Python and Rust implementation classes. if no Rust implementation is available, it is simply set to None. AutoTokenizer will try to pick the Rust class if not None, otherwise it defaults to the Python one.
Added some matching tests which basically checks that there is a huge % of element wise matching tokens. This is set arbitrary to 0.05 (5%) _[i.e. at max, 5% of differences between Python and Rust]_.
Added parameter `return_offsets_mapping=False` over encoding methods which will return the offset mapping if using a Rust tokenizer. If using a Python tokenizer, a warning message is displayed through the module logger and the argument is discarded.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2674/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2674/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2674",
"html_url": "https://github.com/huggingface/transformers/pull/2674",
"diff_url": "https://github.com/huggingface/transformers/pull/2674.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2674.patch",
"merged_at": 1582130141000
} |
https://api.github.com/repos/huggingface/transformers/issues/2673 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2673/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2673/comments | https://api.github.com/repos/huggingface/transformers/issues/2673/events | https://github.com/huggingface/transformers/issues/2673 | 556,950,222 | MDU6SXNzdWU1NTY5NTAyMjI= | 2,673 | Fine tuning XLMRoberta for Question Answering | {
"login": "houdaM97",
"id": 43147098,
"node_id": "MDQ6VXNlcjQzMTQ3MDk4",
"avatar_url": "https://avatars.githubusercontent.com/u/43147098?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/houdaM97",
"html_url": "https://github.com/houdaM97",
"followers_url": "https://api.github.com/users/houdaM97/followers",
"following_url": "https://api.github.com/users/houdaM97/following{/other_user}",
"gists_url": "https://api.github.com/users/houdaM97/gists{/gist_id}",
"starred_url": "https://api.github.com/users/houdaM97/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/houdaM97/subscriptions",
"organizations_url": "https://api.github.com/users/houdaM97/orgs",
"repos_url": "https://api.github.com/users/houdaM97/repos",
"events_url": "https://api.github.com/users/houdaM97/events{/privacy}",
"received_events_url": "https://api.github.com/users/houdaM97/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, we don't currently have an implementation of XLM RoBERTa in tensorflow.",
"I guess this one can now be closed since TensorFlow XLM-RoBERTa was released with 2.4.0. Thanks @LysandreJik @jplu . Quick question though: I guess you are not retraining the LM but convert the pytorch weighs. Is there any script in huggingface to do this?",
"Right now the easiest way to convert the PyTorch weights to TensorFlow when the two implementations are in huggingface/transformers is the following, for e.g. XLM-R:\r\n\r\n```py\r\nfrom transformers import XLMRobertaModel, TFXLMRobertaModel\r\n\r\npytorch_model = XLMRobertaModel.from_pretrained(\"xlm-roberta-base\") # Checkpoint on S3\r\npytorch_model.save_pretrained(\"pytorch_checkpoint_directory\") # Save it to a directory\r\n\r\ntensorflow_model = TFXLMRobertaModel.from_pretrained(\"pytorch_checkpoint_directory\", from_pt=True) # Load from directory in TF\r\n```\r\n\r\nYou can then save that TensorFlow model using the `save_pretrained` method, and you can do it the other way around too to conver TensorFlow models to PyTorch models.",
"@houdaM97, as @nchocho said XLM-R in TensorFlow was released in v2.4.0 last week. There are no official checkpoints on our s3 however, but there are contributed community checkpoints from @jplu you can use instead:\r\n\r\n```py\r\nfrom transformers import TFXLMRobertaModel\r\n\r\nmodel = TFXLMRobertaModel.from_pretrained(\"jplu/tf-xlm-roberta-base\")\r\n```"
] | 1,580 | 1,580 | 1,580 | NONE | null | # β Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**:
I'm trying to fine tune XLM Roberta for Question Answering in tensorflow version, and the question is do i need to convert the pytorch pretrained model to tensorflow because 'xlm-roberta-case' = "https://s3.amazonaws.com/models.huggingface.co/bert/xlm-roberta-base-pytorch_model.bin", if so how can i do it? i tried to use load_pytorch_model_in_tf2_model() but i had errors!!
thank you for help in advance.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2673/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2673/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2672 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2672/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2672/comments | https://api.github.com/repos/huggingface/transformers/issues/2672/events | https://github.com/huggingface/transformers/issues/2672 | 556,930,415 | MDU6SXNzdWU1NTY5MzA0MTU= | 2,672 | bert-base-uncased have weird result on Squad 2.0 | {
"login": "f422661",
"id": 25716095,
"node_id": "MDQ6VXNlcjI1NzE2MDk1",
"avatar_url": "https://avatars.githubusercontent.com/u/25716095?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/f422661",
"html_url": "https://github.com/f422661",
"followers_url": "https://api.github.com/users/f422661/followers",
"following_url": "https://api.github.com/users/f422661/following{/other_user}",
"gists_url": "https://api.github.com/users/f422661/gists{/gist_id}",
"starred_url": "https://api.github.com/users/f422661/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/f422661/subscriptions",
"organizations_url": "https://api.github.com/users/f422661/orgs",
"repos_url": "https://api.github.com/users/f422661/repos",
"events_url": "https://api.github.com/users/f422661/events{/privacy}",
"received_events_url": "https://api.github.com/users/f422661/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, running the exact same command but specifying you're using the version 2 with `--version_2_with_negative` gives me the following results:\r\n\r\n```\r\n01/29/2020 13:20:35 - INFO - __main__ - Results: {'exact': 73.29234397372188, 'f1': 76.50792180947842, 'total': 11873, 'HasAns_exact': 71.94669365721997, 'HasAns_f1': 78.38707079013807, 'HasAns_total': 5928, 'NoAns_exact': 74.63414634146342, 'NoAns_f1': 74.63414634146342, 'NoAns_total': 5945, 'best_exact': 73.29234397372188, 'best_exact_thresh': 0.0, 'best_f1': 76.50792180947839, 'best_f1_thresh': 0.0}\r\n```\r\n\r\nHere are the exact arguments I used:\r\n```\r\n--model_type=bert --model_name_or_path=bert-base-uncased --do_train --do_eval --do_lower_case --version_2_with_negative --train_file=../../datasets/squad-v2.0/train-v2.0.json --predict_file=../../datasets/squad-v2.0/dev-v2.0.json --per_gpu_train_batch_size=12 --learning_rate=3e-5 --num_train_epochs=2.0 --max_seq_length=384 --doc_stride=128 --save_steps=10000 --output_dir=output_pt --overwrite_output_dir\r\n```",
"@LysandreJik thanks for your reply.\r\nI will try again.",
"My result is the same as yours. Maybe we not use the argument `--version_2_with_negative` ?"
] | 1,580 | 1,602 | 1,580 | NONE | null | I followed the example to fine-tuning BERT on SQuAD2.0:
https://huggingface.co/transformers/examples.html#fine-tuning-bert-on-squad1-0
I run the code as follow:
```
python /content/drive/My\ Drive/squad2/run_squad.py \
--model_type bert \
--model_name_or_path bert-base-uncased \
--do_train \
--do_eval \
--do_lower_case \
--train_file /content/drive/My\ Drive/squad2/train-v2.0.json \
--predict_file /content/drive/My\ Drive/squad2/dev-v2.0.json \
--per_gpu_train_batch_size 12 \
--learning_rate 3e-5 \
--num_train_epochs 2.0 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /content/drive/My\ Drive/squad2_model/
```
However, I got weird results as follow:
```
Results: {'exact': 40.722648024930514, 'f1': 44.3783712849203, 'total': 11873, 'HasAns_exact': 81.5620782726046, 'HasAns_f1': 88.88400847939587, 'HasAns_total': 5928, 'NoAns_exact': 0.0, 'NoAns_f1': 0.0, 'NoAns_total': 5945, 'best_exact': 50.11370336056599, 'best_exact_thresh': 0.0, 'best_f1': 50.11370336056599, 'best_f1_thresh': 0.0}
```
'NoAns_exact' and 'NoAns_f1' are zero.
Do I miss anything when running the example code?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2672/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2672/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2671 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2671/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2671/comments | https://api.github.com/repos/huggingface/transformers/issues/2671/events | https://github.com/huggingface/transformers/issues/2671 | 556,649,762 | MDU6SXNzdWU1NTY2NDk3NjI= | 2,671 | is SOP(sentence order prediction) implemented? | {
"login": "jinkilee",
"id": 6321520,
"node_id": "MDQ6VXNlcjYzMjE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6321520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jinkilee",
"html_url": "https://github.com/jinkilee",
"followers_url": "https://api.github.com/users/jinkilee/followers",
"following_url": "https://api.github.com/users/jinkilee/following{/other_user}",
"gists_url": "https://api.github.com/users/jinkilee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jinkilee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jinkilee/subscriptions",
"organizations_url": "https://api.github.com/users/jinkilee/orgs",
"repos_url": "https://api.github.com/users/jinkilee/repos",
"events_url": "https://api.github.com/users/jinkilee/events{/privacy}",
"received_events_url": "https://api.github.com/users/jinkilee/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, the layer that was used for SOP is the pooler layer, which is available in the base `AlbertModel`. When doing a forward pass, the model returns the `pooled_output` as a second value in the returned tuple. You can use this for doing a SOP task.",
"Oh I see! Thank you so much :)",
"Sorry for reopening this issue.\r\n\r\nAs you suggested, I have checked the `pooled_output` which is the second value in the returned tuple at `AlbertModel`\r\n\r\n```python\r\nimport torch\r\nfrom transformers import AlbertTokenizer, AlbertModel\r\nmodel_nm = 'albert-large-v1'\r\n\r\n# Load pre-trained model tokenizer (vocabulary)\r\ntokenizer = AlbertTokenizer.from_pretrained(model_nm)\r\n\r\n# SOP label should be 1\r\nsent_1 = 'I want to eat'\r\nsent_2 = 'because I am hungry'\r\n\r\n# Tokenized input\r\ntext = ' '.join(['[CLS]', sent_1, '[SEP]', sent_2, '[SEP]'])\r\ntokenized_text = tokenizer.tokenize(text)\r\n\r\n# Convert token to vocabulary indices\r\nindexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text)\r\nsegments_ids = (len(tokenizer.tokenize(sent_1))+2)*[0] + (len(tokenizer.tokenize(sent_2))+1)*[1]\r\n\r\n# Convert inputs to PyTorch tensors\r\ntokens_tensor = torch.tensor([indexed_tokens])\r\nsegments_tensors = torch.tensor([segments_ids])\r\n\r\n# Load pre-trained model (weights)\r\nmodel = AlbertModel.from_pretrained(model_nm)\r\nmodel.eval()\r\n\r\noutput = model(tokens_tensor, segments_tensors)\r\noutput[0].shape, output[1].shape\r\n```\r\nWhen I check `output[1].shape` it is just a vector of `torch.Size([1, 1024]))`.\r\nHow can I do SOP with this? Unlike `BertModel` in `modeling_bert.py`, there is no code like\r\n\r\n```python\r\nself.classifier = nn.Linear(config.hidden_size, self.config.num_labels)\r\n```",
"From `src/transformers/modeling_albert.py`\r\n```python\r\n# No ALBERT model currently handles the next sentence prediction task\r\nif \"seq_relationship\" in name:\r\n continue\r\n```\r\nI think current ALBERT model does not handle SOP. Let me know if I am wrong :)",
"ALBERT doesn't do NSP but SOP - as you said. I think the following is a copy-paste error (@LysandreJik could you confirm?). It should refer to SOP and not NSP.\r\n\r\nhttps://github.com/huggingface/transformers/blob/ddb6f9476b58ed9bf4433622ca9aa49932929bc0/src/transformers/modeling_albert.py#L496-L500\r\n\r\nI am not sure about the seq_relationship line.\r\n\r\nhttps://github.com/huggingface/transformers/blob/ddb6f9476b58ed9bf4433622ca9aa49932929bc0/src/transformers/modeling_albert.py#L113-L115\r\n\r\nPerhaps the final relationship classification isn't implemented in transformers. Shouldn't be too hard to implement by yourself, though. You can use `AlbertForSequenceClassification` for that.",
"Yeah, implementing SOP by my self is not difficult one. As you suggested, I can just use `AlbertForSequenceClassification`. By the way, what I really want to know is ...\r\n\r\nwhen I load AlbertModel by `model = AlbertModel.from_pretrained(model_nm)`, does pre-trained model has already learnt SOP? or not? I think it has not learnt yet, because `AlbertModel` has never used `AlbertForSequenceClassification`.",
"The model names are just abstractions of the weights and layers. They weren't trained with this library. That being said, I would expect AlbertModel to load only the weights and layers except the last classifying layers.\r\n\r\nYou can see for instance that\r\n\r\n```python\r\nfrom transformers import AlbertForSequenceClassification\r\nimport logging\r\n\r\nlogging.basicConfig(level=logging.INFO)\r\nmodel = AlbertForSequenceClassification.from_pretrained('albert-base-v1')\r\n```\r\n\r\nwill log:\r\n\r\n```\r\nINFO:transformers.modeling_utils:Weights of AlbertForSequenceClassification not initialized from pretrained model: ['classifier.weight', 'classifier.bias']\r\nINFO:transformers.modeling_utils:Weights from pretrained model not used in AlbertForSequenceClassification: ['predictions.bias', 'predictions.LayerNorm.weight', 'predictions.LayerNorm.bias', 'predictions.dense.weight', 'predictions.dense.bias', 'predictions.decoder.weight', 'predictions.decoder.bias']\r\n```\r\n\r\nIndicating that the pretrained weights that you are loading haven't all been loaded (the prediction layer) because that layer doesn't exist in this architecture. On the other hand, the classification layer that is present in the XXXSequenceClassification model has not been pretrained, so its weights are not in the pretrained weights.\r\n\r\nI would have expected to see a similar message indicating that not all weights could be loaded in AlbertModel because it doesn't contain the prediction layer, but I don't get any such message - which seems odd to me.\r\n\r\n```python\r\nfrom transformers import AlbertModel\r\nimport logging\r\n\r\nlogging.basicConfig(level=logging.INFO)\r\nmodel = AlbertModel.from_pretrained('albert-base-v1')\r\n# doesn't log any info messages\r\n```\r\n",
"Oh I see thank you so much.\r\n\r\nYour answer is clear to me :)\r\n\r\nThanks",
"Hi all! What situation with Albert's SOP now?\r\n@jinkilee do you have worked approach for SOP?\r\nThank you!",
"@jinkilee Hi, please\r\n\r\nI went through the discussion and tried to use \"AlbertForSequenceClassification\" instead but I cannot understand what does logits exactly indicates! \r\n\r\n```\r\nimport torch\r\nfrom transformers import AlbertTokenizer, AlbertModel\r\nfrom transformers import AlbertForSequenceClassification\r\nimport logging\r\nmodel_nm = 'albert-large-v1'\r\n\r\n# Load pre-trained model tokenizer (vocabulary)\r\ntokenizer = AlbertTokenizer.from_pretrained(model_nm)\r\n\r\n# SOP label should be 1\r\nsent_1 = 'I was having cough and headache'\r\nsent_2 = 'so I went to the doctor'\r\n\r\n# Tokenized input\r\ntext = ' '.join(['[CLS]', sent_2, '[SEP]', sent_1, '[SEP]'])\r\ntokenized_text = tokenizer.tokenize(text)\r\n\r\n# Convert token to vocabulary indices\r\nindexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text)\r\nsegments_ids = (len(tokenizer.tokenize(sent_1))+2)*[0] + (len(tokenizer.tokenize(sent_2))+1)*[1]\r\n\r\n# Convert inputs to PyTorch tensors\r\ntokens_tensor = torch.tensor([indexed_tokens])\r\nsegments_tensors = torch.tensor([segments_ids])\r\n\r\n# Load pre-trained model (weights)\r\nlogging.basicConfig(level=logging.INFO)\r\nmodel = AlbertForSequenceClassification.from_pretrained('albert-base-v1')\r\nmodel.eval()\r\n\r\noutput = model(tokens_tensor, segments_tensors)\r\noutput#[0].shape, output[1].shape\r\n```\r\n\r\nThe output value is:\r\nSequenceClassifierOutput(loss=None, logits=tensor([[ 1.0192, -0.3174]], grad_fn=<AddmmBackward>), hidden_states=None, attentions=None)\r\n\r\nThanks in advance.\r\n"
] | 1,580 | 1,645 | 1,580 | NONE | null | # β Questions & Help
I am reviewing huggingface's version of Albert.
However, I cannot find any code or comment about SOP.
I can find NSP(Next Sentence Prediction) implementation from modeling_from src/transformers/modeling_bert.py.
Is SOP inherited from here with SOP-style labeling? or Is there anything I am missing?
## Details
https://stackoverflow.com/questions/59961023/is-sopsentence-order-prediction-implemented | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2671/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2671/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2670 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2670/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2670/comments | https://api.github.com/repos/huggingface/transformers/issues/2670/events | https://github.com/huggingface/transformers/pull/2670 | 556,585,774 | MDExOlB1bGxSZXF1ZXN0MzY4MzI5NDY3 | 2,670 | Remove unnecessary `del` in run_tf_glue.py example | {
"login": "jarednielsen",
"id": 4564897,
"node_id": "MDQ6VXNlcjQ1NjQ4OTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jarednielsen",
"html_url": "https://github.com/jarednielsen",
"followers_url": "https://api.github.com/users/jarednielsen/followers",
"following_url": "https://api.github.com/users/jarednielsen/following{/other_user}",
"gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions",
"organizations_url": "https://api.github.com/users/jarednielsen/orgs",
"repos_url": "https://api.github.com/users/jarednielsen/repos",
"events_url": "https://api.github.com/users/jarednielsen/events{/privacy}",
"received_events_url": "https://api.github.com/users/jarednielsen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2670?src=pr&el=h1) Report\n> Merging [#2670](https://codecov.io/gh/huggingface/transformers/pull/2670?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9d87eafd118739a4c121d69d7cff425264f01e1c?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2670?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2670 +/- ##\n=======================================\n Coverage 74.51% 74.51% \n=======================================\n Files 87 87 \n Lines 14920 14920 \n=======================================\n Hits 11117 11117 \n Misses 3803 3803\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2670?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2670?src=pr&el=footer). Last update [9d87eaf...ff1a4b3](https://codecov.io/gh/huggingface/transformers/pull/2670?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Indeed, thanks for catching that and removing it!"
] | 1,580 | 1,580 | 1,580 | CONTRIBUTOR | null | Platform: Ubuntu 18.04 (Linux-4.15.0-1054-aws-x86_64-with-Ubuntu-18.04-bionic)
Python: 3.6.9
PyTorch: 1.4.0
TensorFlow: 2.0.0
Running `./examples/run_tf_glue.py` gives `KeyError: 'special_tokens_mask'`.
Diving into the code, it looks like there's an optional keyword argument in [`encode_plus()`](https://github.com/huggingface/transformers/blob/9d87eafd118739a4c121d69d7cff425264f01e1c/src/transformers/tokenization_utils.py#L834) named `return_special_tokens_mask` that defaults to False. I'm guessing that this argument was added recently and the example just needs to be updated? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2670/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2670/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2670",
"html_url": "https://github.com/huggingface/transformers/pull/2670",
"diff_url": "https://github.com/huggingface/transformers/pull/2670.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2670.patch",
"merged_at": 1580324477000
} |
https://api.github.com/repos/huggingface/transformers/issues/2669 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2669/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2669/comments | https://api.github.com/repos/huggingface/transformers/issues/2669/events | https://github.com/huggingface/transformers/issues/2669 | 556,551,029 | MDU6SXNzdWU1NTY1NTEwMjk= | 2,669 | models and tokenizers trained with pytorch_pretrained_bert are not compatible with transformers | {
"login": "g-karthik",
"id": 3851993,
"node_id": "MDQ6VXNlcjM4NTE5OTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3851993?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/g-karthik",
"html_url": "https://github.com/g-karthik",
"followers_url": "https://api.github.com/users/g-karthik/followers",
"following_url": "https://api.github.com/users/g-karthik/following{/other_user}",
"gists_url": "https://api.github.com/users/g-karthik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/g-karthik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/g-karthik/subscriptions",
"organizations_url": "https://api.github.com/users/g-karthik/orgs",
"repos_url": "https://api.github.com/users/g-karthik/repos",
"events_url": "https://api.github.com/users/g-karthik/events{/privacy}",
"received_events_url": "https://api.github.com/users/g-karthik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Can you share your vocabulary so we can have a look at the differences?",
"@thomwolf I took the `vocab.json` located inside `runs/mymodel` and the `openai-gpt-vocab.json` hosted in the Hugging Face [S3](https://s3.amazonaws.com/models.huggingface.co/bert/openai-gpt-vocab.json) bucket and compared the two as follows:\r\n\r\n```\r\n>>> with open(\"openai-gpt-vocab.json\", \"r\") as f:\r\n... a = json.load(f)\r\n... \r\n>>> with open(\"vocab.json\", \"r\") as f:\r\n... b = json.load(f)\r\n... \r\n>>> a == b\r\nTrue\r\n```\r\n\r\nSo the `vocab.json` inside `runs/mymodel` is exactly the same as that hosted in the S3 bucket.\r\n\r\n------\r\n\r\nSome of the other files located in `runs/mymodel` include `config.json`, `merges.txt` and `special_tokens.txt`. If you're interested in the contents of the last file, it is the following:\r\n\r\n```\r\n<bos>\r\n<eos>\r\n<speaker1>\r\n<speaker2>\r\n<pad>\r\n```",
"From looking at the different releases, I would assume that in the current master branch, more tokens are added (don't have the time to dig through to see where they come from). It happens in `__len__`:\r\n\r\nhttps://github.com/huggingface/transformers/blob/5a6b138b00eef2506e0fc2c6088fb81c064161bf/src/transformers/tokenization_utils.py#L535-L537\r\n\r\nThat being said, to only get the size of the base vocabulary, there is another method called `vocab_size` which is implemented in the OpenAI tokenizer like so:\r\n\r\nhttps://github.com/huggingface/transformers/blob/5a6b138b00eef2506e0fc2c6088fb81c064161bf/src/transformers/tokenization_openai.py#L115-L117\r\n\r\nIn the 0.6.2 release, the length is the encoder + the special tokens.\r\n\r\nhttps://github.com/huggingface/transformers/blob/b832d5bb8a6dfc5965015b828e577677eace601e/pytorch_pretrained_bert/tokenization_openai.py#L157-L158\r\n\r\nSo it seems that the added special tokens are different in the releases, that is `self.added_tokens_encoder` (current master) vs `self.special_tokens` (0.6.2).\r\n\r\nPerhaps when I have more time I can look more closely into this.",
"@BramVanroy yes indeed, but this is a more serious, systemic issue since it occurs not just for tokenizers but for models as well, as I noted in the original post.\r\n\r\nBasically anyone who trained models with pytorch_pretrained_bert 0.6.2 and later upgraded their entire code-base (both training and inference) to transformers 2.3.0 will have to discard those models and train fresh ones.",
"Can we get an update on this @thomwolf and @LysandreJik ? Is this expected behavior? Should we have to re-train all models?",
"Hi, I'm having a hard time replicating this on my end. Could you try specifying explicitly to the model that you're trying to load a state dict of a certain size, by using the configuration? If I understand correctly, you have a directory `runs/mymodel `, which must contain a `config.json` file and a `pytorch_model.bin` file.\r\n\r\nLoading the model as such fails:\r\n\r\n```py\r\nfrom transformers import OpenAIGPTModel\r\n\r\nmodel = OpenAIGPTModel.from_pretrained(\"runs/mymodel\")\r\n```\r\n\r\nwith the error you mentioned above. Can you try by loading the configuration separately, and then instantiating the model with it, as follows:\r\n\r\n```py\r\nfrom transformers import OpenAIGPTModel, OpenAIGPTConfig\r\n\r\nconfig = OpenAIGPTConfig.from_pretrained(\"runs/mymodel\", vocab_size=40483)\r\nmodel = OpenAIGPTModel.from_pretrained(\"runs/mymodel\", config=config)\r\n```",
"@LysandreJik when I try loading the model like you mentioned above, I don't get any errors. However, `vocab_size=40483` is something that `from_pretrained()` is supposed to figure out from the contents of `runs/mymodel`, right?\r\n\r\nAnd yes, `runs/mymodel` contains all of the following:\r\n```\r\nmodel_training_args.bin\r\nconfig.json\r\nvocab.json\r\nspecial_tokens.txt\r\nmerges.txt\r\ncheckpoint_mymodel_1.pth\r\ncheckpoint_mymodel_2.pth\r\npytorch_model.bin\r\n```\r\n\r\nI trained a separate model with transformers 2.3.0 and its trained model directory within `runs/` didn't have `special_tokens.txt`, but instead I see `special_tokens_map.json` and `added_tokens.json`. These files didn't exist when training models with pytorch_pretrained_bert 0.6.2, understandably because you guys changed the special tokens from a list to a dictionary when you upgraded from 0.6.2.\r\n\r\nHowever, this upgrade should not mean that old models are no longer supported.\r\n\r\nAlso, while what you are suggesting works for loading the model, can you tell me what would ensure that both tokenizers return the exact same length, i.e., 40483?\r\n\r\nBasically what the `OpenAIGPTTokenizer` needs to ensure is that it not only reads the contents of `vocab.json` from `runs/mymodel`, but also the contents of `special_tokens.txt`. This was happening with pytorch_pretrained_bert 0.6.2, and it should continue to happen in transformers 2.3.0. I understand that you guys wanted to change the special tokens from a list to a dictionary for 2.3.0, but that change should be backward compatible.\r\n\r\nLook for a `special_tokens.txt` if it exists, and use it if it does.\r\n\r\nSee how this was being done in 0.6.2 (I'm only pasting the example of the tokenizer, but the same is applicable to the model as well): https://github.com/huggingface/transformers/blob/v0.6.2/pytorch_pretrained_bert/tokenization_openai.py#L128L132\r\n",
"@thomwolf @LysandreJik Any updates?\r\n\r\nIt would be great if y'all could let everyone know if you're working to fix this.\r\n\r\nAlternately, please make a recommendation to the community on how to handle this scenario. Should we revert back to pytorch_pretrained_bert 0.6.2 for our old models? Should we just re-train all old models?",
"Hi @g-karthik we **don't** plan to assure backward compatibility between `pytorch-pretrained-bert` and `transformers`'s tokenizers.\r\n\r\nThere were deep changes in the way we handle added tokens from `pytorch-pretrained-bert` (in which it was basic, specific to BERT and broken on some hedge cases) to `transformers` (in which it is more reliable and unified across models).\r\n\r\nSo in your case, my recommendation is thus to stick with `pytorch_pretrained_bert` indeed.",
"@thomwolf Thanks for letting us know!\r\n\r\nJust to clarify, it's not just the tokenizers, it's also the models trained with `pytorch_pretrained_bert` 0.6.2 (which includes more than just BERT, btw) that will not be compatible with `transformers` 2.3.0 at run-time/inference-time."
] | 1,580 | 1,581 | 1,581 | NONE | null | # π Migration
## Information
The models and tokenizers in transformers 2.3.0 are backward incompatible with pytorch_pretrained_bert 0.6.2.
## Details
```
>>> import transformers
>>> transformers.__version__
'2.3.0'
>>> import pytorch_pretrained_bert
>>> pytorch_pretrained_bert.__version__
'0.6.2'
>>> from pytorch_pretrained_bert import OpenAIGPTTokenizer
>>> tokenizer = OpenAIGPTTokenizer.from_pretrained("runs/mymodel")
ftfy or spacy is not installed using BERT BasicTokenizer instead of SpaCy & ftfy.
>>> len(tokenizer)
40483
>>> from transformers import OpenAIGPTTokenizer
>>> tokenizer = OpenAIGPTTokenizer.from_pretrained("runs/mymodel")
ftfy or spacy is not installed using BERT BasicTokenizer instead of SpaCy & ftfy.
>>> len(tokenizer)
40478
>>>
```
`runs/mymodel` contains a model that was trained using pytorch_pretrained_bert 0.6.2, specifically with the `transfer-learning-conv-ai` repo.
Expected behavior from transformers 2.3.0: `len(tokenizer)` must be 40483, like with pytorch_pretrained_bert 0.6.2.
## Environment
* OS: Amazon Linux
* Python version: 3.6
* PyTorch version:
* `pytorch-transformers` or `pytorch-pretrained-bert` version (or branch): 0.6.2
* `transformers` version (or branch): 2.3.0
* Using GPU? Yes
* Distributed or parallel setup? N/A
## Checklist
- [Y] I have read the migration guide in the readme.
([pytorch-transformers](https://github.com/huggingface/transformers#migrating-from-pytorch-transformers-to-transformers);
[pytorch-pretrained-bert](https://github.com/huggingface/transformers#migrating-from-pytorch-pretrained-bert-to-transformers))
- [Y] I checked if a related official extension example runs on my machine.
-------
UPDATE: this issue is also the case for models, see below:
```
>>> from pytorch_pretrained_bert import OpenAIGPTLMHeadModel
>>> model = OpenAIGPTLMHeadModel.from_pretrained("runs/mymodel")
>>> from transformers import OpenAIGPTLMHeadModel
>>> model = OpenAIGPTLMHeadModel.from_pretrained("runs/mymodel")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/modeling_utils.py", line 486, in from_pretrained
model.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for OpenAIGPTLMHeadModel:
size mismatch for transformer.tokens_embed.weight: copying a param with shape torch.Size([40483, 768]) from checkpoint, the shape in current model is torch.Size([40478, 768]).
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2669/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2669/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2668 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2668/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2668/comments | https://api.github.com/repos/huggingface/transformers/issues/2668/events | https://github.com/huggingface/transformers/issues/2668 | 556,473,673 | MDU6SXNzdWU1NTY0NzM2NzM= | 2,668 | How to get .ckpt files for tensorflow DistilBERT model | {
"login": "JKP0",
"id": 48640299,
"node_id": "MDQ6VXNlcjQ4NjQwMjk5",
"avatar_url": "https://avatars.githubusercontent.com/u/48640299?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JKP0",
"html_url": "https://github.com/JKP0",
"followers_url": "https://api.github.com/users/JKP0/followers",
"following_url": "https://api.github.com/users/JKP0/following{/other_user}",
"gists_url": "https://api.github.com/users/JKP0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JKP0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JKP0/subscriptions",
"organizations_url": "https://api.github.com/users/JKP0/orgs",
"repos_url": "https://api.github.com/users/JKP0/repos",
"events_url": "https://api.github.com/users/JKP0/events{/privacy}",
"received_events_url": "https://api.github.com/users/JKP0/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hello JKP0,\r\n\r\nWhat do you need the .ckpt files for?",
"@Poaz Dear,\r\nWe are working on NLG models for coreference resolution. We started our project with BERT, so our implementations are dependent with the pre-trained [ BERT-model](https://storage.googleapis.com/bert_models/2018_10_18/cased_L-12_H-768_A-12.zip) available from Google-API. Now we want to do study for the same with DistilBERT. Our implementation is based on TensorFlow 1.14.0\r\n\r\nActually our requirement is something like bellow \r\n```\r\nassignment_map, initialized_variable_names = modeling.get_assignment_map_from_checkpoint(tvars, config['tf_checkpoint']) # essential, unresolved \r\n\r\ninit_from_checkpoint = tf.train.init_from_checkpoint if config['init_checkpoint'].endswith('ckpt') else load_from_pytorch_checkpoint # essential, unresolved \r\n\r\nmodel.get_all_encoder_layers() # this is our essential, right now completely unresolved for us\r\nmodel.get_sequence_output() # this is our essential, right now completely unresolved for us\r\n```\r\nbut any method (e.g. `get_all_encoder_layers(); get_sequence_output(); get_assignment_map_from_checkpoint(); ...`) implemented in `DistilBertModel` class to get this kind of thing is out-of my knowledge. I have checked a loat. In our earlier implementation, we have defined this method where we have used `tf.train.list_variables(init_checkpoint)` and other tf-1 API to meet the need for which .ckpt files are essential. \r\n\r\nAnd most of the tf-1 API uses checkpoint configuration (or serialized object), but we are unable to resolve it with the non-sequential .h5 model file by TFDistiBertModel. So we are in need to the same file for DistilBert which provided [here ](https://storage.googleapis.com/bert_models/2018_10_18/cased_L-12_H-768_A-12.zip) for BERT.\r\n\r\nIf you or anyone can suggest a way to come out from it or possible convenient way to get .ckpt files for DistilBERT, I have lots of thanks in advance. Thanks! ",
"Okay, thanks for the context. If you in anyway able to use PyTorch for your implementation you can get outputs from all layers using the following code:\r\n```\r\nfrom transformers import DistilBertTokenizer, DistilBertModel, DistilBertConfig\r\nimport torch\r\n\r\nconfig = DistilBertConfig.from_pretrained('distilbert-base-uncased', output_hidden_states=True)\r\ntokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')\r\nmodel = DistilBertModel.from_pretrained('distilbert-base-uncased', config=config)\r\nmodel.eval()\r\ninput_ids = torch.tensor(tokenizer.encode(\"Hello, my dog is cute\", add_special_tokens=True)).unsqueeze(0) \r\noutputs = model(input_ids)\r\n```\r\nThe output will then be outputs[0] (batch_size, seq_length, hidden_state) for the final layer\r\nand outputs[1] (batch_size, seq_length, hidden_state) for each layer in the model, with index 0 being the last layer.\r\n\r\nIf that is not an option, it is possible to convert the .h5 file to .ckpt using Keras and Tensorflow\r\n\r\nFor tf 1.x\r\n```\r\nsaver = tf.train.Saver()\r\nmodel = keras.models.load_model(\"model.h5\")\r\nsess = keras.backend.get_session()\r\nsave_path = saver.save(sess, \"model.ckpt\")\r\n```\r\nfor tf 2.x\r\n```\r\nsaver = tf.train.Checkpoint()\r\nmodel = keras.models.load_model('model.hdf5', compile=False)\r\nsess = tf.compat.v1.keras.backend.get_session()\r\nsave_path = saver.save('model.ckpt')\r\n```\r\n\r\nHope it helps!",
"@Poaz your first idea is good, but it will cost us for other changes.\r\nAnd second one giving error we have tried a lot, as DistilBERT model saved by `model.save_pretrained('dir')` is not a sequential or serialized object and `keras.models.load_model(\"model.h5\")` only loads sequential and serialized .h5 model. \r\n\r\n> to save model \r\n```\r\nimport tensorflow as tf\r\nfrom transformers import DistilBertTokenizer, TFDistilBertModel\r\n\r\ntokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')\r\nmodel = TFDistilBertModel.from_pretrained('distilbert-base-uncased')\r\ninput_ids = tf.constant(tokenizer.encode(\"Hello, my dog is cute\"), dtype=\"int32\")[None, :] # Batch size 1\r\noutputs = model(input_ids)\r\nlast_hidden_states = outputs[0]\r\n\r\nmodel.save_pretrained(\"./DSB/\")\r\nmodel.save_weights(\"./DSB/DistDistilBERT_weights.h5\")\r\n```\r\n\r\n> tf-1.14.0\r\n```\r\nimport tensorflow as tf\r\nfrom keras.models import load_model\r\n```\r\n\r\n```\r\nsaver = tf.train.Saver()\r\nmodel = keras.models.load_model(\"DSB/tf_model.h5\")\r\nsess = keras.backend.get_session()\r\nsave_path = saver.save(sess, \"/tmp/model.ckpt\")\r\n```\r\n>\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n<ipython-input-3-01f1268a6c60> in <module>()\r\n----> 1 saver = tf.train.Saver()\r\n 2 model = load_model(\"DSB/tf_model.h5\")\r\n 3 sess = keras.backend.get_session()\r\n 4 save_path = saver.save(sess, \"model.ckpt\")\r\n\r\n2 frames\r\n/usr/local/lib/python3.6/dist-packages/tensorflow/python/training/saver.py in __init__(self, var_list, reshape, sharded, max_to_keep, keep_checkpoint_every_n_hours, name, restore_sequentially, saver_def, builder, defer_build, allow_empty, write_version, pad_step_number, save_relative_paths, filename)\r\n 823 time.time() + self._keep_checkpoint_every_n_hours * 3600)\r\n 824 elif not defer_build:\r\n--> 825 self.build()\r\n 826 if self.saver_def:\r\n 827 self._check_saver_def()\r\n\r\n/usr/local/lib/python3.6/dist-packages/tensorflow/python/training/saver.py in build(self)\r\n 835 if context.executing_eagerly():\r\n 836 raise RuntimeError(\"Use save/restore instead of build in eager mode.\")\r\n--> 837 self._build(self._filename, build_save=True, build_restore=True)\r\n 838 \r\n 839 def _build_eager(self, checkpoint_path, build_save, build_restore):\r\n\r\n/usr/local/lib/python3.6/dist-packages/tensorflow/python/training/saver.py in _build(self, checkpoint_path, build_save, build_restore)\r\n 860 return\r\n 861 else:\r\n--> 862 raise ValueError(\"No variables to save\")\r\n 863 self._is_empty = False\r\n 864 \r\n\r\nValueError: No variables to save\r\n\r\n> tf-2.0.0\r\n\r\n```\r\nimport tensorflow as tf\r\nfrom tensorflow.keras.models import load_model\r\n\r\n```\r\n\r\n```\r\nsaver = tf.train.Checkpoint()\r\nmodel = load_model('DSB/tf_model.h5', compile=False)\r\nsess = tf.compat.v1.keras.backend.get_session()\r\nsave_path = saver.save('model.ckpt')\r\n```\r\n>\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n<ipython-input-13-13dd44da36a5> in <module>()\r\n 1 saver = tf.train.Checkpoint()\r\n----> 2 model = load_model('DSB/tf_model.h5', compile=False)\r\n 3 sess = tf.compat.v1.keras.backend.get_session()\r\n 4 save_path = saver.save('model.ckpt')\r\n\r\n1 frames\r\n/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/saving/save.py in load_model(filepath, custom_objects, compile)\r\n 144 if (h5py is not None and (\r\n 145 isinstance(filepath, h5py.File) or h5py.is_hdf5(filepath))):\r\n--> 146 return hdf5_format.load_model_from_hdf5(filepath, custom_objects, compile)\r\n 147 \r\n 148 if isinstance(filepath, six.string_types):\r\n\r\n/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/saving/hdf5_format.py in load_model_from_hdf5(filepath, custom_objects, compile)\r\n 163 model_config = f.attrs.get('model_config')\r\n 164 if model_config is None:\r\n--> 165 raise ValueError('No model found in config file.')\r\n 166 model_config = json.loads(model_config.decode('utf-8'))\r\n 167 model = model_config_lib.model_from_config(model_config,\r\n\r\nValueError: No model found in config file.\r\n\r\n> tf-2.0.0\r\n```\r\nimport tensorflow as tf\r\nfrom keras.models import load_model\r\n\r\n```\r\n\r\n```\r\nsaver = tf.train.Checkpoint()\r\nmodel = load_model('DSB/tf_model.h5', compile=False)\r\nsess = tf.compat.v1.keras.backend.get_session()\r\nsave_path = saver.save('model.ckpt')\r\n```\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n<ipython-input-15-13dd44da36a5> in <module>()\r\n 1 saver = tf.train.Checkpoint()\r\n----> 2 model = load_model('DSB/tf_model.h5', compile=False)\r\n 3 sess = tf.compat.v1.keras.backend.get_session()\r\n 4 save_path = saver.save('model.ckpt')\r\n\r\n3 frames\r\n/usr/local/lib/python3.6/dist-packages/keras/engine/saving.py in load_wrapper(*args, **kwargs)\r\n 456 os.remove(tmp_filepath)\r\n 457 return res\r\n--> 458 return load_function(*args, **kwargs)\r\n 459 \r\n 460 return load_wrapper\r\n\r\n/usr/local/lib/python3.6/dist-packages/keras/engine/saving.py in load_model(filepath, custom_objects, compile)\r\n 548 if H5Dict.is_supported_type(filepath):\r\n 549 with H5Dict(filepath, mode='r') as h5dict:\r\n--> 550 model = _deserialize_model(h5dict, custom_objects, compile)\r\n 551 elif hasattr(filepath, 'write') and callable(filepath.write):\r\n 552 def load_function(h5file):\r\n\r\n/usr/local/lib/python3.6/dist-packages/keras/engine/saving.py in _deserialize_model(h5dict, custom_objects, compile)\r\n 237 return obj\r\n 238 \r\n--> 239 model_config = h5dict['model_config']\r\n 240 if model_config is None:\r\n 241 raise ValueError('No model found in config.')\r\n\r\n/usr/local/lib/python3.6/dist-packages/keras/utils/io_utils.py in __getitem__(self, attr)\r\n 316 else:\r\n 317 if self.read_only:\r\n--> 318 raise ValueError('Cannot create group in read-only mode.')\r\n 319 val = H5Dict(self.data.create_group(attr))\r\n 320 return val\r\n\r\nValueError: Cannot create group in read-only mode.",
"I see.. The h5 does not contain the model structure, therefore it can not be recreated. That means that it is necessary to rebuild the model in Keras for that method to work. That is simply not feasible for you I think. ",
"hey,you can load the model as :\r\nloaded_model = TFDistilBertForSequenceClassification.from_pretrained(\"directory\")",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"@JKP0 were u able to solve the issue?",
" How did you solve this problem, can any one help in this. How to get .ckpt files for muril-base-cased/tf_model.h5"
] | 1,580 | 1,643 | 1,586 | NONE | null | `model.save_pretrained('dir')` tf_model.h5 how to get .ckpt files for it | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2668/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2668/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2667 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2667/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2667/comments | https://api.github.com/repos/huggingface/transformers/issues/2667/events | https://github.com/huggingface/transformers/issues/2667 | 556,286,583 | MDU6SXNzdWU1NTYyODY1ODM= | 2,667 | Calling AlbertTokenizer.from_pretrained() with the path to a single file or url is deprecated | {
"login": "etetteh",
"id": 28512232,
"node_id": "MDQ6VXNlcjI4NTEyMjMy",
"avatar_url": "https://avatars.githubusercontent.com/u/28512232?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/etetteh",
"html_url": "https://github.com/etetteh",
"followers_url": "https://api.github.com/users/etetteh/followers",
"following_url": "https://api.github.com/users/etetteh/following{/other_user}",
"gists_url": "https://api.github.com/users/etetteh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/etetteh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/etetteh/subscriptions",
"organizations_url": "https://api.github.com/users/etetteh/orgs",
"repos_url": "https://api.github.com/users/etetteh/repos",
"events_url": "https://api.github.com/users/etetteh/events{/privacy}",
"received_events_url": "https://api.github.com/users/etetteh/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"You're trying to load a checkpoint in a tokenizer. Use `AlbertModel` to load the model, not `AlbertTokenizer`.",
"Okay. Thanks.\r\nThese two work okay\r\nmodel = TFAlbertModel.from_pretrained('albert-base-v2')\r\nmodel = TFAlbertForSequenceClassification.from_pretrained('albert-base-v2')\r\n\r\nbecause they are being downloaded from the internet. The issue is how to load the one I have downloaded, which is loacted here, experiment/ALBERT_pretrained_models/albert_base_v2.tar.gz, on my system. Please kindly help or provide sample code ",
"what is in your tar.gz file ?",
"you should probably untar into a folder and it _should_ just work.",
"These are the content of the tar.gz:\r\n30k-clean.model albert_config.json, model.ckpt-best.index, 30k-clean.vocab, model.ckpt-best.data-00000-of-00001, model.ckpt-best.meta\r\n",
"You would need to convert it using the `convert_albert_original_tf_checkpoint_to_pytorch`, as you did in your first question. You can then load the exported dump using `AlbertModel`.",
"I converted it using model.ckpt-best.index, albert_config.json and saved it as albert_base_v2.ckpt.\r\nWhen I run\r\n`model = TFAlbertModel.from_pretrained('albert_base_v2.ckpt')`\r\n\r\nI get \r\n```\r\nUnicodeDecodeError Traceback (most recent call last)\r\n<ipython-input-2-3fd924302d83> in <module>\r\n----> 1 model = TFAlbertModel.from_pretrained('albert_base_v2.ckpt')\r\n\r\n~/anaconda3/lib/python3.7/site-packages/transformers/modeling_tf_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)\r\n 277 force_download=force_download,\r\n 278 resume_download=resume_download,\r\n--> 279 **kwargs,\r\n 280 )\r\n 281 else:\r\n\r\n~/anaconda3/lib/python3.7/site-packages/transformers/configuration_utils.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs)\r\n 173 \r\n 174 \"\"\"\r\n--> 175 config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)\r\n 176 return cls.from_dict(config_dict, **kwargs)\r\n 177 \r\n\r\n~/anaconda3/lib/python3.7/site-packages/transformers/configuration_utils.py in get_config_dict(cls, pretrained_model_name_or_path, pretrained_config_archive_map, **kwargs)\r\n 223 if resolved_config_file is None:\r\n 224 raise EnvironmentError\r\n--> 225 config_dict = cls._dict_from_json_file(resolved_config_file)\r\n 226 \r\n 227 except EnvironmentError:\r\n\r\n~/anaconda3/lib/python3.7/site-packages/transformers/configuration_utils.py in _dict_from_json_file(cls, json_file)\r\n 312 def _dict_from_json_file(cls, json_file: str):\r\n 313 with open(json_file, \"r\", encoding=\"utf-8\") as reader:\r\n--> 314 text = reader.read()\r\n 315 return json.loads(text)\r\n 316 \r\n\r\n~/anaconda3/lib/python3.7/codecs.py in decode(self, input, final)\r\n 320 # decode input (taking the buffer into account)\r\n 321 data = self.buffer + input\r\n--> 322 (result, consumed) = self._buffer_decode(data, self.errors, final)\r\n 323 # keep undecoded input until the next call\r\n 324 self.buffer = data[consumed:]\r\n\r\nUnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 0: invalid start byte\r\n```\r\n\r\nPlease, what am I doing wrong?",
"Does the error still happen if you use `AlbertModel` instead of `TFAlbertModel` ?",
"> Does the error still happen if you use `AlbertModel` instead of `TFAlbertModel` ?\r\n\r\nYes the error still happens. Same error report.",
"Please is there a fix for this\n\nOn Tue, Jan 28, 2020, 16:49 Lysandre Debut <[email protected]> wrote:\n\n> Does the error still happen if you use AlbertModel instead of\n> TFAlbertModel ?\n>\n> β\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/2667?email_source=notifications&email_token=AGZQ72EECOAB4HT3X7CLBTTRABO2RA5CNFSM4KMUNI72YY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGOEKEB5WA#issuecomment-579346136>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AGZQ72DOAMFVHKA6WBG5XNTRABO2RANCNFSM4KMUNI7Q>\n> .\n>\n",
"This might be far-fetched but did you do any of the conversion process on Windows and now you're trying to load the model on Linux (or the other way around)? That _might_ explain encoding issues. ",
"> This might be far-fetched but did you do any of the conversion process on Windows and now you're trying to load the model on Linux (or the other way around)? That _might_ explain encoding issues.\r\n\r\nNo please. I did everything on a Linux machine. In fact, I only use Linux. Will try to find a fix. ",
"Other far-fetched idea: did you train the model on Python 2 and now try to load it in Python 3 or vice-versa?",
"> Other far-fetched idea: did you train the model on Python 2 and now try to load it in Python 3 or vice-versa?\r\n\r\nI downloaded the ALBERT official pre-trained model. I didn't train anything.",
"I had no issues loading your checkpoint in both `AlbertModel` and `TFAlbertModel`. Here is what I did:\r\n\r\n- Download your file\r\n- Untar `albert_base_v2.tar.gz` into a folder, I called mine `albert_base`\r\n- run the convert command:\r\n```\r\npython convert_albert_original_tf_checkpoint_to_pytorch.py --tf_checkpoint_path=albert_base/model.ckpt-best --albert_config_file=albert_base=albert_config.json --pytorch_dump_path=albert_base/pytorch_model.bin\r\n```\r\n- When loading a model from a directory, it requires `config.json` and `pytorch_model.bin`, so rename the config:\r\n```\r\ncp albert_base/albert_config.json albert_base/config.json\r\n```\r\n- Now if you `ls` the folder, here are the contents:\r\n```\r\n30k-clean.model 30k-clean.vocab albert_config.json config.json model.ckpt-best.data-00000-of-00001 model.ckpt-best.index model.ckpt-best.meta pytorch_model.bin\r\n```\r\n\r\nThen, in Python (pytorch):\r\n\r\n```py\r\nfrom transformers import AlbertModel\r\n\r\nmodel = AlbertModel.from_pretrained(\"albert_base\")\r\n```\r\n\r\nor in tf:\r\n\r\n```py\r\nfrom transformers import TFAlbertModel\r\n\r\nmodel = TFAlbertModel.from_pretrained(\"albert_base\", from_pt=True)\r\n```\r\n\r\nIf you didn't train anything, you could have loaded the albert model using the simple command:\r\n\r\n```py\r\nfrom transformers import AlbertModel\r\n\r\nmodel = AlbertModel.from_pretrained(\"albert-base-v2\")\r\n```",
"Thanks a lot for your help. I tried your fix and it worked.\nWhat I missed from my previous approaches were the .bin file, renaming the\nconfig file and the main thing was I wasn't passing model.ckpt-best but\ninstead one of the model.ckpt-best (.index).\n\nAnother question is, can I pass embeddings from a different pretrained\nmodel?\nI'm using a clinical dataset, and I'm wondering if it's possible to learn\nembeddings and pass it to the existing AlbertModel.\n\nOn Mon, Feb 3, 2020, 21:27 Lysandre Debut <[email protected]> wrote:\n\n> I had no issues loading your checkpoint in both AlbertModel and\n> TFAlbertModel. Here is what I did:\n>\n> - Download your file\n> - Untar albert_base_v2.tar.gz into a folder, I called mine albert_base\n> - run the convert command:\n>\n> python convert_albert_original_tf_checkpoint_to_pytorch.py --tf_checkpoint_path=albert_Base/model.ckpt-best --albert_config_file=albert_base=albert_config.json --pytorch_dump_path=albert_base/pytorch_model.bin\n>\n>\n> - When loading a model from a directory, it requires config.json and\n> pytorch_model.bin, so rename the config:\n>\n> cp albert_base/albert_config.json albert_base/config.json\n>\n>\n> - Now if you ls the folder, here are the contents:\n>\n> 30k-clean.model 30k-clean.vocab albert_config.json config.json model.ckpt-best.data-00000-of-00001 model.ckpt-best.index model.ckpt-best.meta pytorch_model.bin\n>\n> Then, in Python (pytorch):\n>\n> from transformers import AlbertModel\n>\n> model = AlbertModel.from_pretrained(\"albert_base\")\n>\n> or in tf:\n>\n> from transformers import TFAlbertModel\n>\n> model = TFAlbertModel.from_pretrained(\"albert_base\", from_pt=True)\n>\n> If you didn't train anything, you could have loaded the albert model using\n> the simple command:\n>\n> from transformers import AlbertModel\n>\n> model = AlbertModel.from_pretrained(\"albert-base-v2\")\n>\n> β\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/2667?email_source=notifications&email_token=AGZQ72AOUYFUP4SJ3IBF6DDRBCD45A5CNFSM4KMUNI72YY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGOEKVOV4Y#issuecomment-581626611>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AGZQ72CSMUWJGYURTSK3IE3RBCD45ANCNFSM4KMUNI7Q>\n> .\n>\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,580 | 1,586 | 1,586 | NONE | null | <!-- A clear and concise description of the question. -->
I recently downloaded the [ALBERT_base_v2](https://storage.googleapis.com/albert_models/albert_base_v2.tar.gz) TF pretrained model and converted it to a pytorch with the following code:
`(base) enoch@enoch-pc:~/dl_repos/transformers/src/transformers$ python convert_albert_original_tf_checkpoint_to_pytorch.py --tf_checkpoint_path '/home/enoch/Documents/experiment/ALBERT_pretrained_models/albert_base_v2/albert_base/model.ckpt-best.index' --albert_config_file '/home/enoch/Documents/experiment/ALBERT_pretrained_models/albert_base_v2/albert_base/albert_config.json' --pytorch_dump_path '/home/enoch/Documents/experiment/albert_base_v2.ckpt'`
However, when I run
`tokenizer = AlbertTokenizer.from_pretrained('albert_base_v2.ckpt')`
I get
```
Calling AlbertTokenizer.from_pretrained() with the path to a single file or url is deprecated
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<timed exec> in <module>
~/anaconda3/lib/python3.7/site-packages/transformers/tokenization_utils.py in from_pretrained(cls, *inputs, **kwargs)
307
308 """
--> 309 return cls._from_pretrained(*inputs, **kwargs)
310
311 @classmethod
~/anaconda3/lib/python3.7/site-packages/transformers/tokenization_utils.py in _from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs)
456 # Instantiate tokenizer.
457 try:
--> 458 tokenizer = cls(*init_inputs, **init_kwargs)
459 except OSError:
460 raise OSError(
~/anaconda3/lib/python3.7/site-packages/transformers/tokenization_albert.py in __init__(self, vocab_file, do_lower_case, remove_space, keep_accents, bos_token, eos_token, unk_token, sep_token, pad_token, cls_token, mask_token, **kwargs)
109
110 self.sp_model = spm.SentencePieceProcessor()
--> 111 self.sp_model.Load(vocab_file)
112
113 @property
~/anaconda3/lib/python3.7/site-packages/sentencepiece.py in Load(self, filename)
116
117 def Load(self, filename):
--> 118 return _sentencepiece.SentencePieceProcessor_Load(self, filename)
119
120 def LoadOrDie(self, filename):
RuntimeError: Internal: /sentencepiece/src/sentencepiece_processor.cc(73) [model_proto->ParseFromArray(serialized.data(), serialized.size())]
```
Please, how do I use or load the downloaded pretrained models? Thank you. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2667/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2667/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2666 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2666/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2666/comments | https://api.github.com/repos/huggingface/transformers/issues/2666/events | https://github.com/huggingface/transformers/issues/2666 | 556,241,636 | MDU6SXNzdWU1NTYyNDE2MzY= | 2,666 | Multiple token IDs for same token | {
"login": "dakshvar22",
"id": 8708249,
"node_id": "MDQ6VXNlcjg3MDgyNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/8708249?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dakshvar22",
"html_url": "https://github.com/dakshvar22",
"followers_url": "https://api.github.com/users/dakshvar22/followers",
"following_url": "https://api.github.com/users/dakshvar22/following{/other_user}",
"gists_url": "https://api.github.com/users/dakshvar22/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dakshvar22/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dakshvar22/subscriptions",
"organizations_url": "https://api.github.com/users/dakshvar22/orgs",
"repos_url": "https://api.github.com/users/dakshvar22/repos",
"events_url": "https://api.github.com/users/dakshvar22/events{/privacy}",
"received_events_url": "https://api.github.com/users/dakshvar22/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Okay, I found that the symbol is not `<space>` but this `Δ `\r\nSo there exist two words in the vocab - `can` and `Δ can`\r\nWhy is this so?",
"Hi! This is indeed an intended property of the tokenizer. The GPT-2 tokenizer is a byte-level BPE that has a sufficient vocabulary size to make the distinction between tokens that are at the beginning of a sentence (not prepended by a space), and those that are in the middle of a sentence (prepended by a space). \r\n\r\nThe tokens `the` and ` the` are therefore encoded differently, however the tokenizer strips the spaces from the sequences it receives as input. You can suppress that behavior by setting the `add_prefix_space` flag to `True`:\r\n\r\n```py\r\ntokenizer.encode(\"the\")\r\n# [1169]\r\n\r\ntokenizer.encode(\"the\", add_prefix_space=True)\r\n# [262]\r\n```\r\n\r\nConcerning your question regarding why the vocabulary displays `can` and `Δ can`, you can actually see the `Δ can` as being ` can` (notice the space). The GPT-2 tokenizer converts all spaces/control characters to other tokens. These spaces and control characters could have some unwanted behavior when using a BPE tokenizer (for example if it splits on whitespace).\r\n\r\nThe space token is switched to `Δ `. You can see it being done using the `bytes_to_unicode` method in the [tokenization_gpt2.py](https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_gpt2.py#L63-L85) file.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,580 | 1,586 | 1,586 | NONE | null | ## β Questions & Help
I am using GPT2Tokenizer. I observed that some tokens are duplicated in the vocabulary with a space appended to them in the beginning. For example, there exist two separate tokens - `can` and `<space>can`. Both of them are mapped to different token IDs - `5171` and `460` respectively.
Although, both -
```
can_token_id = tokenizer.encode(' can', add_special_tokens=False)
```
and
```
can_token_id = tokenizer.encode('can', add_special_tokens=False)
```
return can_token_id as `[5171]`
Is this an intended property of the tokenizer? Is there a mapping available between what token IDs are just a space appended version of other tokenIDs? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2666/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2666/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2665 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2665/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2665/comments | https://api.github.com/repos/huggingface/transformers/issues/2665/events | https://github.com/huggingface/transformers/pull/2665 | 556,189,262 | MDExOlB1bGxSZXF1ZXN0MzY3OTk2NjU5 | 2,665 | standardize CTRL BPE files - upload models to S3 | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Also cc @keskarnitish!"
] | 1,580 | 1,651 | 1,582 | MEMBER | null | This PR:
- update CTRL BPE files (`vocab.json` and `merges.txt`) to use a single format for sub-word splitting (selected to use `</w>` at the end of words)
- upload CTRL updated vocabulary files and pytorch model (not updated) to AWS.
cc @mfuntowicz | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2665/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2665/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2665",
"html_url": "https://github.com/huggingface/transformers/pull/2665",
"diff_url": "https://github.com/huggingface/transformers/pull/2665.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2665.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/2664 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2664/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2664/comments | https://api.github.com/repos/huggingface/transformers/issues/2664/events | https://github.com/huggingface/transformers/pull/2664 | 556,173,228 | MDExOlB1bGxSZXF1ZXN0MzY3OTgzMjcy | 2,664 | Updates to the templates | {
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2664?src=pr&el=h1) Report\n> Merging [#2664](https://codecov.io/gh/huggingface/transformers/pull/2664?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ea2600bd5f1d36f2fb61958be21db5b901e33884?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2664?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2664 +/- ##\n=======================================\n Coverage 74.51% 74.51% \n=======================================\n Files 87 87 \n Lines 14920 14920 \n=======================================\n Hits 11117 11117 \n Misses 3803 3803\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2664?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2664?src=pr&el=footer). Last update [ea2600b...1cfc4af](https://codecov.io/gh/huggingface/transformers/pull/2664?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,580 | 1,582 | 1,580 | COLLABORATOR | null | This PR updates the existing GitHub templates. Main changes are:
- motivate users to post general question on Stack Overflow, tagged [huggingface-transformers](https://stackoverflow.com/questions/tagged/huggingface-transformers)
- removing the 'additional context' section as it might not add much and just bloats the template
- changed references to pytorch-transformers to transformers
closes https://github.com/huggingface/transformers/issues/2529 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2664/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2664/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2664",
"html_url": "https://github.com/huggingface/transformers/pull/2664",
"diff_url": "https://github.com/huggingface/transformers/pull/2664.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2664.patch",
"merged_at": 1580226071000
} |
https://api.github.com/repos/huggingface/transformers/issues/2663 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2663/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2663/comments | https://api.github.com/repos/huggingface/transformers/issues/2663/events | https://github.com/huggingface/transformers/pull/2663 | 556,125,120 | MDExOlB1bGxSZXF1ZXN0MzY3OTQzMTQw | 2,663 | Add check to verify existence of pad_token_id | {
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Tests failed on loading the Bert Whole Word Masking model:\r\n\r\n\r\n> OSError: Couldn't reach server at 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-cased-whole-word-masking-config.json' to download pretrained model configuration file.\r\n\r\nFetching that file from the browser does work, though.",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2663?src=pr&el=h1) Report\n> Merging [#2663](https://codecov.io/gh/huggingface/transformers/pull/2663?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ea2600bd5f1d36f2fb61958be21db5b901e33884?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2663?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2663 +/- ##\n=======================================\n Coverage 74.51% 74.51% \n=======================================\n Files 87 87 \n Lines 14920 14920 \n=======================================\n Hits 11117 11117 \n Misses 3803 3803\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2663?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2663/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `85.69% <100%> (ΓΈ)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2663?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2663?src=pr&el=footer). Last update [ea2600b...8d04f9b](https://codecov.io/gh/huggingface/transformers/pull/2663?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Hi Bram, thanks for opening a pull request. The problem with this approach is that checking if the `pad_token_id` is `None` will print a warning: `Using pad_token, but it is not set yet.`\r\n\r\nIt would probably be annoying for the user to be facing that warning each time they call `batch_encode_plus`. I would argue using the private attribute `_pad_token` instead would be better, as it can be used for the same purpose without raising a warning.",
"> Hi Bram, thanks for opening a pull request. The problem with this approach is that checking if the `pad_token_id` is `None` will print a warning: `Using pad_token, but it is not set yet.`\r\n> \r\n> It would probably be annoying for the user to be facing that warning each time they call `batch_encode_plus`. I would argue using the private attribute `_pad_token` instead would be better, as it can be used for the same purpose without raising a warning.\r\n\r\nAh, didn't know that. Thanks! I can make the changes tomorrow, but feel free to edit now if you want the changes faster. ",
"Great, thanks Bram !\r\n"
] | 1,580 | 1,580 | 1,580 | COLLABORATOR | null | In batch_encode_plus we have to ensure that the tokenizer has a pad_token_id so that, when padding, no None values are added as padding. That would happen with gpt2, openai, transfoxl.
closes https://github.com/huggingface/transformers/issues/2640 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2663/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2663/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2663",
"html_url": "https://github.com/huggingface/transformers/pull/2663",
"diff_url": "https://github.com/huggingface/transformers/pull/2663.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2663.patch",
"merged_at": 1580337899000
} |
https://api.github.com/repos/huggingface/transformers/issues/2662 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2662/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2662/comments | https://api.github.com/repos/huggingface/transformers/issues/2662/events | https://github.com/huggingface/transformers/issues/2662 | 556,068,065 | MDU6SXNzdWU1NTYwNjgwNjU= | 2,662 | 'Embedding' object has no attribute 'shape' | {
"login": "whitedelay",
"id": 38174055,
"node_id": "MDQ6VXNlcjM4MTc0MDU1",
"avatar_url": "https://avatars.githubusercontent.com/u/38174055?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/whitedelay",
"html_url": "https://github.com/whitedelay",
"followers_url": "https://api.github.com/users/whitedelay/followers",
"following_url": "https://api.github.com/users/whitedelay/following{/other_user}",
"gists_url": "https://api.github.com/users/whitedelay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/whitedelay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/whitedelay/subscriptions",
"organizations_url": "https://api.github.com/users/whitedelay/orgs",
"repos_url": "https://api.github.com/users/whitedelay/repos",
"events_url": "https://api.github.com/users/whitedelay/events{/privacy}",
"received_events_url": "https://api.github.com/users/whitedelay/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You can't import a .ckpt file directly in a PyTorch model. You first need to convert your obtained BERT model to our format, using the script [convert_bert_original_tf_checkpoint_to_pytorch](https://github.com/huggingface/transformers/blob/master/src/transformers/convert_bert_original_tf_checkpoint_to_pytorch.py). It will then be usable in our TensorFlow/PyTorch architectures.",
"@LysandreJik \r\n\r\nThanks for the comment! I've used 'convert_bert_original_tf_checkpoint_to_pytorch' but, same issue occurred.. ;(\r\n\r\n\r\n\r\n\r\n\r\n",
"If I understand correctly, you used this `convert_bert_original_tf_checkpoint_to_pytorch` script to convert it to a PyTorch model, which was dumpes in a `HanBert-54kN` folder?\r\n\r\nDid you try using:\r\n\r\n```py\r\nmodel = BertForPreTraining.from_pretrained(\"HanBert-54kN\")\r\n```\r\n\r\n? Does it raise the same error ?",
"yes, I've tried that before.\r\nIs it because of the version conflict?\r\nI recently recognized that the HanBert is pre-trained under tensorflow-gpu 1.11.0.",
"For me, the same problem occurred and I solved it by changing the corresponding block of load_tf_weights_in_bert function as follows:\r\n\r\nOriginal:\r\n```\r\n try:\r\n assert pointer.shape == array.shape\r\n except AssertionError as e:\r\n e.args += (pointer.shape, array.shape)\r\n raise\r\n```\r\n\r\nChanged\r\n```\r\n try:\r\n if type(pointer).__name__ != 'Parameter':\r\n assert pointer.shape == array.shape\r\n else:\r\n if pointer.shape != array.shape:\r\n if pointer.shape == array.transpose().shape:\r\n array = array.transpose()\r\n assert pointer.shape == array.shape\r\n except AssertionError as e:\r\n e.args += (pointer.shape, array.shape)\r\n raise\r\n```\r\n\r\nI found that this code works without previous error, but I don't check the working of the converted parameter with this code yet... So please warn about that.",
"@whitedelay \r\n\r\nI've also confront the same issue. It's because the convert function can't skip the `optimizer parameter`. I've raised the [PR](https://github.com/huggingface/transformers/pull/2652) regarding with this issue.",
"@monologg \r\n\r\nOh, I see. Thank you so much! π",
"I have the same problem (perhaps the PR didn't solve this issue). what can I do about it?",
"Hello, i am facing this same issue. If anyone found a solution already, please share it.",
"hi how to solve itοΌ @monologg @henrique-voni "
] | 1,580 | 1,629 | 1,581 | CONTRIBUTOR | null | ## β Questions & Help
**version**
tensorflow : 2.0.0
tensorflow-gpu : 2.0.0
torch : 1.3.1
transformers : 2.3.0
Also I'm using **google Colab**
I want to convert tf pretrained-model(for Korean) to pytorch model.
I just tried below code
**config = BertConfig.from_json_file(BERT_PATH+'/config.json')
model = BertForPreTraining.from_pretrained(BERT_PATH, from_tf=True, config=config)**
However,

this error comes out.. ;(
in the BERT_PATH,
-config.json
-model.ckpt.data-00000-of-00001
-model.ckpt.meta
-model.ckpt.index
-vocab.txt
Can anyone help me?
I found some similar issues, but it didn't help...
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2662/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2662/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2661 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2661/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2661/comments | https://api.github.com/repos/huggingface/transformers/issues/2661/events | https://github.com/huggingface/transformers/pull/2661 | 555,886,364 | MDExOlB1bGxSZXF1ZXN0MzY3NzQ3ODg4 | 2,661 | [Umberto] model shortcuts | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Failing test is Heisenbug",
"@julien-c thank you we are looking at it right now.",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2661?src=pr&el=h1) Report\n> Merging [#2661](https://codecov.io/gh/huggingface/transformers/pull/2661?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9ca21c838bce6a4311124eafac58ef7dbabf6a0e?src=pr&el=desc) will **increase** coverage by `<.01%`.\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2661?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2661 +/- ##\n==========================================\n+ Coverage 74.51% 74.51% +<.01% \n==========================================\n Files 87 87 \n Lines 14920 14921 +1 \n==========================================\n+ Hits 11117 11118 +1 \n Misses 3803 3803\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2661?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/modeling\\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/2661/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jYW1lbWJlcnQucHk=) | `100% <ΓΈ> (ΓΈ)` | :arrow_up: |\n| [src/transformers/configuration\\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/2661/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2NhbWVtYmVydC5weQ==) | `100% <ΓΈ> (ΓΈ)` | :arrow_up: |\n| [src/transformers/tokenization\\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/2661/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY2FtZW1iZXJ0LnB5) | `34.56% <100%> (+0.81%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2661?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2661?src=pr&el=footer). Last update [9ca21c8...27a9dd3](https://codecov.io/gh/huggingface/transformers/pull/2661?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Model pages are at:\r\n\r\n- https://huggingface.co/Musixmatch/umberto-commoncrawl-cased-v1\r\n- https://huggingface.co/Musixmatch/umberto-wikipedia-uncased-v1\r\n\r\n",
"@julien-c Thanks, very happy to contribute! Is it possibile to update the profile image [here](https://huggingface.co/Musixmatch) to the right one in the model's readme [here](https://huggingface.co/Musixmatch/umberto-commoncrawl-cased-v1)?\r\nThanks a lot again.",
"Do you mean using this image: https://user-images.githubusercontent.com/163333/72244273-396aa380-35ee-11ea-894b-4ea48230c02b.png\r\n?\r\n\r\nWe don't have a feature for this for now, but I will change it manually.",
"@julien-c yep!"
] | 1,580 | 1,580 | 1,580 | MEMBER | null | cc @loretoparisi @simonefrancia
see #2485 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2661/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2661/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2661",
"html_url": "https://github.com/huggingface/transformers/pull/2661",
"diff_url": "https://github.com/huggingface/transformers/pull/2661.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2661.patch",
"merged_at": 1580436354000
} |
https://api.github.com/repos/huggingface/transformers/issues/2660 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2660/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2660/comments | https://api.github.com/repos/huggingface/transformers/issues/2660/events | https://github.com/huggingface/transformers/issues/2660 | 555,867,597 | MDU6SXNzdWU1NTU4Njc1OTc= | 2,660 | PPLM with Tensorflow | {
"login": "cakiki",
"id": 3664563,
"node_id": "MDQ6VXNlcjM2NjQ1NjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3664563?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cakiki",
"html_url": "https://github.com/cakiki",
"followers_url": "https://api.github.com/users/cakiki/followers",
"following_url": "https://api.github.com/users/cakiki/following{/other_user}",
"gists_url": "https://api.github.com/users/cakiki/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cakiki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cakiki/subscriptions",
"organizations_url": "https://api.github.com/users/cakiki/orgs",
"repos_url": "https://api.github.com/users/cakiki/repos",
"events_url": "https://api.github.com/users/cakiki/events{/privacy}",
"received_events_url": "https://api.github.com/users/cakiki/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I think this is a mistake on our part. cc @LysandreJik @w4nderlust ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Yes, the code I / we contributed is only for PyTorch. I think porting it to TensorFlow is feasible, but it's not in there at the moment.",
"The release notes were fixed then. Thanks!"
] | 1,580 | 1,585 | 1,585 | CONTRIBUTOR | null | ## β Questions & Help
Hello,
I am still quite new to the library, so I do apologize if the answer is straightforward.
The latest release (https://github.com/huggingface/transformers/releases/tag/v2.3.0) mentions the inclusion of PPLM as a new architecture, both as Pytorch and TF.
I can't however seem to figure out how to import **PPLM** as a **Tensorflow** model. Any help would be greatly appreciated.
Thank you | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2660/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2660/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2659 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2659/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2659/comments | https://api.github.com/repos/huggingface/transformers/issues/2659/events | https://github.com/huggingface/transformers/pull/2659 | 555,828,667 | MDExOlB1bGxSZXF1ZXN0MzY3Njk5OTk5 | 2,659 | [FIX] #2658 Inconsistent values returned by batch_encode_plus and enc⦠| {
"login": "WoodyFleurant",
"id": 5831949,
"node_id": "MDQ6VXNlcjU4MzE5NDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5831949?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/WoodyFleurant",
"html_url": "https://github.com/WoodyFleurant",
"followers_url": "https://api.github.com/users/WoodyFleurant/followers",
"following_url": "https://api.github.com/users/WoodyFleurant/following{/other_user}",
"gists_url": "https://api.github.com/users/WoodyFleurant/gists{/gist_id}",
"starred_url": "https://api.github.com/users/WoodyFleurant/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WoodyFleurant/subscriptions",
"organizations_url": "https://api.github.com/users/WoodyFleurant/orgs",
"repos_url": "https://api.github.com/users/WoodyFleurant/repos",
"events_url": "https://api.github.com/users/WoodyFleurant/events{/privacy}",
"received_events_url": "https://api.github.com/users/WoodyFleurant/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2659?src=pr&el=h1) Report\n> Merging [#2659](https://codecov.io/gh/huggingface/transformers/pull/2659?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e0849a66accda8aa435a3db164c373175115a5b0?src=pr&el=desc) will **increase** coverage by `<.01%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2659?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2659 +/- ##\n==========================================\n+ Coverage 74.51% 74.51% +<.01% \n==========================================\n Files 87 87 \n Lines 14920 14920 \n==========================================\n+ Hits 11117 11118 +1 \n+ Misses 3803 3802 -1\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2659?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2659/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `85.85% <ΓΈ> (+0.16%)` | :arrow_up: |\n| [src/transformers/commands/serving.py](https://codecov.io/gh/huggingface/transformers/pull/2659/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy9zZXJ2aW5nLnB5) | `0% <ΓΈ> (ΓΈ)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2659?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2659?src=pr&el=footer). Last update [e0849a6...6efcbec](https://codecov.io/gh/huggingface/transformers/pull/2659?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,580 | 1,587 | 1,587 | NONE | null | As ticket describe, when using batch_encode_plus, instead of encode_plus, tokens type and mask are different. They should be the same using batch processing or not. Proposed fix here solve the issue | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2659/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2659/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2659",
"html_url": "https://github.com/huggingface/transformers/pull/2659",
"diff_url": "https://github.com/huggingface/transformers/pull/2659.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2659.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/2658 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2658/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2658/comments | https://api.github.com/repos/huggingface/transformers/issues/2658/events | https://github.com/huggingface/transformers/issues/2658 | 555,793,096 | MDU6SXNzdWU1NTU3OTMwOTY= | 2,658 | Inconsistent values returned by batch_encode_plus and encode_plus | {
"login": "WoodyFleurant",
"id": 5831949,
"node_id": "MDQ6VXNlcjU4MzE5NDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5831949?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/WoodyFleurant",
"html_url": "https://github.com/WoodyFleurant",
"followers_url": "https://api.github.com/users/WoodyFleurant/followers",
"following_url": "https://api.github.com/users/WoodyFleurant/following{/other_user}",
"gists_url": "https://api.github.com/users/WoodyFleurant/gists{/gist_id}",
"starred_url": "https://api.github.com/users/WoodyFleurant/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WoodyFleurant/subscriptions",
"organizations_url": "https://api.github.com/users/WoodyFleurant/orgs",
"repos_url": "https://api.github.com/users/WoodyFleurant/repos",
"events_url": "https://api.github.com/users/WoodyFleurant/events{/privacy}",
"received_events_url": "https://api.github.com/users/WoodyFleurant/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I've been experimenting with `batch_encode_plus` with my current project and I have found few more inconsistencies and code affected:\r\n\r\n* `batch_encode_plus` is not introduced in any tests, so it is hard to tell what was desired behavior of this method\r\n* `batch_encode_plus` is not extending `encode_plus` in context of input parameters:\r\n * `encode_plus` is using input parameters like: `text, text_pair=None, add_special_tokens=True ...`\r\n * `batch_encode_plus` is using input parameters like: `batch_text_or_text_pairs=None, add_special_tokens=False ...`\r\n* `batch_encode_plus` is not extending `encode_plus` in context of implementation logic, ie. is not reusing `prepare_for_model` as `encode_plus` (and `encode` via `encode_plus`), instead encoding is implemented in alternative way\r\n* `batch_encode_plus` was used in pipleines https://github.com/huggingface/transformers/blob/eb59e9f70513b538d2174d4ea1efea7ba8554b58/src/transformers/pipelines.py#L426 in MR https://github.com/huggingface/transformers/pull/1548 which is currently in 2.3.0 release\r\n* `batch_encode_plus` is used in https://github.com/huggingface/transformers/blob/335dd5e68a1b6ab6f51952c36a9ff6d8822c963f/examples/run_lm_finetuning.py#L135 on current master branch\r\n\r\nTaking the inconsistency with `encode` and `encode_plus` it seems that above usage of `batch_encode_plus` can produce undesired output.\r\n\r\nAlso because of inconsistency I think that `batch_encode_plus` name is highly misleading and this method should be at least renamed to something like `batch_alternative_encode`.\r\n\r\nLetting know @thomwolf, as he was reviewing #1548",
"@knuser agreed, whole implementation of batch_encode_plus is very different from encode_plus. I found it weird also but I am not aware of all different uses cases / features that it support. Anyway, I would like to you if you agree on the fact that encode_plus and batch_encode_plus should give the same results for same input as described in the ticket ?",
"I agree",
"I also experienced this inconsistency and it would be great if the problem can be fixed soon."
] | 1,580 | 1,582 | 1,582 | NONE | null | ## π Bug
<!-- Important information -->
Model I am using (Bert, XLNet....): Bert
Language I am using the model on (English, Chinese....): bert-base-uncased
The problem arise when using:
* batch_encode_plus and encode_plus with pad_to_max_length & max_length
## To Reproduce
Minimal exemple :
I compare tokens created via batch or not, and they are different for **masks** and **types**
```
pretrained = 'bert-base-uncased'
tokenizer = FixedAutoTokenizer.from_pretrained(pretrained)
model = AutoModel.from_pretrained(pretrained)
text = "My"
text1 = "features are ok"
mylist = list()
mylist.append(text)
mylist.append(text1)
#####################################################################################################
batch_encoding = tokenizer.batch_encode_plus(mylist,
return_tensors='pt',
add_special_tokens=False)
######################################################################################################
text_encoding = tokenizer.encode_plus(text,
return_tensors='pt',
add_special_tokens=False,
max_length=3,
pad_to_max_length=True)
print("\n--Batch Encoding \n")
print(batch_encoding['input_ids'])
print(batch_encoding['token_type_ids'])
print(batch_encoding['attention_mask'])
print("\n--One-by-one encoding\n")
print(text_encoding['input_ids'])
print(text_encoding['token_type_ids'])
print(text_encoding['attention_mask'])
```
It gives
```
--Batch Encoding
tensor([[2026, 0, 0],
[2838, 2024, 7929]])
tensor([[0, 1, 1],
[0, 0, 0]])
tensor([[1, 1, 1],
[1, 1, 1]])
--One-by-one encoding
tensor([[2026, 0, 0]])
tensor([[0, 0, 0]])
tensor([[1, 0, 0]])
```
## Expected behavior
It should return the same value, like following
```
--Batch Encoding
tensor([[2026, 0, 0],
[2838, 2024, 7929]])
tensor([[0, 0, 0],
[0, 0, 0]])
tensor([[1, 0, 0],
[1, 1, 1]])
--One-by-one encoding
tensor([[2026, 0, 0]])
tensor([[0, 0, 0]])
tensor([[1, 0, 0]])
```
## Environment
* OS: MacOs Mojave (reproduced also on Ubuntu)
* Python version: 3.7
* PyTorch version: torch==1.3.1
* PyTorch Transformers version (or branch): transformers==2.3.0
* Using GPU ? no | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2658/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2658/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2657 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2657/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2657/comments | https://api.github.com/repos/huggingface/transformers/issues/2657/events | https://github.com/huggingface/transformers/pull/2657 | 555,713,855 | MDExOlB1bGxSZXF1ZXN0MzY3NjA2MDkz | 2,657 | Add `return_special_tokens_mask` to `batch_encode_plus()` | {
"login": "sergicastellasape",
"id": 33417180,
"node_id": "MDQ6VXNlcjMzNDE3MTgw",
"avatar_url": "https://avatars.githubusercontent.com/u/33417180?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sergicastellasape",
"html_url": "https://github.com/sergicastellasape",
"followers_url": "https://api.github.com/users/sergicastellasape/followers",
"following_url": "https://api.github.com/users/sergicastellasape/following{/other_user}",
"gists_url": "https://api.github.com/users/sergicastellasape/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sergicastellasape/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sergicastellasape/subscriptions",
"organizations_url": "https://api.github.com/users/sergicastellasape/orgs",
"repos_url": "https://api.github.com/users/sergicastellasape/repos",
"events_url": "https://api.github.com/users/sergicastellasape/events{/privacy}",
"received_events_url": "https://api.github.com/users/sergicastellasape/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2657?src=pr&el=h1) Report\n> Merging [#2657](https://codecov.io/gh/huggingface/transformers/pull/2657?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/875c4ae48f97af9792ab0b87b49a426ca7e7586b?src=pr&el=desc) will **decrease** coverage by `1.1%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2657?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2657 +/- ##\n==========================================\n- Coverage 74.58% 73.47% -1.11% \n==========================================\n Files 87 87 \n Lines 14892 14892 \n==========================================\n- Hits 11107 10942 -165 \n- Misses 3785 3950 +165\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2657?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2657/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `85.69% <ΓΈ> (ΓΈ)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2657/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0%> (-81.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2657/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `55.39% <0%> (-9.86%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2657/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `70.94% <0%> (-2.28%)` | :arrow_down: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2657/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `92.07% <0%> (-2.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2657/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.06% <0%> (-1.33%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2657?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2657?src=pr&el=footer). Last update [875c4ae...1f3e2b6](https://codecov.io/gh/huggingface/transformers/pull/2657?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,580 | 1,586 | 1,586 | NONE | null | Proposal to add the keyword argument `return_special_tokens_mask` to the method `batch_encode_plus()` to match the functionality of `encode_plus()`. The implementation simply adds the argument in the `encode_plus()` call, so it inherits its implementation and should be compatible with other changes to the `batch_encode_plus()` arguments. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2657/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2657/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2657",
"html_url": "https://github.com/huggingface/transformers/pull/2657",
"diff_url": "https://github.com/huggingface/transformers/pull/2657.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2657.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/2656 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2656/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2656/comments | https://api.github.com/repos/huggingface/transformers/issues/2656/events | https://github.com/huggingface/transformers/issues/2656 | 555,683,613 | MDU6SXNzdWU1NTU2ODM2MTM= | 2,656 | Using Transformers for a Sequence with Multiple Variables at Each Step | {
"login": "mnitin73",
"id": 13919821,
"node_id": "MDQ6VXNlcjEzOTE5ODIx",
"avatar_url": "https://avatars.githubusercontent.com/u/13919821?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mnitin73",
"html_url": "https://github.com/mnitin73",
"followers_url": "https://api.github.com/users/mnitin73/followers",
"following_url": "https://api.github.com/users/mnitin73/following{/other_user}",
"gists_url": "https://api.github.com/users/mnitin73/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mnitin73/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mnitin73/subscriptions",
"organizations_url": "https://api.github.com/users/mnitin73/orgs",
"repos_url": "https://api.github.com/users/mnitin73/repos",
"events_url": "https://api.github.com/users/mnitin73/events{/privacy}",
"received_events_url": "https://api.github.com/users/mnitin73/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,580 | 1,585 | 1,585 | NONE | null | ## β Questions & Help
I have sequence data which I want to classify and predict future sequence. However, I know that there are a few additional features which aso affect the subsequent values, each to a different extent. So I have multiple features for each step of input sequence. However, the output sequence can be just one feature for each step.
Input - [(x1, y1, z1), (x2,y2,z2), (x3,y3,z3)]
Output - [x4, x5, x6] or classification (a,b or c)
I do not want to concatenate x,y, and z as they have varying effects on the label and I think concatenating would give an erroneous result.
Need advise on whether Transformers, and specifically Huggingface Transformers can be used for this use case?
Thanks.
<!-- -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2656/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2656/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2655 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2655/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2655/comments | https://api.github.com/repos/huggingface/transformers/issues/2655/events | https://github.com/huggingface/transformers/pull/2655 | 555,683,391 | MDExOlB1bGxSZXF1ZXN0MzY3NTgxMDA4 | 2,655 | Fix AutoModelForQuestionAnswering for Roberta | {
"login": "tholor",
"id": 1563902,
"node_id": "MDQ6VXNlcjE1NjM5MDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1563902?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tholor",
"html_url": "https://github.com/tholor",
"followers_url": "https://api.github.com/users/tholor/followers",
"following_url": "https://api.github.com/users/tholor/following{/other_user}",
"gists_url": "https://api.github.com/users/tholor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tholor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tholor/subscriptions",
"organizations_url": "https://api.github.com/users/tholor/orgs",
"repos_url": "https://api.github.com/users/tholor/repos",
"events_url": "https://api.github.com/users/tholor/events{/privacy}",
"received_events_url": "https://api.github.com/users/tholor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2655?src=pr&el=h1) Report\n> Merging [#2655](https://codecov.io/gh/huggingface/transformers/pull/2655?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/babd41e7fa07bdd764f8fe91c33469046ab7dbd1?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2655?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2655 +/- ##\n=======================================\n Coverage 74.58% 74.58% \n=======================================\n Files 87 87 \n Lines 14892 14892 \n=======================================\n Hits 11107 11107 \n Misses 3785 3785\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2655?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/2655/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `75.55% <ΓΈ> (ΓΈ)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2655?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2655?src=pr&el=footer). Last update [babd41e...213877a](https://codecov.io/gh/huggingface/transformers/pull/2655?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"can you run `make style` @tholor? Seems like I can't push to your fork or I'd have done it myself.\r\n\r\nThank you!",
"Actually I'll do it myself. Thanks!"
] | 1,580 | 1,580 | 1,580 | CONTRIBUTOR | null | When using `AutoModelForQuestionAnswering()` to load a Roberta model, we are currently instantiating a `BertForQuestionAnswering` class. This is happening because `RobertaConfig` is an instance of `BertConfig` (due to inheritance) and there's no other mapping for Roberta in here:
https://github.com/huggingface/transformers/blob/bac51fba3a6b96f02f482e9a352601242b200e47/src/transformers/modeling_auto.py#L176-L184
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2655/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2655/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2655",
"html_url": "https://github.com/huggingface/transformers/pull/2655",
"diff_url": "https://github.com/huggingface/transformers/pull/2655.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2655.patch",
"merged_at": 1580145167000
} |
https://api.github.com/repos/huggingface/transformers/issues/2654 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2654/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2654/comments | https://api.github.com/repos/huggingface/transformers/issues/2654/events | https://github.com/huggingface/transformers/issues/2654 | 555,680,863 | MDU6SXNzdWU1NTU2ODA4NjM= | 2,654 | Add keyword arguments to batch_encode_plus() to match encode_plus() | {
"login": "sergicastellasape",
"id": 33417180,
"node_id": "MDQ6VXNlcjMzNDE3MTgw",
"avatar_url": "https://avatars.githubusercontent.com/u/33417180?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sergicastellasape",
"html_url": "https://github.com/sergicastellasape",
"followers_url": "https://api.github.com/users/sergicastellasape/followers",
"following_url": "https://api.github.com/users/sergicastellasape/following{/other_user}",
"gists_url": "https://api.github.com/users/sergicastellasape/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sergicastellasape/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sergicastellasape/subscriptions",
"organizations_url": "https://api.github.com/users/sergicastellasape/orgs",
"repos_url": "https://api.github.com/users/sergicastellasape/repos",
"events_url": "https://api.github.com/users/sergicastellasape/events{/privacy}",
"received_events_url": "https://api.github.com/users/sergicastellasape/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This should require adding a simple `**kwargs` at the end of \r\n\r\nhttps://github.com/huggingface/transformers/blob/f1e8a51f08eeecacf0cde33d40702d70c737003b/src/transformers/tokenization_utils.py#L977"
] | 1,580 | 1,582 | 1,582 | NONE | null | ## πConsistent Keyword arguments for batch_encode_plus() to match encode_plus()
Currently, features such as `return_special_tokens_mask` that are available for the `encode_plus()` method are not available for `batch_encode_plus()`. It would be nice if all keyword arguments worked in a similar fashion.
## Motivation
`batch_encode_plus()` is extremely useful to seamlessly tokenize batches, however, the lack of features forces to fall back to encode_plus() and an _uglier_ implementation. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2654/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2654/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2653 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2653/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2653/comments | https://api.github.com/repos/huggingface/transformers/issues/2653/events | https://github.com/huggingface/transformers/pull/2653 | 555,635,103 | MDExOlB1bGxSZXF1ZXN0MzY3NTQxMjQ1 | 2,653 | Fix token_type_ids for XLM-R | {
"login": "MaksymDel",
"id": 8141935,
"node_id": "MDQ6VXNlcjgxNDE5MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8141935?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MaksymDel",
"html_url": "https://github.com/MaksymDel",
"followers_url": "https://api.github.com/users/MaksymDel/followers",
"following_url": "https://api.github.com/users/MaksymDel/following{/other_user}",
"gists_url": "https://api.github.com/users/MaksymDel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MaksymDel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MaksymDel/subscriptions",
"organizations_url": "https://api.github.com/users/MaksymDel/orgs",
"repos_url": "https://api.github.com/users/MaksymDel/repos",
"events_url": "https://api.github.com/users/MaksymDel/events{/privacy}",
"received_events_url": "https://api.github.com/users/MaksymDel/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2653?src=pr&el=h1) Report\n> Merging [#2653](https://codecov.io/gh/huggingface/transformers/pull/2653?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/babd41e7fa07bdd764f8fe91c33469046ab7dbd1?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `0%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2653?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2653 +/- ##\n=======================================\n Coverage 74.58% 74.58% \n=======================================\n Files 87 87 \n Lines 14892 14892 \n=======================================\n Hits 11107 11107 \n Misses 3785 3785\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2653?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/tokenization\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2653/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `32.91% <0%> (ΓΈ)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2653?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2653?src=pr&el=footer). Last update [babd41e...56133cd](https://codecov.io/gh/huggingface/transformers/pull/2653?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,580 | 1,580 | 1,580 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2653/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2653/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2653",
"html_url": "https://github.com/huggingface/transformers/pull/2653",
"diff_url": "https://github.com/huggingface/transformers/pull/2653.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2653.patch",
"merged_at": 1580141312000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/2652 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2652/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2652/comments | https://api.github.com/repos/huggingface/transformers/issues/2652/events | https://github.com/huggingface/transformers/pull/2652 | 555,625,668 | MDExOlB1bGxSZXF1ZXN0MzY3NTMzNDUy | 2,652 | Fix importing unofficial TF models with extra optimizer weights | {
"login": "monologg",
"id": 28896432,
"node_id": "MDQ6VXNlcjI4ODk2NDMy",
"avatar_url": "https://avatars.githubusercontent.com/u/28896432?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/monologg",
"html_url": "https://github.com/monologg",
"followers_url": "https://api.github.com/users/monologg/followers",
"following_url": "https://api.github.com/users/monologg/following{/other_user}",
"gists_url": "https://api.github.com/users/monologg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/monologg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/monologg/subscriptions",
"organizations_url": "https://api.github.com/users/monologg/orgs",
"repos_url": "https://api.github.com/users/monologg/repos",
"events_url": "https://api.github.com/users/monologg/events{/privacy}",
"received_events_url": "https://api.github.com/users/monologg/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2652?src=pr&el=h1) Report\n> Merging [#2652](https://codecov.io/gh/huggingface/transformers/pull/2652?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/babd41e7fa07bdd764f8fe91c33469046ab7dbd1?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `0%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2652?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2652 +/- ##\n=======================================\n Coverage 74.58% 74.58% \n=======================================\n Files 87 87 \n Lines 14892 14892 \n=======================================\n Hits 11107 11107 \n Misses 3785 3785\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2652?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/2652/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `87.9% <0%> (ΓΈ)` | :arrow_up: |\n| [src/transformers/modeling\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/2652/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `79.14% <0%> (ΓΈ)` | :arrow_up: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/2652/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `81.09% <0%> (ΓΈ)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2652?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2652?src=pr&el=footer). Last update [babd41e...d338eb0](https://codecov.io/gh/huggingface/transformers/pull/2652?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,580 | 1,581 | 1,581 | CONTRIBUTOR | null | Hi:)
I was trying to convert the BERT `tf model` to `torch model`, and tf model has *extra optimizer weights* ([This file](https://drive.google.com/file/d/1mNDA-SNCsnu60wzKVe_Y3k-dq3LoDHB2/view) is the one I've tried to convert).
But it encounters the error, and I've printed the parameters' name in tf model.
<img width="686" alt="Screen Shot 2020-01-27 at 10 21 50 PM" src="https://user-images.githubusercontent.com/28896432/73183937-f87e9d00-415e-11ea-9317-2611aecdefe7.png">
I've found out the instead of "adam_v" for "adam_m, names are saved as "AdamWeightDecayOptimizer" or "AdamWeightDecayOptimizer_1".
There was a similar issue that also encountered the issue that I had. ([Issue Link from DeepPavlov repo](https://github.com/deepmipt/DeepPavlov/issues/863))
I can't find out the exact reason why the parameters are named as "AdamWeightDecayOptimizer" or "AdamWeightDecayOptimizer_1" instead of "adam_v" or "adam_m", but it might be safe to cover all the exceptional cases:) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2652/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2652/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2652",
"html_url": "https://github.com/huggingface/transformers/pull/2652",
"diff_url": "https://github.com/huggingface/transformers/pull/2652.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2652.patch",
"merged_at": 1581089132000
} |
https://api.github.com/repos/huggingface/transformers/issues/2651 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2651/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2651/comments | https://api.github.com/repos/huggingface/transformers/issues/2651/events | https://github.com/huggingface/transformers/issues/2651 | 555,285,818 | MDU6SXNzdWU1NTUyODU4MTg= | 2,651 | XLNET SQuAD2.0 Fine-Tuning - What May Have Changed? | {
"login": "ahotrod",
"id": 44321615,
"node_id": "MDQ6VXNlcjQ0MzIxNjE1",
"avatar_url": "https://avatars.githubusercontent.com/u/44321615?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ahotrod",
"html_url": "https://github.com/ahotrod",
"followers_url": "https://api.github.com/users/ahotrod/followers",
"following_url": "https://api.github.com/users/ahotrod/following{/other_user}",
"gists_url": "https://api.github.com/users/ahotrod/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ahotrod/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ahotrod/subscriptions",
"organizations_url": "https://api.github.com/users/ahotrod/orgs",
"repos_url": "https://api.github.com/users/ahotrod/repos",
"events_url": "https://api.github.com/users/ahotrod/events{/privacy}",
"received_events_url": "https://api.github.com/users/ahotrod/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I have been facing the same problem with RoBERTa finetuning for multiple choice QA datasets. I have even tried going back to the older version of transformers (version 2.1.0 from Oct 2019) and re-running my experiments but I am not able to replicate results from before anymore. The loss just varies within a range of +/- 0.1. ",
"Are you using one of the recent versions of run_squad.py? It was quite heavily refactored in december. Maybe there is a mistake now. Can you try it with the run_squad.py of the 2.1.1 release again?",
"Could it be related to 96e8350? Before november 29 there was a mistake where the script would only evaluate on 1/N_GPU of the entire evaluation set.",
"@cronoik good suggestion\r\n> Are you using one of the recent versions of run_squad.py? It was quite heavily refactored in december. Maybe there is a mistake now. Can you try it with the run_squad.py of the 2.1.1 release again?\r\n\r\nI'm attempting to recreate the environment that existed for the successful fine-turning above that was dated 26Nov2019. I have the .yml file for that environment but after re-creating & re-running the script I get errors of missing \"Albert files\" and others. Not making much sense since this is using XLNET. I'm keeping after it.\r\n\r\n@LysandreJik helpful information\r\n> Could it be related to [96e8350](https://github.com/huggingface/transformers/commit/96e83506d1ddee8e19b07118668be73d175decb6)? Before november 29 there was a mistake where the script would only evaluate on 1/N_GPU of the entire evaluation set.\r\n\r\nPerhaps, but given that the successful run was before 29Nov2019, plus my eval script uses single GPU ( CUDA_VISIBLE_DEVICES=0 ), could [96e8350] be a culprit?\r\n\r\nHow best to debug my latest, up-to-date environment?\r\nTransformers: 2.3.0\r\nPyTorch: 1.4.0\r\nTensorFlow: 2.1.0\r\nPython: 3.7.6\r\n",
"How about the cached files at .cache/torch/transformers?\r\nI have over 6GB of models cached dating back to November 2019.\r\nAny chance the wrong config.json, spiece.model, model.bin, etc. are getting loaded from the cache which don't match-up with new Transformer code/libraries?\r\nI think it's time to clear out the cache.\r\n\r\nRan single GPU0 on script above with gradient accumulation set to 48, everything else the same. Results and loss were the same. Apparently it is not a distributed processing issue.\r\n\r\n**Update 30Jan20:** Cleared the caches, ran the distributed processing script in the first post above adding `--overwrite_cache`, same results and losses.",
"Hi guys! I just run into the same issue. I fine-tuned XLNet on the squad 2 trainingset over the weekend, exactly as instructed on the examples page, and got the same inferior results:\r\n\r\n`\r\npython examples/run_squad.py \r\n --model_type xlnet \r\n --model_name_or_path xlnet-large-cased \r\n --do_train \r\n --do_eval \r\n --version_2_with_negative \r\n --train_file ./squad/train-v2.0.json \r\n --predict_file ./squad/dev-v2.0.json \r\n --learning_rate 3e-5 \r\n --num_train_epochs 4 \r\n --max_seq_length 384 \r\n --doc_stride 128 \r\n --output_dir ./xlnet_large_squad2_out/ \r\n --per_gpu_eval_batch_size=2 \r\n --per_gpu_train_batch_size=2 \r\n --save_steps 50000\r\n`\r\n\r\n`02/01/2020 00:50:47 - WARNING - __main__ - Process rank: -1, device: cuda, n_gpu: 1, distributed training: False, 16-bits training: False`\r\n...\r\n`02/03/2020 01:50:51 - INFO - __main__ - Results: {'exact': 45.35500715910048, 'f1': 45.42776379790963, 'total': 11873, 'HasAns_exact': 0.08434547908232119, 'HasAns_f1': 0.23006740428154376, 'HasAns_total': 5928, 'NoAns_exact': 90.49621530698066, 'NoAns_f1': 90.49621530698066, 'NoAns_total': 5945, 'best_exact': 50.07159100480081, 'best_exact_thresh': 0.0, 'best_f1': 50.07159100480081, 'best_f1_thresh': 0.0}`\r\n\r\nMy versions:\r\ntransformers: `0aa40e9` (same as v2.4.0)\r\npython `3.6.8`\r\npytorch `1.2.0+cu92`\r\n\r\nI will proceed to run it again on transformers v2.1.1 and report back whether the old code still works for XLNet.",
"Hi @WilliamNurmi, thank you for taking the time to do this. Do you mind making sure that you're using `SequentialSampler` in your evaluation, even when running against transformers v2.1.1? This affects the evaluation, which should be the same as the one you did in v2.4.0.\r\n\r\nThis should only affect setups with more than 1 gpu and this does not seem to be your case, but if it is, it would be great to update the sampler.",
"Hi @LysandreJik, I'm indeed using only 1 gpu, so we should be good there!",
"No dice with XLNet on v2.1.1. I used the same parameters as @ahotrod except for slight changes for gradient_accumulation_steps (not used), max_seq_length (368) and per_gpu_train_batch_size (1).\r\n\r\n`python examples/run_squad.py --model_type xlnet --model_name_or_path xlnet-large-cased --do_train --do_eval --version_2_with_negative --train_file ./squad/train-v2.0.json --predict_file ./squad/dev-v2.0.json --learning_rate 3e-5 --num_train_epochs 3 --max_seq_length 368 --doc_stride 128 --output_dir ./xlnet_cased_finetuned_squad/ --per_gpu_eval_batch_size=2 --per_gpu_train_batch_size=2 --save_steps 63333 --logging_steps 63333 --evaluate_during_training --adam_epsilon 1e-6`\r\n\r\nInferior results:\r\n\r\n`{\r\n \"exact\": 37.45472921755243,\r\n \"f1\": 41.95943914787417,\r\n \"total\": 11873,\r\n \"HasAns_exact\": 70.05735492577598,\r\n \"HasAns_f1\": 79.07969315160429,\r\n \"HasAns_total\": 5928,\r\n \"NoAns_exact\": 4.945332211942809,\r\n \"NoAns_f1\": 4.945332211942809,\r\n \"NoAns_total\": 5945,\r\n \"best_exact\": 50.07159100480081,\r\n \"best_exact_thresh\": 0.0,\r\n \"best_f1\": 50.07159100480081,\r\n \"best_f1_thresh\": 0.0\r\n}` \r\n\r\nI tried to mimic the setup at the time with the following versions:\r\nTransformers `v2.1.1`\r\nPython `3.6.9\r\nPytorch `1.3.1`\r\n\r\nInterestingly the first run with `v2.4.0` gave an answer to only 5% of the test questions, while this v2.1.1 version dared to an answer 90% of the questions.\r\n\r\nDoes anyone have any idea what could have changed since last November that completely broke the SQuAD2 training? Could it be the files (pretrained network, tokenization, hyperparameters etc) that transformers lib is downloading at the beginning of the training ?",
"Is the run_squad.py the 2.1.1 version?",
"@cronoik, yeah. I'm installing from source and I re-cloned the whole repo.\r\n\r\nI didn't realize to clean `~/.cache/torch/transformers/` though, but @ahotrod seems to have tried that with no luck.\r\n\r\nEDIT: and looking at the cache file timestamps, it seems it has downloaded new files anyways.",
"As noted on other issues, plain old Bert is working better, so the issue seems to be specific to XLNet, RoBERTa ~and ALBERT(?)~.\r\n\r\nOn transformers `2.4.0`\r\n`python examples/run_squad.py --model_type=bert --model_name_or_path=bert-base-uncased --do_train --do_eval --do_lower_case --version_2_with_negative --train_file=./squad/train-v2.0.json --predict_file=./squad/dev-v2.0.json --per_gpu_train_batch_size=12 --learning_rate=3e-5 --num_train_epochs=2.0 --max_seq_length=384 --doc_stride=128 --save_steps=20000 --output_dir=bert_out --overwrite_output_dir`\r\n\r\n`Results: {'exact': 73.04809231028383, 'f1': 76.29336127902307, 'total': 11873, 'HasAns_exact': 71.99730094466936, 'HasAns_f1': 78.49714549018896, 'HasAns_total': 5928, 'NoAns_exact': 74.09587888982338, 'NoAns_f1': 74.09587888982338, 'NoAns_total': 5945, 'best_exact': 73.04809231028383, 'best_exact_thresh': 0.0, 'best_f1': 76.29336127902297, 'best_f1_thresh': 0.0}`",
"After nearly two weeks of unsuccessful varied XLNet fine-tunes, I gave-up and switched to fine-tuning ALBERT for an alternative model:\r\n```\r\nalbert_xxlargev1_sqd2_512_bs48 results:\r\n{'exact': 85.65653162637918,\r\n 'f1': 89.260458954177,\r\n 'total': 11873,\r\n 'HasAns_exact': 82.6417004048583,\r\n 'HasAns_f1': 89.85989020967376,\r\n 'HasAns_total': 5928,\r\n 'NoAns_exact': 88.66274179983179,\r\n 'NoAns_f1': 88.66274179983179,\r\n 'NoAns_total': 5945,\r\n 'best_exact': 85.65653162637918,\r\n 'best_exact_thresh': 0.0,\r\n 'best_f1': 89.2604589541768,\r\n 'best_f1_thresh': 0.0}\r\n```\r\nAhhh, the beauty and flexibility of Transformers, out with one model and in with another.\r\nMy QA app is performing well with ALBERT.\r\n\r\nCurrent system configuration:\r\nOS: Linux Mint 19.3 based on Ubuntu 18.04. 3 LTS and Linux Kernel 5.0\r\nGPU/CPU: 2 x NVIDIA 1080Ti / Intel i7-8700\r\nTransformers: 2.3.0\r\nPyTorch: 1.4.0\r\nTensorFlow: 2.1.0\r\nPython: 3.7.6",
"I was originally going for ALBERT, but tried XLNet instead because many people seemed to be reporting that ALBERT doesn't work ([#202](https://github.com/deepset-ai/FARM/issues/202), [#2609](https://github.com/huggingface/transformers/issues/2609)). But looking into it more, it looks like it is only the v2 model that doesn't work!",
"> After nearly two weeks of unsuccessful varied XLNet fine-tunes, I gave-up and switched to fine-tuning ALBERT for an alternative model:\r\n> \r\n> ```\r\n> albert_xxlargev1_sqd2_512_bs48 results:\r\n> {'exact': 85.65653162637918,\r\n> 'f1': 89.260458954177,\r\n\r\nNice results @ahotrod! Better than [what you got in Dec](https://github.com/huggingface/transformers/issues/1974#issuecomment-562814997):\r\n`albert_xxlargev1_squad2_512_bs48:\r\n \"exact\": 83.65198349195654,\r\n \"f1\": 87.4736247587816,`\r\n\r\nCould you share the hyper-parameters you used?\r\nAnd ellaborate a bit whether you train it with `run_squad.py` or some custom code? `run_squad.py` doesn't seem allow us to apply 0.1 dropout for the classification layer as suggested in the [paper](https://openreview.net/pdf?id=H1eA7AEtvS).\r\n\r\n\r\n",
"@WilliamNurmi thanks for your feedback\r\n\r\nWhen Google Research released their v2 of ALBERT LMs they stated that xxlarge-v1 outperforms xxlarge-v2 and have a discussion as to why: https://github.com/google-research/ALBERT. So I've stuck with v1 for that reason plus the \"teething\" issues that have been associated with v2 LMs.\r\n\r\nYes, seems there have been transfomers revisions positively impacting ALBERT SQuAD 2.0 fine-tuning since my results Dec19 as you noted. I think including `--max_steps 8144` & `--warmup_steps 814` in my script produced the improvement listed above.\r\n\r\nAdditional ALBERT & transformers refinements, hopefully significant, are in transformers v2.4.1: `classifier dropout` and `gelu_new`, thanks to @peteriz & @LysandreJik #2679. I am 18 hours in to a 67 hour fine-tune & eval of `albert_xxlargev1_sqd2_512_bs48` with script below using transformers v2.4.1. I will post results when processing is complete.\r\n\r\nBTW the heat produced from my hardware-challenged computer, **hotrod**, is a welcome tuning by-product for my winter office, summer not so much. Hoping for a NVIDIA Ampere upgrade before this summer's heat.\r\n\r\nMy fine-tuning has been with transformer's `run_squad.py` not custom code. Here's my latest script:\r\n```\r\nalbert_xxlargev1_sqd2_512_bs48.sh:\r\n\r\n#!/bin/bash\r\n\r\nexport OMP_NUM_THREADS=8\r\nRUN_SQUAD_DIR=/media/dn/dssd/nlp/transformers/examples\r\nSQUAD_DIR=${RUN_SQUAD_DIR}/scripts/squad2.0\r\nMODEL_PATH=${RUN_SQUAD_DIR}/runs/albert_xxlargev1_squad2_512_bs48\r\n\r\npython -m torch.distributed.launch --nproc_per_node=2 ${RUN_SQUAD_DIR}/run_squad.py \\\r\n --model_type albert \\\r\n --model_name_or_path albert-xxlarge-v1 \\\r\n --do_train \\\r\n --train_file ${SQUAD_DIR}/train-v2.0.json \\\r\n --predict_file ${SQUAD_DIR}/dev-v2.0.json \\\r\n --version_2_with_negative \\\r\n --num_train_epochs 3 \\\r\n --max_steps 8144 \\\r\n --warmup_steps 814 \\\r\n --do_lower_case \\\r\n --learning_rate 3e-5 \\\r\n --max_seq_length 512 \\\r\n --doc_stride 128 \\\r\n --save_steps 1000 \\\r\n --per_gpu_train_batch_size 1 \\\r\n --gradient_accumulation_steps 24 \\\r\n --overwrite_cache \\\r\n --logging_steps 100 \\\r\n --threads 8 \\\r\n --output_dir ${MODEL_PATH}\r\n\r\nCUDA_VISIBLE_DEVICES=0 python ${RUN_SQUAD_DIR}/run_squad.py \\\r\n --model_type albert \\\r\n --model_name_or_path ${MODEL_PATH} \\\r\n --do_eval \\\r\n --train_file ${SQUAD_DIR}/train-v2.0.json \\\r\n --predict_file ${SQUAD_DIR}/dev-v2.0.json \\\r\n --version_2_with_negative \\\r\n --do_lower_case \\\r\n --max_seq_length 512 \\\r\n --per_gpu_eval_batch_size 24 \\\r\n --eval_all_checkpoints \\\r\n --overwrite_output_dir \\\r\n --output_dir ${MODEL_PATH}\r\n$@\r\n```",
"Thanks for all the details @ahotrod, I had missed the fact that classifier dropout had just been added! I restarted my run with v2.4.1. Loss seems to be going down nicely, so far so good.\r\n\r\nIt's gonna be 6 days for me since I'm on a single Ti 1080. I'm gonna have to look for some new hardware / instances soon as well. Any bigger model or sequence length and I couldn't fit a single batch on this GPU anymore :D\r\n\r\nLooking forward to the sneak peak of the results when your run finishes!",
"@ahotrod could you consider sharing trained ALBERT SQUAD trained model on https://huggingface.co/models?",
"> @ahotrod could you consider sharing trained ALBERT SQUAD trained model on https://huggingface.co/models?\r\n\r\n@knuser Absolutely, I signed-up some time ago with that intent but have yet to contribute.\r\nI'm 26 hours from this v2.4.1 `albert_xxlargev1_sqd2_512_bs48` run completion and afterwards will share the best run to date.\r\n\r\nFYI, 11 question inferencing/prediction with this 512 max_seq_length xxlarge ALBERT model takes 37 seconds CPU and 5 secs single GPU w/large batches on my computer, **hotrod**, described above.\r\n\r\nBTW, sharing can definitely save some energy & lower the carbon footprint. As an example my office electric bill doubled last month from just under $100 to over $200 with nearly constant **hotrod** fine-tuning. Perhaps the gas heater didn't need to fire-up as often though. ;-]",
"@WilliamNurmi @knuser :\r\n\r\nFine-tuning the `albert_xxlargev1_sqd2_512_bs48` script with Transformers 2.4.1 yielded the following results:\r\n```\r\n{'exact': 85.47123726101238,\r\n 'f1': 89.0856118938743,\r\n 'total': 11873,\r\n 'HasAns_exact': 82.11875843454791,\r\n 'HasAns_f1': 89.35787280971171,\r\n 'HasAns_total': 5928,\r\n 'NoAns_exact': 88.81412952060555,\r\n 'NoAns_f1': 88.81412952060555,\r\n 'NoAns_total': 5945,\r\n 'best_exact': 85.46281478985935,\r\n 'best_exact_thresh': 0.0,\r\n 'best_f1': 89.07718942272103,\r\n 'best_f1_thresh': 0.0}\r\n```\r\nwhich is no improvement over fine-tuning the same script with Transformers 2.3.0\r\n\r\nMy best model to date is now posted at: https://huggingface.co/ahotrod/albert_xxlargev1_squad2_512\r\nYou can access this albert_xxlargev1_sqd2_512 fine-tuned model with:\r\n```\r\nconfig_class, model_class, tokenizer_class = \\\r\n AlbertConfig, AlbertForQuestionAnswering, AlbertTokenizer\r\n\r\nmodel_name_or_path = \"ahotrod/albert_xxlargev1_squad2_512\"\r\nconfig = config_class.from_pretrained(model_name_or_path)\r\ntokenizer = tokenizer_class.from_pretrained(model_name_or_path, do_lower_case=True)\r\nmodel = model_class.from_pretrained(model_name_or_path, config=config)\r\n```\r\nThe AutoModels: (AutoConfig, AutoTokenizer & AutoModel) should also work, however I\r\nhave yet to use them.\r\n\r\nHope this furthers your efforts!",
"Hi guys, thanks for the great discussion. I've been trying to reproduce the XLNet fine-tuning myself, but have failed to do so so far. I stumbled upon a few issues along the way, mostly related to the padding side. \r\n\r\nThere was an issue that I fixed this morning related to the `tokens` that were used for evaluation, which were not correctly computed. I updated that in 125a75a, however it does not improve the accuracy.\r\n\r\nI'm still actively working on it and will let you know as I progress (it is quite a lengthy process as a finetuning requires a full-day of computing on my machine).",
"Hi @LysandreJik, thanks for hunting the bugs! It's going to be a great help for many people.\r\n\r\nI don't know the details of the remaining bugs, but at least the bugs I encountered were so bad that I think you should see whether or not it works very quickly after starting fine-tuning by checking if the loss is decreasing on tensorboard.",
"I can also confirm the issue after fine-tuning xlnet-large-cased on Squad 2.0 for 1 epoch. The F1 score is 46.53 although the NoAns_F1 was 89.05, probably because the model is predicting so many blanks (most with \"start_log_prob\": -1000000.0, \"end_log_prob\": -1000000.0) while HasAns_exact is close to 0.\r\n\r\nNot sure if it is related to the CLS token position mentioned in #947 and #1088. But it might be specific to the unanswerable questions in Squad 2.0. Hopefully the bug will be found and fixed soon :-)\r\n\r\nTransformers: 2.5.1\r\nPyTorch: 1.4.0\r\nPython: 3.8.1",
"@ahotrod I saw you're using a different eval script (`run_squad_II.py`) for your model at https://huggingface.co/ahotrod/xlnet_large_squad2_512 βΒ have you figured out what was wrong with `run_squad.py`? Thanks!",
"@elgeish - good eye on my eval script using `run_squad_II.py`, as posted in my model card. Unfortunately I have not figured-out what is wrong with training using the latest `run_squad.py` versions as outlined in this issue.\r\n\r\nMy https://huggingface.co/ahotrod/xlnet_large_squad2_512 model is from Nov 2019, same as the successful fine-tuned model described in my first post above. `run_squad_II.py` contained experimental code I was working on at the time trying to overcome the multi-GPU distributed processing eval limitation. Fortunately, when `run_squad_II.py` evals were run single GPU `(CUDA_VISIBLE_DEVICES=0)`, evals were the same as the Transformers v2.1.1 original `run_squad.py`, as I did not modify that portion of the code. I failed to change that eval script back to `run_squad.py`, but again since the `run_squad_II.py` eval in that script was run single GPU, it performed the same eval as the original. Sorry for the confusion.",
"@ahotrod thanks for the explanation!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Any update on this issue? I am facing same issue when fine tuning custom RoBERTa.\r\nCheers",
"I'm on `4.4.0dev`"
] | 1,580 | 1,633 | 1,588 | CONTRIBUTOR | null | ## β Questions & Help
I fine-tuned XLNet_large_cased on SQuAD 2.0 last November 2019 with Transformers V2.1.1 yielding satisfactory results:
```
xlnet_large_squad2_512_bs48
{
"exact": 82.07698138633876,
"f1": 85.898874470488,
"total": 11873,
"HasAns_exact": 79.60526315789474,
"HasAns_f1": 87.26000954590184,
"HasAns_total": 5928,
"NoAns_exact": 84.54163162321278,
"NoAns_f1": 84.54163162321278,
"NoAns_total": 5945,
"best_exact": 83.22243746315169,
"best_exact_thresh": -11.112004280090332,
"best_f1": 86.88541353813282,
"best_f1_thresh": -11.112004280090332
}
```

with script:
```
#!/bin/bash
export OMP_NUM_THREADS=6
RUN_SQUAD_DIR=/media/dn/dssd/nlp/transformers/examples
SQUAD_DIR=${RUN_SQUAD_DIR}/scripts/squad2.0
MODEL_PATH=${RUN_SQUAD_DIR}/runs/xlnet_large_squad2_512_bs48
python -m torch.distributed.launch --nproc_per_node=2 ${RUN_SQUAD_DIR}/run_squad.py \
--model_type xlnet \
--model_name_or_path xlnet-large-cased \
--do_train \
--train_file ${SQUAD_DIR}/train-v2.0.json \
--predict_file ${SQUAD_DIR}/dev-v2.0.json \
--version_2_with_negative \
--num_train_epochs 3 \
--learning_rate 3e-5 \
--adam_epsilon 1e-6 \
--max_seq_length 512 \
--doc_stride 128 \
--save_steps 2000 \
--per_gpu_train_batch_size 1 \
--gradient_accumulation_steps 24 \
--output_dir ${MODEL_PATH}
CUDA_VISIBLE_DEVICES=0 python ${RUN_SQUAD_DIR}/run_squad.py \
--model_type xlnet \
--model_name_or_path ${MODEL_PATH} \
--do_eval \
--train_file ${SQUAD_DIR}/train-v2.0.json \
--predict_file ${SQUAD_DIR}/dev-v2.0.json \
--version_2_with_negative \
--max_seq_length 512 \
--per_gpu_eval_batch_size 48 \
--output_dir ${MODEL_PATH}
$@
```
After upgrading Transformers to Version 2.3.0 I decided to see if there would be any improvements in the fine-tuning results using the same script above. I got the following results:
```
xlnet_large_squad2_512_bs48
Results: {
'exact': 45.32131727448834,
'f1': 45.52929325627209,
'total': 11873,
'HasAns_exact': 0.0,
'HasAns_f1': 0.4165483859174251,
'HasAns_total': 5928,
'NoAns_exact': 90.51303616484441,
'NoAns_f1': 90.51303616484441,
'NoAns_total': 5945,
'best_exact': 50.07159100480081,
'best_exact_thresh': 0.0,
'best_f1': 50.07229287739689,
'best_f1_thresh': 0.0}
```
No learning takes place:

Looking for potential explanation(s)/source(s) for the loss of performance. I have searched Transformer releases and issues for anything pertaining to XLNet with no clues. Are there new fine-tuning hyperparameters I've missed that now need to be assigned, or maybe didn't exist in earlier Transformer versions? Any PyTorch/Tensorflow later version issues? I may have to recreate the Nov 2019 environment for a re-run to verify the earlier results, and then incrementally update Transformers, PyTorch, Tensorflow, etc.?
Current system configuration:
OS: Linux Mint 19.3 based on Ubuntu 18.04. 3 LTS and Linux Kernel 5.0
GPU/CPU: 2 x NVIDIA 1080Ti / Intel i7-8700
Seasonic 1300W Prime Gold Power Supply
CyberPower 1500VA/1000W battery backup
Transformers: 2.3.0
PyTorch: 1.3.0
TensorFlow: 2.0.0
Python: 3.7.5
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2651/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2651/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2650 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2650/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2650/comments | https://api.github.com/repos/huggingface/transformers/issues/2650/events | https://github.com/huggingface/transformers/issues/2650 | 555,262,879 | MDU6SXNzdWU1NTUyNjI4Nzk= | 2,650 | loss function error when running run_lm_finetuning.py file | {
"login": "tanny411",
"id": 25925128,
"node_id": "MDQ6VXNlcjI1OTI1MTI4",
"avatar_url": "https://avatars.githubusercontent.com/u/25925128?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tanny411",
"html_url": "https://github.com/tanny411",
"followers_url": "https://api.github.com/users/tanny411/followers",
"following_url": "https://api.github.com/users/tanny411/following{/other_user}",
"gists_url": "https://api.github.com/users/tanny411/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tanny411/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tanny411/subscriptions",
"organizations_url": "https://api.github.com/users/tanny411/orgs",
"repos_url": "https://api.github.com/users/tanny411/repos",
"events_url": "https://api.github.com/users/tanny411/events{/privacy}",
"received_events_url": "https://api.github.com/users/tanny411/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I got the exact same error while trying to finetune BERT with mlm on ENRON emails dataset. This problem doesn't occur in older versions of this repo (before Jan 5th). So perhaps you can try that while they fix this issue?",
"I had your same error. Trying with different block size and batch size, with a certain configuration (I don't remember which one) the program gave me your error, with a different one it blocked on the tokenization of the training set. I followed this advice and it worked : https://github.com/huggingface/transformers/issues/2611#issuecomment-577696982\r\n\r\nDon't know if it can be solution, hope so π",
"@cgnarendiran Thanks a lot. Previous version is working fine. Is there any major update since then? In terms of bert and finetuning it, that you know of?\r\n@paulthemagno I tried --line_by_line too. It had the same issue. Also, I am testing with a small dataset, so for now dataset size isn't an issue.",
"Hi, the scripts are kept up to date with the `master` branch and not with the latest release. Do you think you could try and install from source (`pip install git+https://github.com/huggingface/transformers`) and let me know if you still have the same errors?\r\n\r\nThank you.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,580 | 1,585 | 1,585 | NONE | null | ## π Bug
<!-- Important information -->
Model I am using (Bert, XLNet....): BERT
Language I am using the model on (English, Chinese....): Multilingual model (trying to finetune with Bengali)
The problem arise when using:
* [run_lm_finetuning.py] the official example scripts: I wanted to fine tune the multilingual bert model on Bengali text.
## To Reproduce
Steps to reproduce the behavior:
1. Running the script mentioned using the command I used below and Bengali txt. For now I simply put Bengali wikidump in a text file.
student_1@gpuserver:~/aysha_anis_thesis/thesis$ python3 run_lm_finetuning.py --output_dir=lm_out --model_type=bert --model_name_or_path=bert-base-multilingual-cased --do_train --train_data_file=wikiText.txt --mlm --save_total_limit=2
2020-01-26 22:28:55.372480: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2020-01-26 22:28:55.377886: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
/usr/lib/python3/dist-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
01/26/2020 22:28:56 - WARNING - __main__ - Process rank: -1, device: cuda, n_gpu: 2, distributed training: False, 16-bits training: False
01/26/2020 22:28:57 - INFO - transformers.configuration_utils - loading configuration file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-multilingual-cased-config.json from cache at /home/student_1/.cache/torch/transformers/45629519f3117b89d89fd9c740073d8e4c1f0a70f9842476185100a8afe715d1.83b0fa3d7f1ac0e113ad300189a938c6f14d0588a4200f30eef109d0a047c484
01/26/2020 22:28:57 - INFO - transformers.configuration_utils - Model config {
"attention_probs_dropout_prob": 0.1,
"directionality": "bidi",
"finetuning_task": null,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"is_decoder": false,
"layer_norm_eps": 1e-12,
"max_position_embeddings": 512,
"num_attention_heads": 12,
"num_hidden_layers": 12,
"num_labels": 2,
"output_attentions": false,
"output_hidden_states": false,
"output_past": true,
"pooler_fc_size": 768,
"pooler_num_attention_heads": 12,
"pooler_num_fc_layers": 3,
"pooler_size_per_head": 128,
"pooler_type": "first_token_transform",
"pruned_heads": {},
"torchscript": false,
"type_vocab_size": 2,
"use_bfloat16": false,
"vocab_size": 119547
}
01/26/2020 22:28:59 - INFO - transformers.tokenization_utils - loading file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-multilingual-cased-vocab.txt from cache at /home/student_1/.cache/torch/transformers/96435fa287fbf7e469185f1062386e05a075cadbf6838b74da22bf64b080bc32.99bcd55fc66f4f3360bc49ba472b940b8dcf223ea6a345deb969d607ca900729
01/26/2020 22:29:00 - INFO - transformers.modeling_utils - loading weights file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-multilingual-cased-pytorch_model.bin from cache at /home/student_1/.cache/torch/transformers/5b5b80054cd2c95a946a8e0ce0b93f56326dff9fbda6a6c3e02de3c91c918342.7131dcb754361639a7d5526985f880879c9bfd144b65a0bf50590bddb7de9059
01/26/2020 22:29:04 - INFO - transformers.modeling_utils - Weights from pretrained model not used in BertForMaskedLM: ['cls.seq_relationship.weight', 'cls.seq_relationship.bias']
01/26/2020 22:29:06 - INFO - __main__ - Training/evaluation parameters Namespace(adam_epsilon=1e-08, block_size=510, cache_dir=None, config_name=None, device=device(type='cuda'), do_eval=False, do_train=True, eval_all_checkpoints=False, eval_data_file=None, evaluate_during_training=False, fp16=False, fp16_opt_level='O1', gradient_accumulation_steps=1, learning_rate=5e-05, line_by_line=False, local_rank=-1, logging_steps=50, max_grad_norm=1.0, max_steps=-1, mlm=True, mlm_probability=0.15, model_name_or_path='bert-base-multilingual-cased', model_type='bert', n_gpu=2, no_cuda=False, num_train_epochs=1.0, output_dir='lm_out', overwrite_cache=False, overwrite_output_dir=False, per_gpu_eval_batch_size=4, per_gpu_train_batch_size=4, save_steps=50, save_total_limit=2, seed=42, server_ip='', server_port='', should_continue=False, tokenizer_name=None, train_data_file='wikiText.txt', warmup_steps=0, weight_decay=0.0)
01/26/2020 22:29:06 - INFO - __main__ - Loading features from cached file bert_cached_lm_510_wikiText.txt
01/26/2020 22:29:08 - INFO - __main__ - ***** Running training *****
01/26/2020 22:29:08 - INFO - __main__ - Num examples = 103717
01/26/2020 22:29:08 - INFO - __main__ - Num Epochs = 1
01/26/2020 22:29:08 - INFO - __main__ - Instantaneous batch size per GPU = 4
01/26/2020 22:29:08 - INFO - __main__ - Total train batch size (w. parallel, distributed & accumulation) = 8
01/26/2020 22:29:08 - INFO - __main__ - Gradient Accumulation steps = 1
01/26/2020 22:29:08 - INFO - __main__ - Total optimization steps = 12965
Epoch: 0%| | 0/1 [00:00<?, ?it/s/home/student_1/.local/lib/python3.6/site-packages/torch/nn/parallel/_functions.py:61: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector.
warnings.warn('Was asked to gather along dimension 0, but all '
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [0,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [1,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [2,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [3,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [4,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [5,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [6,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [7,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [8,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [9,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [10,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [11,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [12,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [13,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [14,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [15,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [16,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [17,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [19,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [20,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [21,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [22,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [23,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [25,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [27,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [28,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [29,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [31,0,0] Assertion `t >= 0 && t < n_classes` failed.
Traceback (most recent call last):
File "run_lm_finetuning.py", line 785, in <module>
main()
File "run_lm_finetuning.py", line 735, in main
global_step, tr_loss = train(args, train_dataset, model, tokenizer)
File "run_lm_finetuning.py", line 353, in train
loss.backward()
File "/home/student_1/.local/lib/python3.6/site-packages/torch/tensor.py", line 166, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/home/student_1/.local/lib/python3.6/site-packages/torch/autograd/__init__.py", line 99, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: CUDA error: device-side assert triggered
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [0,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [1,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [2,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [3,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [5,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [6,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [7,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [8,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [10,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [11,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [12,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [13,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [14,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [16,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [17,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [18,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [19,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [21,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [22,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [23,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [24,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [25,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [26,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [27,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [28,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [29,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [31,0,0] Assertion `t >= 0 && t < n_classes` failed.
Epoch: 0%| | 0/1 [00:02<?, ?it/s]
Iteration: 0%| | 0/12965 [00:02<?, ?it/s]
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2650/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2650/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2649 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2649/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2649/comments | https://api.github.com/repos/huggingface/transformers/issues/2649/events | https://github.com/huggingface/transformers/issues/2649 | 555,257,914 | MDU6SXNzdWU1NTUyNTc5MTQ= | 2,649 | Using a Model without any pretrained data | {
"login": "mnitin73",
"id": 13919821,
"node_id": "MDQ6VXNlcjEzOTE5ODIx",
"avatar_url": "https://avatars.githubusercontent.com/u/13919821?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mnitin73",
"html_url": "https://github.com/mnitin73",
"followers_url": "https://api.github.com/users/mnitin73/followers",
"following_url": "https://api.github.com/users/mnitin73/following{/other_user}",
"gists_url": "https://api.github.com/users/mnitin73/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mnitin73/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mnitin73/subscriptions",
"organizations_url": "https://api.github.com/users/mnitin73/orgs",
"repos_url": "https://api.github.com/users/mnitin73/repos",
"events_url": "https://api.github.com/users/mnitin73/events{/privacy}",
"received_events_url": "https://api.github.com/users/mnitin73/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Just don't use the [from_pretrained](https://huggingface.co/transformers/main_classes/model.html#transformers.PreTrainedModel.from_pretrained) method and initialize the class with a config.\r\n```\r\nfrom transformers import BertModel, BertConfig\r\n\r\n#model with pretrained weights\r\nmodel_with_Pretrained = BertModel.from_pretrained('bert-base-uncased')\r\n\r\n#model without pretrained weights\r\nconfig = BertConfig()\r\nmodel_without_Pretrained = BertModel(config)\r\n```",
"@cronoik Thanks",
"Hi, I also encounter the same question, is the solution still valid in the latest v4.28?\r\nthanks!"
] | 1,580 | 1,682 | 1,580 | NONE | null | ## β Questions & Help
<!-- Sorry for a very basic question. Can I use your library without any pertained data? For example, I want to use a BERT transformer model, but using only my corpus of data. In the docs, I only see examples using pretrained models. Thanks. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2649/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2649/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2648 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2648/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2648/comments | https://api.github.com/repos/huggingface/transformers/issues/2648/events | https://github.com/huggingface/transformers/issues/2648 | 555,243,376 | MDU6SXNzdWU1NTUyNDMzNzY= | 2,648 | run_lm_finetuning.py for GPT2 throw error "Using pad_token, but it is not set yet." | {
"login": "DayuanJiang",
"id": 34411969,
"node_id": "MDQ6VXNlcjM0NDExOTY5",
"avatar_url": "https://avatars.githubusercontent.com/u/34411969?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DayuanJiang",
"html_url": "https://github.com/DayuanJiang",
"followers_url": "https://api.github.com/users/DayuanJiang/followers",
"following_url": "https://api.github.com/users/DayuanJiang/following{/other_user}",
"gists_url": "https://api.github.com/users/DayuanJiang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DayuanJiang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DayuanJiang/subscriptions",
"organizations_url": "https://api.github.com/users/DayuanJiang/orgs",
"repos_url": "https://api.github.com/users/DayuanJiang/repos",
"events_url": "https://api.github.com/users/DayuanJiang/events{/privacy}",
"received_events_url": "https://api.github.com/users/DayuanJiang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I am having same error as well. Did you manage to fix it or any other updates?",
"Can you let me know if 6b4c3ee234db010ae2fb0554c0099fbf1f7f1f51 fixes your issue?",
"I encountered this issue and sure enough it is fixed with `6b4c3ee`.\r\n\r\nThanks @julien-c. It's mind blowing that I found the error 15 mins ago, searched here, found that you'd just patched it, and am now able to continue.",
"Thanks @julien-c , that fixes it.",
"Thanks guys! (and hat/tip @LysandreJik)",
"\r\n",
"ValueError: Unable to set proper padding strategy as the tokenizer does not have a padding token. In this case please set the `pad_token` `(tokenizer.pad_token = tokenizer.eos_token e.g.)` or add a new pad token via the function add_special_tokens if you want to use a padding strategy\r\n",
"python3 on kaggle\r\n```\r\nfrom datasets import load_dataset\r\nfrom transformers import GPT2LMHeadModel, GPT2Tokenizer, Trainer, TrainingArguments\r\nimport torch\r\nimport numpy as np\r\n\r\n# Function to shift the labels for language modeling\r\ndef shift_labels(examples):\r\n examples[\"labels\"] = examples[\"input_ids\"].copy()\r\n return examples\r\n\r\n# Load the dataset\r\ndataset = load_dataset(\"wikipedia\", \"20220301.simple\")\r\n\r\n# Get the total number of entries in the dataset\r\ntotal_entries = len(dataset[\"train\"])\r\n\r\n# Split the dataset into training and test sets\r\ntrain_dataset = dataset[\"train\"].select(range(300)) # taking only the first 300 for example\r\n\r\n# Ensure that there are enough entries for the test set\r\nif total_entries < 60:\r\n raise ValueError(\"Not enough data to extract a test set of 60 entries.\")\r\n\r\nstart_index = total_entries - 60\r\ntest_dataset = dataset[\"train\"].select(range(start_index, total_entries)) # Now this will work\r\n\r\n\r\n\r\n# Load the tokenizer\r\ntokenizer = GPT2Tokenizer.from_pretrained(\"gpt2\")\r\n\r\n# Explicitly set the padding token if it's not already defined\r\nif tokenizer.pad_token is None:\r\n tokenizer.pad_token = tokenizer.eos_token\r\n\r\n# Function to tokenize the input\r\ndef tokenize_function(examples):\r\n return tokenizer(examples[\"text\"], padding=\"max_length\", truncation=True, max_length=512)\r\n\r\n# Tokenize the datasets\r\ntrain_dataset = train_dataset.map(tokenize_function, batched=True)\r\ntest_dataset = test_dataset.map(tokenize_function, batched=True)\r\n\r\n\r\n# Tokenize the datasets\r\ndef tokenize_function(examples):\r\n return tokenizer(examples[\"text\"], padding=\"max_length\", truncation=True)\r\n\r\ntrain_dataset = train_dataset.map(tokenize_function, batched=True)\r\ntest_dataset = test_dataset.map(tokenize_function, batched=True)\r\n\r\n# Shift labels for the language modeling task\r\ntrain_dataset = train_dataset.map(shift_labels, batched=True)\r\ntest_dataset = test_dataset.map(shift_labels, batched=True)\r\n\r\n# Define the data collator\r\ndata_collator = lambda data: {'input_ids': torch.stack([f['input_ids'] for f in data]), \r\n 'attention_mask': torch.stack([f['attention_mask'] for f in data]), \r\n 'labels': torch.stack([f['labels'] for f in data])}\r\n\r\n# Set the PyTorch format for the dataset\r\ntrain_dataset.set_format(type='torch', columns=['input_ids', 'attention_mask', 'labels'])\r\ntest_dataset.set_format(type='torch', columns=['input_ids', 'attention_mask', 'labels'])\r\n\r\n# Load the model\r\nmodel = GPT2LMHeadModel.from_pretrained(\"gpt2\")\r\n\r\n# Define the compute_metrics function for evaluation\r\ndef compute_metrics(eval_pred):\r\n logits, labels = eval_pred\r\n shift_logits = logits[..., :-1, :].contiguous()\r\n shift_labels = labels[..., 1:].contiguous()\r\n # Flatten the outputs and labels\r\n loss_fct = torch.nn.CrossEntropyLoss(reduction='none')\r\n loss = loss_fct(shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1))\r\n perplexity = torch.exp(torch.mean(loss))\r\n return {\"perplexity\": perplexity.item()}\r\n\r\n# Define training arguments\r\ntraining_args = TrainingArguments(\r\n output_dir=\"./results\",\r\n num_train_epochs=10,\r\n per_device_train_batch_size=4,\r\n per_device_eval_batch_size=4,\r\n warmup_steps=500,\r\n weight_decay=0.01,\r\n logging_dir='./logs',\r\n logging_steps=10,\r\n evaluation_strategy='epoch',\r\n save_strategy='epoch',\r\n load_best_model_at_end=True,\r\n metric_for_best_model=\"perplexity\",\r\n greater_is_better=False\r\n)\r\n\r\n# Initialize the Trainer\r\ntrainer = Trainer(\r\n model=model,\r\n args=training_args,\r\n train_dataset=train_dataset,\r\n eval_dataset=test_dataset,\r\n compute_metrics=compute_metrics,\r\n data_collator=data_collator,\r\n tokenizer=tokenizer\r\n)\r\n\r\n# Start training\r\ntrainer.train()\r\n\r\n# Save the best model\r\ntrainer.save_model(\"/kaggle/working/best_model_wiki\")\r\n\r\n\r\n\r\nfrom transformers import GPT2LMHeadModel\r\n\r\n# Make sure to provide the correct path where the best model is saved\r\nmodel_path = \"/kaggle/working/best_model_wiki\"\r\n``` \r\ni get the output Using pad_token, but it is not set yet.\r\n\r\n"
] | 1,580 | 1,699 | 1,580 | NONE | null | I used the official setting.
```bash
python transformers/examples/run_lm_finetuning.py \
--output_dir=gpt2_q_model \
--model_type=gpt2 \
--model_name_or_path=gpt2 \
--do_train \
--train_data_file=txt/{q_files[0]} \
```
But it says the padding id was not set.
```python
ERROR - transformers.tokenization_utils - Using pad_token, but it is not set yet.
Traceback (most recent call last):
File "transformers/examples/run_lm_finetuning.py", line 785, in <module>
main()
File "transformers/examples/run_lm_finetuning.py", line 735, in main
global_step, tr_loss = train(args, train_dataset, model, tokenizer)
File "transformers/examples/run_lm_finetuning.py", line 330, in train
for step, batch in enumerate(epoch_iterator):
File "/usr/local/lib/python3.6/dist-packages/tqdm/_tqdm.py", line 979, in __iter__
for obj in iterable:
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 346, in __next__
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch
return self.collate_fn(data)
File "transformers/examples/run_lm_finetuning.py", line 231, in collate
return pad_sequence(examples, batch_first=True, padding_value=tokenizer.pad_token_id)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/utils/rnn.py", line 384, in pad_sequence
out_tensor = sequences[0].data.new(*out_dims).fill_(padding_value)
TypeError: fill_() received an invalid combination of arguments - got (NoneType), but expected one of:
* (Tensor value)
didn't match because some of the arguments have invalid types: (NoneType)
* (Number value)
didn't match because some of the arguments have invalid types: (NoneType)
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2648/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2648/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2647 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2647/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2647/comments | https://api.github.com/repos/huggingface/transformers/issues/2647/events | https://github.com/huggingface/transformers/issues/2647 | 555,222,017 | MDU6SXNzdWU1NTUyMjIwMTc= | 2,647 | Question Answering with Japanese | {
"login": "Mukei",
"id": 266090,
"node_id": "MDQ6VXNlcjI2NjA5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/266090?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mukei",
"html_url": "https://github.com/Mukei",
"followers_url": "https://api.github.com/users/Mukei/followers",
"following_url": "https://api.github.com/users/Mukei/following{/other_user}",
"gists_url": "https://api.github.com/users/Mukei/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mukei/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mukei/subscriptions",
"organizations_url": "https://api.github.com/users/Mukei/orgs",
"repos_url": "https://api.github.com/users/Mukei/repos",
"events_url": "https://api.github.com/users/Mukei/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mukei/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi @Mukei,\r\n\r\nAs far as I know, there is no Transformer-based model fine-tuned for Japanese question answering tasks.\r\nIt is partly due to the scarcity of Japanese QA datasets (like SQuAD) to train the models on.\r\n\r\n(Of course, we do wish to release models for QA, and it is left for our future work.)",
"As a workaround you could load the [bert-base-japanese](https://huggingface.co/bert-base-japanese) weights for the BertForQuestionAnswering model and just finetune the qa_outputs layer (in case of a single span prediction task). It will be quickly trained and maybe produces already sufficient results. ",
"@singletongue Thank you for your reply!\r\n\r\nYou might already know about it, but I found this [project](https://github.com/AkariAsai/extractive_rc_by_runtime_mt) with SQuAD V1.1 partially translated to Japanese: [Context](https://github.com/AkariAsai/extractive_rc_by_runtime_mt/blob/master/data/ja_question_v5_context.csv), [QA](https://github.com/AkariAsai/extractive_rc_by_runtime_mt/blob/master/data/ja_question_v5.csv)",
"@cronoik Thank you for the advice.\r\nI tried but unfortunately the results were pretty bad even for some simple phrase.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,580 | 1,586 | 1,586 | NONE | null | ## β Questions & Help
Hi @singletongue,
I am trying to use Question-Answering for Japanese, however I could not find any model trained for that.
I tried with the available models but the results were way off (as expected...).
Any suggestions on available models, or other library that already handle QnA with Japanese?
If it is supposed to work as-is, could you share a simple example?
Thank you in advance! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2647/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2647/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2646 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2646/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2646/comments | https://api.github.com/repos/huggingface/transformers/issues/2646/events | https://github.com/huggingface/transformers/issues/2646 | 555,212,360 | MDU6SXNzdWU1NTUyMTIzNjA= | 2,646 | glue.py: AttributeError: 'numpy.str_' object has no attribute 'text_a' | {
"login": "pacebrian0",
"id": 11386046,
"node_id": "MDQ6VXNlcjExMzg2MDQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/11386046?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pacebrian0",
"html_url": "https://github.com/pacebrian0",
"followers_url": "https://api.github.com/users/pacebrian0/followers",
"following_url": "https://api.github.com/users/pacebrian0/following{/other_user}",
"gists_url": "https://api.github.com/users/pacebrian0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pacebrian0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pacebrian0/subscriptions",
"organizations_url": "https://api.github.com/users/pacebrian0/orgs",
"repos_url": "https://api.github.com/users/pacebrian0/repos",
"events_url": "https://api.github.com/users/pacebrian0/events{/privacy}",
"received_events_url": "https://api.github.com/users/pacebrian0/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I think the problem was due to the dataset not being set to (index,example) structure",
"@pacebrian0 Could you post what changes did you make?",
"I decided to use simpletransformers python package, which allows you to train custom datasets.\r\nThe above problem can only be solved by using tensorflow-datasets data as far as I know",
"Ah! The same as me. I am using that same package, but had no idea that those problems could be solved only with tensorflow-datasets. "
] | 1,580 | 1,580 | 1,580 | NONE | null | when I am executing the glue data conversion i.e.
`sequences = glue_convert_examples_to_features(X_train, tokenizer, max_length=MAX_SEQUENCE_LENGTH, task='mrpc')
`
I'm getting this error:
> I0126 11:57:07.862119 16252 glue.py:70] Using label list ['0', '1'] for task mrpc
> I0126 11:57:07.863118 16252 glue.py:73] Using output mode classification for task mrpc
> I0126 11:57:07.864120 16252 glue.py:80] Writing example 0
> ---------------------------------------------------------------------------
>
> AttributeError Traceback (most recent call last)
>
> <ipython-input-11-621d8071aa9a> in <module>
> 2 #test_sequences = [tokenizer.encode(xxt,add_special_tokens=True) for xxt in X_test]
> 3 #val_sequences = [tokenizer.encode(xxt,add_special_tokens=True) for xxt in X_val]
> ----> 4 sequences = glue_convert_examples_to_features(df['cleantext'], tokenizer, max_length=MAX_SEQUENCE_LENGTH, task='mrpc')
> 5 val_sequences = glue_convert_examples_to_features(X_val, tokenizer, max_length=MAX_SEQUENCE_LENGTH, task='mrpc')
> 6 test_sequences = glue_convert_examples_to_features(X_val, tokenizer, max_length=MAX_SEQUENCE_LENGTH, task='mrpc')
>
> d:\anaconda3\envs\t2\lib\site-packages\transformers\data\processors\glue.py in glue_convert_examples_to_features(examples, tokenizer, max_length, task, label_list, output_mode, pad_on_left, pad_token, pad_token_segment_id, mask_padding_with_zero)
> 84
> 85 inputs = tokenizer.encode_plus(
> ---> 86 example.text_a,
> 87 example.text_b,
> 88 add_special_tokens=True,
>
> AttributeError: 'str' object has no attribute 'text_a'
As tokenizer I'm using
`tokenizer = BertTokenizer.from_pretrained('bert-base-cased')`
and my numpy version is 1.18.1, tensorflow version 2.1.0 (base 2.1.0), transformers version 2.3.0
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2646/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2646/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2645 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2645/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2645/comments | https://api.github.com/repos/huggingface/transformers/issues/2645/events | https://github.com/huggingface/transformers/issues/2645 | 555,187,701 | MDU6SXNzdWU1NTUxODc3MDE= | 2,645 | How to load locally saved tensorflow DistillBERT model | {
"login": "JKP0",
"id": 48640299,
"node_id": "MDQ6VXNlcjQ4NjQwMjk5",
"avatar_url": "https://avatars.githubusercontent.com/u/48640299?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JKP0",
"html_url": "https://github.com/JKP0",
"followers_url": "https://api.github.com/users/JKP0/followers",
"following_url": "https://api.github.com/users/JKP0/following{/other_user}",
"gists_url": "https://api.github.com/users/JKP0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JKP0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JKP0/subscriptions",
"organizations_url": "https://api.github.com/users/JKP0/orgs",
"repos_url": "https://api.github.com/users/JKP0/repos",
"events_url": "https://api.github.com/users/JKP0/events{/privacy}",
"received_events_url": "https://api.github.com/users/JKP0/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Please format your code correctly using code tags and not quote tags, and don't use screenshots but post your actual code so that we can copy-paste it and reproduce your errors. https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks",
"Thanks to your response, now it will be convenient to copy-paste.\r\n\r\n```\r\nimport tensorflow as tf\r\nfrom transformers import DistilBertTokenizer, TFDistilBertModel\r\ntokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')\r\nmodel = TFDistilBertModel.from_pretrained('distilbert-base-uncased')\r\ninput_ids = tf.constant(tokenizer.encode(\"Hello, my dog is cute\"), dtype=\"int32\")[None, :] # Batch size 1\r\noutputs = model(input_ids)\r\nlast_hidden_states = outputs[0]\r\n```\r\n>############################################ success \r\n```\r\nmodel.save(\"DSB/SV/distDistilBERT.h5\")\r\n```\r\n---------------------------------------------------------------------------\r\nNotImplementedError Traceback (most recent call last)\r\n<ipython-input-5-c1f33594ba67> in <module>()\r\n----> 1 model.save(\"DSB/SV/distDistilBERT.h5\")\r\n\r\n1 frames\r\n/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/network.py in save(self, filepath, overwrite, include_optimizer, save_format, signatures, options)\r\n 1006 \"\"\"\r\n 1007 save.save_model(self, filepath, overwrite, include_optimizer, save_format,\r\n-> 1008 signatures, options)\r\n 1009 \r\n 1010 def save_weights(self, filepath, overwrite=True, save_format=None):\r\n\r\n/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/saving/save.py in save_model(model, filepath, overwrite, include_optimizer, save_format, signatures, options)\r\n 103 not isinstance(model, sequential.Sequential)):\r\n 104 raise NotImplementedError(\r\n--> 105 'Saving the model to HDF5 format requires the model to be a '\r\n 106 'Functional model or a Sequential model. It does not work for '\r\n 107 'subclassed models, because such models are defined via the body of '\r\n\r\nNotImplementedError: Saving the model to HDF5 format requires the model to be a Functional model or a Sequential model. It does not work for subclassed models, because such models are defined via the body of a Python method, which isn't safely serializable. Consider saving to the Tensorflow SavedModel format (by setting save_format=\"tf\") or using `save_weights`.\r\n\r\n> #############################################\r\n```\r\nmodel.save(\"DSB/\")\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n<ipython-input-8-75503fb9f2ea> in <module>()\r\n----> 1 model.save(\"DSB/\")\r\n\r\n3 frames\r\n/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/network.py in save(self, filepath, overwrite, include_optimizer, save_format, signatures, options)\r\n 1006 \"\"\"\r\n 1007 save.save_model(self, filepath, overwrite, include_optimizer, save_format,\r\n-> 1008 signatures, options)\r\n 1009 \r\n 1010 def save_weights(self, filepath, overwrite=True, save_format=None):\r\n\r\n/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/saving/save.py in save_model(model, filepath, overwrite, include_optimizer, save_format, signatures, options)\r\n 113 else:\r\n 114 saved_model_save.save(model, filepath, overwrite, include_optimizer,\r\n--> 115 signatures, options)\r\n 116 \r\n 117 \r\n\r\n/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/saving/saved_model/save.py in save(model, filepath, overwrite, include_optimizer, signatures, options)\r\n 63 \r\n 64 if save_impl.should_skip_serialization(model):\r\n---> 65 saving_utils.raise_model_input_error(model)\r\n 66 \r\n 67 if not include_optimizer:\r\n\r\n/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/saving/saving_utils.py in raise_model_input_error(model)\r\n 111 'set. Usually, input shapes are automatically determined from calling'\r\n 112 ' .fit() or .predict(). To manually set the shapes, call '\r\n--> 113 'model._set_inputs(inputs).'.format(model))\r\n 114 \r\n 115 \r\n\r\nValueError: Model <transformers.modeling_tf_distilbert.TFDistilBertModel object at 0x7f6905c1fbe0> cannot be saved because the input shapes have not been set. Usually, input shapes are automatically determined from calling .fit() or .predict(). To manually set the shapes, call model._set_inputs(inputs).\r\n\r\n>#######################################################\r\n```\r\nmodel.save_pretrained(\"DSB\")\r\nmodel.save_weights(\"DSB/DistDistilBERT_weights.h5\")\r\n```\r\n>######################################################### success \r\n```\r\nfrom transformers import DistilBertConfig, PretrainedConfig\r\nconfig = DistilBertConfig.from_json_file('DSB/config.json')\r\nconf2=PretrainedConfig.from_pretrained(\"DSB\")\r\n\r\n```\r\n> ############################################################# success \r\n```\r\n#from tensorflow.keras.models import load_model\r\n#model=load_model(\"DSB/tf_model.h5\") # error \r\n\r\n```\r\n> ################ error, It looks because-of saved model is not by `model.save(\"path\")`\r\n```\r\nfrom transformers import TFPreTrainedModel\r\n#model=TFPreTrainedModel.from_pretrained(\"DSB\") # error \r\nmodel=TFPreTrainedModel.from_pretrained(\"DSB/tf_model.h5\", config=config) # error \r\n#config=TFPreTrainedModel.from_config(\"DSB/config.json\") # error \r\n#model=TFPreTrainedModel.from_pretrained(\"DSB/\") # error \r\n```\r\n---------------------------------------------------------------------------\r\nNotImplementedError Traceback (most recent call last)\r\n<ipython-input-28-7f562f1af321> in <module>()\r\n 1 from transformers import TFPreTrainedModel\r\n 2 #model=TFPreTrainedModel.from_pretrained(\"DSB\") # error \r\n----> 3 model=TFPreTrainedModel.from_pretrained(\"DSB/tf_model.h5\", config=config)\r\n 4 #config=TFPreTrainedModel.from_config(\"DSB/config.json\")\r\n 5 #model=TFPreTrainedModel.from_pretrained(\"DSB/\")\r\n\r\n2 frames\r\n/usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)\r\n 309 return load_pytorch_checkpoint_in_tf2_model(model, resolved_archive_file, allow_missing_keys=True)\r\n 310 \r\n--> 311 ret = model(model.dummy_inputs, training=False) # build the network with dummy inputs\r\n 312 \r\n 313 assert os.path.isfile(resolved_archive_file), \"Error retrieving file {}\".format(resolved_archive_file)\r\n\r\n/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/base_layer.py in __call__(self, inputs, *args, **kwargs)\r\n 820 with base_layer_utils.autocast_context_manager(\r\n 821 self._compute_dtype):\r\n--> 822 outputs = self.call(cast_inputs, *args, **kwargs)\r\n 823 self._handle_activity_regularization(inputs, outputs)\r\n 824 self._set_mask_metadata(inputs, outputs, input_masks)\r\n\r\n/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/network.py in call(self, inputs, training, mask)\r\n 710 \"\"\"\r\n 711 if not self._is_graph_network:\r\n--> 712 raise NotImplementedError('When subclassing the `Model` class, you should'\r\n 713 ' implement a `call` method.')\r\n 714 \r\n\r\nNotImplementedError: When subclassing the `Model` class, you should implement a `call` method.\r\n",
"To save/load a model:\r\n\r\n```py\r\n\r\nmodel = TFDistilBertModel(config)\r\n\r\n# Saving the model\r\nmodel.save_pretrained(\"directory\")\r\n\r\n# Loading the model\r\nloaded_model = TFDistilBertModel.from_pretrained(\"directory\") # automatically loads the configuration.\r\n```",
"Thanks @LysandreJik \r\nIt works.\r\ngreedy guidelines poped by `model.svae_pretrained` have confused me. It pops up like this \r\n```\r\nmodel.save_pretrained(\"directory\")\r\n\r\nsave a model and its configuration file to the directory, so that it can be re-loaded using the \r\n:func: ~transformers.PreTrainedModel.from_pretrained` \r\nclass method\r\n\r\n```\r\n\r\n\r\n"
] | 1,580 | 1,580 | 1,580 | NONE | null | I have got tf model for DistillBERT by the following python line
> `import tensorflow as tf
from transformers import DistilBertTokenizer, TFDistilBertModel
tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
model = TFDistilBertModel.from_pretrained('distilbert-base-uncased')
input_ids = tf.constant(tokenizer.encode("Hello, my dog is cute"), dtype="int32")[None, :] # Batch size 1
outputs = model(input_ids)
last_hidden_states = outputs[0]`
> These lines have been executed successfully. But I am facing error with model.save()
> `model.save("DSB/DistilBERT.h5")`
> `model.save("DSB")`
> `model.save("DSB/")`
> all the above 3 line gives errors


> but downlines works
>`model.save_pretrained("DSB")`
this saves 2 file tf_model.h5 and config.json
>`model.save_weights("DSB/DistDistilBERT_weights.h5")`
this also have saved the file

> but I am not able to re-load this locally saved model any how, I have tried with all down-lines it gives error
> `from tensorflow.keras.models import load_model
from transformers import DistilBertConfig, PretrainedConfig
from transformers import TFPreTrainedModel
config = DistilBertConfig.from_json_file('DSB/config.json') conf2=PretrainedConfig.from_pretrained("DSB")
config=TFPreTrainedModel.from_config("DSB/config.json")`
> all these load configuration , but I am unable to load model , tried with all down-line
> `model=TFPreTrainedModel.from_pretrained("DSB")`
> `model=PreTrainedModel.from_pretrained("DSB/tf_model.h5", from_tf=True, config=config)`
> `model=TFPreTrainedModel.from_pretrained("DSB/")`
> ` model=TFPreTrainedModel.from_pretrained("DSB/tf_model.h5", config=config)`

> NotImplementedError Traceback (most recent call last)
<ipython-input-28-7f562f1af321> in <module>()
1 from transformers import TFPreTrainedModel
----> 2 model=TFPreTrainedModel.from_pretrained("DSB/tf_model.h5", config=config)
3 #config=TFPreTrainedModel.from_config("DSB/config.json")
4 #model=TFPreTrainedModel.from_pretrained("DSB/")
2 frames
/usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
309 return load_pytorch_checkpoint_in_tf2_model(model, resolved_archive_file, allow_missing_keys=True)
310
--> 311 ret = model(model.dummy_inputs, training=False) # build the network with dummy inputs
312
313 assert os.path.isfile(resolved_archive_file), "Error retrieving file {}".format(resolved_archive_file)
/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/base_layer.py in __call__(self, inputs, *args, **kwargs)
820 with base_layer_utils.autocast_context_manager(
821 self._compute_dtype):
--> 822 outputs = self.call(cast_inputs, *args, **kwargs)
823 self._handle_activity_regularization(inputs, outputs)
824 self._set_mask_metadata(inputs, outputs, input_masks)
/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/network.py in call(self, inputs, training, mask)
710 """
711 if not self._is_graph_network:
--> 712 raise NotImplementedError('When subclassing the `Model` class, you should'
713 ' implement a `call` method.')
714
NotImplementedError: When subclassing the `Model` class, you should implement a `call` method.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2645/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2645/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2644 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2644/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2644/comments | https://api.github.com/repos/huggingface/transformers/issues/2644/events | https://github.com/huggingface/transformers/issues/2644 | 555,164,973 | MDU6SXNzdWU1NTUxNjQ5NzM= | 2,644 | XLNet run_squad.py IndexError: tuple index out of range | {
"login": "phuongpm241",
"id": 29219768,
"node_id": "MDQ6VXNlcjI5MjE5NzY4",
"avatar_url": "https://avatars.githubusercontent.com/u/29219768?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/phuongpm241",
"html_url": "https://github.com/phuongpm241",
"followers_url": "https://api.github.com/users/phuongpm241/followers",
"following_url": "https://api.github.com/users/phuongpm241/following{/other_user}",
"gists_url": "https://api.github.com/users/phuongpm241/gists{/gist_id}",
"starred_url": "https://api.github.com/users/phuongpm241/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/phuongpm241/subscriptions",
"organizations_url": "https://api.github.com/users/phuongpm241/orgs",
"repos_url": "https://api.github.com/users/phuongpm241/repos",
"events_url": "https://api.github.com/users/phuongpm241/events{/privacy}",
"received_events_url": "https://api.github.com/users/phuongpm241/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, are you sure you're running on commit babd41e, and that you didn't take the script from this version without updating the library itself? I believe this was patched in 073219b.\r\n\r\nCould you try to install from source `pip install git+https://github.com/huggingface/transformers` and let me know if it fixes this issue?",
"It works after updating reinstalling the library. I think I might forget to install after git pull. \r\n\r\nThank you!"
] | 1,580 | 1,580 | 1,580 | NONE | null | ## π Bug
<!-- Important information -->
Model I am using (Bert, XLNet....): XLNet
Language I am using the model on (English, Chinese....): English (xlnet-base-cased)
The problem arise when using:
* [x] the official example scripts: run_squad.py
* [ ] my own modified scripts: (give details)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: run_squad.py
* [ ] my own task or dataset: (give details)
## To Reproduce
CUDA_VISIBLE_DEVICES=0,1,2,3 python run_squad.py \
--model_type xlnet \
--model_name_or_path xlnet-base-cased \
--do_train \
--do_eval \
--do_lower_case \
--version_2_with_negative \
--train_file /data/medg/misc/phuongpm/squadv2/train-v2.0.json \
--predict_file /data/medg/misc/phuongpm/squadv2/dev-v2.0.json \
--per_gpu_train_batch_size 12 \
--learning_rate 3e-5 \
--num_train_epochs 2.0 \
--max_seq_length 384 \
--doc_stride 128 \
--save_steps 10000 \
--output_dir /scratch/phuongpm/tuned/squad_xlnet/
## Expected behavior
Epoch: 0%| | 0/2 [00:00<?, ?it/sTraceback (most recent call last): | 0/2791 [00:00<?, ?it/s]
File "run_squad.py", line 837, in <module>
main()
File "run_squad.py", line 776, in main
global_step, tr_loss = train(args, train_dataset, model, tokenizer)
File "run_squad.py", line 221, in train
inputs.update({"is_impossible": batch[7]})
IndexError: tuple index out of range
Epoch: 0%| | 0/2 [00:00<?, ?it/s]
Iteration: 0%| | 0/2791 [00:00<?, ?it/s]
## Environment
* OS: Linux
* Python version: 3.6
* PyTorch version: 1.3.1
* PyTorch Transformers version (or branch):
commit babd41e7fa07bdd764f8fe91c33469046ab7dbd1
Author: Lysandre <[email protected]>
Date: Fri Jan 24 17:06:55 2020 -0500
* Using GPU ? Yes
* Distributed or parallel setup ?
* Any other relevant information:
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2644/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2644/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2643 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2643/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2643/comments | https://api.github.com/repos/huggingface/transformers/issues/2643/events | https://github.com/huggingface/transformers/issues/2643 | 555,148,887 | MDU6SXNzdWU1NTUxNDg4ODc= | 2,643 | BERT LOSS FUNCTION | {
"login": "alshahrani2030",
"id": 55197626,
"node_id": "MDQ6VXNlcjU1MTk3NjI2",
"avatar_url": "https://avatars.githubusercontent.com/u/55197626?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alshahrani2030",
"html_url": "https://github.com/alshahrani2030",
"followers_url": "https://api.github.com/users/alshahrani2030/followers",
"following_url": "https://api.github.com/users/alshahrani2030/following{/other_user}",
"gists_url": "https://api.github.com/users/alshahrani2030/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alshahrani2030/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alshahrani2030/subscriptions",
"organizations_url": "https://api.github.com/users/alshahrani2030/orgs",
"repos_url": "https://api.github.com/users/alshahrani2030/repos",
"events_url": "https://api.github.com/users/alshahrani2030/events{/privacy}",
"received_events_url": "https://api.github.com/users/alshahrani2030/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Sure you can do that. Create a class which inherits from [BertForSequenceClassification](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py#L1122) and overwrite the [forward](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py#L1134) method.",
"Instead of overwriting the forward method you can retrieve the hidden states and compute the loss as you would do with any PyTorch model. \r\n\r\nThe loss is only computed by the model when you hand the `labels` to the model, which is not a required argument.",
"> Instead of overwriting the forward method you can retrieve the hidden states and compute the loss as you would do with any PyTorch model.\r\n> \r\n> The loss is only computed by the model when you hand the `labels` to the model, which is not a required argument.\r\n\r\nCould you please elaborate on the same please? @LysandreJik \r\n",
"What do you want me to elaborate on?",
"> Instead of overwriting the forward method you can retrieve the hidden states and compute the loss as you would do with any PyTorch model.\r\nThis @LysandreJik ",
"I got this error when I called \r\nloss.backward()\r\n loss is \"torch.float64\" type\r\n\r\n\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"> My question is that can I use KLDivLoss instead of CrossEntropyLoss when I fine-tune BERT for classification? the reason for that is that I want to pass the weight of each class(e.g for binary classification, instead of 1 or 0 I will pass the probability distribution )\r\n> \r\n> Thank you in advance\r\n\r\nHi did you manage to do this? I also need to pass class probability distribution instead of the labels and am not sure how to do this.",
"BertForSequenceClassification.forward() returns the logits also. You can use these in any pytorch loss function (eg: KLDivLoss, not sure if you'll need to softmax them first) and then run backward on the resulting loss. It's a bit redundant (since BertForSequenceClassification's loss is still calculated), but works.",
"> Sure you can do that. Create a class which inherits from [BertForSequenceClassification](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py#L1122) and overwrite the [forward](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py#L1134) method.\r\n\r\nThat link to the forward function is stale, as of Feb 24 it's [here](https://github.com/huggingface/transformers/blob/7e662e6a3be0ece455b4c4ae2c3348beab11bad5/src/transformers/models/bert/modeling_bert.py#L1475)."
] | 1,579 | 1,614 | 1,594 | NONE | null | My question is that can I use KLDivLoss instead of CrossEntropyLoss when I fine-tune BERT for classification? the reason for that is that I want to pass the weight of each class(e.g for binary classification, instead of 1 or 0 I will pass the probability distribution )
Thank you in advance | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2643/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2643/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2642 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2642/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2642/comments | https://api.github.com/repos/huggingface/transformers/issues/2642/events | https://github.com/huggingface/transformers/issues/2642 | 555,131,572 | MDU6SXNzdWU1NTUxMzE1NzI= | 2,642 | Scrambled dimensions on output of forward pass | {
"login": "amin3141",
"id": 18374534,
"node_id": "MDQ6VXNlcjE4Mzc0NTM0",
"avatar_url": "https://avatars.githubusercontent.com/u/18374534?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amin3141",
"html_url": "https://github.com/amin3141",
"followers_url": "https://api.github.com/users/amin3141/followers",
"following_url": "https://api.github.com/users/amin3141/following{/other_user}",
"gists_url": "https://api.github.com/users/amin3141/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amin3141/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amin3141/subscriptions",
"organizations_url": "https://api.github.com/users/amin3141/orgs",
"repos_url": "https://api.github.com/users/amin3141/repos",
"events_url": "https://api.github.com/users/amin3141/events{/privacy}",
"received_events_url": "https://api.github.com/users/amin3141/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi! There was a mistake with the re-arrangement of the input embeddings inside the forward method of XLNet. I've fixed it with f09f42d.\r\n\r\nConcerning the issue with `d_model=25` and `n_heads=5`, this is due to the model dimension being an odd number which doesn't fare well with [`torch.arange` leveraging the model dimension to build relative positional embeddings](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_xlnet.py#L665). We should probably update this to allow for odd dimension XLNet architectures cc @thomwolf @julien-c.",
"Thanks for the quick fix on the the re-arrangement issue. I don't know how difficult the odd model dimension fix is. At the least, the model could throw a `ValueError` in the constructor if the dimension is odd. That would, at least, give users clear guidance.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Any updates, or do you want to close this one?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Any updates here? It's an easy fix to add a more informative error message.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,579 | 1,596 | 1,596 | NONE | null | ## π Bug
<!-- Important information -->
Model I am using (Bert, XLNet....): XLNet
Language I am using the model on (English, Chinese....): English
The problem arise when using:
* [ ] the official example scripts: (give details)
* [X] my own modified scripts: see attached minimum working example.
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: see attached minimum working example.
## To Reproduce
Steps to reproduce the behavior:
1. Run the minimal working example (see below) with the command: `python xl_mwe.py`
2. Observe the following output:
```
Embedded batch: torch.Size([3, 13, 300])
XLNet output : torch.Size([13, 3, 300])
```
3. Per the documentation, the correct dimensions for the output should have been [3, 13, 300]. From the documentation of `last_hidden_state` in `XLNetModel.forward`: last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)):
4. While constructing the minimal working example, I also observed another bug. If I change d_model to 25 (d_model = 300 in the code below) and n_heads to 5 (default is 10 in the code below), I get an error from einsum:
```
Traceback (most recent call last):
File "xl_mwe.py", line 43, in <module>
main()
File "xl_mwe.py", line 37, in main
xlnet_output = xlnet(inputs_embeds=embedded_batch)[0]
File "C:\ProgramData\Anaconda3\envs\pytorch\lib\site-packages\torch\nn\modules\module.py", line 547, in __call__
result = self.forward(*input, **kwargs)
File "C:\ProgramData\Anaconda3\envs\pytorch\lib\site-packages\transformers\modeling_xlnet.py", line 858, in forward
head_mask=head_mask[i])
File "C:\ProgramData\Anaconda3\envs\pytorch\lib\site-packages\torch\nn\modules\module.py", line 547, in __call__
result = self.forward(*input, **kwargs)
File "C:\ProgramData\Anaconda3\envs\pytorch\lib\site-packages\transformers\modeling_xlnet.py", line 436, in forward
head_mask=head_mask)
File "C:\ProgramData\Anaconda3\envs\pytorch\lib\site-packages\torch\nn\modules\module.py", line 547, in __call__
result = self.forward(*input, **kwargs)
File "C:\ProgramData\Anaconda3\envs\pytorch\lib\site-packages\transformers\modeling_xlnet.py", line 383, in forward
k_head_r = torch.einsum('ibh,hnd->ibnd', r, self.r)
File "C:\ProgramData\Anaconda3\envs\pytorch\lib\site-packages\torch\functional.py", line 202, in einsum
return torch._C._VariableFunctions.einsum(equation, operands)
RuntimeError: size of dimension does not match previous size, operand 1, dim 0
```
## Minimal working example
```
import numpy as np
import torch
from transformers import XLNetConfig, XLNetModel, XLNetTokenizer
def embed(input_str, dims=25, fix_len=-1):
result = []
for word in input_str.split():
result.append(np.random.rand(dims))
if fix_len > -1:
result = result[0: fix_len]
if len(result) < fix_len:
result = result + [np.zeros(dims)] * (fix_len - len(result))
return result
def embed_batch(batch, dims=25, fix_len=-1):
return np.stack([embed(x, dims, fix_len) for x in batch], axis=0)
def main():
batch = [
"Hello, how are you doing?",
"Please go to the store and buy some bread.",
"Trump was not exonerated by the Mueller report."
]
d_model = 300
config = XLNetConfig(d_model=d_model, n_head=10)
xlnet = XLNetModel(config)
embedded_batch = embed_batch(batch, dims=d_model, fix_len=13)
embedded_batch = torch.from_numpy(embedded_batch).float()
print(f"Embedded batch: {embedded_batch.shape}")
xlnet_output = xlnet(inputs_embeds=embedded_batch)[0]
print(f"XLNet output : {xlnet_output.shape}")
if __name__ == "__main__":
main()
```
## Environment
* OS: Windows 10
* Python version: 3.7.4
* PyTorch version: 1.2.0
* PyTorch Transformers version (or branch): 2.3.0
* Using GPU ? Yes
* Distributed or parallel setup ? No
## Additional context
<!-- Add any other context about the problem here. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2642/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2642/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2641 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2641/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2641/comments | https://api.github.com/repos/huggingface/transformers/issues/2641/events | https://github.com/huggingface/transformers/issues/2641 | 555,081,290 | MDU6SXNzdWU1NTUwODEyOTA= | 2,641 | ImportError: cannot import name 'TFDistilBertModel' | {
"login": "JKP0",
"id": 48640299,
"node_id": "MDQ6VXNlcjQ4NjQwMjk5",
"avatar_url": "https://avatars.githubusercontent.com/u/48640299?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JKP0",
"html_url": "https://github.com/JKP0",
"followers_url": "https://api.github.com/users/JKP0/followers",
"following_url": "https://api.github.com/users/JKP0/following{/other_user}",
"gists_url": "https://api.github.com/users/JKP0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JKP0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JKP0/subscriptions",
"organizations_url": "https://api.github.com/users/JKP0/orgs",
"repos_url": "https://api.github.com/users/JKP0/repos",
"events_url": "https://api.github.com/users/JKP0/events{/privacy}",
"received_events_url": "https://api.github.com/users/JKP0/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Does the following import work?\r\n`from transformers.modeling_tf_distilbert import TFDistilBertModel`\r\nand what is the output of:\r\n```\r\nfrom transformers.file_utils import is_tf_available\r\nis_tf_available()\r\n```",
"Thank you for response. Thanks! \r\n\r\n> Does the following import work?\r\n> `from transformers.modeling_tf_distilbert import TFDistilBertModel`\r\n\r\n> This import works but gives the error.\r\n TypeError: Expected int32, got 0.0 of type 'float' instead.\r\n\r\n> TypeError Traceback (most recent call last)\r\n<ipython-input-6-e6dacece142c> in <module>()\r\n 3 \r\n 4 tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')\r\n----> 5 model = TFDistilBertModel.from_pretrained('distilbert-base-uncased')\r\n 6 input_ids = tf.constant(tokenizer.encode(\"Hello, my dog is cute\"))[None, :] # Batch size 1\r\n 7 outputs = model(input_ids)\r\n\r\n2 frames\r\n/usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)\r\n 309 return load_pytorch_checkpoint_in_tf2_model(model, resolved_archive_file, allow_missing_keys=True)\r\n 310 \r\n--> 311 ret = model(model.dummy_inputs, training=False) # build the network with dummy inputs\r\n 312 \r\n 313 assert os.path.isfile(resolved_archive_file), \"Error retrieving file {}\".format(resolved_archive_file)\r\n\r\n/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/base_layer.py in __call__(self, inputs, *args, **kwargs)\r\n 852 outputs = base_layer_utils.mark_as_return(outputs, acd)\r\n 853 else:\r\n--> 854 outputs = call_fn(cast_inputs, *args, **kwargs)\r\n 855 \r\n 856 except errors.OperatorNotAllowedInGraphError as e:\r\n\r\n/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/autograph/impl/api.py in wrapper(*args, **kwargs)\r\n 235 except Exception as e: # pylint:disable=broad-except\r\n 236 if hasattr(e, 'ag_error_metadata'):\r\n--> 237 raise e.ag_error_metadata.to_exception(e)\r\n 238 else:\r\n 239 raise\r\n\r\nTypeError: in converted code:\r\n relative to /usr/local/lib/python3.6/dist-packages:\r\n\r\n transformers/modeling_tf_distilbert.py:569 call *\r\n outputs = self.distilbert(inputs, **kwargs)\r\n tensorflow_core/python/keras/engine/base_layer.py:854 __call__\r\n outputs = call_fn(cast_inputs, *args, **kwargs)\r\n transformers/modeling_tf_distilbert.py:455 call *\r\n embedding_output = self.embeddings(input_ids, inputs_embeds=inputs_embeds) # (bs, seq_length, dim)\r\n tensorflow_core/python/keras/engine/base_layer.py:824 __call__\r\n self._maybe_build(inputs)\r\n tensorflow_core/python/keras/engine/base_layer.py:2146 _maybe_build\r\n self.build(input_shapes)\r\n transformers/modeling_tf_distilbert.py:97 build\r\n initializer=get_initializer(self.initializer_range))\r\n tensorflow_core/python/keras/engine/base_layer.py:529 add_weight\r\n aggregation=aggregation)\r\n tensorflow_core/python/training/tracking/base.py:712 _add_variable_with_custom_getter\r\n **kwargs_for_getter)\r\n tensorflow_core/python/keras/engine/base_layer_utils.py:139 make_variable\r\n shape=variable_shape if variable_shape else None)\r\n tensorflow_core/python/ops/variables.py:258 __call__\r\n return cls._variable_v1_call(*args, **kwargs)\r\n tensorflow_core/python/ops/variables.py:219 _variable_v1_call\r\n shape=shape)\r\n tensorflow_core/python/ops/variables.py:197 <lambda>\r\n previous_getter = lambda **kwargs: default_variable_creator(None, **kwargs)\r\n tensorflow_core/python/ops/variable_scope.py:2503 default_variable_creator\r\n shape=shape)\r\n tensorflow_core/python/ops/variables.py:262 __call__\r\n return super(VariableMetaclass, cls).__call__(*args, **kwargs)\r\n tensorflow_core/python/ops/resource_variable_ops.py:1406 __init__\r\n distribute_strategy=distribute_strategy)\r\n tensorflow_core/python/ops/resource_variable_ops.py:1537 _init_from_args\r\n initial_value() if init_from_fn else initial_value,\r\n tensorflow_core/python/keras/engine/base_layer_utils.py:119 <lambda>\r\n init_val = lambda: initializer(shape, dtype=dtype)\r\n tensorflow_core/python/ops/init_ops.py:369 __call__\r\n shape, self.mean, self.stddev, dtype, seed=self.seed)\r\n tensorflow_core/python/ops/random_ops.py:171 truncated_normal\r\n mean_tensor = ops.convert_to_tensor(mean, dtype=dtype, name=\"mean\")\r\n tensorflow_core/python/framework/ops.py:1184 convert_to_tensor\r\n return convert_to_tensor_v2(value, dtype, preferred_dtype, name)\r\n tensorflow_core/python/framework/ops.py:1242 convert_to_tensor_v2\r\n as_ref=False)\r\n tensorflow_core/python/framework/ops.py:1297 internal_convert_to_tensor\r\n ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)\r\n tensorflow_core/python/framework/tensor_conversion_registry.py:52 _default_conversion_function\r\n return constant_op.constant(value, dtype, name=name)\r\n tensorflow_core/python/framework/constant_op.py:227 constant\r\n allow_broadcast=True)\r\n tensorflow_core/python/framework/constant_op.py:265 _constant_impl\r\n allow_broadcast=allow_broadcast))\r\n tensorflow_core/python/framework/tensor_util.py:449 make_tensor_proto\r\n _AssertCompatible(values, dtype)\r\n tensorflow_core/python/framework/tensor_util.py:331 _AssertCompatible\r\n (dtype.name, repr(mismatch), type(mismatch).__name__))\r\n\r\n TypeError: Expected int32, got 0.0 of type 'float' instead.\r\n\r\n \r\n> and what is the output of:\r\n> \r\n> ```\r\n> from transformers.file_utils import is_tf_available\r\n> is_tf_available()\r\n> ```\r\n> Output of this line is \r\n`False` \r\n\r\n\r\n\r\n",
"In my case it got resolved by (but have reached to another issue)\r\n> conda create -n bcm python==3.6.8 anaconda \r\n> conda activate bcm\r\n> conda install tensorflow-gpu\r\n> pip install transformers ",
"> Does the following import work?\r\n> `from transformers.modeling_tf_distilbert import TFDistilBertModel`\r\n> and what is the output of:\r\n> \r\n> ```\r\n> from transformers.file_utils import is_tf_available\r\n> is_tf_available()\r\n> ```\r\n\r\nI have the same error with TFBertModel, and when I run this, I get \"False\"\r\nAny suggestions? @cronoik ",
"@sbecon \r\nThat means that you haven't installed tensorflow 2.0 (or you have installed it in a different virtual environment). Please follow the [instructions](https://www.tensorflow.org/install/pip#tensorflow-2.0-rc-is-available) and install it. It should work afterwards."
] | 1,579 | 1,599 | 1,580 | NONE | null | ```
import tensorflow as tf
from transformers import DistilBertTokenizer, TFDistilBertModel
tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
model = TFDistilBertModel.from_pretrained('distilbert-base-uncased')
input_ids = tf.constant(tokenizer.encode("Hello, my dog is cute"))[None, :] # Batch size 1
outputs = model(input_ids)
last_hidden_states = outputs[0] # The last hidden-state is the first element of the output tuple
```
###################
these line of codes gives the error
ImportError: cannot import name 'TFDistilBertModel'
############


| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2641/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2641/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2640 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2640/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2640/comments | https://api.github.com/repos/huggingface/transformers/issues/2640/events | https://github.com/huggingface/transformers/issues/2640 | 555,071,161 | MDU6SXNzdWU1NTUwNzExNjE= | 2,640 | batch_encode_plus not working for GPT2, OpenAI, TransfoXL when returning PyTorch tensors | {
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The problem lies here\r\n\r\nhttps://github.com/huggingface/transformers/blob/babd41e7fa07bdd764f8fe91c33469046ab7dbd1/src/transformers/tokenization_utils.py#L1003-L1006\r\n\r\nsince for these tokenizers `self.pad_token_id` is None.",
"Still having this issue running the above script :-(\r\nAny ideas?\r\n\r\nEnv:\r\n* OS: Windows 10\r\n* Python version: 3.6.12\r\n* PyTorch version: 1.5.0\r\n* PyTorch Transformers version (or branch): transformers-4.5.1\r\n* Using GPU ? yes\r\n* Distributed or parallel setup ? no\r\n\r\n"
] | 1,579 | 1,619 | 1,580 | COLLABORATOR | null | ## π Bug
`batch_encode_plus` does not work on GPT2, OpenAI, and TransfoXL when returning PyTorch tensors. Note that the code does work when leaving out the `return_tensors` argument. In that case, the output of `encoded` looks normal.
## To Reproduce
```python
from transformers import *
TOKENIZERS = {
'albert': (AlbertTokenizer, 'albert-base-v1'),
'bert': (BertTokenizer, 'bert-base-uncased'),
'distilbert': (DistilBertTokenizer, 'distilbert-base-uncased'),
'gpt2': (GPT2Tokenizer, 'gpt2'),
'openai': (OpenAIGPTTokenizer, 'openai-gpt'),
'roberta': (RobertaTokenizer, 'roberta-base'),
'transfoxl': (TransfoXLTokenizer, 'transfo-xl-wt103'),
'xlm': (XLMTokenizer, 'xlm-mlm-enfr-1024'),
'xlnet': (XLNetTokenizer, 'xlnet-base-cased')
}
text = ['I like bananas and cookies .',
'You are not what I thought you were , though .',
'Cookies are awesome .']
for tok_name, (tok_cls, tok_default) in TOKENIZERS.items():
tokenizer = tok_cls.from_pretrained(tok_default)
try:
encoded = tokenizer.batch_encode_plus(text, return_tensors='pt')
except Exception as e:
print(f"{tok_name} failed: {e}")
```
Output on latest master:
```
gpt2 failed: Could not infer dtype of NoneType
openai failed: Could not infer dtype of NoneType
transfoxl failed: Could not infer dtype of NoneType
```
## Environment
* OS: Windows 10
* Python version: 3.7.3
* PyTorch version: 1.3
* PyTorch Transformers version (or branch): latest master
* Using GPU ? yes
* Distributed or parallel setup ? no
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2640/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2640/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2639 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2639/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2639/comments | https://api.github.com/repos/huggingface/transformers/issues/2639/events | https://github.com/huggingface/transformers/issues/2639 | 555,069,654 | MDU6SXNzdWU1NTUwNjk2NTQ= | 2,639 | AttributeError: 'Tensor' object has no attribute 'transpose' | {
"login": "mobassir94",
"id": 24439592,
"node_id": "MDQ6VXNlcjI0NDM5NTky",
"avatar_url": "https://avatars.githubusercontent.com/u/24439592?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mobassir94",
"html_url": "https://github.com/mobassir94",
"followers_url": "https://api.github.com/users/mobassir94/followers",
"following_url": "https://api.github.com/users/mobassir94/following{/other_user}",
"gists_url": "https://api.github.com/users/mobassir94/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mobassir94/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mobassir94/subscriptions",
"organizations_url": "https://api.github.com/users/mobassir94/orgs",
"repos_url": "https://api.github.com/users/mobassir94/repos",
"events_url": "https://api.github.com/users/mobassir94/events{/privacy}",
"received_events_url": "https://api.github.com/users/mobassir94/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"It seems you're passing TensorFlow variables to a PyTorch model. The TensorFlow equivalent of `XLNetModel` is `TFXLNetModel`.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,579 | 1,585 | 1,585 | NONE | null | ## β Questions & Help
<!-- error comes from modeling_xlnet.py file -->
i get this error :
---------------------------------------------------------------------------
```
AttributeError Traceback (most recent call last)
<ipython-input-80-01c16e13fe9a> in <module>()
----> 1 get_ipython().run_cell_magic('time', '', "gkf = GroupKFold(n_splits=5).split(X=df_train.question_body, groups=df_train.question_body)\n\nvalid_preds = []\ntest_preds = []\nfor fold, (train_idx, valid_idx) in enumerate(gkf):\n \n # will actually only do 2 folds (out of 5) to manage < 2h\n if fold in [0, 2]:\n\n train_inputs = [inputs[i][train_idx] for i in range(len(inputs))]\n train_outputs = outputs[train_idx]\n\n valid_inputs = [inputs[i][valid_idx] for i in range(len(inputs))]\n valid_outputs = outputs[valid_idx]\n \n K.clear_session()\n model = create_model()\n optimizer = tf.keras.optimizers.Adam(learning_rate=1e-5)\n #optimizer = AdamW(lr=1e-4)\n model.compile(loss=bce_dice_loss, optimizer=optimizer)\n model.fit(train_inputs, train_outputs, epochs=6, batch_size=6)\n # model.save_weights(f'bert-{fold}.h5')\n valid_preds.append(model.predict(valid_inputs))\n test_preds.append(model.predict(test_inputs))\n \n rho_val = compute_spearmanr_ignore_nan(valid_outputs, valid_preds[-1])\n print('validation score = ', rho_val)\n model.save_weights(f'/content/drive/My Drive/quest/validation-{rho_val}-fold-{fold}.hdf5')")
5 frames
</usr/local/lib/python3.6/dist-packages/decorator.py:decorator-gen-60> in time(self, line, cell, local_ns)
<timed exec> in <module>()
/usr/local/lib/python3.6/dist-packages/transformers/modeling_xlnet.py in forward(self, input_ids, attention_mask, mems, perm_mask, target_mapping, token_type_ids, input_mask, head_mask, inputs_embeds)
726 raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
727 elif input_ids is not None:
--> 728 input_ids = input_ids.transpose(0, 1).contiguous()
729 qlen, bsz = input_ids.shape[0], input_ids.shape[1]
730 elif inputs_embeds is not None:
AttributeError: 'Tensor' object has no attribute 'transpose'
```
when i try xlnet but i don't get error when i try bert
code i am using :
```py
from transformers import XLNetConfig, XLNetModel,XLNetTokenizer
tokenizer = XLNetTokenizer.from_pretrained('xlnet-base-cased')
def compute_spearmanr_ignore_nan(trues, preds):
rhos = []
for tcol, pcol in zip(np.transpose(trues), np.transpose(preds)):
rhos.append(spearmanr(tcol, pcol).correlation)
return np.nanmean(rhos)
def create_model():
q_id = tf.keras.layers.Input((MAX_SEQUENCE_LENGTH,), dtype=tf.int32)
a_id = tf.keras.layers.Input((MAX_SEQUENCE_LENGTH,), dtype=tf.int32)
q_mask = tf.keras.layers.Input((MAX_SEQUENCE_LENGTH,), dtype=tf.int32)
a_mask = tf.keras.layers.Input((MAX_SEQUENCE_LENGTH,), dtype=tf.int32)
q_atn = tf.keras.layers.Input((MAX_SEQUENCE_LENGTH,), dtype=tf.int32)
a_atn = tf.keras.layers.Input((MAX_SEQUENCE_LENGTH,), dtype=tf.int32)
#config = BertConfig() # print(config) to see settings
config = XLNetConfig()
config.output_hidden_states = False # Set to True to obtain hidden states
# caution: when using e.g. XLNet, XLNetConfig() will automatically use xlnet-large config
# normally ".from_pretrained('bert-base-uncased')", but because of no internet, the
# pretrained model has been downloaded manually and uploaded to kaggle.
#bert_model = TFBertModel.from_pretrained(BERT_PATH+'bert-base-uncased-tf_model.h5', config=config)
#bert_model = TFBertModel.from_pretrained('xlnet-base-cased')
#bert_model = XLNetModel(config)
bert_model = XLNetModel.from_pretrained('xlnet-large-cased')
# if config.output_hidden_states = True, obtain hidden states via bert_model(...)[-1]
q_embedding = bert_model(q_id, attention_mask=q_mask, token_type_ids=q_atn)[0]
a_embedding = bert_model(a_id, attention_mask=a_mask, token_type_ids=a_atn)[0]
q = tf.keras.layers.GlobalAveragePooling1D()(q_embedding)
a = tf.keras.layers.GlobalAveragePooling1D()(a_embedding)
x = tf.keras.layers.Concatenate()([q, a])
x = tf.keras.layers.Dropout(0.2)(x)
x = tf.keras.layers.Dense(30, activation='sigmoid')(x)
model = tf.keras.models.Model(inputs=[q_id, q_mask, q_atn, a_id, a_mask, a_atn,], outputs=x)
return model
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2639/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2639/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2638 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2638/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2638/comments | https://api.github.com/repos/huggingface/transformers/issues/2638/events | https://github.com/huggingface/transformers/issues/2638 | 555,030,516 | MDU6SXNzdWU1NTUwMzA1MTY= | 2,638 | Get Warning Message: Unable to convert output to tensors format pt | {
"login": "sharpant",
"id": 12255715,
"node_id": "MDQ6VXNlcjEyMjU1NzE1",
"avatar_url": "https://avatars.githubusercontent.com/u/12255715?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sharpant",
"html_url": "https://github.com/sharpant",
"followers_url": "https://api.github.com/users/sharpant/followers",
"following_url": "https://api.github.com/users/sharpant/following{/other_user}",
"gists_url": "https://api.github.com/users/sharpant/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sharpant/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sharpant/subscriptions",
"organizations_url": "https://api.github.com/users/sharpant/orgs",
"repos_url": "https://api.github.com/users/sharpant/repos",
"events_url": "https://api.github.com/users/sharpant/events{/privacy}",
"received_events_url": "https://api.github.com/users/sharpant/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"It seems that you are loading a tensorflow model, which you incorrectly call pytorch_model. The reason that the function doesn't work, though, is probably because you don't have pytorch installed and only tensorflow. Convert to tensorflow tenors instead ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,579 | 1,585 | 1,585 | NONE | null | I am running the following code:
```
from transformers.modeling_tf_bert import TFBertForSequenceClassification
pytorch_model = TFBertForSequenceClassification.from_pretrained('./save/')
# Quickly test a few predictions - MRPC is a paraphrasing task, let's see if our model learned the task
sentence_0 = "This research was consistent with his findings."
sentence_1 = "His findings were compatible with this research."
sentence_2 = "His findings were not compatible with this research."
inputs_1 = tokenizer.encode_plus(sentence_0, sentence_1, add_special_tokens=True, return_tensors='pt')
inputs_2 = tokenizer.encode_plus(sentence_0, sentence_2, add_special_tokens=True, return_tensors='pt')
```
Get Warning Message:
```
WARNING:transformers.tokenization_utils:Unable to convert output to tensors format pt, PyTorch or TensorFlow is not available.
WARNING:transformers.tokenization_utils:Unable to convert output to tensors format pt, PyTorch or TensorFlow is not available.
```
Then, when I run:
```
pred_1 = pytorch_model(inputs_1['input_ids'], token_type_ids=inputs_1['token_type_ids'])[0].argmax().item()
pred_2 = pytorch_model(inputs_2['input_ids'], token_type_ids=inputs_2['token_type_ids'])[0].argmax().item()
print("sentence_1 is", "a paraphrase" if pred_1 else "not a paraphrase", "of sentence_0")
print("sentence_2 is", "a paraphrase" if pred_2 else "not a paraphrase", "of sentence_0")
```
I get the error: `AssertionError: Too many inputs.` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2638/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2638/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2637 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2637/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2637/comments | https://api.github.com/repos/huggingface/transformers/issues/2637/events | https://github.com/huggingface/transformers/pull/2637 | 554,992,078 | MDExOlB1bGxSZXF1ZXN0MzY3MDQ2OTU4 | 2,637 | Add AutoModelForPreTraining | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,579 | 1,580 | 1,580 | MEMBER | null | Add `AutoModelForPretraining` and `TFAutoModelForPretraining` classes which will load the full model used for pretraining (guarantee we should have all the pre-trained weights).
This class can be used for instance to convert between an original PyTorch and a TF2.0 models while being sure that all the pretrained weights are converted:
```python
# PyTorch => TF 2.0 (save TF 2.0 weights from PT weights)
tf_model = TFAutoModelForPretraining.from_pretrained('my-model', from_pt=True)
tf_model.save_pretrained()
# TF 2.0 => PyTorch (save PT weights from TF 2.0 weights)
pt_model = AutoModelForPretraining.from_pretrained('my-model', from_tf=True)
pt_model.save_pretrained()
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2637/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2637/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2637",
"html_url": "https://github.com/huggingface/transformers/pull/2637",
"diff_url": "https://github.com/huggingface/transformers/pull/2637.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2637.patch",
"merged_at": 1580153228000
} |
https://api.github.com/repos/huggingface/transformers/issues/2636 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2636/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2636/comments | https://api.github.com/repos/huggingface/transformers/issues/2636/events | https://github.com/huggingface/transformers/issues/2636 | 554,989,462 | MDU6SXNzdWU1NTQ5ODk0NjI= | 2,636 | Gradient checkpointing with GPT2DoubleHeadsModel | {
"login": "g-karthik",
"id": 3851993,
"node_id": "MDQ6VXNlcjM4NTE5OTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3851993?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/g-karthik",
"html_url": "https://github.com/g-karthik",
"followers_url": "https://api.github.com/users/g-karthik/followers",
"following_url": "https://api.github.com/users/g-karthik/following{/other_user}",
"gists_url": "https://api.github.com/users/g-karthik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/g-karthik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/g-karthik/subscriptions",
"organizations_url": "https://api.github.com/users/g-karthik/orgs",
"repos_url": "https://api.github.com/users/g-karthik/repos",
"events_url": "https://api.github.com/users/g-karthik/events{/privacy}",
"received_events_url": "https://api.github.com/users/g-karthik/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I think I figured this out, it looks like I'll have to change the outputs returned by `Block` to be tuples instead of lists:\r\n\r\nhttps://github.com/huggingface/transformers/blob/babd41e7fa07bdd764f8fe91c33469046ab7dbd1/src/transformers/modeling_gpt2.py#L238\r\n\r\ni.e., change the above to `return tuple(outputs)` for checkpointing of the blocks inside `GPT2Model` to work.\r\n\r\n@thomwolf @LysandreJik Would this explicit type-casting of the outputs to tuple lead to any unexpected, downstream effects? If not, I think this update should be reflected in the repo as well, given that the README says that every model's forward() method always outputs a `tuple`.\r\n\r\nI am also finding that checkpointing the blocks doesn't seem to help fit a single example into memory with `gpt2-xl`. A check-pointed version of these classes would be really helpful!",
"Bumping this, I'm training a TensorFlow ALBERT model and with long sequence lengths (512) it's tough to get a large enough batch size - currently I'm constrained to 8 or 16 per GPU. Adding automatic gradient checkpointing support for `tf.recompute_grad()` would be a godsend :)",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I am using `GPT2Model` and would also find this very useful. "
] | 1,579 | 1,593 | 1,590 | NONE | null | ## β Questions & Help
I've been trying to fine-tune `GPT2DoubleHeadsModel` using `gpt2-large` and `gpt2-xl` on the [Topical-Chat](https://github.com/alexa/alexa-prize-topical-chat-dataset) dataset.
I'm finding that loading even a single example into memory is difficult with the larger versions of GPT-2. I found [this](https://medium.com/huggingface/training-larger-batches-practical-tips-on-1-gpu-multi-gpu-distributed-setups-ec88c3e51255) Medium post by @thomwolf which suggests that gradient checkpointing would be effective at handling this situation.
Is there a gradient-checkpointed version of the code in `GPT2DoubleHeadsModel` or the underlying `GPT2Model` that could be used as-is? I'm trying to do this myself by editing `modeling_gpt2.py`, but I'm facing issues.
https://github.com/huggingface/transformers/blob/babd41e7fa07bdd764f8fe91c33469046ab7dbd1/src/transformers/modeling_gpt2.py#L478-L480
Specifically, I added a checkpoint in the above line like this:
`outputs = checkpoint(block, hidden_states, layer_past, attention_mask, head_mask[i])`
NOTE: I had to remove the key names since it looks like checkpoint does not support key-value arguments, only positional. This might lead to compatibility issues, I'd love to know thoughts on this as well.
This is using the official PyTorch [checkpoint](https://pytorch.org/docs/stable/checkpoint.html). I'm also considering trying [this](https://github.com/csrhddlam/pytorch-checkpoint/blob/master/checkpoint.py) other implementation for checkpoint since I read somewhere that it is supposed to be faster than the official implementation.
With the official PyTorch implementation, I'm getting the following error:
`CheckpointFunctionBackward.forward: expected Variable (got list) for return value 0.`
[This](https://discuss.pytorch.org/t/checkpoint-didnt-support-list-output/16957/3) thread on the PyTorch forums seems to suggest that this error arises when attempting to use `torch.utils.checkpoint` with modules that return a variable number of tensors, which is the case with `Block` within `GPT2Model`.
Could @thomwolf, @LysandreJik or anyone else in the Hugging Face team please help with this? Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2636/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2636/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2635 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2635/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2635/comments | https://api.github.com/repos/huggingface/transformers/issues/2635/events | https://github.com/huggingface/transformers/pull/2635 | 554,984,937 | MDExOlB1bGxSZXF1ZXN0MzY3MDQxMTIy | 2,635 | Improving generation | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"It would be great if this PR could handle the padding index for the models that do not have one. For example, GPT-2 doesn't have a padding index and therefore can't use the `generate` method, nor can it use the `batch_encode_plus` method.",
"PR #2885 added the proposed changes."
] | 1,579 | 1,651 | 1,582 | MEMBER | null | Fix #2554
TODO:
- add tests on generation
TODO potential:
- this PR could be used to fix #2415 and fix #2482 as well
- add TF 2.0 support for generation | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2635/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2635/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2635",
"html_url": "https://github.com/huggingface/transformers/pull/2635",
"diff_url": "https://github.com/huggingface/transformers/pull/2635.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2635.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/2634 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2634/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2634/comments | https://api.github.com/repos/huggingface/transformers/issues/2634/events | https://github.com/huggingface/transformers/pull/2634 | 554,965,890 | MDExOlB1bGxSZXF1ZXN0MzY3MDI1NzE0 | 2,634 | AutoModels Documentation | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,579 | 1,582 | 1,579 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2634/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2634/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2634",
"html_url": "https://github.com/huggingface/transformers/pull/2634",
"diff_url": "https://github.com/huggingface/transformers/pull/2634.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2634.patch",
"merged_at": 1579901851000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/2633 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2633/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2633/comments | https://api.github.com/repos/huggingface/transformers/issues/2633/events | https://github.com/huggingface/transformers/issues/2633 | 554,898,766 | MDU6SXNzdWU1NTQ4OTg3NjY= | 2,633 | Details on T5's current integration status | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,579 | 1,585 | 1,585 | MEMBER | null | Hi all,
Regarding Google's T5 model, here is a quick summary of the status:
* the core model is in the library and some people have started to use it, but:
- while the operations are identical or very similar (einsum vs. matmul), there is quite a significantly higher relative error between this model's PT hidden-state and the mesh-tensorflow hidden-state (in particular compared to our previous TF => PT model conversions).
- our guess is that this comes for a combination of bfloat16 vs. fp32, einsum+model parallelism vs. matmul, plus the fact that we are not masking the hidden-states at each layer as the original implementation do (this should not matter much though).
- as a consequence, we are waiting to be able to confirm it's performances on a GLUE fine-tuning before having a wider communication on its addition to the library.
* the full integration with GLUE tests requires a few features that we still need to add:
- a decoding mechanism,
- a pre/post-processing for GLUE to use it in text-to-text setting, and
- a model parallelism feature
^^ we plan to work on these in February (from the more general view of having better encoder-decoder support in the library).
cc @julien-c @LysandreJik @sshleifer @patrickvonplaten | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2633/reactions",
"total_count": 28,
"+1": 21,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 7,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2633/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2632 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2632/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2632/comments | https://api.github.com/repos/huggingface/transformers/issues/2632/events | https://github.com/huggingface/transformers/pull/2632 | 554,783,628 | MDExOlB1bGxSZXF1ZXN0MzY2ODc1ODEy | 2,632 | Add FlauBERT: Unsupervised Language Model Pre-training for French | {
"login": "formiel",
"id": 41543169,
"node_id": "MDQ6VXNlcjQxNTQzMTY5",
"avatar_url": "https://avatars.githubusercontent.com/u/41543169?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/formiel",
"html_url": "https://github.com/formiel",
"followers_url": "https://api.github.com/users/formiel/followers",
"following_url": "https://api.github.com/users/formiel/following{/other_user}",
"gists_url": "https://api.github.com/users/formiel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/formiel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/formiel/subscriptions",
"organizations_url": "https://api.github.com/users/formiel/orgs",
"repos_url": "https://api.github.com/users/formiel/repos",
"events_url": "https://api.github.com/users/formiel/events{/privacy}",
"received_events_url": "https://api.github.com/users/formiel/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, I don't really know how it happened but I was denied push access on your repository while patching the failing FlauBERT bug. Instead I pushed to a new branch `flaubert` on this remote (huggingface/transformers), and I'm opening a pull request with your changes.\r\n\r\nYou're still the author of the commit.",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2632?src=pr&el=h1) Report\n> Merging [#2632](https://codecov.io/gh/huggingface/transformers/pull/2632?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/adb8c93134f02fd0eac2b52189364af21977004c?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2632?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2632 +/- ##\n=======================================\n Coverage 74.59% 74.59% \n=======================================\n Files 89 89 \n Lines 14971 14971 \n=======================================\n Hits 11168 11168 \n Misses 3803 3803\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2632?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2632?src=pr&el=footer). Last update [adb8c93...adb8c93](https://codecov.io/gh/huggingface/transformers/pull/2632?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"The PR is #2677. I'm updating the documentation directly on this PR."
] | 1,579 | 1,580 | 1,580 | CONTRIBUTOR | null | This PR adds [FlauBERT](https://github.com/getalp/Flaubert). Most of the code is derived from XLM (there are some new features in FlauBERT such as `pre_norm` and `layerdrop`).
`make test` had 1 failure related to BERT and not to FlauBERT:
> [gw0] FAILED tests/test_configuration_auto.py::AutoConfigTest::test_pattern_matching_fallback
`make style` passed.
`make quality` passed. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2632/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2632/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2632",
"html_url": "https://github.com/huggingface/transformers/pull/2632",
"diff_url": "https://github.com/huggingface/transformers/pull/2632.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2632.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/2631 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2631/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2631/comments | https://api.github.com/repos/huggingface/transformers/issues/2631/events | https://github.com/huggingface/transformers/issues/2631 | 554,753,717 | MDU6SXNzdWU1NTQ3NTM3MTc= | 2,631 | CamembertTokenizer cannot be pickled | {
"login": "GMarzinotto",
"id": 5233985,
"node_id": "MDQ6VXNlcjUyMzM5ODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/5233985?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/GMarzinotto",
"html_url": "https://github.com/GMarzinotto",
"followers_url": "https://api.github.com/users/GMarzinotto/followers",
"following_url": "https://api.github.com/users/GMarzinotto/following{/other_user}",
"gists_url": "https://api.github.com/users/GMarzinotto/gists{/gist_id}",
"starred_url": "https://api.github.com/users/GMarzinotto/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GMarzinotto/subscriptions",
"organizations_url": "https://api.github.com/users/GMarzinotto/orgs",
"repos_url": "https://api.github.com/users/GMarzinotto/repos",
"events_url": "https://api.github.com/users/GMarzinotto/events{/privacy}",
"received_events_url": "https://api.github.com/users/GMarzinotto/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Did you look into just calling `save_pretrained()` on your CamembertTokenizer (and not include it inside your `MyModelCamembert`)?",
"No I did not try that because my model class is quite a big class that extends `nn.Module` and not `PreTrainedModel`. \r\nI'm just surprised that saving the model it works for Bert but fails for Camembert",
"Indeed, there was the state management lacking in the CamemBERT tokenizer, so it couldn't be pickled. It should have been fixed with 908230d.",
"Great ! So I'll just wait for the next release, Thanks ! :))",
"You can also install from source using `pip install git+https://github.com/huggingface/transformers` if you want to work with it now"
] | 1,579 | 1,579 | 1,579 | NONE | null | ## π Bug
<!-- Important information -->
Model I am using (Bert, XLNet....): Camembert
Language I am using the model on (English, Chinese....): French
The problem arise when using my own modified scripts:
I have a nn.Module and, within this module, I store the tokenizers
I can normally save these tokenizers easily, but CamemberTokenizer
gives me a **TypeError: can't pickle SwigPyObject objects**
The tasks I am working on consists in creating a model and saving it using torch.save()
## To Reproduce
```
import torch
from transformers import CamembertTokenizer, BertTokenizer
class MyModelCamembert(torch.nn.Module):
def __init__(self):
super().__init__()
self.cheese = CamembertTokenizer.from_pretrained('camembert-base')
def forward(self, x):
return 1
class MyModelBert(torch.nn.Module):
def __init__(self):
super().__init__()
self.cheese = BertTokenizer.from_pretrained('bert-base')
def forward(self, x):
return 1
# with bert it works
no_cheese = MyModelBert()
torch.save(no_cheese, "~/bert.pkl")
# with camembert it doesn't
cheese = MyModelCamembert()
torch.save(cheese, "~/camembert.pkl")
```
Steps to reproduce the behavior:
1. Try to save a module containing a tokenizer using the torch.save()
2. Find out it works for Bert but not for Camembert
<!-- If you have a code sample, error messages, stack traces, please provide it here as well. -->
Stack Trace when saving CamembertTokenizer
Traceback (most recent call last):
File "/home/swqh0332/Desktop/blablapy.py", line 28, in <module>
torch.save(cheese, "/home/swqh0332/camembert.pkl")
File "/home/swqh0332/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/serialization.py", line 260, in save
return _with_file_like(f, "wb", lambda f: _save(obj, f, pickle_module, pickle_protocol))
File "/home/swqh0332/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/serialization.py", line 185, in _with_file_like
return body(f)
File "/home/swqh0332/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/serialization.py", line 260, in <lambda>
return _with_file_like(f, "wb", lambda f: _save(obj, f, pickle_module, pickle_protocol))
File "/home/swqh0332/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/serialization.py", line 332, in _save
pickler.dump(obj)
TypeError: can't pickle SwigPyObject objects
## Expected behavior
I would like to be able to pickle the CamembertTokenizer as I it is possible with the other models
## Environment
* OS: Ubuntu 18
* Python version: 3.6.9
* PyTorch version: 1.3.1
* PyTorch Transformers version (or branch): 2.2.2
* Using GPU ? No
* Distributed or parallel setup ? No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2631/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2631/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2630 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2630/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2630/comments | https://api.github.com/repos/huggingface/transformers/issues/2630/events | https://github.com/huggingface/transformers/issues/2630 | 554,735,095 | MDU6SXNzdWU1NTQ3MzUwOTU= | 2,630 | Pad token for GPT2 and OpenAIGPT models | {
"login": "dakshvar22",
"id": 8708249,
"node_id": "MDQ6VXNlcjg3MDgyNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/8708249?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dakshvar22",
"html_url": "https://github.com/dakshvar22",
"followers_url": "https://api.github.com/users/dakshvar22/followers",
"following_url": "https://api.github.com/users/dakshvar22/following{/other_user}",
"gists_url": "https://api.github.com/users/dakshvar22/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dakshvar22/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dakshvar22/subscriptions",
"organizations_url": "https://api.github.com/users/dakshvar22/orgs",
"repos_url": "https://api.github.com/users/dakshvar22/repos",
"events_url": "https://api.github.com/users/dakshvar22/events{/privacy}",
"received_events_url": "https://api.github.com/users/dakshvar22/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Padding tokens were not used during the pre-training of GPT and GPT-2, therefore they have none. It shouldn't matter as when doing padding, you should specify an [attention mask](https://huggingface.co/transformers/glossary.html#attention-mask) to your model so that it doesn't attend to padded indices, therefore ignoring the value of the token.",
"i got the same issues any advice?",
"Also look at issue #3021 \r\n\r\nWhat do you need the padding for? What is the use case? \r\nFor both models using an attention mask over all tokens that you be padded should help (as explained above). ",
"Yes, using the attention mask over all tokens should help. Thanks",
"> Padding tokens were not used during the pre-training of GPT and GPT-2, therefore they have none. It shouldn't matter as when doing padding, you should specify an [attention mask](https://huggingface.co/transformers/glossary.html#attention-mask) to your model so that it doesn't attend to padded indices, therefore ignoring the value of the token.\r\n\r\nI thought the same as your reply but my experiments shows this Attention mask does not work. \r\nSee my recent [issue](https://github.com/huggingface/transformers/issues/3167), where i provided reproducible code to see my point. \r\n",
"should the attention mask cover the labels as well?\r\n\r\nfor example i want to train \"some passage <break> some content <pad> <pad>\". \r\n\r\nso my input would be \"some passage <break>\", and my label would be \"some passage <break> some content <pad> <pad>\", in which the padding is necessary for batch processing.\r\n\r\nIn such a case how do I mask out the paddings in the labels?\r\n",
"Because GPT2 and GPT are causal LM you don't need to pad shorter sentences in batches. It is important though that the loss on these \"unnecessary\" tokens is not calculated. You should set all lables corresponding to \"PADDED\" tokens to `-100`. In the code snippet you can see in the `map_to_encoder_decoder_inputs` function how the `labels` are set to -100 for `attention_mask = 0`:\r\nhttps://huggingface.co/patrickvonplaten/bert2gpt2-cnn_dailymail-fp16#training-script",
"> Because GPT2 and GPT are causal LM you don't need to pad shorter sentences in batches. \r\n Why? The pad is to make up the length of the batch. Does this have anything to do with GPT2's causal model? ",
"> ValueError: Asking to pad but the tokenizer does not have a padding token. Please select a token to use as `pad_token` `(tokenizer.pad_token = tokenizer.eos_token e.g.)` or add a new pad token via `tokenizer.add_special_tokens({'pad_token': '[PAD]'})`.\r\n\r\nJust do `tokenizer.pad_token = tokenizer.eos_token`, and also set `tokenizer.padding_side = 'left'`.\r\nIt should work fine with batches. No need of `add_special_tokens`, otherwise the model embedding layer should be resized accordingly.",
"\r\n\r\n@ecolss Why do we set \r\n```bash\r\ntokenizer.padding_size = 'left'\r\n```\r\n. What is the problem if it stays as 'right' which is by default.\r\nThank you.",
"Specially when looking at these remarks: https://huggingface.co/docs/transformers/v4.30.0/en/model_doc/gpt2#transformers.GPT2Config\r\n\r\n```\r\nTips:\r\n\r\n- GPT-2 is a model with absolute position embeddings so itβs usually advised to pad the inputs on the right rather than the left.\r\n```",
"> > ValueError: Asking to pad but the tokenizer does not have a padding token. Please select a token to use as `pad_token` `(tokenizer.pad_token = tokenizer.eos_token e.g.)` or add a new pad token via `tokenizer.add_special_tokens({'pad_token': '[PAD]'})`.\r\n> \r\n> Just do `tokenizer.pad_token = tokenizer.eos_token`, and also set `tokenizer.padding_side = 'left'`. It should work fine with batches. No need of `add_special_tokens`, otherwise the model embedding layer should be resized accordingly.\r\n\r\nThis does not seem consistent with the document ? As mentioned by @roberth-plutoflume ",
"I am also facing this issue. Any update on it?\r\n```\r\nValueError: Asking to pad but the tokenizer does not have a padding token. Please select a token to use as `pad_token` `(tokenizer.pad_token = tokenizer.eos_token e.g.)` or add a new pad token via `tokenizer.add_special_tokens({'pad_token': '[PAD]'})`.\r\n```\r\n\r\nThis happens when following t[he official example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/speech-recognition/README.md#whisper-model) using GPT2 as a decoder (Warm Start of Speech Encoder Decoder Models) "
] | 1,579 | 1,705 | 1,583 | NONE | null | ## β Questions & Help
I noticed that out of all the models `pad_token` is not set for only `OpenAIGPTModel` and `GPT-2Model`.
I get a warning: `Using pad_token, but it is not set yet.` and `pad_token_id` is `None`
Is there any specific reason why is that so?
If not, what is the appropriate padding token to be used for these models?
Thanks
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2630/reactions",
"total_count": 15,
"+1": 15,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2630/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2629 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2629/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2629/comments | https://api.github.com/repos/huggingface/transformers/issues/2629/events | https://github.com/huggingface/transformers/issues/2629 | 554,688,657 | MDU6SXNzdWU1NTQ2ODg2NTc= | 2,629 | Question about Architecture of BERT for QA | {
"login": "ghk829",
"id": 18682286,
"node_id": "MDQ6VXNlcjE4NjgyMjg2",
"avatar_url": "https://avatars.githubusercontent.com/u/18682286?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghk829",
"html_url": "https://github.com/ghk829",
"followers_url": "https://api.github.com/users/ghk829/followers",
"following_url": "https://api.github.com/users/ghk829/following{/other_user}",
"gists_url": "https://api.github.com/users/ghk829/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghk829/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghk829/subscriptions",
"organizations_url": "https://api.github.com/users/ghk829/orgs",
"repos_url": "https://api.github.com/users/ghk829/repos",
"events_url": "https://api.github.com/users/ghk829/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghk829/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Please don't post screenshots. Use code tags instead and preferably post reproducible code.\r\n\r\nhttps://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,579 | 1,585 | 1,585 | NONE | null | ## β Questions & Help
I have a question about the architecture of Bert for QA.
In Bert forward function
``` python
class BertForQuestionAnswering(BertPreTrainedModel):
def __init__(self, config):
super(BertForQuestionAnswering, self).__init__(config)
self.num_labels = config.num_labels
self.bert = BertModel(config)
self.qa_outputs = nn.Linear(config.hidden_size, config.num_labels)
self.init_weights()
@add_start_docstrings_to_callable(BERT_INPUTS_DOCSTRING)
def forward(
self,
input_ids=None,
attention_mask=None,
token_type_ids=None,
position_ids=None,
head_mask=None,
inputs_embeds=None,
start_positions=None,
end_positions=None,
):
outputs = self.bert(
input_ids,
attention_mask=attention_mask,
token_type_ids=token_type_ids,
position_ids=position_ids,
head_mask=head_mask,
inputs_embeds=inputs_embeds,
)
sequence_output = outputs[0]
logits = self.qa_outputs(sequence_output) # The line I don't understand
start_logits, end_logits = logits.split(1, dim=-1)
start_logits = start_logits.squeeze(-1)
end_logits = end_logits.squeeze(-1)
outputs = (start_logits, end_logits,) + outputs[2:]
if start_positions is not None and end_positions is not None:
# If we are on multi-GPU, split add a dimension
if len(start_positions.size()) > 1:
start_positions = start_positions.squeeze(-1)
if len(end_positions.size()) > 1:
end_positions = end_positions.squeeze(-1)
# sometimes the start/end positions are outside our model inputs, we ignore these terms
ignored_index = start_logits.size(1)
start_positions.clamp_(0, ignored_index)
end_positions.clamp_(0, ignored_index)
loss_fct = CrossEntropyLoss(ignore_index=ignored_index)
start_loss = loss_fct(start_logits, start_positions)
end_loss = loss_fct(end_logits, end_positions)
total_loss = (start_loss + end_loss) / 2
outputs = (total_loss,) + outputs
return outputs # (loss), start_logits, end_logits, (hidden_states), (attentions)
```
I think logits are from linear layer (this is from bert output)
And start_loss and end_loss is calculated by the logits ( just splited by 2)
But, I read BERT article, But It describes

It looks like the model have to use only spans of the paragraph in last layer.
But, I can't get it how the model can know where's start/end span is?
So can you explain it?
It will be really helpful to me if you answer it. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2629/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2629/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2628 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2628/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2628/comments | https://api.github.com/repos/huggingface/transformers/issues/2628/events | https://github.com/huggingface/transformers/issues/2628 | 554,675,763 | MDU6SXNzdWU1NTQ2NzU3NjM= | 2,628 | Albert on QQP inference | {
"login": "search4mahesh",
"id": 4182331,
"node_id": "MDQ6VXNlcjQxODIzMzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/4182331?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/search4mahesh",
"html_url": "https://github.com/search4mahesh",
"followers_url": "https://api.github.com/users/search4mahesh/followers",
"following_url": "https://api.github.com/users/search4mahesh/following{/other_user}",
"gists_url": "https://api.github.com/users/search4mahesh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/search4mahesh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/search4mahesh/subscriptions",
"organizations_url": "https://api.github.com/users/search4mahesh/orgs",
"repos_url": "https://api.github.com/users/search4mahesh/repos",
"events_url": "https://api.github.com/users/search4mahesh/events{/privacy}",
"received_events_url": "https://api.github.com/users/search4mahesh/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,579 | 1,585 | 1,585 | NONE | null | While using Albert model trained on QQP data, i am using following code for inference.
How to manage two sentences and two labels (0,1) like QQP?
ffrom transformers import AlbertTokenizer, AlbertForSequenceClassification
import torch
tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2')
model = AlbertForSequenceClassification.from_pretrained('albert-base-v2')
input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute")).unsqueeze(0) # Batch size 1
labels = torch.tensor([1]).unsqueeze(0) # Batch size 1
outputs = model(input_ids, labels=labels)
loss, logits = outputs[:2] | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2628/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2628/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2627 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2627/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2627/comments | https://api.github.com/repos/huggingface/transformers/issues/2627/events | https://github.com/huggingface/transformers/issues/2627 | 554,470,324 | MDU6SXNzdWU1NTQ0NzAzMjQ= | 2,627 | Why does the hidden state of the same input token change every time I call the same GPT2 model? | {
"login": "h56cho",
"id": 52889259,
"node_id": "MDQ6VXNlcjUyODg5MjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/52889259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/h56cho",
"html_url": "https://github.com/h56cho",
"followers_url": "https://api.github.com/users/h56cho/followers",
"following_url": "https://api.github.com/users/h56cho/following{/other_user}",
"gists_url": "https://api.github.com/users/h56cho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/h56cho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/h56cho/subscriptions",
"organizations_url": "https://api.github.com/users/h56cho/orgs",
"repos_url": "https://api.github.com/users/h56cho/repos",
"events_url": "https://api.github.com/users/h56cho/events{/privacy}",
"received_events_url": "https://api.github.com/users/h56cho/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello,\r\n\r\nThe hidden state vectors doesn't seem to change with fixed input and token when I use the Hugging Face pre-trained GPT2 model, but in my case, I made and trained my own GPT2 model by doing the following:\r\n```python\r\n\r\nbptt = 1024\r\nbatch_size = 1\r\nlog_int = 50\r\nnlayer = 6\r\n\r\n# Define device\r\ndevice = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\r\n\r\ngc.set_threshold(700, 10, 10)\r\n\r\n# define the English text field\r\nTEXT_ch2 = Field(init_token = '<sos>',\r\n eos_token = '<eos>',\r\n unk_token = '<unk>',\r\n pad_token = '<pad>',\r\n fix_length = bptt,\r\n lower = True)\r\n\r\n# split the PennTreeBank corpus into a train, val, and test set.\r\ntrain_penn, val_penn, test_penn = torchtext.datasets.PennTreebank.splits(TEXT_ch2)\r\n\r\n# initialize new_train_penn\r\nnew_train_penn = train_penn\r\n\r\n# build vocabulary based on the field that we just defined.\r\n# (building vocabulary over all language datasets)\r\nTEXT_ch2.build_vocab(new_train_penn, val_penn, test_penn,\r\n specials=['<sos>','<eos>','<unk>','<pad>','<mask>','<mcoption>','<question>'])\r\n\r\n# define special token indices\r\nmask_index_ch2 = TEXT_ch2.vocab.stoi['<mask>']\r\npad_index_ch2 = TEXT_ch2.vocab.stoi['<pad>']\r\nmcoption_index_ch2 = TEXT_ch2.vocab.stoi['<mcoption>']\r\nquestion_index_ch2 = TEXT_ch2.vocab.stoi['<question>']\r\neos_index_ch2 = TEXT_ch2.vocab.stoi['<eos>']\r\nsos_index_ch2 = TEXT_ch2.vocab.stoi['<sos>']\r\nunk_index_ch2 = TEXT_ch2.vocab.stoi['<unk>']\r\n\r\n# set hyperparameter ntokens\r\nntokens = len(TEXT_ch2.vocab.stoi)\r\n\r\n## define GPT-2 configuration.\r\nGPT2config_ch2 = GPT2Config(vocab_size_or_config_json_file = ntokens,\r\n cutoffs = [20000, 40000, 200000], \r\n n_positions = 1024, \r\n n_embd = 768, \r\n n_head = 12, \r\n n_layer = nlayer,\r\n resid_pdrop = 0.1,\r\n embd_pdrop = 0.1,\r\n attn_pdrop = 0.1,\r\n output_hidden_states = True,\r\n output_attentions = True)\r\n\r\n# define the GPT-2 model based on the specifiTVD configuration.\r\nmodel_ch2 = GPT2DoubleHeadsModel(GPT2config_ch2)\r\n\r\n# add new tokens to the embeddings of our model\r\nmodel_ch2.resize_token_embeddings(ntokens)\r\n\r\n\r\ndef train_lm_head(model, train_iter, optimizer, scheduler, log_interval, pad_index):\r\n\r\n # turn on a training mode\r\n model.train()\r\n \r\n # initialize total_loss to 0\r\n total_loss = 0\r\n \r\n # list(enumerate(train_penn_iter))[0][1] would extract the 1st batch\r\n for batch_index, batch in enumerate(train_iter):\r\n \r\n gc.collect()\r\n \r\n input_ids = [instance for instance in batch.text]\r\n \r\n ## NOTE: Positions embeddings can be automatically created by the GPT2DoubleHeadsModel as (0, 1, ..., N)\r\n \r\n # set the gradient back to 0 (necessary step)\r\n optimizer.zero_grad() \r\n \r\n input_ids = torch.tensor([input_ids], dtype=torch.long)\r\n \r\n loss = model(input_ids, lm_labels = input_ids)[0]\r\n # 'loss' here is the cross entropy.\r\n # recall: 'input_ids' is defined above.\r\n\r\n # calculate gradient by backwarding the loss\r\n # calculate gradient of the loss w.r.t weights\r\n loss.backward()\r\n \r\n # clips norm of the gradient of an iterable of parameters.\r\n # The norm is computed over all gradients together, as if they were\r\n # concatenated into a single vector. Gradients are modified in-place.\r\n # so basically just normalizes the gradients and returns them.\r\n torch.nn.utils.clip_grad_norm_(model.parameters(), 0.5)\r\n \r\n optimizer.step() # update the weights by following the constLinearSchedule for the lr.\r\n \r\n # update the with the calculated loss \r\n total_loss = total_loss + loss \r\n \r\n # python format: 's' for string, 'd' to display decimal integers (10-base), and 'f' for floats.\r\n # ex: print(\"Sammy ate {0:.3f} percent of a pizza!\".format(75.765367))\r\n # >> Sammy ate 75.765 percent of a pizza!\r\n # print(\"Sammy ate {0:f} percent of a {1}!\".format(75, \"pizza\"))\r\n # >> Sammy ate 75.000000 percent of a pizza! \r\n #\r\n # Below is good enough since we are doing the Stochastic Gradient Descent.\r\n # (i.e. 1 batch = 1 sample)\r\n\r\n if batch_index % log_interval == 0 and batch_index > 0:\r\n cur_loss = total_loss / log_interval\r\n print('| epoch {:3d} | {:5d}/{:5d} batches | lr {:02.9f} | loss {:5.4f} | ppl {:8.4f}'.format(\r\n epoch, batch_index, len(train_iter), scheduler.get_lr()[0], cur_loss, math.exp(cur_loss)))\r\n \r\n total_loss = 0 \r\n \r\n del input_ids\r\n del loss\r\n gc.collect() \r\n \r\n\r\n# evaluate (Apply the best model) to check the result with the validation dataset.\r\ndef evaluate_lm_head(model, val_iter, pad_index):\r\n model.eval() # Turn on the evaluation mode\r\n total_loss = 0.\r\n with torch.no_grad():\r\n \r\n for batch_index, batch in enumerate(val_iter):\r\n \r\n gc.collect()\r\n \r\n val_input_ids = [instance for instance in batch.text]\r\n val_input_ids = torch.tensor([val_input_ids], dtype=torch.long)\r\n \r\n ## NOTE: Positions embeddings can be automatically created by the GPT2DoubleHeadsModel as (0, 1, ..., N)\r\n loss = model(val_input_ids, lm_labels = val_input_ids)[0]\r\n total_loss = total_loss + loss\r\n \r\n del val_input_ids\r\n del loss\r\n gc.collect()\r\n \r\n return total_loss / (len(val_iter) - 1)\r\n\r\n\r\n# loop over epoch to find the best model (the best GPT2 language model based on pennTreeBank) \r\noptimizer_ch2 = AdamW(model_ch2.parameters(), lr = 0.00000485, correct_bias = True)\r\n\r\nscheduler_ch2 = get_constant_schedule(optimizer = optimizer_ch2, last_epoch = -1)\r\n\r\n\r\nbest_val_loss = float(\"inf\")\r\nepochs = 5 # The total number of epochs ... since the treebank is reasonably large-scale, 5 epoch (>1) is likely to be enough\r\n # see: https://stackoverflow.com/questions/38000189/is-it-ok-to-only-use-one-epoch\r\n\r\n# initialize best_model_ch2_penn to None\r\nbest_model_ch2_penn = None\r\n\r\nfor epoch in range(1, epochs + 1):\r\n \r\n gc.collect()\r\n \r\n epoch_start_time = time.time()\r\n \r\n # again, log_interval = 1 for Stochastic Gradient Descent\r\n train_lm_head(model_ch2, train_penn_iter, \r\n optimizer_ch2, scheduler_ch2, \r\n log_int, pad_index_ch2)\r\n\r\n val_loss = evaluate_lm_head(model_ch2, val_penn_iter, \r\n pad_index_ch2)\r\n \r\n print('-' * 89)\r\n print('| end of epoch {:3d} | time: {:5.2f}s | valid loss {:5.4f} | '\r\n 'valid ppl {:8.4f}'.format(epoch, (time.time() - epoch_start_time),\r\n val_loss, math.exp(val_loss)))\r\n print('-' * 89)\r\n\r\n if val_loss < best_val_loss:\r\n best_val_loss = val_loss\r\n best_model = model_ch2\r\n \r\n gc.collect()\r\n \r\n scheduler_ch2.step() # update the learning rate\r\n```\r\n\r\nWhen I use the ```best_model``` that I obtain from this train function, and pass in the same input, the hidden state of the last token keeps changing each time I compute it. How can I prevent this?\r\n\r\nWould saving the ```best_model``` as pre-trained model and re-loading it prevent the hidden state from changing? If so, what is the code to save and re-load the ```best_model``` as a pre-trained model? I am having a hard time following the documentation, as I am just a beginner.\r\n\r\nThank you,",
"This is too much code for me to debug now. But generally, inconsistent inference is caused by not setting your model to evaluation mode. Do `model.eval()` before retrieving your vector. This will disable dropout/norm (and dropout is pseudorandom, so that may cause inconsistent results).",
"Thank you! This solved my problem. Is it necessary to include ```model.eval()``` before retrieving loss to update the weights in my ```train()``` function? or should I NOT use ```model.eval()``` in my ```train()``` function, because the dropout and the norm needs to be applied during the training (which I am not so sure on)?\r\n\r\nThank you,",
"This is more a \"deep learning with PyTorch\" question than a transformers question, so I'll be brief. If you have more question, please ask the question on Stack Overflow.\r\n\r\n`.eval()` is used when you are **not** training, i.e. when you wish to get deterministic values from your model. This is typically done during _evaluation_ and _testing_. When you are training, though, you want those things such as dropout because it has been shown that they are beneficial for the training process (e.g. combat overfitting). To ensure that the model is using dropout etc. you should put in back into training mode (in contrast to evaluating mode) by setting `model.train()`.\r\n\r\nIn addition to `eval()` vs `.train()`, there is also the grad vs no_grad difference. During training, weights `require_grad`, which tells PyTorch that gradients need to be calculated for those parameters. As you can imagine, that is a computationally expensive step, which we don't need during testing/evaluating. So we can disable gradient calculation with a context manager `torch.no_grad()`.\r\n\r\nSo, in practice your code could look something like this (but it might look different, or you might use steps instead of epochs, etc.). (Note, this is pseudo code.)\r\n\r\n```python\r\nfor epoch in range(n_epochs):\r\n # train\r\n model.train()\r\n for batch in train_loader:\r\n out = model(batch)\r\n ...\r\n # evaluate\r\n model.eval()\r\n with torch.no_grad():\r\n for batch in eval_loader:\r\n out = model(batch)\r\n ...\r\n...\r\n# test\r\nmodel.eval()\r\nwith torch.no_grad():\r\n for batch in test_loader:\r\n test = model(batch)\r\n ...\r\n```\r\n\r\nAgain, if you have more detailed questions concerning, please ask them on Stack Overflow. ",
"Thank you for all your help, I appreciate it!"
] | 1,579 | 1,579 | 1,579 | NONE | null | Hello,
Say I fixed my input to the GPT2 model:
```python
input_ids = test_i[:,0]
input_ids = torch.tensor(input_ids.tolist()).unsqueeze(0)
```
Then I try to retrieve the hidden state vector of the last token:
```python
tst_hidden_states = best_model(input_ids)[3][1][0, (test_i.size()[0] - 1), :].detach()
tst_hidden_states[0:5]
>>>tensor([-0.0146, 0.0718, -0.0297, -0.0000, -0.0315])
```
but when I repeat the above process with the exactly same input, the hidden state of the last token keeps changing:
```python
tst_hidden_states = best_model(input_ids)[3][1][0, (test_i.size()[0] - 1), :].detach()
tst_hidden_states[0:5]
>>> tensor([-0.0146, 0.0000, -0.0297, -0.0212, -0.0315])
```
Given that I didn't change the model, I don't understand why the hidden state of the same input and the same token keeps changing at each turn. How can I prevent the hidden state from changing?
Thank you,
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2627/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2627/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2626 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2626/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2626/comments | https://api.github.com/repos/huggingface/transformers/issues/2626/events | https://github.com/huggingface/transformers/issues/2626 | 554,437,093 | MDU6SXNzdWU1NTQ0MzcwOTM= | 2,626 | BertModel output the same embedding during Evaluation | {
"login": "nimning",
"id": 7147016,
"node_id": "MDQ6VXNlcjcxNDcwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/7147016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nimning",
"html_url": "https://github.com/nimning",
"followers_url": "https://api.github.com/users/nimning/followers",
"following_url": "https://api.github.com/users/nimning/following{/other_user}",
"gists_url": "https://api.github.com/users/nimning/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nimning/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nimning/subscriptions",
"organizations_url": "https://api.github.com/users/nimning/orgs",
"repos_url": "https://api.github.com/users/nimning/repos",
"events_url": "https://api.github.com/users/nimning/events{/privacy}",
"received_events_url": "https://api.github.com/users/nimning/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Wherees the `forward()` function of your `BertTextEncoderFactory(nn.Module)`?",
"It problem is caused by the data.",
"@nimning hi, i got stuck on the same issue exactly the same as you mentioned, cloud you please tell me how did you solve this problem"
] | 1,579 | 1,594 | 1,581 | NONE | null | ## β Questions & Help
During evaluation, my text model outputs the same embedding regardless of the token id. The following is my model.
```
class BertTextEncoderFactory(nn.Module):
def __init__(self, embedding_dim = 256, model_name_or_path = None, backbone ='bert'):
super(BertTextEncoderFactory, self).__init__()
if (backbone == 'bert'):
self.encoder = BertForRetrival(embedding_dim, 'bert-base-uncased')
```
```
class BertForRetrival(nn.Module):
def __init__(self, single_embedding_dim = 256, model_name_or_path = 'bert-base-uncased'):
super(BertForRetrival, self).__init__()
self.config = BertConfig.from_pretrained(model_name_or_path)
self.bert = BertModel.from_pretrained(model_name_or_path, config=self.config)
self.single_embedding_dim = single_embedding_dim
self.dropout = nn.Dropout(self.config.hidden_dropout_prob)
self.embedding_layer = nn.Sequential(nn.Linear(self.config.hidden_size, self.config.hidden_size),
nn.LeakyReLU(),
self.dropout,
nn.Linear(self.config.hidden_size, int(self.config.hidden_size / 2)),
nn.LeakyReLU(),
self.dropout,
nn.Linear(int(self.config.hidden_size / 2), self.single_embedding_dim),
nn.ReLU())
self.init_weights(self.embedding_layer)
def init_weights(self, module):
for m in module.modules():
if type(m) == nn.Linear:
torch.nn.init.xavier_uniform_(m.weight)
m.bias.data.fill_(0.001)
def forward(self, input_ids, attention_mask=None, token_type_ids=None):
outputs = self.bert(input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids)
print(len(outputs))
print(outputs[0].size())
print(outputs)
first_token_tensor = outputs[0][:,0]
pooled_output = outputs[1]
pooled_output = self.dropout(pooled_output)
single_embedding = self.embedding_layer(pooled_output)
return single_embedding
```
Model initialization
`text_model = BertTextEncoderFactory(embedding_dim = 256, model_name_or_path = None, backbone = 'bert') `
After I finetuned the model, I save it to a checkpoint.
`torch.save(text_model, './pretrainedcheckpoint/checkpoint.pth.tar')`
During evaluation, I first initialize the model in the following way
`text_model = BertTextEncoderFactory(embedding_dim = 256, model_name_or_path = None, backbone = 'bert') `
The output of sample ids is the following. So far so good.
```
text_model.eval()
ids = torch.tensor([[101, 14378, 102]], dtype=torch.long)
text_model(ids, None, None)
```
Ouput is the following
```
2
torch.Size([1, 3, 768])
tensor([[[-0.6077, 0.1454, -0.1540, ..., 0.0763, 0.5157, 0.4968],
[ 0.7466, -0.3633, -0.0637, ..., 0.0403, 0.5987, 0.2889],
[ 0.9683, 0.0883, -0.3452, ..., 0.2865, -0.6153, -0.1851]]],
grad_fn=<NativeLayerNormBackward>)
```
Then, I load the model with trained weights.
```
import os
resume = './pretrainedcheckpoint/checkpoint.pth.tar'
if os.path.isfile(resume):
checkpoint = torch.load(resume)
text_model.load_state_dict(checkpoint)
```
`<All keys matched successfully>`
Now, given the same token id sequences, the model output three exactly same embedding.
```
text_model.eval()
ids = torch.tensor([[101, 14378, 102]], dtype=torch.long)
text_model(ids, None, None)
```
```
2
torch.Size([1, 3, 768])
tensor([[[-0.3972, 0.2239, -0.3335, ..., -0.4338, 0.4992, -0.0618],
[-0.3972, 0.2239, -0.3335, ..., -0.4338, 0.4992, -0.0618],
[-0.3972, 0.2239, -0.3335, ..., -0.4338, 0.4992, -0.0618]]],
grad_fn=<NativeLayerNormBackward>)
```
As you can see, the embedding for the three tokens are all the same !!!!
```
tensor([[[-0.3972, 0.2239, -0.3335, ..., -0.4338, 0.4992, -0.0618],
[-0.3972, 0.2239, -0.3335, ..., -0.4338, 0.4992, -0.0618],
[-0.3972, 0.2239, -0.3335, ..., -0.4338, 0.4992, -0.0618]]],
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2626/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2626/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2625 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2625/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2625/comments | https://api.github.com/repos/huggingface/transformers/issues/2625/events | https://github.com/huggingface/transformers/issues/2625 | 554,419,478 | MDU6SXNzdWU1NTQ0MTk0Nzg= | 2,625 | Pipeline error when creating a model without a model card json file (on Windows) | {
"login": "AlecS12",
"id": 1517014,
"node_id": "MDQ6VXNlcjE1MTcwMTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1517014?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AlecS12",
"html_url": "https://github.com/AlecS12",
"followers_url": "https://api.github.com/users/AlecS12/followers",
"following_url": "https://api.github.com/users/AlecS12/following{/other_user}",
"gists_url": "https://api.github.com/users/AlecS12/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AlecS12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AlecS12/subscriptions",
"organizations_url": "https://api.github.com/users/AlecS12/orgs",
"repos_url": "https://api.github.com/users/AlecS12/repos",
"events_url": "https://api.github.com/users/AlecS12/events{/privacy}",
"received_events_url": "https://api.github.com/users/AlecS12/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1771187924,
"node_id": "MDU6TGFiZWwxNzcxMTg3OTI0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Pipeline",
"name": "Core: Pipeline",
"color": "FF7066",
"default": false,
"description": "Internals of the library; Pipeline."
}
] | closed | false | null | [] | [
"Please format your post correctly by using code blocks. https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,579 | 1,586 | 1,586 | NONE | null | ## π Bug
<!-- Important information -->
Model I am using (Bert, XLNet....):
Bert
Language I am using the model on (English, Chinese....):
English
The problem arise when using:
* [ ] the official example scripts: (give details)
* [x] my own modified scripts: (give details)
```python
import sys
from transformers import pipeline
if __name__ == '__main__':
print("start")
nlp_ft = pipeline('question-answering', model=r'C:\Users\a652726\PycharmProjects\src\data\raw\qa\wwm-bert-uncased-finetuned-squad',
tokenizer='bert-large-uncased')
```
Results in the ValueError: no modelcard.json file (which i do not have)
My fix (hack):
In modelcard.py replace (line 164):
```python
except EnvironmentError:
if pretrained_model_name_or_path in ALL_PRETRAINED_CONFIG_ARCHIVE_MAP:
```
with
```python
except (EnvironmentError, ValueError):
if pretrained_model_name_or_path in ALL_PRETRAINED_CONFIG_ARCHIVE_MAP:
```
This results in:
```python
logger.warning("Creating an empty model card.")
```
And everything works fine after this.
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x ] my own task or dataset: (give details)
Changing QA pipeline to work on a fixed set of spans (like multiple choice QA task, or classification)
## To Reproduce
Steps to reproduce the behavior:
1.
2.
3.
<!-- If you have a code sample, error messages, stack traces, please provide it here as well. -->
## Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
## Environment
* OS: Windows
* Python version: 3.7
* PyTorch version: 1.3
* PyTorch Transformers version (or branch): master (post 2.3)
* Using GPU ? no
* Distributed or parallel setup ? no
* Any other relevant information:
## Additional context
<!-- Add any other context about the problem here. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2625/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2625/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2624 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2624/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2624/comments | https://api.github.com/repos/huggingface/transformers/issues/2624/events | https://github.com/huggingface/transformers/issues/2624 | 554,415,371 | MDU6SXNzdWU1NTQ0MTUzNzE= | 2,624 | How to merge TFDistilBertForSequenceClassification with another tf.Keras model | {
"login": "amaiya",
"id": 47191980,
"node_id": "MDQ6VXNlcjQ3MTkxOTgw",
"avatar_url": "https://avatars.githubusercontent.com/u/47191980?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amaiya",
"html_url": "https://github.com/amaiya",
"followers_url": "https://api.github.com/users/amaiya/followers",
"following_url": "https://api.github.com/users/amaiya/following{/other_user}",
"gists_url": "https://api.github.com/users/amaiya/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amaiya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amaiya/subscriptions",
"organizations_url": "https://api.github.com/users/amaiya/orgs",
"repos_url": "https://api.github.com/users/amaiya/repos",
"events_url": "https://api.github.com/users/amaiya/events{/privacy}",
"received_events_url": "https://api.github.com/users/amaiya/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Wait, is this just the end? Am also interested in doing this",
"This [comment](https://github.com/huggingface/transformers/issues/4733#issuecomment-647414520) may help you?"
] | 1,579 | 1,593 | 1,585 | NONE | null | ## β Questions & Help
In TensorFlow 2, what is the recommended way to merge `TFDistilBertForSequenceClassification` (or any other Transformer model) with another `tf.keras` model?
In other words, I'd like to do something like this:
```
merged_out = keras.layers.concatenate([other_model.output, distilbert_model.output])
merged_out = layers.Dense(1)(merged_out)
combined_model = keras.Model([other_model.input] + distilbert_model.input, merged_out)
```
The above produces an error because `distilbert_model.output` is not accessible in the same way as vanilla tf.Keras models:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-48-b15ccf9c0221> in <module>()
----> 1 merged_out = keras.layers.concatenate([other_model.output, distilbert_model.output])
/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/base_layer.py in output(self)
1574 """
1575 if not self._inbound_nodes:
-> 1576 raise AttributeError('Layer ' + self.name + ' has no inbound nodes.')
1577 return self._get_node_attribute_at_index(0, 'output_tensors', 'output')
1578
AttributeError: Layer tf_distil_bert_for_sequence_classification has no inbound nodes.
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2624/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2624/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2623 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2623/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2623/comments | https://api.github.com/repos/huggingface/transformers/issues/2623/events | https://github.com/huggingface/transformers/issues/2623 | 554,370,095 | MDU6SXNzdWU1NTQzNzAwOTU= | 2,623 | QA pipeline run-time error when there is no answer | {
"login": "AlecS12",
"id": 1517014,
"node_id": "MDQ6VXNlcjE1MTcwMTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1517014?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AlecS12",
"html_url": "https://github.com/AlecS12",
"followers_url": "https://api.github.com/users/AlecS12/followers",
"following_url": "https://api.github.com/users/AlecS12/following{/other_user}",
"gists_url": "https://api.github.com/users/AlecS12/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AlecS12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AlecS12/subscriptions",
"organizations_url": "https://api.github.com/users/AlecS12/orgs",
"repos_url": "https://api.github.com/users/AlecS12/repos",
"events_url": "https://api.github.com/users/AlecS12/events{/privacy}",
"received_events_url": "https://api.github.com/users/AlecS12/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1771187924,
"node_id": "MDU6TGFiZWwxNzcxMTg3OTI0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Pipeline",
"name": "Core: Pipeline",
"color": "FF7066",
"default": false,
"description": "Internals of the library; Pipeline."
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,579 | 1,586 | 1,586 | NONE | null | ## π Bug
<!-- Important information -->
Model I am using (Bert, XLNet....):
Bert large uncased SQUAD /finetuned on SQUAD2.0 and my dataset
Language I am using the model on (English, Chinese....):
English
The problem arise when using:
* [x ] the official example scripts: (give details)
```python
from transformers import pipeline
nlp_ft = pipeline('question-answering', model='/data/bert/divorce_qa/wwm-bert-uncased-finetuned-squad', tokenizer='bert-large-uncased')
nlp_ft({
'question': "is it raining?",
'context': ''
})
```
Converting examples to features: 100%|ββββββββββ| 1/1 [00:00<00:00, 2486.25it/s]
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-13-c7eb59b211f0> in <module>
1 nlp_ft({
2 'question': "is it raining?",
----> 3 'context': ''
4 })
~/miniconda3/envs/nlp2/lib/python3.7/site-packages/transformers/pipelines.py in __call__(self, *texts, **kwargs)
657 # Retrieve the score for the context tokens only (removing question tokens)
658 fw_args = {k: torch.tensor(v) for (k, v) in fw_args.items()}
--> 659 start, end = self.model(**fw_args)
660 start, end = start.cpu().numpy(), end.cpu().numpy()
661
~/miniconda3/envs/nlp2/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
539 result = self._slow_forward(*input, **kwargs)
540 else:
--> 541 result = self.forward(*input, **kwargs)
542 for hook in self._forward_hooks.values():
543 hook_result = hook(self, input, result)
~/miniconda3/envs/nlp2/lib/python3.7/site-packages/transformers/modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, start_positions, end_positions)
1265 position_ids=position_ids,
1266 head_mask=head_mask,
-> 1267 inputs_embeds=inputs_embeds)
1268
1269 sequence_output = outputs[0]
~/miniconda3/envs/nlp2/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
539 result = self._slow_forward(*input, **kwargs)
540 else:
--> 541 result = self.forward(*input, **kwargs)
542 for hook in self._forward_hooks.values():
543 hook_result = hook(self, input, result)
~/miniconda3/envs/nlp2/lib/python3.7/site-packages/transformers/modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask)
687 extended_attention_mask = attention_mask[:, None, None, :]
688 else:
--> 689 raise ValueError("Wrong shape for input_ids (shape {}) or attention_mask (shape {})".format(input_shape, attention_mask.shape))
690
691 # Since attention_mask is 1.0 for positions we want to attend and 0.0 for
ValueError: Wrong shape for input_ids (shape torch.Size([0])) or attention_mask (shape torch.Size([0]))
Another case (with topk):
```python
nlp_ft({
'question': "met with client to discuss her house.",
'context': 'snow'
}, topk=5)
```
Converting examples to features: 100%|ββββββββββ| 1/1 [00:00<00:00, 998.17it/s]
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-21-bdcc590f57c7> in <module>
2 'question': "met with client to discuss her house.",
3 'context': 'snow'
----> 4 }, topk=5)
~/miniconda3/envs/nlp2/lib/python3.7/site-packages/transformers/pipelines.py in __call__(self, *texts, **kwargs)
684 'answer': ' '.join(example.doc_tokens[feature.token_to_orig_map[s]:feature.token_to_orig_map[e] + 1])
685 }
--> 686 for s, e, score in zip(starts, ends, scores)
687 ]
688 if len(answers) == 1:
~/miniconda3/envs/nlp2/lib/python3.7/site-packages/transformers/pipelines.py in <listcomp>(.0)
684 'answer': ' '.join(example.doc_tokens[feature.token_to_orig_map[s]:feature.token_to_orig_map[e] + 1])
685 }
--> 686 for s, e, score in zip(starts, ends, scores)
687 ]
688 if len(answers) == 1:
KeyError: 255
Is SQUAD2.0 no answer (the span is the CLS token) implemented?
A more difficult problem:
The same span 0:42 appears in the answers twice with different probabilities:
```python
nlp_ft({
'question': 'not divorcing?',
'context': 'My son loves ice cream. I hired a lawyer. The weather is beautiful'
}, topk=20)
```
Converting examples to features: 100%|ββββββββββ| 1/1 [00:00<00:00, 608.13it/s]
[{'score': 6.8867944232487335e-15,
'start': 24,
'end': 42,
'answer': 'I hired a lawyer.'},
{'score': 5.837555144012473e-15,
'start': 0,
'end': 42,
'answer': 'My son loves ice cream. I hired a lawyer.'},
{'score': 5.104124911876544e-15,
'start': 26,
'end': 42,
'answer': 'hired a lawyer.'},
{'score': 4.480651773242456e-15,
'start': 0,
'end': 23,
'answer': 'My son loves ice cream.'},
{'score': 3.9003458779199145e-15,
'start': 34,
'end': 42,
'answer': 'lawyer.'},
{'score': 3.683094227838328e-15,
'start': 24,
'end': 66,
'answer': 'I hired a lawyer. The weather is beautiful'},
{'score': 3.637585671341453e-15, 'start': 0, 'end': 6, 'answer': 'My son'},
{'score': 3.4290555803043746e-15,
'start': 3,
'end': 42,
'answer': 'son loves ice cream. I hired a lawyer.'},
{'score': 3.1219554896279132e-15,
'start': 0,
'end': 66,
'answer': 'My son loves ice cream. I hired a lawyer. The weather is beautiful'},
{'score': 2.963547353528702e-15,
'start': 32,
'end': 42,
'answer': 'a lawyer.'},
{'score': 2.8398875189274633e-15,
'start': 43,
'end': 66,
'answer': 'The weather is beautiful'},
{'score': 2.7297131068172943e-15,
'start': 26,
'end': 66,
'answer': 'hired a lawyer. The weather is beautiful'},
{'score': 2.631992946944041e-15,
'start': 3,
'end': 23,
'answer': 'son loves ice cream.'},
{'score': 2.4540475046027306e-15,
'start': 17,
'end': 42,
'answer': 'cream. I hired a lawyer.'},
{'score': 2.3835331682965698e-15,
'start': 24,
'end': 42,
'answer': 'I hired a lawyer.'},
{'score': 2.351548698805639e-15,
'start': 7,
'end': 42,
'answer': 'loves ice cream. I hired a lawyer.'},
{'score': 2.136764987640859e-15, 'start': 3, 'end': 6, 'answer': 'son'},
{'score': 2.0859256871447656e-15,
'start': 34,
'end': 66,
'answer': 'lawyer. The weather is beautiful'},
{'score': 2.0203893789166256e-15,
'start': 0,
'end': 42,
'answer': 'My son loves ice cream. I hired a lawyer.'},
{'score': 1.9864633437892544e-15, 'start': 24, 'end': 25, 'answer': 'I'}]
* [ ] my own modified scripts: (give details)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details)
## To Reproduce
Steps to reproduce the behavior:
1.
2.
3.
<!-- If you have a code sample, error messages, stack traces, please provide it here as well. -->
## Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
## Environment
* OS: Ubuntu 18.04.3 LTS
* Python version: 3.7.4
* PyTorch version: 1.3
* PyTorch Transformers version (or branch): master 2.3 (or later)
* Using GPU ? no
* Distributed or parallel setup ? no
* Any other relevant information:
## Additional context
<!-- Add any other context about the problem here. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2623/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2623/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2622 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2622/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2622/comments | https://api.github.com/repos/huggingface/transformers/issues/2622/events | https://github.com/huggingface/transformers/issues/2622 | 554,308,133 | MDU6SXNzdWU1NTQzMDgxMzM= | 2,622 | tokenizer.add_tokens not working | {
"login": "abhishek-jha13",
"id": 20038395,
"node_id": "MDQ6VXNlcjIwMDM4Mzk1",
"avatar_url": "https://avatars.githubusercontent.com/u/20038395?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhishek-jha13",
"html_url": "https://github.com/abhishek-jha13",
"followers_url": "https://api.github.com/users/abhishek-jha13/followers",
"following_url": "https://api.github.com/users/abhishek-jha13/following{/other_user}",
"gists_url": "https://api.github.com/users/abhishek-jha13/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abhishek-jha13/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhishek-jha13/subscriptions",
"organizations_url": "https://api.github.com/users/abhishek-jha13/orgs",
"repos_url": "https://api.github.com/users/abhishek-jha13/repos",
"events_url": "https://api.github.com/users/abhishek-jha13/events{/privacy}",
"received_events_url": "https://api.github.com/users/abhishek-jha13/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I believe this was fixed recently. Could you please try installing from source `pip install git+https://github.com/huggingface/transformers` and let me know if it fixes the bug?",
"I executed the above command and it worked.\r\nThanks",
"Glad it worked.",
"I found a similar bug even with the latest version built from source. \r\nAfter adding new tokens, if I use `len(tokenizer)`, I can see that the total number of tokens has increased. However, if I use `tokenizer.vocab_size`, the size was still the number before adding new tokens. If I save the vocab using `tokenizer.save_vocabulary(\"./\")`, the generated vocab.json file does not contain the new added tokens. \r\n",
"Hello! This is not an error. Your added tokens are in `added_tokens.json`.",
"> Hello! This is not an error. Your added tokens are in `added_tokens.json`.\r\n\r\nThanks for your reply! However, when I save the RobtertaTokenizer, there are only vocab.json and merge.txt. I can't find the file added_tokens.json.",
"Which version of transformers are you using? In the latest version:\r\n\r\n```py\r\n>>> from transformers import RobertaTokenizer\r\n>>> tok = RobertaTokenizer.from_pretrained(\"roberta-base\")\r\n>>> tok.add_tokens([\"lingjzhu\", \"LysandreJik\"])\r\n2\r\n>>> tok.save_pretrained(\"here\")\r\n('here/vocab.json', 'here/merges.txt', 'here/special_tokens_map.json', 'here/added_tokens.json')\r\n```\r\n\r\nWhen inspecting `here/added_tokens.json`:\r\n\r\n```\r\n{\"lingjzhu\": 50265, \"LysandreJik\": 50266}\r\n```",
"Thanks for your comments! I recompiled the package from the source and it is working now. I am sorry for the negligence!",
"I just want to add a comment about the tokenizer. The function `tokenizer.save_vocabulary()` will not save the added tokens even in the latest version. This was my original error. But `tokenizer.save_pretrained()` will solve the problem. ",
"@lingjzhu, that makes sense, `save_vocabulary` saves the vocabulary. The entire tokenizer (with the special tokens, with the added tokens, with the special added tokens) needs to be saved using `save_pretrained`, as you've said.\r\n\r\nThe difference is explicitely mentioned in the [documentation](https://huggingface.co/transformers/main_classes/tokenizer.html#transformers.PreTrainedTokenizer.save_pretrained).",
"> Which version of transformers are you using? In the latest version:\r\n> \r\n> ```python\r\n> >>> from transformers import RobertaTokenizer\r\n> >>> tok = RobertaTokenizer.from_pretrained(\"roberta-base\")\r\n> >>> tok.add_tokens([\"lingjzhu\", \"LysandreJik\"])\r\n> 2\r\n> >>> tok.save_pretrained(\"here\")\r\n> ('here/vocab.json', 'here/merges.txt', 'here/special_tokens_map.json', 'here/added_tokens.json')\r\n> ```\r\n> \r\n> When inspecting `here/added_tokens.json`:\r\n> \r\n> ```\r\n> {\"lingjzhu\": 50265, \"LysandreJik\": 50266}\r\n> ```\r\nHi, since I append the origin vocab.json according to the added_tokens.json file, and the vocab size and tokenizer length both added from 21128 to 21300. However, convert_tokens_to_ids() function seems that referenced the origin vocab.json with 21128 length, is there any solutions to use both origin and added tokens to apply the convert_tokens_to_ids() function?\r\n"
] | 1,579 | 1,636 | 1,579 | NONE | null | ## π Bug
<!-- Important information -->
I tried to add new tokens in vocabulary using tokenizer.add_tokens() and then called model() according to the code given in `BertForMaskedLM` class definition. The code is given below:
```
from transformers import BertForMaskedLM, BertTokenizer
import torch
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForMaskedLM.from_pretrained('bert-base-uncased')
tokenizer.add_tokens(['[SPECIAL_TOKEN_1]', '[SPECIAL_TOKEN_2]'])
model.resize_token_embeddings(len(tokenizer))
input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0) # Batch size 1
outputs = model(input_ids, masked_lm_labels=input_ids)
```
I get the following error:
`RuntimeError: The size of tensor a (30524) must match the size of tensor b (30522) at non-singleton dimension 2`
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2622/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2622/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2621 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2621/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2621/comments | https://api.github.com/repos/huggingface/transformers/issues/2621/events | https://github.com/huggingface/transformers/issues/2621 | 554,187,568 | MDU6SXNzdWU1NTQxODc1Njg= | 2,621 | Documentation markup for model descriptions | {
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi! Indeed there were quite a few issues with the documentation. #2532 was merged this morning, and hopefully fixes all these issues!\r\n\r\nWould love your feedback on the new documentation (be sure to refresh your cache to see the new doc on https://huggingface.co/transformers). ",
"Ah, sorry, didn't check the recent commits. Just checked a couple of items. Everything seems in order except for a small inconsistency in the tokenizers:\r\n\r\nSome (e.g. [openai](https://huggingface.co/transformers/model_doc/gpt.html#openaigpttokenizer)) put the first line (the one with 'peculiarities') in a highlighted block, while others (e.g. [XLNet](https://huggingface.co/transformers/model_doc/xlnet.html#xlnettokenizer)) don't.",
"Good catch! Up to now I've reworked the configuration + models + glossary. I've yet to do the tokenizers as well as the abstract classes, will work on them in the coming days.",
"If you've finalized that, feel free to close this issue through a commit. I'll have another look, then!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,579 | 1,585 | 1,585 | COLLABORATOR | null | ## π Bug
Looking at [the documentation](https://huggingface.co/transformers/model_doc/bert.html#bertforsequenceclassification), it seems something went wrong in markup land. In some models (but not all, e.g. BertModel, BertForMaskedLM, BertForNextSentencePrediction), the model description (i.e. the first paragraph) is split up in one highlighted line (grey background) and the rest of the text is regular. Seems like a formatting issue in the source code. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2621/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2621/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2620 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2620/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2620/comments | https://api.github.com/repos/huggingface/transformers/issues/2620/events | https://github.com/huggingface/transformers/issues/2620 | 554,184,245 | MDU6SXNzdWU1NTQxODQyNDU= | 2,620 | Document which heads are pretrained and which aren't | {
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I'm certain the random initialisation occurs when we instantiate the class (See `BertPreTrainedModel.init_weights()`)",
"> I'm certain the random initialisation occurs when we instantiate the class (See `BertPreTrainedModel.init_weights()`)\r\n\r\nYou're right. It gets a bit complicated to track down though.\r\n\r\nPretrainedModel implements `init_weights` which applies `self._init_weights` to all modules **but** there is no reference in that class to that method. You'll have to find it in the subclasses (e.g. BertPreTrainedModel). But for e.g. RoBERTa, it's of course not needed because RoBERTa extends BERT. It's a bit confusing to follow along - or at least it takes some time to get your head around.\r\n\r\nIt might be useful to implement the methods that are needed in subclasses in PreTrainedModel. Typically you'd see abstract methods for this, but something as simple as the following would also be nice.\r\n\r\n```python\r\ndef _init_weights(self):\r\n raise NotImplementedError('Please implement me')\r\n```\r\n\r\n(If and only if all subclasses have to implement it.)\r\n\r\nThis, of course, is not my main question here.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,579 | 1,585 | 1,585 | COLLABORATOR | null | ## π Feature
I was going through the documentation and I realised I never thought about the different heads in much detail (I always start from the base model and built on top of that). Now that I did, I wonder whether users (mistakenly?) assume that models such as `BertForQuestionAnswering` have a pretrained head. I am assuming that these heads are _not_ pretrained but that it is a convenience to have an architecture that can be finetuned on downstream tasks. If I am correct, it might be useful to highlight for these kind of models that the heads are not pretrained.
That being said, when I run
```python
model = BertForSequenceClassification.from_pretrained('bert-base-uncased')
model.eval()
torch.set_grad_enabled(False)
for name, parameters in model.named_parameters():
if 'classifier' in name:
print(name)
print(parameters)
```
I get the notice that "Weights of BertForSequenceClassification not initialized from pretrained model: ['classifier.weight', 'classifier.bias']", but still:
```
classifier.weight
Parameter containing:
tensor([[-0.0098, 0.0137, 0.0275, ..., -0.0221, -0.0190, 0.0156],
[ 0.0144, 0.0016, 0.0084, ..., 0.0055, 0.0221, -0.0145]],
requires_grad=True)
````
I've been peaking into the rabbit hole, but I can't seem to find where this random initialisation occurs.
If you agree, I can put some time in putting that in the documentation but I might need help or at least a review. Perhaps this can even be automated? That would be awesome. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2620/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2620/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2619 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2619/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2619/comments | https://api.github.com/repos/huggingface/transformers/issues/2619/events | https://github.com/huggingface/transformers/issues/2619 | 554,140,805 | MDU6SXNzdWU1NTQxNDA4MDU= | 2,619 | Adding scibert in the list of pre-trained models? | {
"login": "aCampello",
"id": 27929341,
"node_id": "MDQ6VXNlcjI3OTI5MzQx",
"avatar_url": "https://avatars.githubusercontent.com/u/27929341?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aCampello",
"html_url": "https://github.com/aCampello",
"followers_url": "https://api.github.com/users/aCampello/followers",
"following_url": "https://api.github.com/users/aCampello/following{/other_user}",
"gists_url": "https://api.github.com/users/aCampello/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aCampello/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aCampello/subscriptions",
"organizations_url": "https://api.github.com/users/aCampello/orgs",
"repos_url": "https://api.github.com/users/aCampello/repos",
"events_url": "https://api.github.com/users/aCampello/events{/privacy}",
"received_events_url": "https://api.github.com/users/aCampello/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"It would be nice if AllenAI uploaded their models to [the user hub](https://huggingface.co/models). That would allow you to simply load the models like `.from_pretrained('allenai/scibert-scivocab-uncased')`. Perhaps you can open an issue on their repository and ask whether that is possible. It might be too much work/maintenance for them, though.",
"Right, I missed that 2.2.2 update for model sharing (https://huggingface.co/transformers/model_sharing.html). So you're right, probably the best thing is for them to upload their model.",
"Might be best to close this issue here and keep everything in the issue that you created over at AllenAI.",
"Yeah, makes sense. Thanks."
] | 1,579 | 1,579 | 1,579 | NONE | null | # πNew model addition
## Model description
Would it be possible/is it in the pipeline to add SCIBERT as one of the pre-trained models for Bert? Could be as simple as adding it to the `BERT_PRETRAINED_MODEL_ARCHIVE_MAP`.
## Open Source status
Scibert is available on its own repository (https://github.com/allenai/scibert). The advantage of adding it to the map is that we don't need to download it ad-hoc every time we want to use it, it would be cached in the same repository, etc.
* [x] the model implementation is available:
* [x] the model weights are available:
* [x] who are the authors: Iz Beltagy and Kyle Lo and Arman Cohan authored the ALlenAI
## Additional context
<!-- Add any other context about the problem here. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2619/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2619/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2618 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2618/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2618/comments | https://api.github.com/repos/huggingface/transformers/issues/2618/events | https://github.com/huggingface/transformers/issues/2618 | 554,091,422 | MDU6SXNzdWU1NTQwOTE0MjI= | 2,618 | summarization codes | {
"login": "rabeehk",
"id": 6278280,
"node_id": "MDQ6VXNlcjYyNzgyODA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabeehk",
"html_url": "https://github.com/rabeehk",
"followers_url": "https://api.github.com/users/rabeehk/followers",
"following_url": "https://api.github.com/users/rabeehk/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions",
"organizations_url": "https://api.github.com/users/rabeehk/orgs",
"repos_url": "https://api.github.com/users/rabeehk/repos",
"events_url": "https://api.github.com/users/rabeehk/events{/privacy}",
"received_events_url": "https://api.github.com/users/rabeehk/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,579 | 1,585 | 1,585 | NONE | null | Hi
I greatly appreciate to add also possibilities to train the summarization codes from scratch. I see only evaluation part in the codes. Does this also work for training?
thanks a lot for your response.
Kind regards
Rabeeh | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2618/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2618/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2617 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2617/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2617/comments | https://api.github.com/repos/huggingface/transformers/issues/2617/events | https://github.com/huggingface/transformers/issues/2617 | 554,024,296 | MDU6SXNzdWU1NTQwMjQyOTY= | 2,617 | TF Models have no attribute .train() or .eval() | {
"login": "jamescolless",
"id": 24419199,
"node_id": "MDQ6VXNlcjI0NDE5MTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/24419199?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jamescolless",
"html_url": "https://github.com/jamescolless",
"followers_url": "https://api.github.com/users/jamescolless/followers",
"following_url": "https://api.github.com/users/jamescolless/following{/other_user}",
"gists_url": "https://api.github.com/users/jamescolless/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jamescolless/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jamescolless/subscriptions",
"organizations_url": "https://api.github.com/users/jamescolless/orgs",
"repos_url": "https://api.github.com/users/jamescolless/repos",
"events_url": "https://api.github.com/users/jamescolless/events{/privacy}",
"received_events_url": "https://api.github.com/users/jamescolless/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"`model.eval()` is a PyTorch directive. It will disable dropout/norm, as you point out. On top of that, though, you'd also set the `no_grad` parameter so that weights are not updated.\r\n\r\nTypically, your code'd look like this for inference/evaluation/testing.\r\n\r\n```python\r\nmodel.eval()\r\nwith torch.no_grad():\r\n # do stuff\r\n```\r\n\r\nI am not sure how this should be done in Tensorflow.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Is there a fix available for this because I am currently running a TFBertForSequenceClassification for classification and it gives non determinististic outcomes. I have written my code in tensorflow framework. Looks like the fix is placing model.eval() during evaluation operation as per https://github.com/google-research/bert/issues/583."
] | 1,579 | 1,610 | 1,590 | NONE | null | ## π Bug
Using any of the TF models I am unable to set the **.eval()** or **.train()** properties. In addition, when loading from a pre-trained path (which the documentation seems to imply would mean that the models will be set to eval mode) I see non deterministic outputs given the same input indicating the models do not have dropout turned off.
Basic example:
```
model = TFBert.from_pretrained('path_to_model_directory', from_pt=True)
model.eval() #### This errors with "TFBertModel object has no attribute 'eval'"
tokenizer = BertTokenizer.from_pretrained('path_to_model_directory')
inputs = tokenizer.encode_plus('Dummy text here.', return_tensors='tf')['input_ids']
print(model(inputs)) ## these outputs
print(model(inputs)) ## **will not** be the same
print(model(inputs, training=False)) ## these outputs
print(model(inputs, training=False)) ## **will** be the same
```
Any help would be greatly appreciated!
## Environment
* OS: Windows
* Python version: 3.6
* PyTorch version: 1.4
* PyTorch Transformers version (or branch): 2.3
* Using GPU ? Yes
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2617/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2617/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2616 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2616/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2616/comments | https://api.github.com/repos/huggingface/transformers/issues/2616/events | https://github.com/huggingface/transformers/issues/2616 | 553,844,777 | MDU6SXNzdWU1NTM4NDQ3Nzc= | 2,616 | Adaptive Attention Span for Transformers | {
"login": "djstrong",
"id": 1849959,
"node_id": "MDQ6VXNlcjE4NDk5NTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1849959?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/djstrong",
"html_url": "https://github.com/djstrong",
"followers_url": "https://api.github.com/users/djstrong/followers",
"following_url": "https://api.github.com/users/djstrong/following{/other_user}",
"gists_url": "https://api.github.com/users/djstrong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/djstrong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/djstrong/subscriptions",
"organizations_url": "https://api.github.com/users/djstrong/orgs",
"repos_url": "https://api.github.com/users/djstrong/repos",
"events_url": "https://api.github.com/users/djstrong/events{/privacy}",
"received_events_url": "https://api.github.com/users/djstrong/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Any clue about how to integrate that into a BERT model?"
] | 1,579 | 1,588 | 1,585 | NONE | null | # πNew model addition
## Model description
<!-- Important information -->
## Open Source status
* [x] the model implementation is available: https://github.com/facebookresearch/adaptive-span
* [x] the model weights are available: get_pretrained.sh
* [x] who are the authors: Facebook Research
## Additional context
No additional dependencies required.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2616/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2616/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2615 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2615/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2615/comments | https://api.github.com/repos/huggingface/transformers/issues/2615/events | https://github.com/huggingface/transformers/issues/2615 | 553,823,698 | MDU6SXNzdWU1NTM4MjM2OTg= | 2,615 | Question answering pipeline fails with long context | {
"login": "jswift24",
"id": 1891204,
"node_id": "MDQ6VXNlcjE4OTEyMDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1891204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jswift24",
"html_url": "https://github.com/jswift24",
"followers_url": "https://api.github.com/users/jswift24/followers",
"following_url": "https://api.github.com/users/jswift24/following{/other_user}",
"gists_url": "https://api.github.com/users/jswift24/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jswift24/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jswift24/subscriptions",
"organizations_url": "https://api.github.com/users/jswift24/orgs",
"repos_url": "https://api.github.com/users/jswift24/repos",
"events_url": "https://api.github.com/users/jswift24/events{/privacy}",
"received_events_url": "https://api.github.com/users/jswift24/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1771187924,
"node_id": "MDU6TGFiZWwxNzcxMTg3OTI0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Pipeline",
"name": "Core: Pipeline",
"color": "FF7066",
"default": false,
"description": "Internals of the library; Pipeline."
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I also have this issue",
"This seems to be fixed when limiting the batch size."
] | 1,579 | 1,594 | 1,585 | NONE | null | ## π Bug
<!-- Important information -->
Model I am using (Bert, XLNet....): Question Answering Pipeline / Distilbert
Language I am using the model on (English, Chinese....):
The problem arise when using:
* [x] the official example scripts: (give details): Based on the sample pipeline code form here: https://github.com/huggingface/transformers#quick-tour-of-pipelines
* [] my own modified scripts: (give details)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details)
## To Reproduce
I think I found a bug in the pipeline code. It fails when there's a long context in a list. See below:
```
from transformers import pipeline
nlp = pipeline('question-answering')
long_str = 'These are some irrelevant words. ' * 100
long_str = 'Pipeline have been included in the huggingface/transformers repository. ' + long_str
#Works
nlp(
{
'question': 'What is the name of the repository ?',
'context': 'Pipeline have been included in the huggingface/transformers repository. '
},
{
'question': 'What is the name of the repository ?',
'context': 'Pipeline have been included in the huggingface/transformers repository. '
}
)
#Long context by itself - works
nlp(
{
'question': 'What is the name of the repository ?',
'context': long_str
})
#Long context in a list - fails
nlp(
{
'question': 'What is the name of the repository ?',
'context': long_str
},
{
'question': 'What is the name of the repository ?',
'context': 'Pipeline have been included in the huggingface/transformers repository. '
}
)
```
Here's the error message:
```
Converting examples to features: 100%|ββββββββββ| 2/2 [00:00<00:00, 87.19it/s]
Traceback (most recent call last):
File "<ipython-input-3-e795fc7f26bf>", line 8, in <module>
'context': 'Pipeline have been included in the huggingface/transformers repository. '
File "c:\users\admin\appdata\local\programs\python\python37\lib\site-packages\transformers\pipelines.py", line 686, in __call__
for s, e, score in zip(starts, ends, scores)
File "c:\users\admin\appdata\local\programs\python\python37\lib\site-packages\transformers\pipelines.py", line 686, in <listcomp>
for s, e, score in zip(starts, ends, scores)
IndexError: index 0 is out of bounds for axis 0 with size 0
```
<!-- If you have a code sample, error messages, stack traces, please provide it here as well. -->
## Expected behavior
Would like to get the answer for the second example.
<!-- A clear and concise description of what you expected to happen. -->
## Environment
* OS: Windows
* Python version: Python 3.7.0 (v3.7.0:1bf9cc5093, Jun 27 2018, 04:59:51) [MSC v.1914 64 bit (AMD64)]
* PyTorch version: N/A
* PyTorch Transformers version (or branch): master
* Using GPU ?
* Distributed or parallel setup ?
* Any other relevant information:
## Additional context
<!-- Add any other context about the problem here. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2615/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2615/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2614 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2614/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2614/comments | https://api.github.com/repos/huggingface/transformers/issues/2614/events | https://github.com/huggingface/transformers/issues/2614 | 553,677,077 | MDU6SXNzdWU1NTM2NzcwNzc= | 2,614 | Missing module "startlette" when calling transformers-cli | {
"login": "tailaiw",
"id": 29800495,
"node_id": "MDQ6VXNlcjI5ODAwNDk1",
"avatar_url": "https://avatars.githubusercontent.com/u/29800495?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tailaiw",
"html_url": "https://github.com/tailaiw",
"followers_url": "https://api.github.com/users/tailaiw/followers",
"following_url": "https://api.github.com/users/tailaiw/following{/other_user}",
"gists_url": "https://api.github.com/users/tailaiw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tailaiw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tailaiw/subscriptions",
"organizations_url": "https://api.github.com/users/tailaiw/orgs",
"repos_url": "https://api.github.com/users/tailaiw/repos",
"events_url": "https://api.github.com/users/tailaiw/events{/privacy}",
"received_events_url": "https://api.github.com/users/tailaiw/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1771187924,
"node_id": "MDU6TGFiZWwxNzcxMTg3OTI0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Pipeline",
"name": "Core: Pipeline",
"color": "FF7066",
"default": false,
"description": "Internals of the library; Pipeline."
}
] | closed | false | null | [] | [
"Hi @tailaiw, thanks for reporting the issue.\r\n\r\nCan you try to update to the latest version of transformers ? it should have been fixed in 5004d5af42c61c91d5df07aa139d37599ceb6215.\r\n\r\nFeel free to reopen if its not the case !"
] | 1,579 | 1,581 | 1,581 | NONE | null | Calling `transformers-cli` in terminal returns error ``ModuleNotFoundError: No module named 'starlette'``.
I assume starlette should be added into dependencies in setup.py and it will be a quick fix. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2614/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2614/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2613 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2613/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2613/comments | https://api.github.com/repos/huggingface/transformers/issues/2613/events | https://github.com/huggingface/transformers/issues/2613 | 553,676,558 | MDU6SXNzdWU1NTM2NzY1NTg= | 2,613 | XLnet memory usage for long sequences | {
"login": "AlaFalaki",
"id": 7250147,
"node_id": "MDQ6VXNlcjcyNTAxNDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7250147?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AlaFalaki",
"html_url": "https://github.com/AlaFalaki",
"followers_url": "https://api.github.com/users/AlaFalaki/followers",
"following_url": "https://api.github.com/users/AlaFalaki/following{/other_user}",
"gists_url": "https://api.github.com/users/AlaFalaki/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AlaFalaki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AlaFalaki/subscriptions",
"organizations_url": "https://api.github.com/users/AlaFalaki/orgs",
"repos_url": "https://api.github.com/users/AlaFalaki/repos",
"events_url": "https://api.github.com/users/AlaFalaki/events{/privacy}",
"received_events_url": "https://api.github.com/users/AlaFalaki/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,579 | 1,585 | 1,585 | NONE | null | ## β Questions & Help
Hello,
I have some questions regarding how the XLnet memory and output work in this implementation.
1. As it's been mentioned before, by default, XLnet doesn't use memory. So, how is this possible that it accepts long sequences as input (in other words, why there isn't any limit on the number of input tokens), unlike BERT, for example, that will only accept 512 tokens.
2. If I set the memory length to 512, and feed the XLnet with 512 tokens at a time (for a 1024 sequence length), also pass the memory in each step. (like the example code below) Will the final output of the network, includes all the information from the whole sequence?
```
tokenizer = XLNetTokenizer.from_pretrained('xlnet-large-cased')
model = XLNetModel.from_pretrained('xlnet-large-cased')
mems = None
for i in range(2):
input_ids = torch.tensor(tokenizer.encode(text[i])).unsqueeze(0)
outputs = model(input_ids, mems=mems)
mems = outputs[1]
```
To be clear, I want to use XLnet for long text summarization. So I need to feed the XLnet output to a decoder part and need a fixed-length representation. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2613/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2613/timeline | completed | null | null |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.