url (string, length 62-66) | repository_url (string, 1 class) | labels_url (string, length 76-80) | comments_url (string, length 71-75) | events_url (string, length 69-73) | html_url (string, length 50-56) | id (int64, 377M-2.15B) | node_id (string, length 18-32) | number (int64, 1-29.2k) | title (string, length 1-487) | user (dict) | labels (list) | state (string, 2 classes) | locked (bool, 2 classes) | assignee (dict) | assignees (list) | comments (sequence) | created_at (int64, 1.54k-1.71k) | updated_at (int64, 1.54k-1.71k) | closed_at (int64, 1.54k-1.71k, nullable) | author_association (string, 4 classes) | active_lock_reason (string, 2 classes) | body (string, length 0-234k, nullable) | reactions (dict) | timeline_url (string, length 71-75) | state_reason (string, 3 classes) | draft (bool, 2 classes) | pull_request (dict) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/26430 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26430/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26430/comments | https://api.github.com/repos/huggingface/transformers/issues/26430/events | https://github.com/huggingface/transformers/pull/26430 | 1,914,710,918 | PR_kwDOCUB6oc5bS57B | 26,430 | Changed warnings.warn with logging.getLogger | {
"login": "hegdeadithyak",
"id": 116452077,
"node_id": "U_kgDOBvDq7Q",
"avatar_url": "https://avatars.githubusercontent.com/u/116452077?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hegdeadithyak",
"html_url": "https://github.com/hegdeadithyak",
"followers_url": "https://api.github.com/users/hegdeadithyak/followers",
"following_url": "https://api.github.com/users/hegdeadithyak/following{/other_user}",
"gists_url": "https://api.github.com/users/hegdeadithyak/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hegdeadithyak/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hegdeadithyak/subscriptions",
"organizations_url": "https://api.github.com/users/hegdeadithyak/orgs",
"repos_url": "https://api.github.com/users/hegdeadithyak/repos",
"events_url": "https://api.github.com/users/hegdeadithyak/events{/privacy}",
"received_events_url": "https://api.github.com/users/hegdeadithyak/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,695 | 1,696 | 1,696 | CONTRIBUTOR | null | This should fix the issue #26381 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26430/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26430/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26430",
"html_url": "https://github.com/huggingface/transformers/pull/26430",
"diff_url": "https://github.com/huggingface/transformers/pull/26430.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26430.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26429 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26429/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26429/comments | https://api.github.com/repos/huggingface/transformers/issues/26429/events | https://github.com/huggingface/transformers/issues/26429 | 1,914,505,400 | I_kwDOCUB6oc5yHQi4 | 26,429 | pretrained_model_name_or_path cannot be None | {
"login": "rahaazad2",
"id": 142936565,
"node_id": "U_kgDOCIUJ9Q",
"avatar_url": "https://avatars.githubusercontent.com/u/142936565?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rahaazad2",
"html_url": "https://github.com/rahaazad2",
"followers_url": "https://api.github.com/users/rahaazad2/followers",
"following_url": "https://api.github.com/users/rahaazad2/following{/other_user}",
"gists_url": "https://api.github.com/users/rahaazad2/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rahaazad2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rahaazad2/subscriptions",
"organizations_url": "https://api.github.com/users/rahaazad2/orgs",
"repos_url": "https://api.github.com/users/rahaazad2/repos",
"events_url": "https://api.github.com/users/rahaazad2/events{/privacy}",
"received_events_url": "https://api.github.com/users/rahaazad2/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for reporting, can you share the full traceback?",
"Sure, here is the complete traceback:\r\n\r\n Traceback (most recent call last):\r\n File \"/home/raha/.conda/envs/myenv/lib/python3.11/site-packages/huggingface_hub/utils/_errors.py\", line 261, in hf_raise_for_status\r\n response.raise_for_status()\r\n File \"/home/raha/.conda/envs/myenv/lib/python3.11/site-packages/requests/models.py\", line 1021, in raise_for_status\r\n raise HTTPError(http_error_msg, response=self)\r\n requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/None/resolve/main/config.json\r\n \r\n The above exception was the direct cause of the following exception:\r\n \r\n Traceback (most recent call last):\r\n File \"/home/raha/.conda/envs/myenv/lib/python3.11/site-packages/transformers/utils/hub.py\", line 429, in cached_file\r\n resolved_file = hf_hub_download(\r\n ^^^^^^^^^^^^^^^^\r\n File \"/home/raha/.conda/envs/myenv/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py\", line 118, in _inner_fn\r\n return fn(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^\r\n File \"/home/raha/.conda/envs/myenv/lib/python3.11/site-packages/huggingface_hub/file_download.py\", line 1195, in hf_hub_download\r\n metadata = get_hf_file_metadata(\r\n ^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/raha/.conda/envs/myenv/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py\", line 118, in _inner_fn\r\n return fn(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^\r\n File \"/home/raha/.conda/envs/myenv/lib/python3.11/site-packages/huggingface_hub/file_download.py\", line 1541, in get_hf_file_metadata\r\n hf_raise_for_status(r)\r\n File \"/home/raha/.conda/envs/myenv/lib/python3.11/site-packages/huggingface_hub/utils/_errors.py\", line 293, in hf_raise_for_status\r\n raise RepositoryNotFoundError(message, response) from e\r\n huggingface_hub.utils._errors.RepositoryNotFoundError: 401 Client Error. 
(Request ID: Root=1-65146a5e-2beb5c500637678b292e6559;39635aa7-8f65-42b4-8320-0877d5013995)\r\n \r\n Repository Not Found for url: https://huggingface.co/None/resolve/main/config.json.\r\n Please make sure you specified the correct `repo_id` and `repo_type`.\r\n If you are trying to access a private or gated repo, make sure you are authenticated.\r\n Invalid username or password.\r\n \r\n The above exception was the direct cause of the following exception:\r\n\r\n Traceback (most recent call last):\r\n File \"/code/my_model.py\", line 239, in <module>\r\n model, labels = load_model(args.model)\r\n ^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/code/my_model.py\", line 105, in load_model\r\n model = MyModel('my_model_name', device='cuda')\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/code/model_loader.py\", line 111, in __init__\r\n self.model, self.tokenizer, self.class_names = load_checkpoint(\r\n ^^^^^^^^^^^^^^^^\r\n File \"/code/model_loader.py\", line 100, in load_checkpoint\r\n model, tokenizer = get_model_and_tokenizer(\r\n ^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/code/model_loader.py\", line 66, in get_model_and_tokenizer\r\n model = model_class.from_pretrained(\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/raha/.conda/envs/myenv/lib/python3.11/site-packages/transformers/modeling_utils.py\", line 2389, in from_pretrained\r\n resolved_config_file = cached_file(\r\n ^^^^^^^^^^^^\r\n File \"/home/raha/.conda/envs/myenv/lib/python3.11/site-packages/transformers/utils/hub.py\", line 450, in cached_file\r\n raise EnvironmentError(\r\n OSError: None is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'\r\n If this is a private repository, make sure to pass a token having permission to this repo either by logging in with `huggingface-cli login` or by passing `token=<your_token>`",
"Hi @rahaazad2 \r\nThanks for the issue, I tried to reproduce locally but with no success:\r\n```python\r\nimport torch\r\nfrom transformers import AutoModel, AutoConfig\r\n\r\nfrom huggingface_hub import hf_hub_download\r\n\r\nmodel_id = \"bert-base-uncased\"\r\n\r\nconfig = AutoConfig.from_pretrained(model_id)\r\nstate_dict = torch.load(hf_hub_download(model_id, revision=\"main\", filename=\"pytorch_model.bin\"))\r\n\r\nmodel = AutoModel.from_pretrained(\r\n pretrained_model_name_or_path=None, \r\n config=config, \r\n state_dict=state_dict\r\n)\r\n```\r\nThe script above works fine on my end, if you say that the script was working before and not anymore it is definitely a bug, can you help me with a simple reproducible snippet?",
"Hello, a reproducible code snippet was provided: https://github.com/unitaryai/detoxify/pull/93#issuecomment-1776229505\r\n\r\n```py\r\n# works\r\nconfig = AutoConfig.from_pretrained(model_type, num_labels=num_classes)\r\nmodel = model_class.from_pretrained(\r\n pretrained_model_name_or_path=None,\r\n config=huggingface_config_path or config,\r\n state_dict=state_dict,\r\n local_files_only=huggingface_config_path is not None,\r\n )\r\n```\r\n\r\n```py\r\n# doesn't work\r\nconfig = model_class.from_pretrained(model_type, num_labels=num_classes)\r\nmodel = model_class.from_pretrained(\r\n pretrained_model_name_or_path=None,\r\n config=huggingface_config_path or config,\r\n state_dict=state_dict,\r\n local_files_only=huggingface_config_path is not None,\r\n )\r\n```",
"Hi @kraktus ! I managed to reproduce with this script:\r\n\r\n```python\r\nimport torch\r\nfrom transformers import AutoModel, AutoConfig\r\nfrom huggingface_hub import hf_hub_download\r\n\r\nmodel_id = \"bert-base-uncased\"\r\nnum_classes = 2\r\nmodel_class = AutoModel\r\n\r\nstate_dict = torch.load(hf_hub_download(model_id, revision=\"main\", filename=\"pytorch_model.bin\"))\r\nhuggingface_config_path = None\r\n\r\nconfig = model_class.from_pretrained(model_id, num_labels=num_classes)\r\nmodel = model_class.from_pretrained(\r\n pretrained_model_name_or_path=None,\r\n config=huggingface_config_path or config,\r\n state_dict=state_dict,\r\n local_files_only=huggingface_config_path is not None,\r\n )\r\n```\r\n\r\nBut note this is **totally** expected, you are passing a `AutoModel` as a `config` argument. Maybe we should error out a better error message there. The code will try to be executed here: https://github.com/huggingface/transformers/blob/main/src/transformers/models/auto/auto_factory.py#L488 since the check `if not isinstance(config, PretrainedConfig):` will return `True`.\r\n\r\nThe fix is to replace `model_class` with `AutoConfig` to correctly pass a config object:\r\n\r\n```diff\r\nimport torch\r\nfrom transformers import AutoModel, AutoConfig\r\nfrom huggingface_hub import hf_hub_download\r\n\r\nmodel_id = \"bert-base-uncased\"\r\nnum_classes = 2\r\nmodel_class = AutoModel\r\n\r\nstate_dict = torch.load(hf_hub_download(model_id, revision=\"main\", filename=\"pytorch_model.bin\"))\r\nhuggingface_config_path = None\r\n\r\n+ config = AutoConfig.from_pretrained(model_id, num_labels=num_classes)\r\nmodel = model_class.from_pretrained(\r\n pretrained_model_name_or_path=None,\r\n config=huggingface_config_path or config,\r\n state_dict=state_dict,\r\n local_files_only=huggingface_config_path is not None,\r\n )\r\n```\r\n\r\nLet me know if this makes sense",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,695 | 1,701 | 1,701 | NONE | null | ### System Info
- `transformers` version: 4.33.2
- Platform: Linux-5.4.0-147-generic-x86_64-with-glibc2.31
- Python version: 3.11.4
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- Accelerate version: 0.23.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes
### Who can help?
@ArthurZucker
@younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I have a pre-trained model that I want to load from its state dict. I use the following code:
```python
model_class = getattr(transformers, 'XLMRobertaForSequenceClassification')
model = model_class.from_pretrained(
    pretrained_model_name_or_path=None,
    config=model_type,
    num_labels=num_classes,
    state_dict=state_dict,
)
```
This code was working with `transformers` version 4.22.1. However, after upgrading to version 4.33.2, I get this error:
OSError: None is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo either by logging in with `huggingface-cli login` or by passing `token=<your_token>`
### Expected behavior
I expect the model to be loaded from state_dict. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26429/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26429/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26428 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26428/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26428/comments | https://api.github.com/repos/huggingface/transformers/issues/26428/events | https://github.com/huggingface/transformers/issues/26428 | 1,914,399,129 | I_kwDOCUB6oc5yG2mZ | 26,428 | IDEFICS Cross Attention: Text tokens appearing before images still attend to image embeddings | {
"login": "momergul",
"id": 8013984,
"node_id": "MDQ6VXNlcjgwMTM5ODQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/8013984?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/momergul",
"html_url": "https://github.com/momergul",
"followers_url": "https://api.github.com/users/momergul/followers",
"following_url": "https://api.github.com/users/momergul/following{/other_user}",
"gists_url": "https://api.github.com/users/momergul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/momergul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/momergul/subscriptions",
"organizations_url": "https://api.github.com/users/momergul/orgs",
"repos_url": "https://api.github.com/users/momergul/repos",
"events_url": "https://api.github.com/users/momergul/events{/privacy}",
"received_events_url": "https://api.github.com/users/momergul/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"What do you think @leot13 @VictorSanh ?",
"Thank you for noticing! It's not easy to detect. We are aware but did training this way. In practice that means the few first tokens with no image are attending to every image instead of none of them, so there's a small information leak.\r\nTo fix this, we could apply the image_attention_mask on the output of the cross-attention as a gating mechanism. The image attention mask has shape [bsz, num_tokens, num_images] so we would need to use a gating mechanism along the lines of:\r\n`residuals + self.act_cross_attn(self.alpha_cross_attn) * image_attention_mask.sum(dim=2).unsqueeze(-1) * cross_attention_hidden_states `\r\nHowever, it's not certain that the performance would transfer perfectly since this is a different setup from the training one. We would probably need to re-evaluate them on some benchmarks to make sure inference in this setup is fine. Most likely it will be. At least for the instruct ones since we do some finetuning on ultrachat, a text-only dataset for which we zero-out the cross-attentions.",
"Thanks for the response! I was able to notice only because I began receiving NaNs in the outputs of the cross attention layer for tokens appearing before images while doing QLoRA finetuning. How were you able to avoid this during training? During cross attention, if `(Q @ K.transpose(-2, -1) / math.sqrt(Q.size(-1)))` is sufficiently small and negative, there is a chance that adding the attention mask for these tokens before images will result in -inf for each value, causing NaNs after softmax.",
"I have been trying to reproduce you NaNs issue, but can't so far. There is a [colab notebook](https://colab.research.google.com/drive/1RltyDpv7Fbu_My03RyZ7ftavEQyoxbek#scrollTo=prXRsUiXCII9) for doing QLoRA PEFT finetuning. I used a similar setup, using almost the same libraries as you (except for cu17 which doesn't work in my env, so I used cu18) and didn't get NaNs even when placing text before the image.\r\n\r\nDid you perform the QLoRA fine tuning with the same setup as described in the colab? \r\n\r\nAlso side note: the image_attention_mask I described in the comment above is the one fed to the model, but it gets modified before reaching the cross-attention block. The idea stays the same though.",
"I have been finetuning IDEFICS on a separate task with unreleased data and have also not been using the Trainer module for finetuning, so there is a good chance I am introducing some error of my own for the NaNs. I also recently had the same NaN problem with padding tokens in regular self-attention (where the pad tokens also have an attention mask with all entries set to the smallest value), so the NaN problem I have is not about cross-attention. I'll see if I can replicate the problem with publicly available data and share my code in a separate repository. Thanks for looking into this!\r\n\r\nAs an aside, the problem itself is essentially discriminative image captioning, with the model being fed 10 images and being asked to produce a caption for a target image. To give some more information, the model input in training is structured in this manner:\r\n```\r\nprompt = [\r\n\"Image 0\", img_0, \"Image 1\", img_1, ..., \"Image 9\", img_9,\r\n\"Instruction: You will provide a discriminative caption for the target image.\",\r\nf\"The target image is Image {target_idx}. Caption: {caption}\"\r\n]\r\n```\r\n",
"PR looks good and should be merged soon 🤗 thanks for you patience ",
"I believe the problem @momergul mentioned is related to the `pytorch 2.1` release as mentioned in https://github.com/pytorch/pytorch/issues/110213. In which, the output of `nn.functional.scaled_dot_product_attention` will be `NaN` if the whole row is masked. It is the situation I met when I am finetuning the Idefics model. Would you mind provide any hints to fix this?",
"Thanks for bringing this back to my attention!\r\n[PR #26839](https://github.com/huggingface/transformers/pull/26839) answers this issue:\r\n> Expected behavior\r\n> Hello! I believe there is a bug in how cross attention is performed within IdeficsGatedCrossAttentionLayer in models/idefics/modeling_idefics.py for text tokens appearing before any images are given to the model. As IDEFICS is autoregressive, the hidden state for a text token appearing before any image is observed should not be changed after cross attention. During the forward pass in the code snippet I provided, I expect the following behavior immediately after Line 911 of models/idefics/modeling_idefics.py:\r\n> \r\n> Expected behavior:\r\n> torch.all(residual[0, 0:4] == hidden_states[0, 0:4]) evaluates to True\r\n> \r\n> Observed behavior:\r\n> torch.all(residual[0, 0:4] == hidden_states[0, 0:4]) evaluates to False\r\n> \r\n> I believe this is due to how the attention mask is applied. For the first 4 tokens which appear before any image, all values of image_attention_mask are set to the smallest possible value. This results in the attention weights during the call to nn.functional.scaled_dot_product_attention in Line 692 to each be equal to each other. This in turn means that these four text tokens appearing before any image each attend to the image embeddings.\r\n> \r\n> Is my understanding correct here? I would greatly appreciate it if you could look into this.\r\n\r\nYou are right though that it does not answer the NaNs problem. I looked into the issue you showed back then but 2 things made me think it was okay: the torch version specified by @momergul was: 2.0.1+cu117 , and when I double checked for torch 2.1, I did the inference in bf16. \r\nIt turns out, as I realized when diving back into this after seeing you message, this bug only happens in fp32. \r\nThe best quick fix right now would be to use bf16, or fp16 if you don't have access to bf16-friendly gpus.\r\n\r\nOtherwise, since it affects multiple models, I believe there will be a fix soon in transformers. \r\n@ydshieh for visibility",
"Hi @momergul @king159 the issue in IDEFICS related to PyTorch SDPA being used in fp32 on CUDA device with the memory-efficient attention backend yielding NaN (https://github.com/pytorch/pytorch/issues/110213) will be fixed with https://github.com/huggingface/transformers/pull/26572."
] | 1,695 | 1,700 | 1,700 | NONE | null | ### System Info
- `transformers` version: 4.33.1
- Platform: Linux-5.4.0-153-generic-x86_64-with-glibc2.31
- Python version: 3.9.18
- Huggingface_hub version: 0.17.1
- Safetensors version: 0.3.3
- Accelerate version: 0.23.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1: Run the following code snippet altered from `examples/idefics/inference.py` in the notebooks repo.
```
import torch
from transformers import IdeficsForVisionText2Text, AutoProcessor
device = "cuda" if torch.cuda.is_available() else "cpu"
checkpoint = "HuggingFaceM4/idefics-9b"
model = IdeficsForVisionText2Text.from_pretrained(checkpoint, torch_dtype=torch.bfloat16).to(device)
processor = AutoProcessor.from_pretrained(checkpoint, use_auth_token=False)
model.eval()
url = "https://hips.hearstapps.com/hmg-prod/images/cute-photos-of-cats-in-grass-1593184777.jpg"
image = processor.image_processor.fetch_images(url)
prompts = [
[
"User:",
image,
"Describe this image.\nAssistant: An image of two kittens in grass.",
],
]
inputs = processor(prompts, return_tensors="pt").to(device)
logits = model(**inputs)['logits']
```
2: During the model forward pass, inspect hidden states in Line 912 of `models/idefics/modeling_idefics.py`
### Expected behavior
Hello! I believe there is a bug in how cross attention is performed within `IdeficsGatedCrossAttentionLayer` in `models/idefics/modeling_idefics.py` for text tokens appearing before any images are given to the model. As IDEFICS is autoregressive, the hidden state for a text token appearing before any image is observed should not be changed after cross attention. During the forward pass in the code snippet I provided, I expect the following behavior immediately after Line 911 of `models/idefics/modeling_idefics.py`:
Expected behavior:
`torch.all(residual[0, 0:4] == hidden_states[0, 0:4])` evaluates to `True`
Observed behavior:
`torch.all(residual[0, 0:4] == hidden_states[0, 0:4])` evaluates to `False`
I believe this is due to how the attention mask is applied. For the first 4 tokens, which appear before any image, all values of `image_attention_mask` are set to the smallest possible value. This results in the attention weights during the call to `nn.functional.scaled_dot_product_attention` in Line 692 all being equal to each other. This in turn means that these four text tokens appearing before any image each attend to the image embeddings.
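As a standalone illustration of this effect (not model code), a score row that is fully masked with the dtype minimum still softmaxes to uniform weights rather than zeros:
```python
# Standalone illustration: when every position in a score row is masked with the
# dtype minimum, softmax yields roughly uniform weights instead of zeroing the row,
# so the token still "attends" to all image positions.
import torch

scores = torch.randn(1, 4)  # attention logits for one text token over 4 image keys
mask = torch.full_like(scores, torch.finfo(scores.dtype).min)
print(torch.softmax(scores + mask, dim=-1))  # ~uniform (0.25 each), not zeros
```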
Is my understanding correct here? I would greatly appreciate it if you could look into this.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26428/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26428/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26427 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26427/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26427/comments | https://api.github.com/repos/huggingface/transformers/issues/26427/events | https://github.com/huggingface/transformers/pull/26427 | 1,914,150,601 | PR_kwDOCUB6oc5bRASk | 26,427 | Update deepspeed.py | {
"login": "sachinSingh16-09",
"id": 97588030,
"node_id": "U_kgDOBdETPg",
"avatar_url": "https://avatars.githubusercontent.com/u/97588030?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sachinSingh16-09",
"html_url": "https://github.com/sachinSingh16-09",
"followers_url": "https://api.github.com/users/sachinSingh16-09/followers",
"following_url": "https://api.github.com/users/sachinSingh16-09/following{/other_user}",
"gists_url": "https://api.github.com/users/sachinSingh16-09/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sachinSingh16-09/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sachinSingh16-09/subscriptions",
"organizations_url": "https://api.github.com/users/sachinSingh16-09/orgs",
"repos_url": "https://api.github.com/users/sachinSingh16-09/repos",
"events_url": "https://api.github.com/users/sachinSingh16-09/events{/privacy}",
"received_events_url": "https://api.github.com/users/sachinSingh16-09/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,695 | 1,699 | 1,699 | CONTRIBUTOR | null |
# What does this PR do?
Replaces `warnings.warn` with `logging.warning`.
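For illustration, the general pattern of the change is roughly the following (a sketch with an illustrative message, not the exact diff):
```python
# Illustrative sketch of the substitution (message text is made up for the example).
from transformers.utils import logging

logger = logging.get_logger(__name__)

# before:
# warnings.warn("DeepSpeed configuration is outdated.", UserWarning)
# after:
logger.warning("DeepSpeed configuration is outdated.")
```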
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #26381
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
@osanseviero
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26427/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26427/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26427",
"html_url": "https://github.com/huggingface/transformers/pull/26427",
"diff_url": "https://github.com/huggingface/transformers/pull/26427.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26427.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26426 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26426/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26426/comments | https://api.github.com/repos/huggingface/transformers/issues/26426/events | https://github.com/huggingface/transformers/issues/26426 | 1,914,141,463 | I_kwDOCUB6oc5yF3sX | 26,426 | LlamaForCausalLM | {
"login": "Shikamaru5",
"id": 86502093,
"node_id": "MDQ6VXNlcjg2NTAyMDkz",
"avatar_url": "https://avatars.githubusercontent.com/u/86502093?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Shikamaru5",
"html_url": "https://github.com/Shikamaru5",
"followers_url": "https://api.github.com/users/Shikamaru5/followers",
"following_url": "https://api.github.com/users/Shikamaru5/following{/other_user}",
"gists_url": "https://api.github.com/users/Shikamaru5/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Shikamaru5/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Shikamaru5/subscriptions",
"organizations_url": "https://api.github.com/users/Shikamaru5/orgs",
"repos_url": "https://api.github.com/users/Shikamaru5/repos",
"events_url": "https://api.github.com/users/Shikamaru5/events{/privacy}",
"received_events_url": "https://api.github.com/users/Shikamaru5/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Maybe cc @SunMarc :) ",
"Hi @Shikamaru5, if i understood that correctly, you want to be able to load the parameters contained in the state_dict in batches to the GPU. If this is indeed the case, i don't think we have an easy fix. Here is the [line](see code [here](https://github.com/huggingface/transformers/blob/946bac798caefada3f5f1c9fecdcfd587ed24ac7/src/transformers/modeling_utils.py#L680)) that put the parameters to the GPU in case you want to try something. bitsandbytes takes a lot of time to load because the quantization happens when we move the parameters from the CPU to the GPU. ",
"Ok I understand, I was just thinking it might be something of value, although I'm uncertain as to how I could do it. I was watching a video on I think it was bettertransformer or Flash attention or something and they were talking about how they do batches of the inputs and perhaps parameters to send to the gpu at once in order to speed up inference and training. It had occurred to me that these sorts of layers:\r\n\r\n device_map = {\r\n \"transformer.word_embeddings\": 0,\r\n \"transformer.word_embeddings_layernorm\": 0,\r\n \"model.embed_tokens.weight\": 0,\r\n \"model.layers.0.self_attn.q_proj.weight\": 0,\r\n \"model.layers.0.self_attn.k_proj.weight\": 0,\r\n \"model.layers.0.self_attn.v_proj.weight\": 0,\r\n \"model.layers.0.self_attn.o_proj.weight\": 0,\r\n \"model.layers.0.mlp.gate_proj.weight\": 0,\r\n \"model.layers.0.mlp.up_proj.weight\": 0,\r\n \"model.layers.0.mlp.down_proj.weight\": 0,\r\n \"model.layers.0.input_layernorm.weight\": 0,\r\n \"model.layers.0.post_attention_layernorm.weight\": 0,\r\n #rest of layers,\r\n }\r\n\r\nmight be able to be grouped together more efficiently, instead of one by one calling : 0, perhaps it could be something like:\r\n\r\n device_map = {\r\n (\"transformer.word_embeddings\",\r\n \"transformer.word_embeddings_layernorm\",\r\n \"model.embed_tokens.weight\",\r\n \"model.layers.0.self_attn.q_proj.weight\",\r\n \"model.layers.0.self_attn.k_proj.weight\",\r\n \"model.layers.0.self_attn.v_proj.weight\",\r\n \"model.layers.0.self_attn.o_proj.weight\",\r\n \"model.layers.0.mlp.gate_proj.weight\",\r\n \"model.layers.0.mlp.up_proj.weight\",\r\n \"model.layers.0.mlp.down_proj.weight\",\r\n \"model.layers.0.input_layernorm.weight\",\r\n \"model.layers.0.post_attention_layernorm.weight\").to(\"cuda\"),\r\n #rest of the layers,\r\n }\r\n\r\nAlthough I did try that and it told me that it wasn't set up to handle this. Anyway thanks for the suggestion.",
"Hi @Shikamaru5 , flash attention should work with device_map out the box. In flash attention, the layers are already on the GPU and we just handle better the communication of the parameters inside the gpu, so that we spend less time storing, reading and writing the keys, queries and values tensors and more time on the computation. Check out this [doc](https://huggingface.co/docs/text-generation-inference/conceptual/flash_attention) that explain better how it works ",
"Right that would've been nice, but because I'm using 4bit quantization with bitsandbytes and my model needs to offload between gpu and cpu it needs a custom device_map."
] | 1,695 | 1,696 | 1,695 | NONE | null | I'm trying to implement the bitsandbytes library in my script to run a Llama2 model, and in this instance it requires that I use a custom device map. I have about 458 layers and most of them need to be sent to the GPU. However, I realized upon loading the model that there is a huge bottleneck in loading every single one of those layers individually to the GPU.
My load time might improve if I could load the layers to the GPU, if not all at the same time, then at least in multiple batches. However, I'm uncertain how to do this, as several different things I've tried have not succeeded. I would greatly appreciate it if anyone knows how, or whether, I can solve this issue, and thank you for taking the time to read this. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26426/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26426/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26425 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26425/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26425/comments | https://api.github.com/repos/huggingface/transformers/issues/26425/events | https://github.com/huggingface/transformers/pull/26425 | 1,913,932,047 | PR_kwDOCUB6oc5bQQH- | 26,425 | Add torch `RMSProp` optimizer | {
"login": "natolambert",
"id": 10695622,
"node_id": "MDQ6VXNlcjEwNjk1NjIy",
"avatar_url": "https://avatars.githubusercontent.com/u/10695622?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/natolambert",
"html_url": "https://github.com/natolambert",
"followers_url": "https://api.github.com/users/natolambert/followers",
"following_url": "https://api.github.com/users/natolambert/following{/other_user}",
"gists_url": "https://api.github.com/users/natolambert/gists{/gist_id}",
"starred_url": "https://api.github.com/users/natolambert/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/natolambert/subscriptions",
"organizations_url": "https://api.github.com/users/natolambert/orgs",
"repos_url": "https://api.github.com/users/natolambert/repos",
"events_url": "https://api.github.com/users/natolambert/events{/privacy}",
"received_events_url": "https://api.github.com/users/natolambert/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks, I can't merge it as I don't have write access, so @ArthurZucker or @younesbelkada can do so!"
] | 1,695 | 1,695 | 1,695 | CONTRIBUTOR | null | # What does this PR do?
Add torch `RMSProp` for easy use in TRL library, particularly to match the default use case of Direct Preference Optimization.
Script is [here](https://github.com/huggingface/trl/blob/main/examples/dpo.py).
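Once merged, usage could look roughly like this (hypothetical sketch; the exact option string for the new optimizer is an assumption):
```python
# Hypothetical usage once this PR is merged; the option name "rmsprop" is an
# assumption based on how other optimizers are exposed via `TrainingArguments.optim`.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="dpo-output",
    optim="rmsprop",
    learning_rate=5e-7,
    per_device_train_batch_size=4,
)
```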
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@younesbelkada and I discussed this.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26425/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26425/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26425",
"html_url": "https://github.com/huggingface/transformers/pull/26425",
"diff_url": "https://github.com/huggingface/transformers/pull/26425.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26425.patch",
"merged_at": 1695749231000
} |
https://api.github.com/repos/huggingface/transformers/issues/26424 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26424/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26424/comments | https://api.github.com/repos/huggingface/transformers/issues/26424/events | https://github.com/huggingface/transformers/issues/26424 | 1,913,786,143 | I_kwDOCUB6oc5yEg8f | 26,424 | Flash Attention 2 support for BERT, DistilBERT, and T5 | {
"login": "AmenRa",
"id": 11797097,
"node_id": "MDQ6VXNlcjExNzk3MDk3",
"avatar_url": "https://avatars.githubusercontent.com/u/11797097?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AmenRa",
"html_url": "https://github.com/AmenRa",
"followers_url": "https://api.github.com/users/AmenRa/followers",
"following_url": "https://api.github.com/users/AmenRa/following{/other_user}",
"gists_url": "https://api.github.com/users/AmenRa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AmenRa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AmenRa/subscriptions",
"organizations_url": "https://api.github.com/users/AmenRa/orgs",
"repos_url": "https://api.github.com/users/AmenRa/repos",
"events_url": "https://api.github.com/users/AmenRa/events{/privacy}",
"received_events_url": "https://api.github.com/users/AmenRa/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @AmenRa you might be interested in this issue: https://github.com/huggingface/transformers/issues/26350\r\n\r\ncc @younesbelkada ",
"Thanks for your interest @AmenRa, I have just added these architectures to the list in #26350",
"@LysandreJik thanks for the fast reply.\r\n@younesbelkada thanks for adding the models to your list.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,695 | 1,698 | 1,698 | NONE | null | Hi and thanks for adding Flash Attention 2!
I was wondering if there's any plan to add support for Flash Attention 2 to BERT, DistilBERT, and T5 models.
Those models are still the go-to Transformer models in my research community (Information Retrieval).
Reducing training/inference time and memory usage of those models would be extremely helpful for many researchers and practitioners.
Thank you!
Elias | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26424/reactions",
"total_count": 4,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 4,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26424/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26423 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26423/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26423/comments | https://api.github.com/repos/huggingface/transformers/issues/26423/events | https://github.com/huggingface/transformers/pull/26423 | 1,913,772,609 | PR_kwDOCUB6oc5bPtK5 | 26,423 | testing doc-builder | {
"login": "mishig25",
"id": 11827707,
"node_id": "MDQ6VXNlcjExODI3NzA3",
"avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mishig25",
"html_url": "https://github.com/mishig25",
"followers_url": "https://api.github.com/users/mishig25/followers",
"following_url": "https://api.github.com/users/mishig25/following{/other_user}",
"gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mishig25/subscriptions",
"organizations_url": "https://api.github.com/users/mishig25/orgs",
"repos_url": "https://api.github.com/users/mishig25/repos",
"events_url": "https://api.github.com/users/mishig25/events{/privacy}",
"received_events_url": "https://api.github.com/users/mishig25/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,695 | 1,695 | 1,695 | CONTRIBUTOR | null | testing https://github.com/huggingface/doc-builder/pull/404
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26423/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26423/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26423",
"html_url": "https://github.com/huggingface/transformers/pull/26423",
"diff_url": "https://github.com/huggingface/transformers/pull/26423.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26423.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26422 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26422/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26422/comments | https://api.github.com/repos/huggingface/transformers/issues/26422/events | https://github.com/huggingface/transformers/pull/26422 | 1,913,766,175 | PR_kwDOCUB6oc5bPrwW | 26,422 | fix_mbart_tied_weights | {
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Hmm I'm not sure if all mBART weights share all weight matrices with each other. \r\n\r\nWe should make sure that at least for all the following models:\r\nhttps://huggingface.co/models?other=mbart&sort=trending&search=facebook\r\nall three embedding matrices are identical (I'm not sure this is always the case e.g. for multi-lingual ones) ",
"Yes I think we should look at the `config.tie_word_embeddings` value and adapt accordingly. See recent PR on FSMT: https://github.com/huggingface/transformers/pull/26292",
"Thanks for the link @LysandreJik. I've updated the code and added a test. ",
"As @patrickvonplaten was saying could you also quickly verify that it works with the most downloaded mbart models on the Hub? When doing the FSMT change I ended up breaking a few FSMT models on the Hub, let's try to prevent this here :grin: \r\n\r\nThanks for your help @SunMarc ",
"Hi @LysandreJik , I confirm that for the most downloaded mbart models on the hub, all three embedding matrices are identical. Here's the snippet that I used: \r\n```py\r\nfrom transformers import AutoModelForSeq2SeqLM\r\nmodels = [\"facebook/mbart-large-50-many-to-many-mmt\", \"facebook/mbart-large-50-many-to-one-mmt\", \"facebook/mbart-large-50-one-to-many-mmt\",\"facebook/mbart-large-50\",\"facebook/mbart-large-cc25\",\"facebook/mbart-large-en-ro\",\"facebook/mgenre-wiki\"]\r\nfor model_id in models:\r\n for safetensors in [True, False]:\r\n for device_map in [\"auto\", None]:\r\n try:\r\n model = AutoModelForSeq2SeqLM.from_pretrained(model_id, use_safetensors=safetensors, device_map=device_map)\r\n except:\r\n print(f\"{model_id} failed to load with safetensors={safetensors} and device_map={device_map}\")\r\n assert len(\r\n {\r\n model.get_output_embeddings().weight.data_ptr(),\r\n model.get_input_embeddings().weight.data_ptr(),\r\n model.base_model.decoder.embed_tokens.weight.data_ptr(),\r\n model.base_model.encoder.embed_tokens.weight.data_ptr(),\r\n }\r\n ) == 1, \"Embeddings are not tied in {}\".format(model_id)\r\n```",
"Thanks a lot @SunMarc !",
"Yay, that works. Thanks a lot everyone!"
] | 1,695 | 1,695 | 1,695 | MEMBER | null | # What does this PR do ?
Fixes #26266. This PR fixes the tied weights for mbart model. Before this PR, only `lm_head` was tied to `model.shared`. Now, we also make sure to tie `model.encoder.embed_tokens` and `model.decoder.embed_tokens` to `model.shared` by defining the `_tie_weights` method which will be called when we do `model.tie_weights()`. I've checked that we get the same weights at the end. This issue only happens when we load with `safetensors` + `device_map` because we don't save the shared tensors and the weights are on the meta device. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26422/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26422/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26422",
"html_url": "https://github.com/huggingface/transformers/pull/26422",
"diff_url": "https://github.com/huggingface/transformers/pull/26422.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26422.patch",
"merged_at": 1695906516000
} |
https://api.github.com/repos/huggingface/transformers/issues/26421 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26421/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26421/comments | https://api.github.com/repos/huggingface/transformers/issues/26421/events | https://github.com/huggingface/transformers/pull/26421 | 1,913,754,520 | PR_kwDOCUB6oc5bPpOS | 26,421 | Fix attention computation when padding mask is not null | {
"login": "zhipeng93",
"id": 20766443,
"node_id": "MDQ6VXNlcjIwNzY2NDQz",
"avatar_url": "https://avatars.githubusercontent.com/u/20766443?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhipeng93",
"html_url": "https://github.com/zhipeng93",
"followers_url": "https://api.github.com/users/zhipeng93/followers",
"following_url": "https://api.github.com/users/zhipeng93/following{/other_user}",
"gists_url": "https://api.github.com/users/zhipeng93/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhipeng93/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhipeng93/subscriptions",
"organizations_url": "https://api.github.com/users/zhipeng93/orgs",
"repos_url": "https://api.github.com/users/zhipeng93/repos",
"events_url": "https://api.github.com/users/zhipeng93/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhipeng93/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Pinging @Rocketknight1 so he can have a look 😉 ",
"Will do, but might be tomorrow before I get a chance!",
"Hey! I am working on #27114 which should fix the attention computation issue \r\n",
"> Hey! I am working on #27114 which should fix the attention computation issue\r\n\r\nHi @ArthurZucker, I am afraid #27114 is not enough since even if you fixed the underflow/overflow issue, the attention output of flashattention and the attention module we implemented here is different. Please feel free to try the code snippet here [1].\r\n\r\n[1] https://github.com/huggingface/transformers/pull/26421#discussion_r1368198875",
"Feel free to consult #27050 as I think it explains why! 🤗 ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,695 | 1,702 | 1,702 | NONE | null | # What does this PR do?
This PR fixes two problems that occur when a padding mask is used in `LlamaAttention`. In detail:
- Replace `bfloat16.min` with `float("-inf")` as the default value for the mask.
- Transpose the non-causal mask.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Before submitting
- [] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26421/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26421/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26421",
"html_url": "https://github.com/huggingface/transformers/pull/26421",
"diff_url": "https://github.com/huggingface/transformers/pull/26421.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26421.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26420 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26420/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26420/comments | https://api.github.com/repos/huggingface/transformers/issues/26420/events | https://github.com/huggingface/transformers/issues/26420 | 1,913,735,538 | I_kwDOCUB6oc5yEUly | 26,420 | automatic-speech-recognition pipeline returns unhelpful error when passing inputs | {
"login": "govindrai",
"id": 13859249,
"node_id": "MDQ6VXNlcjEzODU5MjQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/13859249?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/govindrai",
"html_url": "https://github.com/govindrai",
"followers_url": "https://api.github.com/users/govindrai/followers",
"following_url": "https://api.github.com/users/govindrai/following{/other_user}",
"gists_url": "https://api.github.com/users/govindrai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/govindrai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/govindrai/subscriptions",
"organizations_url": "https://api.github.com/users/govindrai/orgs",
"repos_url": "https://api.github.com/users/govindrai/repos",
"events_url": "https://api.github.com/users/govindrai/events{/privacy}",
"received_events_url": "https://api.github.com/users/govindrai/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@sanchit-gandhi, could you give this a look? :)",
"Thanks for the comprehensive issue description @govindrai! I've opened a PR with the proposed changes - feel free to take a look and comment if it looks good to you. Unfortunately, we can't get a more detailed log out for when the audio file read fails. We use `ffmpeg` through a subprocess here, so it's a bit black box: https://github.com/huggingface/transformers/blob/5e11d72d4d0939138fbabfebe9a69d2061519547/src/transformers/pipelines/audio_utils.py#L34-L35\r\n\r\nThus, I've endeavoured to improve the generic error message to inform the user of the most common pitfalls. Let me know if this would have helped in your use case!",
"Thank you @sanchit-gandhi. This is a great improvement!"
] | 1,695 | 1,696 | 1,696 | NONE | null | ### Feature request
[Colab Notebook](https://colab.research.google.com/drive/16vg9J-VYU48bu1C_lqfavlzzE01-K2mL?usp=sharing)
There are a couple of user experience improvements that would really help those who are still in the beginning phases of their HF journey. Following the [HF Pipeline tutorial](https://huggingface.co/docs/transformers/pipeline_tutorial#pipeline-usage), I made my own dataset repo and uploaded a couple of audio/video files.
Then I passed those URLs as inputs to the pipeline as shown in the tutorial. Each time, I would be hit with:
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
[<ipython-input-4-0f3343122218>](https://localhost:8080/#) in <cell line: 1>()
----> 1 generator("https://huggingface.co/datasets/govindrai/audio/resolve/main/mlk.flac")
9 frames
[/usr/local/lib/python3.10/dist-packages/transformers/pipelines/audio_utils.py](https://localhost:8080/#) in ffmpeg_read(bpayload, sampling_rate)
39 audio = np.frombuffer(out_bytes, np.float32)
40 if audio.shape[0] == 0:
---> 41 raise ValueError("Malformed soundfile")
42 return audio
43
ValueError: Malformed soundfile
```
I finally got the hunch that my dataset was private, so maybe the model was unable to pull it down. So I updated the dataset to public. Again, same errors. I realized I had uploaded videos (mp4) instead of audio. So then I converted my video to audio (mp4). Again, same error. I then converted it to an mp3. It still didn't work.
Finally I compared my public url with the public url used in the tutorial: It turns out that copying a link to a file from HuggingFace dataset doesn't work. It has to be in a special format:
I used: `_https://huggingface.co/datasets/govindrai/audio/**blob**/main/mlk.flac_`
Tutorial used: `_https://huggingface.co/datasets/Narsil/asr_dummy/**resolve**/main/mlk.flac_`
Once I changed _blob_ to _resolve_, it finally worked. (This last part was an error on my end since the URL containing the blob returns an HTML page and not the file itself, but I still added it here to showcase all the errors encountered 😅)
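For anyone who lands here with the same problem, here is a minimal sketch of the call that finally worked once the URL used `resolve` (the checkpoint is just an example; any ASR checkpoint should behave the same way):
```python
from transformers import pipeline

# Example checkpoint only; the point is the URL format, not the model choice.
transcriber = pipeline("automatic-speech-recognition", model="openai/whisper-tiny")

# The URL has to point at the raw file ("resolve"), not the HTML preview page ("blob").
result = transcriber("https://huggingface.co/datasets/govindrai/audio/resolve/main/mlk.flac")
print(result["text"])
```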
So my request is:
Is it possible to output a helpful error so that the user can have some sort of guidance on how to fix their program's input? It was quite frustrating that I could not replicate the tutorial on my own dataset without having to understand all this context on what the model accepts and doesn't accept.
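Something along these lines would already have pointed me in the right direction (just a sketch of the kind of message, not a proposal for exact wording; the `audio.shape[0] == 0` check is the one from the traceback above):
```python
# Hypothetical, friendlier version of the check in ffmpeg_read (audio_utils.py).
if audio.shape[0] == 0:
    raise ValueError(
        "Soundfile is either empty or corrupted. Make sure the input is a valid audio "
        "file and, if it is a URL, that it points to the raw file (for Hugging Face "
        "datasets, a 'resolve' link rather than a 'blob' page) and that it is publicly "
        "accessible."
    )
```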
Also the [AutomaticSpeechRecognition Pipeline documentation](https://huggingface.co/docs/transformers/v4.33.2/en/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline) does not even mention that a URL can be passed (it states a str location, but I would assume that to be a local file path not a _public_ url).
I'm very new to hugging face and ML in general and it's very much possible this information is actually stated/found elsewhere, and if so it would be great to add that to the pipeline tutorial so that a reader can have a better learning experience.
---
one small nit:
[AutomaticSpeechRecognition Pipeline documentation](https://huggingface.co/docs/transformers/v4.33.2/en/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline) uses transcriber for the pipeline whereas the [HF Pipeline tutorial](https://huggingface.co/docs/transformers/pipeline_tutorial#pipeline-usage) uses generator. I think transcriber is a much clearer variable name.
### Motivation
Provide readers of the pipeline tutorial a better learning experience.
### Your contribution
I would be happy to submit a PR to update doc/tutorial once I have answers myself. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26420/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26420/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26419 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26419/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26419/comments | https://api.github.com/repos/huggingface/transformers/issues/26419/events | https://github.com/huggingface/transformers/pull/26419 | 1,913,600,349 | PR_kwDOCUB6oc5bPHNl | 26,419 | Update semantic_segmentation.md | {
"login": "zekaouinoureddine",
"id": 61702091,
"node_id": "MDQ6VXNlcjYxNzAyMDkx",
"avatar_url": "https://avatars.githubusercontent.com/u/61702091?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zekaouinoureddine",
"html_url": "https://github.com/zekaouinoureddine",
"followers_url": "https://api.github.com/users/zekaouinoureddine/followers",
"following_url": "https://api.github.com/users/zekaouinoureddine/following{/other_user}",
"gists_url": "https://api.github.com/users/zekaouinoureddine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zekaouinoureddine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zekaouinoureddine/subscriptions",
"organizations_url": "https://api.github.com/users/zekaouinoureddine/orgs",
"repos_url": "https://api.github.com/users/zekaouinoureddine/repos",
"events_url": "https://api.github.com/users/zekaouinoureddine/events{/privacy}",
"received_events_url": "https://api.github.com/users/zekaouinoureddine/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,695 | 1,695 | 1,695 | CONTRIBUTOR | null | fixed typo | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26419/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26419/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26419",
"html_url": "https://github.com/huggingface/transformers/pull/26419",
"diff_url": "https://github.com/huggingface/transformers/pull/26419.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26419.patch",
"merged_at": 1695808304000
} |
https://api.github.com/repos/huggingface/transformers/issues/26418 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26418/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26418/comments | https://api.github.com/repos/huggingface/transformers/issues/26418/events | https://github.com/huggingface/transformers/pull/26418 | 1,913,349,021 | PR_kwDOCUB6oc5bOQE5 | 26,418 | 🚨 🚨 Raise error when no speaker embeddings in speecht5._generate_speech | {
"login": "ylacombe",
"id": 52246514,
"node_id": "MDQ6VXNlcjUyMjQ2NTE0",
"avatar_url": "https://avatars.githubusercontent.com/u/52246514?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ylacombe",
"html_url": "https://github.com/ylacombe",
"followers_url": "https://api.github.com/users/ylacombe/followers",
"following_url": "https://api.github.com/users/ylacombe/following{/other_user}",
"gists_url": "https://api.github.com/users/ylacombe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ylacombe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ylacombe/subscriptions",
"organizations_url": "https://api.github.com/users/ylacombe/orgs",
"repos_url": "https://api.github.com/users/ylacombe/repos",
"events_url": "https://api.github.com/users/ylacombe/events{/privacy}",
"received_events_url": "https://api.github.com/users/ylacombe/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for the reviews @sanchit-gandhi and @Vaibhavs10 !\r\n\r\nI'll request @ArthurZucker's reviews on this tiny PR ! :hugs: ",
"Hey @ArthurZucker, thanks for the review!\r\n\r\nI'm not an expert on SpeechT5, but as @sanchit-gandhi said [here](https://github.com/huggingface/transformers/pull/26418#discussion_r1338953648):\r\n\r\n> it's possible that people are using fine-tuned versions of SpeechT5 without speaker embeddings, so throwing an error here would block them.\r\n\r\nEven more, even if it is fine-tuned with speaker embeddings, the default speaker embeddings for one version might be different for the default speaker embedding of another version.\r\n\r\nIn anyways, loading a default speaker embedding is done [here](https://huggingface.co/lysandre/text-to-speech-pipeline/blob/main/tts.py#L28-L30), with this snippet:\r\n\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nif speaker_embeddings is None:\r\n embeddings_dataset = load_dataset(\"Matthijs/cmu-arctic-xvectors\", split=\"validation\")\r\n speaker_embeddings = torch.tensor(embeddings_dataset[7305][\"xvector\"]).unsqueeze(0)\r\n```\r\n\r\nI'm not sure using `datasets` is compatible with `transformers`. How shoud we deal with this if we agree on a default speaker embedding ?",
"The fine-tuned models also require passing a `speaker_embedding` btw: https://huggingface.co/learn/audio-course/chapter6/fine-tuning#speaker-embeddings\r\n\r\nSo the expected behaviour would always be to pass a `speaker_embedding` right? @sanchit-gandhi ",
"You can register a non persistent buffer (only if the default is always the same) ",
"Using a default embedding makes sense if we can do it in a non-breaking way, such as with the non-persistent buffer",
"At the end of the day, I'll raise an error, for those reasons:\r\n- Using a default embedding with a non-persistent buffer would be breaking\r\n- The code leaves the impression that a speaker embedding is necessary anyways, even when finetuning: https://github.com/huggingface/transformers/blob/aa7a7b09a8afff1983c96c9a62ca8e5c2cd3b002/src/transformers/models/speecht5/modeling_speecht5.py#L733-L738\r\n- we're moving away from warnings\r\n\r\n",
"Hey @sanchit-gandhi , @Vaibhavs10 and @ArthurZucker , feel free to quickly check the PR whenever you have time!",
"Nice, merging!"
] | 1,695 | 1,697 | 1,697 | COLLABORATOR | null | # What does this PR do?
Following the discussion in #26401, this adds a warning when using `_generate_speech` without speaker embeddings.
If necessary, and after discussion, we could throw an error instead of a warning!
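For context, the check boils down to something of this shape inside `_generate_speech` (a sketch only, not the exact diff; the final message wording is still up for discussion):
```python
import warnings

# Sketch of the added check; `speaker_embeddings` is the existing argument of _generate_speech.
if speaker_embeddings is None:
    warnings.warn(
        "`speaker_embeddings` was not provided; generated speech will likely be of poor quality. "
        "Consider passing an embedding, e.g. an xvector from the `Matthijs/cmu-arctic-xvectors` dataset."
    )
```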
<!-- Remove if not applicable -->
Fixes #26401
## Who can review?
cc @Vaibhavs10 @sanchit-gandhi and @ArthurZucker
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26418/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26418/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26418",
"html_url": "https://github.com/huggingface/transformers/pull/26418",
"diff_url": "https://github.com/huggingface/transformers/pull/26418.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26418.patch",
"merged_at": 1697551175000
} |
https://api.github.com/repos/huggingface/transformers/issues/26417 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26417/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26417/comments | https://api.github.com/repos/huggingface/transformers/issues/26417/events | https://github.com/huggingface/transformers/issues/26417 | 1,913,323,561 | I_kwDOCUB6oc5yCwAp | 26,417 | Every time I run run_glue.py I get stuck in the "sock.connect(sa)" of the code at several places. Why? Is it a network problem? | {
"login": "LLIKKE",
"id": 94539160,
"node_id": "U_kgDOBaKNmA",
"avatar_url": "https://avatars.githubusercontent.com/u/94539160?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LLIKKE",
"html_url": "https://github.com/LLIKKE",
"followers_url": "https://api.github.com/users/LLIKKE/followers",
"following_url": "https://api.github.com/users/LLIKKE/following{/other_user}",
"gists_url": "https://api.github.com/users/LLIKKE/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LLIKKE/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LLIKKE/subscriptions",
"organizations_url": "https://api.github.com/users/LLIKKE/orgs",
"repos_url": "https://api.github.com/users/LLIKKE/repos",
"events_url": "https://api.github.com/users/LLIKKE/events{/privacy}",
"received_events_url": "https://api.github.com/users/LLIKKE/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hmmm it seems like it could be a connection issue, yes! Do you have the full error code? Do you have the same issue if running in google colab?",
"> Hmmm it seems like it could be a connection issue, yes! Do you have the full error code? Do you have the same issue if running in google colab?\n\nI am in China and it seems that a lot of people have this problem recently, it should be the Internet problem. Could you please tell me why transformers keeps visiting the website instead of first reading the downloaded model and dataset in the cache? Thank you!",
"We do this to see if there was an update to the model/tokenizer files on the Hub.\r\n\r\nIf you already have them in your cache, you can bypass this with the [offline mode through an environment variable](https://huggingface.co/docs/transformers/v4.33.3/en/installation#offline-mode) or through individual calls to `from_pretrained` with the `local_files_only` kwarg on each `from_pretrained` method (maybe this could be added to the offline-mode docs? cc @stevhliu @MKhalusova)",
"> We do this to see if there was an update to the model/tokenizer files on the Hub.\r\n> \r\n> If you already have them in your cache, you can bypass this with the [offline mode through an environment variable](https://huggingface.co/docs/transformers/v4.33.3/en/installation#offline-mode) or through individual calls to `from_pretrained` with the `local_files_only` kwarg on each `from_pretrained` method (maybe this could be added to the offline-mode docs? cc @stevhliu @MKhalusova)\r\n\r\nthank you!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Fixed in https://github.com/huggingface/transformers/pull/26478",
"If you stuck at sock.connect, probably it because the middleware server inr your company has blocked the transfromer url request to huggingface or whatever. There could be some proxy address or setting in your company that allows outbounding connection\r\n\r\nYou need to set these proxy variable correctly if you need them for outbound request, check what `env` prints out in terminal against these variables\r\n```\r\nexport no_proxy=your_setting\r\nexport http_proxy=your_proxy\r\nexport https_proxy=your_proxy\r\nexport all_proxy=your_proxy\r\nexport ftp_proxy=your_proxy\r\nexport NO_PROXY=your_setting\r\nexport HTTP_PROXY=your_proxy\r\nexport HTTPS_PROXY=your_proxy\r\nexport ALL_PROXY=your_proxy\r\nexport FTP_PROXY=your_proxy\r\n```\r\n\r\nor making transformer go offline by https://github.com/huggingface/transformers/issues/10379\r\n\r\n```\r\nHF_DATASETS_OFFLINE=1\r\nTRANSFORMERS_OFFLINE=1\r\n```\r\n"
] | 1,695 | 1,705 | 1,698 | NONE | null | ### System Info
check_min_version("4.33.0.dev0")
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
python run_glue.py \
--model_name_or_path bert-base-cased \
--task_name mrpc \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--learning_rate 2e-5 \
--num_train_epochs 5 \
--overwrite_output_dir \
--save_steps 50000 \
--output_dir checkpoint
### Expected behavior
python run_glue.py \
--model_name_or_path bert-base-cased \
--task_name mrpc \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--learning_rate 2e-5 \
--num_train_epochs 5 \
--overwrite_output_dir \
--save_steps 50000 \
--output_dir checkpoint
When I run this code, it hangs for a long time while loading. If I force-quit during the hang, the traceback shows "connection.py", line 85, in create_connection
sock.connect(sa)
followed by "KeyboardInterrupt". If I just wait for a while, training eventually starts and works. What is the reason for the hang? Is it a network problem? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26417/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26417/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26415 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26415/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26415/comments | https://api.github.com/repos/huggingface/transformers/issues/26415/events | https://github.com/huggingface/transformers/pull/26415 | 1,913,297,486 | PR_kwDOCUB6oc5bOEwc | 26,415 | [`FA` / `tests`] Add use_cache tests for FA models | {
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,695 | 1,695 | 1,695 | CONTRIBUTOR | null | # What does this PR do?
While testing out https://github.com/huggingface/transformers/pull/26414 I realised the current tests silently pass, as we only check for the immediate next predicted token.
For small models FA2 is quite flaky, so I have decided not to increase `max_new_tokens` in `test_flash_attn_2_generate_padding_right` and `test_flash_attn_2_generate_left_padding`, and to instead add a separate test that runs `generate` with `use_cache` and a relatively large `max_new_tokens` to catch issues with caching when porting models to FA.
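Roughly, the new test just runs a longer cached generation with FA2 enabled, along these lines (a sketch of the idea rather than the literal test body; `model_class`, `tmpdirname` and `torch_device` come from the common test harness):
```python
import torch

# Load the tiny test model with Flash Attention 2 in fp16.
model = model_class.from_pretrained(
    tmpdirname, torch_dtype=torch.float16, use_flash_attention_2=True, low_cpu_mem_usage=True
).to(torch_device)

dummy_input = torch.LongTensor([[0, 2, 3, 4], [0, 2, 3, 4]]).to(torch_device)
dummy_attention_mask = torch.LongTensor([[1, 1, 1, 1], [1, 1, 1, 0]]).to(torch_device)

# A relatively large max_new_tokens exercises the cache path; greedy decoding keeps it deterministic.
_ = model.generate(dummy_input, attention_mask=dummy_attention_mask, max_new_tokens=30, do_sample=False)
```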
cc @LysandreJik
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26415/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26415/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26415",
"html_url": "https://github.com/huggingface/transformers/pull/26415",
"diff_url": "https://github.com/huggingface/transformers/pull/26415.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26415.patch",
"merged_at": 1695810114000
} |
https://api.github.com/repos/huggingface/transformers/issues/26416 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26416/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26416/comments | https://api.github.com/repos/huggingface/transformers/issues/26416/events | https://github.com/huggingface/transformers/issues/26416 | 1,913,303,764 | I_kwDOCUB6oc5yCrLU | 26,416 | RuntimeError: Unable to build docs using doc-builder | {
"login": "ENate",
"id": 6941995,
"node_id": "MDQ6VXNlcjY5NDE5OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6941995?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ENate",
"html_url": "https://github.com/ENate",
"followers_url": "https://api.github.com/users/ENate/followers",
"following_url": "https://api.github.com/users/ENate/following{/other_user}",
"gists_url": "https://api.github.com/users/ENate/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ENate/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ENate/subscriptions",
"organizations_url": "https://api.github.com/users/ENate/orgs",
"repos_url": "https://api.github.com/users/ENate/repos",
"events_url": "https://api.github.com/users/ENate/events{/privacy}",
"received_events_url": "https://api.github.com/users/ENate/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @ENate , I'm transferring the issue to the `transformers` repo as it seems you are trying to build the docs for this library (instead of `huggingface_hub`). Also cc @mishig25 who worked on the doc builder.",
"Hi @Wauplin thanks. Was building the docs after forking the transformers repo. ",
"As the error suggests:\r\n```\r\nAdd them to docs/source/en/_toctree.yml.\r\n```\r\n\r\nCould you check the following doc pages exist in `docs/source/en/_toctree.yml` ?",
" Yes, I saw the doc pages in the \r\n\r\n```\r\n docs/source/en/_toctree.yml\r\n ```\r\n\r\n The docs are listed in the following manner \r\n\r\n\r\n```\r\n- local: <model_doc/<name-of-doc>\r\n title: <name>\r\n ```\r\n\r\nin the \r\n\r\n```\r\n docs/source/en/_toctree.yml\r\n ``` \r\n\r\nfile\r\n",
"btw, you are not on Windows?",
"nope. I am using Ubuntu 22.04 LTS",
"Hi @mishig25 did you manage to find out why this is happening on Ubuntu (UNIX)? Any ideas for a work around? Thanks",
"Hi @mishig25 I did add the model docs in the file as follows:\r\n\r\n```\r\n- sections:\r\n - local: model_doc/mt5\r\n title: Vision Transformer\r\n - local: model_doc/vivit\r\n title: Video Vision Transformer\r\n - local: model_doc/idefics\r\n title: Idefics\r\n - local: model_doc/imagegpt\r\n title: ImageGPT\r\n - local: model_doc/flaubert\r\n title: FlauBERT\r\n - local: model_doc/vit\r\n title: ViLT\r\n - local: model_doc/bartpho\r\n title: BARTpho\r\n - local: model_doc/clap\r\n title: CLAP\r\n - local: model_doc/mobilebert\r\n title: MobileBERT\r\n - local: model_doc/mgp-str\r\n title: MGP-STR\r\n - local: model_doc/efficientformer\r\n title: EfficientFormer\r\n - local: model_doc/tapex\r\n title: TAPEX\r\n - local: model_doc/switch_transformers\r\n title: SwitchTransformers\r\n - local: model_doc/electra\r\n title: ELECTRA\r\n - local: model_doc/mask2former\r\n title: Mask2Former\r\n title: Text models\r\n```\r\nin the ```_toc_tree.yml``` but got the same problem on Ubuntu (when trying to run ```pip install -e \"[docs]\" ```):\r\n\r\n```\r\n\r\nTraceback (most recent call last):\r\n File \"/home/\r\n/miniconda3/envs/transformers/bin/doc-builder\", line 8, in <module>\r\n sys.exit(main())\r\n ^^^^^^\r\n File \"/home/miniconda3/envs/transformers/lib/python3.11/site-packages/doc_builder/commands/doc_builder_cli.py\", line 47, in main\r\n args.func(args)\r\n File \"/home/miniconda3/envs/transformers/lib/python3.11/site-packages/doc_builder/commands/build.py\", line 96, in build_command\r\n build_doc(\r\n File \"/home/miniconda3/envs/transformers/lib/python3.11/site-packages/doc_builder/build_doc.py\", line 405, in build_doc\r\n sphinx_refs = check_toc_integrity(doc_folder, output_dir)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/miniconda3/envs/transformers/lib/python3.11/site-packages/doc_builder/build_doc.py\", line 460, in check_toc_integrity\r\n raise RuntimeError(\r\nRuntimeError: The following files are not present in the table of contents:\r\n- gpt_neo\r\n- t5\r\n- qdqbert\r\n- videomae\r\n- herbert\r\n- graphormer\r\n- reformer\r\n- mvp\r\n- visual_bert\r\n- dinat\r\n- esm\r\n- ernie\r\n- camembert\r\n- openai-gpt\r\n- bart\r\n- trocr\r\n- informer\r\n- audio-spectrogram-transformer\r\n- whisper\r\n- unispeech-sat\r\n- deberta\r\n- deta\r\n- xlsr_wav2vec2\r\n- xglm\r\n- speecht5\r\n- swin\r\n- gptsan-japanese\r\n- sew\r\n- mobilenet_v1\r\n- layoutlmv3\r\n- mobilevitv2\r\n- align\r\n- git\r\n- perceiver\r\n- xlm-v\r\n- efficientnet\r\n- wavlm\r\n- encoder-decoder\r\n- blenderbot-small\r\n- sew-d\r\n- xls_r\r\n- vision-text-dual-encoder\r\n- conditional_detr\r\n- layoutlmv2\r\n- flava\r\n- trajectory_transformer\r\n- pix2struct\r\n- data2vec\r\n- dinov2\r\n- unispeech\r\n- mctct\r\n- bark\r\n- xlm-prophetnet\r\n- ernie_m\r\n- clipseg\r\n- tvlt\r\n- decision_transformer\r\n- bert-generation\r\n- m2m_100\r\n- ul2\r\n- hubert\r\n- cpm\r\n- clip\r\n- marian\r\n- codegen\r\n- table-transformer\r\n- rembert\r\n- van\r\n- roformer\r\n- opt\r\n- mobilenet_v2\r\n- altclip\r\n- gpt_neox\r\n- mms\r\n- pegasus_x\r\n- big_bird\r\n- nllb-moe\r\n- lxmert\r\n- beit\r\n- deberta-v2\r\n- megatron-bert\r\n- pvt\r\n- dpr\r\n- detr\r\n- rag\r\n- upernet\r\n- bridgetower\r\n- gptj\r\n- levit\r\n- glpn\r\n- xmod\r\n- sam\r\n- llama2\r\n- xlm\r\n- vitdet\r\n- maskformer\r\n- owlvit\r\n- instructblip\r\n- transfo-xl\r\n- pegasus\r\n- longt5\r\n- mbart\r\n- layoutxlm\r\n- barthez\r\n- vision-encoder-decoder\r\n- dit\r\n- flan-ul2\r\n- byt5\r\n- roc_bert\r\n- mega\r\n- encodec\r\n- 
bert\r\n- phobert\r\n- ibert\r\n- gpt_bigcode\r\n- deformable_detr\r\n- vit_msn\r\n- markuplm\r\n- convnextv2\r\n- yoso\r\n- gpt-sw3\r\n- wav2vec2_phoneme\r\n- focalnet\r\n- donut\r\n- regnet\r\n- jukebox\r\n- mluke\r\n- realm\r\n- autoformer\r\n- bertweet\r\n- chinese_clip\r\n- swiftformer\r\n- umt5\r\n- auto\r\n- t5v1.1\r\n- poolformer\r\n- dialogpt\r\n- vit_hybrid\r\n- xclip\r\n- deplot\r\n- canine\r\n- resnet\r\n- bloom\r\n- groupvit\r\n- fsmt\r\n- bert-japanese\r\n- squeezebert\r\n- mra\r\n- convbert\r\n- biogpt\r\n- xlm-roberta\r\n- nystromformer\r\n- cpmant\r\n- nezha\r\n- bit\r\n- segformer\r\n- rwkv\r\n- bort\r\n- longformer\r\n- time_series_transformer\r\n- blip-2\r\n- mpnet\r\n- pop2piano\r\n- flan-t5\r\n- musicgen\r\n- speech_to_text_2\r\n- distilbert\r\n- tapas\r\n- lilt\r\n- blenderbot\r\n- gpt_neox_japanese\r\n- xlm-roberta-xl\r\n- xlnet\r\n- nllb\r\n- speech_to_text\r\n- vilt\r\n- llama\r\n- roberta-prelayernorm\r\n- yolos\r\n- megatron_gpt2\r\n- plbart\r\n- cvt\r\n- speech-encoder-decoder\r\n- mt5\r\n- vivit\r\n- idefics\r\n- imagegpt\r\n- flaubert\r\n- vit\r\n- bartpho\r\n- clap\r\n- mobilebert\r\n- mgp-str\r\n- efficientformer\r\n- tapex\r\n- switch_transformers\r\n- electra\r\n- mask2former\r\n```\r\nAdd them to docs/source/en/_toctree.yml. Id there any way to make this work (so that I can successfully build the docs, or dev) before opening a PR? Thanks again",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"I am not able to still resolve this issue. ",
"Hey! If you wanted to open a PR an check how the doc renders, would just recommend you to still open the PR but in draft mode. Then ping me if the doc does not render, make sure you rebase on main and then I can help you. It's hard without looking at any code! 🤗 ",
"Thanks. I did open the PR successfully. I am surprised it was impossible to build and view the docs locally. I was able to do so a few weeks ago. Not sure why it is no longer possible to view the docs locally.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,695 | 1,703 | 1,703 | CONTRIBUTOR | null | I am trying to build the docs with ```doc-builder ```
using the following command
```
doc-builder build transformers docs/source/en --build_dir ~/tmp/test-build
```
but got the following error:
```
Traceback (most recent call last):
File "/home/miniconda3/envs/transformers/bin/doc-builder", line 8, in <module>
sys.exit(main())
File "/home/miniconda3/envs/transformers/lib/python3.9/site-packages/doc_builder/commands/doc_builder_cli.py", line 47, in main
args.func(args)
File "/home/miniconda3/envs/transformers/lib/python3.9/site-packages/doc_builder/commands/build.py", line 96, in build_command
build_doc(
File "/home/miniconda3/envs/transformers/lib/python3.9/site-packages/doc_builder/build_doc.py", line 405, in build_doc
sphinx_refs = check_toc_integrity(doc_folder, output_dir)
File "/home/miniconda3/envs/transformers/lib/python3.9/site-packages/doc_builder/build_doc.py", line 460, in check_toc_integrity
raise RuntimeError(
RuntimeError: The following files are not present in the table of contents:
- gpt_neo
- t5
- qdqbert
- videomae
- herbert
- graphormer
- reformer
- mvp
- visual_bert
- dinat
- esm
- ernie
- camembert
- openai-gpt
- bart
- trocr
- informer
- audio-spectrogram-transformer
- whisper
- unispeech-sat
- deberta
- deta
- xlsr_wav2vec2
- xglm
- speecht5
- swin
- gptsan-japanese
- sew
- mobilenet_v1
- layoutlmv3
- mobilevitv2
- align
- git
- perceiver
- xlm-v
- efficientnet
- wavlm
- encoder-decoder
- blenderbot-small
- sew-d
- xls_r
- vision-text-dual-encoder
- conditional_detr
- layoutlmv2
- flava
- trajectory_transformer
- pix2struct
- data2vec
- dinov2
- unispeech
- mctct
- bark
- xlm-prophetnet
- ernie_m
- clipseg
- tvlt
- decision_transformer
- bert-generation
- m2m_100
- ul2
- hubert
- cpm
- clip
- marian
- codegen
- table-transformer
- rembert
- van
- roformer
- opt
- mobilenet_v2
- altclip
- gpt_neox
- mms
- pegasus_x
- big_bird
- nllb-moe
- lxmert
- beit
- deberta-v2
- megatron-bert
- pvt
- dpr
- detr
- rag
- upernet
- bridgetower
- gptj
- levit
- glpn
- xmod
- sam
- llama2
- xlm
- vitdet
- maskformer
- owlvit
- instructblip
- transfo-xl
- pegasus
- longt5
- mbart
- layoutxlm
- barthez
- vision-encoder-decoder
- dit
- flan-ul2
- byt5
- roc_bert
- mega
- encodec
- bert
- phobert
- ibert
- gpt_bigcode
- deformable_detr
- vit_msn
- markuplm
- convnextv2
- yoso
- gpt-sw3
- wav2vec2_phoneme
- focalnet
- donut
- regnet
- jukebox
- mluke
- realm
- autoformer
- bertweet
- chinese_clip
- swiftformer
- umt5
- auto
- t5v1.1
- poolformer
- dialogpt
- vit_hybrid
- xclip
- deplot
- canine
- resnet
- bloom
- groupvit
- fsmt
- bert-japanese
- squeezebert
- mra
- convbert
- biogpt
- xlm-roberta
- nystromformer
- cpmant
- nezha
- bit
- segformer
- rwkv
- bort
- longformer
- time_series_transformer
- blip-2
- mpnet
- pop2piano
- flan-t5
- musicgen
- speech_to_text_2
- distilbert
- tapas
- lilt
- blenderbot
- gpt_neox_japanese
- xlm-roberta-xl
- xlnet
- nllb
- speech_to_text
- vilt
- llama
- roberta-prelayernorm
- yolos
- megatron_gpt2
- plbart
- cvt
- speech-encoder-decoder
- mt5
- vivit
- idefics
- imagegpt
- flaubert
- vit
- bartpho
- clap
- mobilebert
- mgp-str
- efficientformer
- tapex
- switch_transformers
- electra
- mask2former
Add them to docs/source/en/_toctree.yml.
```
Any ideas on why this is happening? Any help will be appreciated. Thanks
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26416/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26416/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26414 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26414/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26414/comments | https://api.github.com/repos/huggingface/transformers/issues/26414/events | https://github.com/huggingface/transformers/pull/26414 | 1,913,220,887 | PR_kwDOCUB6oc5bNz4Y | 26,414 | [`FA2`] Add flash attention for opt | {
"login": "susnato",
"id": 56069179,
"node_id": "MDQ6VXNlcjU2MDY5MTc5",
"avatar_url": "https://avatars.githubusercontent.com/u/56069179?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/susnato",
"html_url": "https://github.com/susnato",
"followers_url": "https://api.github.com/users/susnato/followers",
"following_url": "https://api.github.com/users/susnato/following{/other_user}",
"gists_url": "https://api.github.com/users/susnato/gists{/gist_id}",
"starred_url": "https://api.github.com/users/susnato/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/susnato/subscriptions",
"organizations_url": "https://api.github.com/users/susnato/orgs",
"repos_url": "https://api.github.com/users/susnato/repos",
"events_url": "https://api.github.com/users/susnato/events{/privacy}",
"received_events_url": "https://api.github.com/users/susnato/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The tests were done using a RTX 3060 (Ampere) which supports Flash Attention 2.\r\n\r\n",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26414). All of your documentation changes will be reflected on that endpoint.",
"Hi @younesbelkada, I have added the model to the list. Also I have merged the [susnato#3](https://github.com/susnato/transformers/pull/3), please let me know if anymore changes are needed or not.",
"Hi @younesbelkada , done.",
"Thanks for the reply @younesbelkada, in meantime I would like to work on adding flash attention for another model.",
"Thanks @susnato , sure yes ok, let me know on #26350 which model would you take, perhaps you can try your hands on Starcoder if you are interested. Let me know!",
"yes I am interested in that! @younesbelkada ",
"Thanks very much ! Looking forward to your PR ! ",
"Also please assign me in the list :). @younesbelkada ",
"Yes, just assigned you on the list !",
"@susnato @younesbelkada why not use `F.scaled_dot_product_attention` instead as it support Flash-Attention-2 and able to use with `torch.compile` ?. I used to use the `flash-attention` on official repo like this PR but now I moved to the `F.scaled_dot_product_attention`.",
"Hi @susnato \r\nThanks for your patience,\r\nI had a look at the issue with @michaelbenayoun and the solution is the following: \r\n\r\n1- define a method on the top level of the modeling opt file that checks if there is any padding token inside the attention mask\r\n```python\r\n\r\nif is_torch_fx_available():\r\n @torch.fx.wrap\r\n def check_padding_in_attention_mask(attention_mask):\r\n if 0 in attention_mask:\r\n return attention_mask\r\n return None\r\nelse:\r\n def check_padding_in_attention_mask(attention_mask):\r\n if 0 in attention_mask:\r\n return attention_mask\r\n return None\r\n```\r\nAnd add the `@torch.fx.wrap` decorator in the method to make sure it is compatible with the PT versions we support. \r\n2- revert until the commit [689f599](https://github.com/huggingface/transformers/pull/26414/commits/689f599132f121a3716cf4497570cbbda5c01737) and replace the simple logic by a call to that method \r\n\r\nThat way FX tests should hopefully pass\r\n\r\nI will later take care of moving `check_padding_in_attention_mask` in `pytorch_utils` file so that models that support FX tracing can use that method for future integrations.\r\n\r\n",
"Hi @dathudeptrai, `transformers` maintain a single modeling file policy, where we want to make sure that someone who is reading the code is properly able to understand the modeling code without having to switch to any other file. That's why even the commonly used `dot product attention` which is used by many modes are defined for each one. We define `F.scaled_dot_product_attention` so that the user does not have to visit the torch website to understand the implementation. \r\n\r\nI believe that in future when flash attention becomes more stable the maintainers will try to implement the function within the library itself(given that it's possible) to reduce the dependency. \r\n\r\nAlso @younesbelkada could give you a better explanation and correct me if I am wrong here :). ",
"Hi @younesbelkada, I have added the solution that you suggested and it's working!\r\nLet me know if any more changes are required or not.",
"@dathudeptrai Thanks for your question\r\nWe are currently thinking of migrating SDPA natively into transformers in the next weeks, we will keep you posted with @fxmarty ",
"Hi @younesbelkada the `test_flash_attn_2_generate_use_cache` is failing for `llama`, `falcon` and `opt` models :confused: .\r\n\r\n\r\n\r\n\r\n\r\n\r\nTrying to figure out the reason.\r\n\r\n",
"Hi @susnato \r\nThe test seem to pass on my end\r\n\r\n\r\n\r\nCan you try to merge with main and re-run the tests again?",
"@younesbelkada I already did :(.",
"@susnato can you share the full traceback?",
"Hi @younesbelkada ,\r\n\r\n<details>\r\n<summary>Full Traceback for Llama</summary>\r\n\r\n\r\nself = <tests.models.llama.test_modeling_llama.LlamaModelTest testMethod=test_flash_attn_2_generate_use_cache>\r\n\r\n @require_flash_attn\r\n @require_torch_gpu\r\n @mark.flash_attn_test\r\n @slow\r\n def test_flash_attn_2_generate_use_cache(self):\r\n import torch\r\n \r\n for model_class in self.all_generative_model_classes:\r\n if not model_class._supports_flash_attn_2:\r\n return\r\n \r\n config, _ = self.model_tester.prepare_config_and_inputs_for_common()\r\n model = model_class(config)\r\n \r\n with tempfile.TemporaryDirectory() as tmpdirname:\r\n model.save_pretrained(tmpdirname)\r\n \r\n dummy_input = torch.LongTensor([[0, 2, 3, 4], [0, 2, 3, 4]]).to(torch_device)\r\n dummy_attention_mask = torch.LongTensor([[1, 1, 1, 1], [1, 1, 1, 0]]).to(torch_device)\r\n \r\n model = model_class.from_pretrained(\r\n tmpdirname, torch_dtype=torch.float16, use_flash_attention_2=True, low_cpu_mem_usage=True\r\n ).to(torch_device)\r\n \r\n # Just test that a large cache works as expected\r\n> _ = model.generate(\r\n dummy_input, attention_mask=dummy_attention_mask, max_new_tokens=30, do_sample=False\r\n )\r\n\r\ntests/test_modeling_common.py:2936: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\n../../anaconda3/envs/transformers/lib/python3.9/site-packages/torch/autograd/grad_mode.py:27: in decorate_context\r\n return func(*args, **kwargs)\r\nsrc/transformers/generation/utils.py:1606: in generate\r\n return self.greedy_search(\r\nsrc/transformers/generation/utils.py:2454: in greedy_search\r\n outputs = self(\r\n../../anaconda3/envs/transformers/lib/python3.9/site-packages/torch/nn/modules/module.py:1194: in _call_impl\r\n return forward_call(*input, **kwargs)\r\nsrc/transformers/models/llama/modeling_llama.py:1034: in forward\r\n outputs = self.model(\r\n../../anaconda3/envs/transformers/lib/python3.9/site-packages/torch/nn/modules/module.py:1194: in _call_impl\r\n return forward_call(*input, **kwargs)\r\nsrc/transformers/models/llama/modeling_llama.py:921: in forward\r\n layer_outputs = decoder_layer(\r\n../../anaconda3/envs/transformers/lib/python3.9/site-packages/torch/nn/modules/module.py:1194: in _call_impl\r\n return forward_call(*input, **kwargs)\r\nsrc/transformers/models/llama/modeling_llama.py:631: in forward\r\n hidden_states, self_attn_weights, present_key_value = self.self_attn(\r\n../../anaconda3/envs/transformers/lib/python3.9/site-packages/torch/nn/modules/module.py:1194: in _call_impl\r\n return forward_call(*input, **kwargs)\r\nsrc/transformers/models/llama/modeling_llama.py:489: in forward\r\n attn_output = self._flash_attention_forward(\r\nsrc/transformers/models/llama/modeling_llama.py:546: in _flash_attention_forward\r\n attn_output = pad_input(attn_output_unpad, indices_q, batch_size, query_length)\r\n../../anaconda3/envs/transformers/lib/python3.9/site-packages/flash_attn/bert_padding.py:208: in pad_input\r\n output = index_put_first_axis(hidden_states, indices, batch * seqlen)\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\n\r\nctx = <torch.autograd.function.IndexPutFirstAxisBackward object at 0x7f1fb51b9040>\r\nvalues = tensor([[[-2.3666e-02, 2.1423e-02, -7.4646e-02, 3.6530e-02, -7.8979e-02,\r\n -3.4546e-02, 1.0236e-01, 4.3549...4504e-03, -9.3384e-02,\r\n -4.5532e-02, -5.5847e-02, 4.0253e-02]]], 
device='cuda:0',\r\n dtype=torch.float16)\r\nindices = tensor([0, 1], device='cuda:0', dtype=torch.int32), first_axis_dim = 2\r\n\r\n @staticmethod\r\n def forward(ctx, values, indices, first_axis_dim):\r\n ctx.save_for_backward(indices)\r\n assert indices.ndim == 1\r\n assert values.ndim >= 2\r\n output = torch.zeros(\r\n first_axis_dim, *values.shape[1:], device=values.device, dtype=values.dtype\r\n )\r\n # TD [2022-03-04] For some reason torch.scatter is a bit faster than indexing.\r\n> output[indices] = values\r\nE IndexError: tensors used as indices must be long, byte or bool tensors\r\n\r\n../../anaconda3/envs/transformers/lib/python3.9/site-packages/flash_attn/bert_padding.py:51: IndexError\r\n\r\n\r\n</details>\r\n\r\nBTW I updated the flash attention library and merged to main but the error is still there.",
"I have pushed the docstring change.\r\n\r\nIf you don't mind @younesbelkada , could you please checkout this branch and run the tests on your end and let me know the results?\r\nIf this passes on your machine, then It could be due to some unknown error on my end.",
"Sure @susnato no problem, will run the tests locally tomorrow and let you know how it goes",
"Hi @younesbelkada, did you manage to check if the tests run successfully on your local machine? ",
"Yes the tests seemed to pass the time I ran them, however since #26792 being merged you need to remove the support for `padding_mask` and follow whqat has been done in that PR. Let me know if you need help on this!",
"After I finish updating the `gpt_bigcode` I will take this up.",
"OK awesome, thanks @susnato !",
"Hi @younesbelkada, I have pushed the changes.\r\n\r\nAll the flash attention tests are running fine except `test_flash_attn_2_generate_use_cache` but I believe it's due to some problem with my local installation since the model generates outputs quite well when I run separately.\r\n",
"Hello @younesbelkada , are you going to check the speed-up for this model too? \r\n\r\nI am asking this out of curiosity - Are you going to check speed-ups for every model, FlashAttention is added to or is this benchmark needed for popular ones only? ",
"Hi @amyeroberts, I have pushed the changes you suggested, in addition to that I have also updated the model_doc file and added speedup graphs. ",
"Yes, I just checked and all tests are passing! @younesbelkada!\r\n\r\nFlash Attention tests - \r\n\r\n\r\n\r\nI found only one Integration test[`pt`] related to OPT - \r\n\r\n\r\n\r\n"
] | 1,695 | 1,700 | 1,700 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Adds Flash Attention 2 for `OPT` as discussed in this issue - #26350.
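Usage then looks the same as for the other FA2-enabled models, roughly (a sketch; the checkpoint name is only an example and any OPT checkpoint should work the same way):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m",
    torch_dtype=torch.float16,
    use_flash_attention_2=True,
).to("cuda")

inputs = tokenizer("Hello, my dog is cute and", return_tensors="pt").to("cuda")
output_ids = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```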
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
cc : @younesbelkada | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26414/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26414/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26414",
"html_url": "https://github.com/huggingface/transformers/pull/26414",
"diff_url": "https://github.com/huggingface/transformers/pull/26414.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26414.patch",
"merged_at": 1700734611000
} |
https://api.github.com/repos/huggingface/transformers/issues/26413 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26413/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26413/comments | https://api.github.com/repos/huggingface/transformers/issues/26413/events | https://github.com/huggingface/transformers/issues/26413 | 1,913,213,009 | I_kwDOCUB6oc5yCVBR | 26,413 | `resume_from_checkpoint` function fails because "There seems to be not a single sample in your epoch_iterator" | {
"login": "omermazig",
"id": 95534441,
"node_id": "U_kgDOBbG9aQ",
"avatar_url": "https://avatars.githubusercontent.com/u/95534441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/omermazig",
"html_url": "https://github.com/omermazig",
"followers_url": "https://api.github.com/users/omermazig/followers",
"following_url": "https://api.github.com/users/omermazig/following{/other_user}",
"gists_url": "https://api.github.com/users/omermazig/gists{/gist_id}",
"starred_url": "https://api.github.com/users/omermazig/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omermazig/subscriptions",
"organizations_url": "https://api.github.com/users/omermazig/orgs",
"repos_url": "https://api.github.com/users/omermazig/repos",
"events_url": "https://api.github.com/users/omermazig/events{/privacy}",
"received_events_url": "https://api.github.com/users/omermazig/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Update - I added `ignore_data_skip=True` to `TrainingArguments`, and it was succesfull in running a single epoch, and then failed with:\r\n\r\n> ValueError: 'videomae-finetuned/checkpoint-3000' is not in list\r\n\r\n\r\nCheckpoint 3000 is my best checkpoint (according to my `metric_for_best_model`), so I'm assuming that I have to have both the last checkpoint AND the best checkpoint available in the output dir, for this to work? If so, the documentation for [hub_strategy](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments.hub_strategy) is mistaken, because it stated:\r\n\r\n> \"checkpoint\": like \"every_save\" but the latest checkpoint is also pushed in a subfolder named last-checkpoint, allowing you to resume training easily with trainer.train(resume_from_checkpoint=\"last-checkpoint\").\r\n\r\nWhich is wrong.\r\n\r\n\r\nAm I missing something?",
"cc @pacman100 @muellerzr ",
"Similar question: for the sake of reproducibility, I would like to be able to resume training from the same batch where I left off in my `IterableDataset` (so I don't want to set `ignore_data_skip=True`). However, it appears that the training loop relies on the `train_dataloader` [length](https://github.com/huggingface/transformers/blob/v4.32.1/src/transformers/trainer.py#L1578) to compute information needed in the resumption logic.\r\n\r\nIs there anyway to achieve this behavior? Thanks!",
"Is there any update from the team on the issues raised above? These issues make it prohibitively expensive or practically impossible to make use of an `IterableDataset` in certain contexts (e.g. preemptible runs).\r\n\r\nAlternatively, any advice on working with large datasets without using an `IterableDataset`? Due to issue #8818, which was mistakenly closed due to being stale but without actually being resolved, when using a regular dataset, you are essentially forced to use an `IterableDataset`. Perhaps there is a workaround I am not aware of.",
"CCing again: @muellerzr @pacman100",
"Hello, your iterable dataset should reiterate when reaching the end if the number of steps> number of samples in the iterable dataset. Best example of this is the [ConstantLengthDataset](https://github.com/huggingface/trl/blob/55d7c952c796345d9fa520e692f7760372b14b43/trl/trainer/utils.py#L490) from trl library. The main code snippet is given below when `infinite=True` setting which enables number of steps to be greater than the number of samples in the iterable dataset.\r\n\r\n```\r\ntry:\r\n buffer.append(self.formatting_func(next(iterator)))\r\n buffer_len += len(buffer[-1])\r\nexcept StopIteration:\r\n if self.infinite:\r\n iterator = iter(self.dataset)\r\n warnings.warn(\"The dataset reached end and the iterator is reset to the start.\")\r\n else:\r\n more_examples = False\r\n break\r\n```\r\n\r\nNotice the logic in exception handling to reassign `iterator =iter(self.dataset)` and the corresponding warning \"The dataset reached end and the iterator is reset to the start.\"\r\n\r\nHope this helps.",
"> ValueError: 'videomae-finetuned/checkpoint-3000' is not in list\r\n\r\n> Checkpoint 3000 is my best checkpoint (according to my metric_for_best_model), so I'm assuming that I have to have both the last checkpoint AND the best checkpoint available in the output dir, for this to work? If so, the documentation for [hub_strategy](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments.hub_strategy) is mistaken, because it stated:\r\n> \r\n> \"checkpoint\": like \"every_save\" but the latest checkpoint is also pushed in a subfolder named last-checkpoint, allowing you to resume training easily with trainer.train(resume_from_checkpoint=\"last-checkpoint\").\r\n> \r\n> Which is wrong.\r\n> \r\n> Am I missing something?\r\n\r\nThis seems like a separate issue. Please open another one with a minimal reproducible example. Currently, the given details aren't enough for us to reproduce this.",
"> > ValueError: 'videomae-finetuned/checkpoint-3000' is not in list\r\n> \r\n> > Checkpoint 3000 is my best checkpoint (according to my metric_for_best_model), so I'm assuming that I have to have both the last checkpoint AND the best checkpoint available in the output dir, for this to work? If so, the documentation for [hub_strategy](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments.hub_strategy) is mistaken, because it stated:\r\n> > \"checkpoint\": like \"every_save\" but the latest checkpoint is also pushed in a subfolder named last-checkpoint, allowing you to resume training easily with trainer.train(resume_from_checkpoint=\"last-checkpoint\").\r\n> > Which is wrong.\r\n> > Am I missing something?\r\n> \r\n> This seems like a separate issue. Please open another one with a minimal reproducible example. Currently, the given details aren't enough for us to reproduce this.\r\n\r\nDone here:\r\n\r\nhttps://github.com/huggingface/transformers/issues/27728\r\n\r\n\r\nI'm closing this issue because `ignore_data_skip=True` works for me. If someone doesn't find @pacman100 solution workable please reopen this.",
"> Hello, your iterable dataset should reiterate when reaching the end if the number of steps> number of samples in the iterable dataset.\n\nI'm sorry but this response doesn't make sense ~~and this issue should not be marked as closed so prematurely~~. The max number of steps passed to the trainer indicates the maximum number of steps over the *entire* training run. However, when resuming from checkpoint, the run will stop training if the number of steps is less than the number of samples *within a single epoch*.\n\nTo clarify, you are technically correct that \"your iterable dataset should reiterate when reaching the end.\" However, the `Trainer` and/or `IterableDataset` classes should handle this- as they already do when *not* resuming from checkpoint.\n\nIt is unclear why resuming from checkpoint causes them to fail to handle this. When not resuming from checkpoint, the training logic is as you expect: if you run out of samples in the current epoch but haven't reached max steps yet, you just start a *new* epoch until you do reach max steps.",
"> I'm closing this issue because `ignore_data_skip=True` works for me. If someone doesn't find @pacman100 solution workable please reopen this.\n\nThis issue should not be closed because `ignore_data_skip=True` is not a real solution to this problem as it changes the logic of the training run and eliminates reproducibility.\n\nI feel perhaps there is some fundamental miscommunication happening here because this seems very transparently obvious to me that this is not how this should work.\n\nI have had *identical* runs where:\n\n1. one run got preempted before reaching the second epoch (with `save_strategy=checkpoint`), and therefore resumed from the first epoch checkpoint, before erroring\n2. one run that continued past the second epoch\n\nThis is clear, incontrovertible evidence of a bug since it indicates different training logic is happening depending on whether `resume_from_checkpoint` is `True` or `False`.\n\nLet me put it another way: If you agree that\n\n```\nnumber of training steps = (desired number of epochs) * (number of samples)/(batch size)\n```\n\nwhich implies\n\n```\nnumber of desired training steps = (number of samples) * (desired number of epochs)/(batch size)\n```\n\nthen do you agree that `number of desired training steps > number of samples` if and only if `desired number of epochs > batch size`?\n\nWhich is going to be true in many instances? And yet this is precisely the condition upon which the error triggers, at least according to the error message.\n\nWhen not resuming from checkpoint, this simple mathematical fact poses no problem. It is *only* when resuming from checkpoint that for some reason this inequality poses a conundrum, and that is what makes no sense.",
"I am just now realizing that the example dataset @pacman100 provides as a working solution is a fully written `Dataset` class. However, the point of this issue is that it happens with a `Dataset` class provided by HuggingFace itself, namely, the `IterableDataset` class. The expectation is that HF datasets should \"just work\" with the HF `Trainer`; especially if this incompatibility is not identified in the docs, which, AFAIK, it is not. Perhaps I am incorrect on the latter count.",
"Hello @Ubadub, please provide a minimal reproducible example wrt this along with the related config, the launch command and the versions of the libraries.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"@pacman100 Hello, I am also facing the same issue as @Ubadub is reporting. Here is my code to reproduce the issue:\r\n```python\r\nimport os\r\nimport shutil\r\nimport transformers\r\nimport datasets\r\n\r\n\r\nif os.path.exists(\"./output\"):\r\n shutil.rmtree(\"./output\")\r\n\r\n\r\ndef my_generator():\r\n for i in range(10):\r\n yield {\"input_ids\": [1000], \"labels\": [1000]}\r\n\r\n\r\n# This dataset yields 10 examples only, but let's set max_steps=20.\r\ndataset = datasets.IterableDataset.from_generator(my_generator)\r\nmodel = transformers.AutoModelForCausalLM.from_pretrained(\"gpt2\")\r\nargs = transformers.TrainingArguments(\r\n output_dir=\"./output\",\r\n per_device_train_batch_size=1,\r\n max_steps=20,\r\n save_steps=10,\r\n report_to=\"none\",\r\n)\r\ntrainer = transformers.Trainer(\r\n model=model,\r\n args=args,\r\n train_dataset=dataset,\r\n)\r\ntrainer.train()\r\n# Trainer runs 20 steps, producing both checkpoint-10 checkpoint-20.\r\nassert os.path.exists(\"./output/checkpoint-10\")\r\nassert os.path.exists(\"./output/checkpoint-20\")\r\n\r\n# Now remove checkpoint-20 and resume training from checkpoint-10.\r\nshutil.rmtree(\"./output/checkpoint-20\")\r\ntrainer = transformers.Trainer(\r\n model=model,\r\n args=args,\r\n train_dataset=dataset,\r\n)\r\ntrainer.train(resume_from_checkpoint=True)\r\n# This time, trainer does nothing. checkpoint-20 is not produced.\r\nassert os.path.exists(\"./output/checkpoint-10\")\r\nassert not os.path.exists(\"./output/checkpoint-20\")\r\n```\r\noutput:\r\n```\r\n{'train_runtime': 20.8257, 'train_samples_per_second': 0.96, 'train_steps_per_second': 0.96, 'train_loss': 0.0, 'epoch': 1.5}\r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [00:20<00:00, 1.04s/it]\r\nThere were missing keys in the checkpoint model loaded: ['lm_head.weight'].\r\n 0%| | 0/20 [00:00<?, ?it/s]\r\nThere seems to be not a single sample in your epoch_iterator, stopping training at step 10! This is expected if you're using an IterableDataset and set num_steps (20) higher than the number of available samples.\r\n{'train_runtime': 0.0044, 'train_samples_per_second': 4513.401, 'train_steps_per_second': 4513.401, 'train_loss': 0.0, 'epoch': 0.5}\r\n 0%|\r\n```\r\nWhen not resuming, Trainer runs until 20 steps. When resuming from a checkpoint, it tries to run until 10 steps. This seems inconsistent.\r\n\r\nAs discussed in https://github.com/huggingface/transformers/issues/26635, I think the correct behavior suggested by the current documentation of `max_steps` should be Trainer reiterating the dataset until 20 steps are executed even if the dataset is finite and smaller than 20.\r\nhttps://github.com/huggingface/transformers/blob/95091e1582688c2ffd8342918f3eb0e3abeeb0c8/src/transformers/training_args.py#L236-L239\r\n\r\nI'm using Python v3.10.12, transformers==4.36.2, datasets==2.16.1, accelerate==0.26.0, torch==2.1.2.",
"@muupan Thank you for the minimal example, I had a lot on my plate and was unable to do that so I ended up just scrapping the use of this functionality altogether, but this introduced its own complications so I would really appreciate a fix for this.",
"Confirming this issue should not be marked stale and still requires addressing."
] | 1,695 | 1,707 | null | NONE | null | ### System Info
transformers version - 4.33.2
I'm using the Trainer API as follows, so that it pushes the latest checkpoint to huggingface each epoch:
```
from transformers import TrainingArguments, Trainer
new_model_name = "videomae-finetuned"
num_epochs = 50
batch_size = 8
steps_per_epoch = train_dataset.num_videos // batch_size
args = TrainingArguments(
output_dir=new_model_name,
remove_unused_columns=False,
evaluation_strategy="epoch",
save_strategy="epoch",
save_total_limit = 2, # Only last 2 models are saved. Older ones are deleted.
learning_rate=5e-5,
per_device_train_batch_size=batch_size,
per_device_eval_batch_size=batch_size,
warmup_ratio=0.1,
logging_steps=10,
max_steps=steps_per_epoch * num_epochs, # Duplication of `num_train_epochs` because it throws otherwise.
load_best_model_at_end=True,
metric_for_best_model="accuracy",
hub_strategy="checkpoint",
push_to_hub=True,
num_train_epochs=num_epochs,
)
```
```
from transformers import EarlyStoppingCallback
trainer = Trainer(
model,
args,
train_dataset=train_dataset,
eval_dataset=val_dataset,
tokenizer=image_processor,
compute_metrics=compute_metrics,
data_collator=collate_fn,
callbacks = [EarlyStoppingCallback(early_stopping_patience=10, early_stopping_threshold=0.01)]
)
```
```
import traceback
try:
results = trainer.train()
except RuntimeError as e:
print(traceback.format_exc())
```
And after about 25 epochs there's some exception (never mind what). So I take the last checkpoint that was saved to huggingface (from [here](https://huggingface.co/omermazig/videomae-finetuned-nba-5-class-8-batch-2000-vid-multiclass/tree/main/last-checkpoint), if it matters), put it on my drive, and change the training code to this:
```
import traceback
try:
results = trainer.train(resume_from_checkpoint=pathlib.Path(f"./drive/MyDrive/").joinpath("last-checkpoint"))
except RuntimeError as e:
print(traceback.format_exc())
```
And rerun the whole notebook. Then, it prints (after some time - not immediately):
> There seems to be not a single sample in your epoch_iterator, stopping training at step 5500! This is expected if you're using an IterableDataset and set num_steps (12500) higher than the number of available samples.
And then fails.
I do have an `IterableDataset` with 2000 training videos, and I'm using batch size 8 and want to run for 50 epochs, so I'm pretty sure 12500 is (2000/8)*50, but I still don't understand the message. Why is it problematic that num_steps (12500) > number of samples (2000)?
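
A rough sketch of a possible workaround (untested, purely to illustrate the "reiterate when reaching the end" idea behind trl's `ConstantLengthDataset(infinite=True)`; the `CyclingDataset` wrapper below is hypothetical and not part of `transformers`):

```python
from torch.utils.data import IterableDataset


class CyclingDataset(IterableDataset):
    """Restart the wrapped iterable dataset whenever it runs out of samples."""

    def __init__(self, dataset):
        self.dataset = dataset

    def __iter__(self):
        while True:
            yielded_any = False
            for sample in self.dataset:
                yielded_any = True
                yield sample
            if not yielded_any:  # avoid looping forever on an empty dataset
                return
```

With `train_dataset=CyclingDataset(train_dataset)`, `max_steps` rather than the dataset length would decide when training stops.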
Thank you!
### Who can help?
@muellerzr
@pacman100
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I can't really provide one for my own code, but it is based on [your guide](https://github.com/huggingface/notebooks/blob/main/examples/video_classification.ipynb) and I believe the issue will reproduce there as well.
### Expected behavior
Continuing the training from the same state it stopped before. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26413/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26413/timeline | reopened | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26412 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26412/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26412/comments | https://api.github.com/repos/huggingface/transformers/issues/26412/events | https://github.com/huggingface/transformers/issues/26412 | 1,913,206,874 | I_kwDOCUB6oc5yCTha | 26,412 | How to run Trainer + DeepSpeed + Zero3 + PEFT | {
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | open | false | null | [] | [
"Hello @BramVanroy, DeepSpeed is not compatible with BitsandBytes. You can either use Trainer + PEFT + DeepSpeed ZeRO3 or Trainer + PEFT + BitsandBytes/int4/int8. Above, you are trying to do Trainer + PEFT + BitsandBytes 4-bit + DeepSpeed ZeRO3 which isn't supported. For using PEFT with DeepSpeed ZeRO3, please refer this nice blog https://www.philschmid.de/deepspeed-lora-flash-attention",
"Are there plans to improve this situation? Or do you have a starting point where I can help? Having Zero3 support in this case would be very useful.",
"Bump to keep alive",
"OMG this thread will save my sanity. Been digging at this for days and thought something was wrong on my end. \r\n\r\nI am trying to run lora/qlora with Zero3 through the Axolotl library and I am encountering the exact same issue. \r\n\r\nIt would be extremely useful if we can run 4bit with Zero3. From my understanding, Zero3 is the only way to get model parallelism with DeepSpeed. This would allow us to finetune larger models on multiple GPUs with smaller vram limits. At the moment, it is only possible to run naive model parallelism (only 1 GPU would be active at any given time although the larger models can be spread across multiple GPUs). \r\n\r\nPlease let me know if there is an update on this issue.",
"> OMG this thread will save my sanity. Been digging at this for days and thought something was wrong on my end.\r\n> \r\n> I am trying to run lora/qlora with Zero3 through the Axolotl library and I am encountering the exact same issue.\r\n> \r\n> It would be extremely useful if we can run 4bit with Zero3. From my understanding, Zero3 is the only way to get model parallelism with DeepSpeed. This would allow us to finetune larger models on multiple GPUs with smaller vram limits. At the moment, it is only possible to run naive model parallelism (only 1 GPU would be active at any given time although the larger models can be spread across multiple GPUs).\r\n> \r\n> Please let me know if there is an update on this issue.\r\n\r\nDid you manage to solve this issue?",
"Will the issue be solved after deepspeed release the following information?\r\nhttps://www.deepspeed.ai/tutorials/MoQ-tutorial/"
] | 1,695 | 1,704 | null | COLLABORATOR | null | ### System Info
- `transformers` version: 4.34.0.dev0
- Platform: Linux-5.14.0-284.25.1.el9_2.x86_64-x86_64-with-glibc2.34
- Python version: 3.11.4
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.3
- Accelerate version: 0.24.0.dev0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu117 (True)
### Who can help?
@ArthurZucker and @younesbelkada and @pacman100 and @muellerzr
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
[This script](https://gist.github.com/BramVanroy/f2abb3940111b73ae8923822ef6096dd) is a modification of the official run_clm script. The only additions are the BNB config and PEFT. Yet, I cannot get it to work with a [deepspeed zero3 config](https://github.com/philschmid/deep-learning-pytorch-huggingface/blob/main/training/configs/ds_falcon_180b_z3.json).
Requirements to install:
```
accelerate >= 0.12.0
torch >= 1.3
datasets >= 1.8.0
sentencepiece != 0.1.92
protobuf
evaluate
scikit-learn
trl
peft
bitsandbytes
```
In the past I have had issues with low_cpu_mem_usage, but neither a true nor a false value seems to get this to work:
Command 1:
```sh
deepspeed --include="localhost:0,1" run_clm.py \
--model_name_or_path facebook/opt-125m\
--dataset_name wikitext\
--dataset_config_name wikitext-2-raw-v1\
--per_device_train_batch_size 2\
--per_device_eval_batch_size 2\
--do_train\
--do_eval\
--output_dir /tmp/test-clm\
--deepspeed deepspeed_configs/ds_config_zero3.json\
--low_cpu_mem_usage true
```
==> `ValueError: DeepSpeed Zero-3 is not compatible with `low_cpu_mem_usage=True` or with passing a `device_map`.`
Command 2:
```sh
deepspeed --include="localhost:0,1" run_clm.py \
--model_name_or_path facebook/opt-125m\
--dataset_name wikitext\
--dataset_config_name wikitext-2-raw-v1\
--per_device_train_batch_size 2\
--per_device_eval_batch_size 2\
--do_train\
--do_eval\
--output_dir /tmp/test-clm\
--deepspeed deepspeed_configs/ds_config_zero3.json\
--low_cpu_mem_usage false
```
==> `ValueError: weight is on the meta device, we need a `value` to put in on 0.`
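
For reference, the combination reported as supported is Trainer + PEFT + DeepSpeed ZeRO-3 *without* bitsandbytes quantization. A minimal sketch of that setup (the model name, LoRA settings and dummy dataset are illustrative and not taken from the script above), launched with the same `deepspeed` command:

```python
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "facebook/opt-125m"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)  # no 4-bit loading, no device_map with ZeRO-3

# Wrap the base model with a LoRA adapter; ZeRO-3 then shards the frozen base weights.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"))

# Tiny dummy dataset just to make the sketch runnable.
dataset = Dataset.from_dict({"text": ["hello world"] * 32}).map(
    lambda x: tokenizer(x["text"], truncation=True, max_length=32), remove_columns=["text"]
)

args = TrainingArguments(
    output_dir="/tmp/test-clm",
    per_device_train_batch_size=2,
    max_steps=10,
    deepspeed="deepspeed_configs/ds_config_zero3.json",  # same ZeRO-3 config as above
)
Trainer(
    model=model,
    args=args,
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```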
### Expected behavior
Any option to make this combination of Trainer + DeepSpeed + Zero3 + PEFT work. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26412/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 2
} | https://api.github.com/repos/huggingface/transformers/issues/26412/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26411 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26411/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26411/comments | https://api.github.com/repos/huggingface/transformers/issues/26411/events | https://github.com/huggingface/transformers/pull/26411 | 1,913,191,273 | PR_kwDOCUB6oc5bNtXi | 26,411 | [`WIP`] Multi-adapter saving support for PEFT | {
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26411). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"@younesbelkada What's the status?",
"@BenjaminBossan I think this might be too much of an edge case for the work it requires, I propose to keep that PR as open and if some interest arises from the community I'll work on it",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,695 | 1,700 | 1,700 | CONTRIBUTOR | null | # What does this PR do?
To be potentially merged after https://github.com/huggingface/transformers/pull/26407
This PR adds multi-adapter support for `save_pretrained`, to be consistent with the PEFT API, which saves all adapters when calling `save_pretrained`. Note the default adapter is always saved in the root directory of `save_directory`.
cc @LysandreJik @BenjaminBossan @pacman100
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26411/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26411/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26411",
"html_url": "https://github.com/huggingface/transformers/pull/26411",
"diff_url": "https://github.com/huggingface/transformers/pull/26411.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26411.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26410 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26410/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26410/comments | https://api.github.com/repos/huggingface/transformers/issues/26410/events | https://github.com/huggingface/transformers/issues/26410 | 1,913,153,497 | I_kwDOCUB6oc5yCGfZ | 26,410 | TypeError: LlamaForCausalLM.__init__() got an unexpected keyword argument 'use_flash_attention_2' | {
"login": "timelfrink",
"id": 5418946,
"node_id": "MDQ6VXNlcjU0MTg5NDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/5418946?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/timelfrink",
"html_url": "https://github.com/timelfrink",
"followers_url": "https://api.github.com/users/timelfrink/followers",
"following_url": "https://api.github.com/users/timelfrink/following{/other_user}",
"gists_url": "https://api.github.com/users/timelfrink/gists{/gist_id}",
"starred_url": "https://api.github.com/users/timelfrink/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/timelfrink/subscriptions",
"organizations_url": "https://api.github.com/users/timelfrink/orgs",
"repos_url": "https://api.github.com/users/timelfrink/repos",
"events_url": "https://api.github.com/users/timelfrink/events{/privacy}",
"received_events_url": "https://api.github.com/users/timelfrink/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @timelfrink \r\nThanks for your interest in this feature, please use the main build of transformers to use that feature: `pip install -U git+https://github.com/huggingface/transformers.git`",
"@younesbelkada thanks for the quick hint and yes that worked!"
] | 1,695 | 1,695 | 1,695 | NONE | null | ### System Info
- `transformers` version: 4.33.2
- Platform: Linux-4.14.318-241.531.amzn2.x86_64-x86_64-with-glibc2.31
- Python version: 3.10.6
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.3
- Accelerate version: 0.23.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu117 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@younesbelkada
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I fine-tuned NousResearch/Llama-2-7b-hf, merged the LoRA adapter with the base model, and saved the merged model.
Now I would like to use `flash_attention_v2` as described [here](https://huggingface.co/docs/transformers/main/en/perf_infer_gpu_one#flash-attention-2) to improve inference speed. I wanted to load the model, apply `use_flash_attention_2`, and save it again. But I got this error.
```python
model = AutoModelForCausalLM.from_pretrained(
    PATH,
    local_files_only=True,
    torch_dtype=torch.bfloat16,
    use_flash_attention_2=True,
)
```
I get this error:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[5], line 3
1 PATH = 'old'
----> 3 model = AutoModelForCausalLM.from_pretrained(
4 PATH,
5 local_files_only=True,
6 torch_dtype=torch.bfloat16,
7 use_flash_attention_2=True,
8 )
File /opt/conda/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:563, in _BaseAutoModelClass.from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
561 elif type(config) in cls._model_mapping.keys():
562 model_class = _get_model_class(config, cls._model_mapping)
--> 563 return model_class.from_pretrained(
564 pretrained_model_name_or_path, *model_args, config=config, **hub_kwargs, **kwargs
565 )
566 raise ValueError(
567 f"Unrecognized configuration class {config.__class__} for this kind of AutoModel: {cls.__name__}.\n"
568 f"Model type should be one of {', '.join(c.__name__ for c in cls._model_mapping.keys())}."
569 )
File /opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py:2966, in PreTrainedModel.from_pretrained(cls, pretrained_model_name_or_path, config, cache_dir, ignore_mismatched_sizes, force_download, local_files_only, token, revision, use_safetensors, *model_args, **kwargs)
2963 init_contexts.append(init_empty_weights())
2965 with ContextManagers(init_contexts):
-> 2966 model = cls(config, *model_args, **model_kwargs)
2968 # Check first if we are `from_pt`
2969 if use_keep_in_fp32_modules:
TypeError: LlamaForCausalLM.__init__() got an unexpected keyword argument 'use_flash_attention_2'
```
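
For context, `use_flash_attention_2` is only understood by newer builds of `transformers` than the 4.33.2 listed above (the fix suggested in the comments is installing from `main`). On a supporting build, the call itself should look as in the docs; a minimal sketch, assuming a CUDA GPU with the `flash-attn` package installed (the model id is illustrative):

```python
import torch
from transformers import AutoModelForCausalLM

# On transformers 4.33.2 this keyword is simply forwarded to the model __init__ and fails;
# newer builds consume it inside from_pretrained instead.
model = AutoModelForCausalLM.from_pretrained(
    "NousResearch/Llama-2-7b-hf",
    torch_dtype=torch.bfloat16,
    use_flash_attention_2=True,
)
```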
### Expected behavior
Able to load my model with `flash_attention_v2` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26410/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26410/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26409 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26409/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26409/comments | https://api.github.com/repos/huggingface/transformers/issues/26409/events | https://github.com/huggingface/transformers/issues/26409 | 1,912,963,755 | I_kwDOCUB6oc5yBYKr | 26,409 | Skip part of the dataset when training with trainer | {
"login": "young-chao",
"id": 34190033,
"node_id": "MDQ6VXNlcjM0MTkwMDMz",
"avatar_url": "https://avatars.githubusercontent.com/u/34190033?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/young-chao",
"html_url": "https://github.com/young-chao",
"followers_url": "https://api.github.com/users/young-chao/followers",
"following_url": "https://api.github.com/users/young-chao/following{/other_user}",
"gists_url": "https://api.github.com/users/young-chao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/young-chao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/young-chao/subscriptions",
"organizations_url": "https://api.github.com/users/young-chao/orgs",
"repos_url": "https://api.github.com/users/young-chao/repos",
"events_url": "https://api.github.com/users/young-chao/events{/privacy}",
"received_events_url": "https://api.github.com/users/young-chao/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"\r\n",
"Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discuss.huggingface.co) instead?\r\n\r\nThanks! ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,695 | 1,699 | 1,699 | NONE | null | When I used the trainer to train, I encountered some bad data that caused the loss to increase abnormally. I wanted to temporarily skip this part of the data. I directly skipped "self.optimizer.step()" in some steps, but it did not take effect; the loss trajectory is still consistent with before. I want to know how to skip some steps appropriately. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26409/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26409/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26408 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26408/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26408/comments | https://api.github.com/repos/huggingface/transformers/issues/26408/events | https://github.com/huggingface/transformers/issues/26408 | 1,912,956,359 | I_kwDOCUB6oc5yBWXH | 26,408 | Use a custom SummaryWriter to record training parameters | {
"login": "young-chao",
"id": 34190033,
"node_id": "MDQ6VXNlcjM0MTkwMDMz",
"avatar_url": "https://avatars.githubusercontent.com/u/34190033?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/young-chao",
"html_url": "https://github.com/young-chao",
"followers_url": "https://api.github.com/users/young-chao/followers",
"following_url": "https://api.github.com/users/young-chao/following{/other_user}",
"gists_url": "https://api.github.com/users/young-chao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/young-chao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/young-chao/subscriptions",
"organizations_url": "https://api.github.com/users/young-chao/orgs",
"repos_url": "https://api.github.com/users/young-chao/repos",
"events_url": "https://api.github.com/users/young-chao/events{/privacy}",
"received_events_url": "https://api.github.com/users/young-chao/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello @young-chao, the `Trainer`'s initialization method accepts a `callbacks` argument which is a list of `TrainerCallback`s.\r\n\r\nYou can pass a `TensorBoardCallback` to that list that you can initialize with your own `tb_writer`.\r\n\r\nSee its definition here: https://github.com/huggingface/transformers/blob/0ac3875011d32dc85e0e83970507e3afe8f0febb/src/transformers/integrations/integration_utils.py#L579\r\n\r\n",
"> Hello @young-chao, the `Trainer`'s initialization method accepts a `callbacks` argument which is a list of `TrainerCallback`s.\r\n> \r\n> You can pass a `TensorBoardCallback` to that list that you can initialize with your own `tb_writer`.\r\n> \r\n> See its definition here:\r\n> \r\n> https://github.com/huggingface/transformers/blob/0ac3875011d32dc85e0e83970507e3afe8f0febb/src/transformers/integrations/integration_utils.py#L579\r\n\r\nI will try it, thanks for your reply"
] | 1,695 | 1,696 | 1,696 | NONE | null | When I use the Trainer, how should I use a custom SummaryWriter to record training parameters such as grad_norm, loss_scale, etc.? I don't see a corresponding option among the Trainer's parameters. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26408/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26408/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26407 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26407/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26407/comments | https://api.github.com/repos/huggingface/transformers/issues/26407/events | https://github.com/huggingface/transformers/pull/26407 | 1,912,936,840 | PR_kwDOCUB6oc5bM1qm | 26,407 | [`PEFT`] Fix PEFT multi adapters support | {
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for all the reviews, I think I have addressed most of them now, I propose to handle the multi-adapter saving in a follow up PR #26411 to be consistent with PEFT API",
"Thanks ! For that we could maybe sync the next PEFT release before / at the same time than the transformers release",
"As discussed offline I would propose the following:\r\n\r\n- 1.) We either deprecate or fully remove `active_adapter(...)` and replace it with `active_adapters(...)` which **always** returns a list. `active_adapter(...)` function has not been in a release yet (right @younesbelkada ? ), so I would deprecate it for two minor versions and then remove it. It's cleaner to always return a list here.\r\n- 2.) Instead of \"magically\" only saving the first adapter when having multiple active adapters, let's teach the user the difference between multi and single active adapters here by raising a ValueError when trying to save multi adapters. It's really not yet necessary to be able to save multiple adapters at once",
"I am ok to deprecate the `active_adapter` method ! The plan sounds great \r\n\r\n> active_adapter(...) function has not been in a release yet \r\n\r\nI meant that `active_adapters` was not released yet :D In any case I think it is ok here \r\n\r\nAgreed also on your second point! let's keep it simple\r\n\r\n",
"Agree with Patrick!\r\n\r\nIf we choose to go with\r\n> 2.) Instead of \"magically\" only saving the first adapter when having multiple active adapters, let's teach the user the difference between multi and single active adapters here by raising a ValueError when trying to save multi adapters. It's really not yet necessary to be able to save multiple adapters at once\r\n\r\nlet's maybe put this PR https://github.com/huggingface/transformers/pull/26411 to the side for now and focus on merging this PR with single-adapter saving first? WDYT @younesbelkada ?",
"Sounds great @LysandreJik !",
"I should have addressed the final comments @patrickvonplaten , @LysandreJik , thanks for your time reviewing the PR !",
"Failing test seems unrelated - merging !"
] | 1,695 | 1,695 | 1,695 | CONTRIBUTOR | null | # What does this PR do?
With https://github.com/huggingface/peft/pull/905 merged, PEFT now supports multi-adapter inference (combining multiple adapters at inference time).
That PR introduced some changes that are no longer compatible with the PEFT integration in transformers; for example, `active_adapter` is now a property method and returns a list. This PR adds the corresponding fixes to make everything compatible with the newest PEFT features, while preserving backward compatibility.
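
As a quick illustration of the user-facing behaviour this keeps working (the model and adapter ids below are the ones used in the PEFT integration docs and are only illustrative):

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")
model.load_adapter("ybelkada/opt-350m-lora", adapter_name="lora_1")

# With recent PEFT, the underlying `active_adapter` is a property returning a list,
# so the integration exposes `active_adapters()` returning a list of adapter names.
print(model.active_adapters())  # e.g. ["lora_1"]
```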
cc @BenjaminBossan @pacman100 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26407/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26407/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26407",
"html_url": "https://github.com/huggingface/transformers/pull/26407",
"diff_url": "https://github.com/huggingface/transformers/pull/26407.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26407.patch",
"merged_at": 1695825931000
} |
https://api.github.com/repos/huggingface/transformers/issues/26406 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26406/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26406/comments | https://api.github.com/repos/huggingface/transformers/issues/26406/events | https://github.com/huggingface/transformers/pull/26406 | 1,912,928,556 | PR_kwDOCUB6oc5bMz4w | 26,406 | change mention of decoder_input_ids to input_ids and same with decode_inputs_embeds | {
"login": "tmabraham",
"id": 37097934,
"node_id": "MDQ6VXNlcjM3MDk3OTM0",
"avatar_url": "https://avatars.githubusercontent.com/u/37097934?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tmabraham",
"html_url": "https://github.com/tmabraham",
"followers_url": "https://api.github.com/users/tmabraham/followers",
"following_url": "https://api.github.com/users/tmabraham/following{/other_user}",
"gists_url": "https://api.github.com/users/tmabraham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tmabraham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tmabraham/subscriptions",
"organizations_url": "https://api.github.com/users/tmabraham/orgs",
"repos_url": "https://api.github.com/users/tmabraham/repos",
"events_url": "https://api.github.com/users/tmabraham/events{/privacy}",
"received_events_url": "https://api.github.com/users/tmabraham/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Could you just run the code quality tool to ensure that the code quality passes? You can install them with the following, from the root of your clone:\r\n```\r\npip install -e \".[quality]\"\r\n```\r\nAnd then run them with:\r\n```\r\nmake fixup\r\n``` ",
"Hmm looks like something is wrong? Not sure if I am doing something wrong...\r\n```\r\n(base) tmabraham@DESKTOP-TPBRMQJ:/mnt/c/Users/tmabraham/Documents/GitHub/transformers$ make fixup\r\nmake: Warning: File 'Makefile' has modification time 70901 s in the future\r\nNo library .py files were modified\r\npython utils/custom_init_isort.py\r\npython utils/style_doc.py src/transformers docs/source --max_len 119\r\nrunning deps_table_update\r\nupdating src/transformers/dependency_versions_table.py\r\npython utils/class_mapping_update.py\r\npython utils/check_copies.py\r\nTraceback (most recent call last):\r\n File \"/mnt/c/Users/tmabraham/Documents/GitHub/transformers/utils/check_copies.py\", line 353, in <module>\r\n check_copies(args.fix_and_overwrite)\r\n File \"/mnt/c/Users/tmabraham/Documents/GitHub/transformers/utils/check_copies.py\", line 186, in check_copies\r\n new_diffs = is_copy_consistent(filename, overwrite)\r\n File \"/mnt/c/Users/tmabraham/Documents/GitHub/transformers/utils/check_copies.py\", line 164, in is_copy_consistent\r\n theoretical_code = blackify(lines[start_index - 1] + theoretical_code)\r\n File \"/mnt/c/Users/tmabraham/Documents/GitHub/transformers/utils/check_copies.py\", line 104, in blackify\r\n result = black.format_str(code, mode=black.FileMode([black.TargetVersion.PY35], line_length=119))\r\n File \"<string>\", line 3, in __init__\r\nTypeError: set object expected; got list\r\nmake: *** [Makefile:38: extra_quality_checks] Error 1\r\n```",
"Hmm indeed this seems like a weird error! I pushed a fix directly to your branch, I'll merge this PR as soon as it's green.\r\n\r\nThanks for your PR!",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26406). All of your documentation changes will be reflected on that endpoint."
] | 1,695 | 1,695 | 1,695 | CONTRIBUTOR | null | The arguments to Llama2 are `input_ids` and `inputs_embeds`, yet in several places they are referred to as `decoder_input_ids` and `decoder_inputs_embeds`, including in some error messages. This PR fixes that. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26406/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26406/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26406",
"html_url": "https://github.com/huggingface/transformers/pull/26406",
"diff_url": "https://github.com/huggingface/transformers/pull/26406.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26406.patch",
"merged_at": 1695888948000
} |
https://api.github.com/repos/huggingface/transformers/issues/26405 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26405/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26405/comments | https://api.github.com/repos/huggingface/transformers/issues/26405/events | https://github.com/huggingface/transformers/pull/26405 | 1,912,910,869 | PR_kwDOCUB6oc5bMwD0 | 26,405 | Update llama error message (decoder_input_ids --> input_ids) | {
"login": "tmabraham",
"id": 37097934,
"node_id": "MDQ6VXNlcjM3MDk3OTM0",
"avatar_url": "https://avatars.githubusercontent.com/u/37097934?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tmabraham",
"html_url": "https://github.com/tmabraham",
"followers_url": "https://api.github.com/users/tmabraham/followers",
"following_url": "https://api.github.com/users/tmabraham/following{/other_user}",
"gists_url": "https://api.github.com/users/tmabraham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tmabraham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tmabraham/subscriptions",
"organizations_url": "https://api.github.com/users/tmabraham/orgs",
"repos_url": "https://api.github.com/users/tmabraham/repos",
"events_url": "https://api.github.com/users/tmabraham/events{/privacy}",
"received_events_url": "https://api.github.com/users/tmabraham/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"oops somehow old commits got pulled in, closing and will fix"
] | 1,695 | 1,695 | 1,695 | CONTRIBUTOR | null | Change:
```python
if input_ids is not None and inputs_embeds is not None:
raise ValueError("You cannot specify both decoder_input_ids and decoder_inputs_embeds at the same time")
```
to
```python
if input_ids is not None and inputs_embeds is not None:
raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
```
Similarly change:
```python
else:
raise ValueError("You have to specify either python_input_ids or python_inputs_embeds")
```
to
```python
else:
raise ValueError("You have to specify either input_ids or inputs_embeds")
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26405/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26405/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26405",
"html_url": "https://github.com/huggingface/transformers/pull/26405",
"diff_url": "https://github.com/huggingface/transformers/pull/26405.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26405.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26403 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26403/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26403/comments | https://api.github.com/repos/huggingface/transformers/issues/26403/events | https://github.com/huggingface/transformers/issues/26403 | 1,912,844,162 | I_kwDOCUB6oc5yA6-C | 26,403 | Switching Mask2Former Backbones | {
"login": "alen-smajic",
"id": 63591221,
"node_id": "MDQ6VXNlcjYzNTkxMjIx",
"avatar_url": "https://avatars.githubusercontent.com/u/63591221?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alen-smajic",
"html_url": "https://github.com/alen-smajic",
"followers_url": "https://api.github.com/users/alen-smajic/followers",
"following_url": "https://api.github.com/users/alen-smajic/following{/other_user}",
"gists_url": "https://api.github.com/users/alen-smajic/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alen-smajic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alen-smajic/subscriptions",
"organizations_url": "https://api.github.com/users/alen-smajic/orgs",
"repos_url": "https://api.github.com/users/alen-smajic/repos",
"events_url": "https://api.github.com/users/alen-smajic/events{/privacy}",
"received_events_url": "https://api.github.com/users/alen-smajic/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @rafaelpadilla would love it if you could take a look!",
"Hi @alen-smajic ,\r\n\r\nThank you for reporting this issue. :) \r\n\r\nThe code you showed is not working because of `out_indices=(-2, -1)`. Try to replace it by:\r\n```python\r\nbackbone_config = FocalNetConfig(out_indices=(1,2,3,4))\r\n```\r\n\r\nFor the backbone, dinov2 model is not supported. These are the supported ones: `BitConfig`, `ConvNextConfig`, `ConvNextV2Config`, `DinatConfig`, `FocalNetConfig`, `MaskFormerSwinConfig`, `NatConfig`, `ResNetConfig`, `SwinConfig`, `TimmBackboneConfig`.\r\n\r\n",
"Hi @rafaelpadilla ,\r\n\r\nthanks for the quick help. You are totally right, the out_indices attribute was not correctly set.\r\n\r\nI have in fact managed to attach a dinov2 backbone on the Mask2Former model and it seems to work :) \r\n```python\r\nimport requests\r\n\r\nfrom PIL import Image\r\nimport torch\r\nfrom transformers import (\r\n AutoImageProcessor,\r\n Dinov2Config,\r\n Dinov2Model,\r\n Mask2FormerConfig,\r\n Mask2FormerForUniversalSegmentation\r\n)\r\n\r\n# Store Dinov2 weights locally \r\ndinov2_backbone_model = Dinov2Model.from_pretrained(\"facebook/dinov2-base\", out_indices=[6, 8, 10, 12])\r\ntorch.save(dinov2_backbone_model.state_dict(), \"dinov2-base.pth\")\r\n\r\n# Create Mask2Former config with Dinov2 backbone\r\nimage_processor = AutoImageProcessor.from_pretrained(\"facebook/mask2former-swin-tiny-cityscapes-semantic\")\r\nmodel_config = Mask2FormerConfig.from_pretrained(\"facebook/mask2former-swin-tiny-cityscapes-semantic\")\r\nmodel_config.backbone_config = Dinov2Config.from_pretrained(\"facebook/dinov2-base\", out_indices=(6, 8, 10, 12))\r\n\r\n# Instantiate Mask2Former model with Dinov2 backbone (random weights)\r\nmodel = Mask2FormerForUniversalSegmentation(model_config)\r\n\r\n# Load Dinov2 weights into Mask2Former backbone\r\ndinov2_backbone = model.model.pixel_level_module.encoder\r\ndinov2_backbone.load_state_dict(torch.load(\"dinov2-base.pth\"))\r\n\r\nimage_processor = AutoImageProcessor.from_pretrained(\"facebook/mask2former-swin-tiny-cityscapes-semantic\")\r\nurl = (\r\n \"https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000001.jpg\"\r\n)\r\nimage = Image.open(requests.get(url, stream=True).raw)\r\ninputs = image_processor(image, return_tensors=\"pt\")\r\n\r\nwith torch.no_grad():\r\n outputs = model(**inputs)\r\n\r\nresults = image_processor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])\r\n```",
"Hi @alen-smajic ,\r\n\r\nGlad to see the problem was solved :) \r\n\r\nI will close this issue for now. Feel free to re-open it in case you encounter any related concerns in the future.\r\n",
"Hi @alen-smajic, thanks for the snipped, I managed to use Dinov2 as a backbone for Mask2Former.\r\nDid you try to finetune it on your own data? I am experiencing very low performance. Could the reason be that the authors of Dinov2 used ViT-Adapter? Every additional suggestion would be very appreciated :) "
] | 1,695 | 1,697 | 1,695 | NONE | null | ### System Info
- `transformers` version: 4.33.1
- Platform: Linux-4.18.0-477.15.1.el8_8.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.17
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.2
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@amyeroberts
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I would like to combine the DINOv2 backbone model with the Mask2Former model for semantic segmentation. Even though the official documentation states that Mask2Former only works with a Swin Transformer backbone, I stumbled upon this issue #24244.
In PR #24532, multi-backbone support was implemented by @amyeroberts, and some exemplary code was provided. So far the model instantiation works; however, when I try to run inference with the model I get an error:
Script to reproduce:
```
from PIL import Image
import requests
from transformers import (
Mask2FormerConfig,
Mask2FormerModel,
Mask2FormerImageProcessor,
FocalNetConfig,
Dinov2Config
)
backbone_config = FocalNetConfig(out_indices=(-2, -1)) # This is the official example from PR #24532
#backbone_config = Dinov2Config(out_indices=(-2, -1)) # This doesn't work either
mask2former_config = Mask2FormerConfig(backbone_config=backbone_config)
model = Mask2FormerModel(mask2former_config)
processor = Mask2FormerImageProcessor(size=(224, 224))
url = (
"https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000001.jpg"
)
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(image, return_tensors="pt")
output = model(**inputs)
```
The error I get:
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
/home/asl2hi/DST/fm_semseg/debug_model.ipynb Cell 5 line 2
[19](vscode-notebook-cell://ssh-remote%2B7b22686f73744e616d65223a2248696c6465736865696d2d436c7573746572227d/home/asl2hi/DST/fm_semseg/debug_model.ipynb#X12sdnNjb2RlLXJlbW90ZQ%3D%3D?line=18) image = Image.open(requests.get(url, stream=True).raw)
[20](vscode-notebook-cell://ssh-remote%2B7b22686f73744e616d65223a2248696c6465736865696d2d436c7573746572227d/home/asl2hi/DST/fm_semseg/debug_model.ipynb#X12sdnNjb2RlLXJlbW90ZQ%3D%3D?line=19) inputs = processor(image, return_tensors="pt")
---> [22](vscode-notebook-cell://ssh-remote%2B7b22686f73744e616d65223a2248696c6465736865696d2d436c7573746572227d/home/asl2hi/DST/fm_semseg/debug_model.ipynb#X12sdnNjb2RlLXJlbW90ZQ%3D%3D?line=21) output = model(**inputs)
>
File [~/DST/venv/lib/python3.8/site-packages/torch/nn/modules/module.py:1501](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a2248696c6465736865696d2d436c7573746572227d.vscode-resource.vscode-cdn.net/home/asl2hi/DST/fm_semseg/~/DST/venv/lib/python3.8/site-packages/torch/nn/modules/module.py:1501), in Module._call_impl(self, *args, **kwargs)
1496 # If we don't have any hooks, we want to skip the rest of the logic in
1497 # this function, and just call forward.
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
File [~/DST/venv/lib/python3.8/site-packages/transformers/models/mask2former/modeling_mask2former.py:2271](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a2248696c6465736865696d2d436c7573746572227d.vscode-resource.vscode-cdn.net/home/asl2hi/DST/fm_semseg/~/DST/venv/lib/python3.8/site-packages/transformers/models/mask2former/modeling_mask2former.py:2271), in Mask2FormerModel.forward(self, pixel_values, pixel_mask, output_hidden_states, output_attentions, return_dict)
2268 if pixel_mask is None:
2269 pixel_mask = torch.ones((batch_size, height, width), device=pixel_values.device)
-> 2271 pixel_level_module_output = self.pixel_level_module(
2272 pixel_values=pixel_values, output_hidden_states=output_hidden_states
2273 )
2275 transformer_module_output = self.transformer_module(
2276 multi_scale_features=pixel_level_module_output.decoder_hidden_states,
2277 mask_features=pixel_level_module_output.decoder_last_hidden_state,
2278 output_hidden_states=True,
2279 output_attentions=output_attentions,
2280 )
2282 encoder_hidden_states = None
File [~/DST/venv/lib/python3.8/site-packages/torch/nn/modules/module.py:1501](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a2248696c6465736865696d2d436c7573746572227d.vscode-resource.vscode-cdn.net/home/asl2hi/DST/fm_semseg/~/DST/venv/lib/python3.8/site-packages/torch/nn/modules/module.py:1501), in Module._call_impl(self, *args, **kwargs)
1496 # If we don't have any hooks, we want to skip the rest of the logic in
1497 # this function, and just call forward.
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
File [~/DST/venv/lib/python3.8/site-packages/transformers/models/mask2former/modeling_mask2former.py:1396](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a2248696c6465736865696d2d436c7573746572227d.vscode-resource.vscode-cdn.net/home/asl2hi/DST/fm_semseg/~/DST/venv/lib/python3.8/site-packages/transformers/models/mask2former/modeling_mask2former.py:1396), in Mask2FormerPixelLevelModule.forward(self, pixel_values, output_hidden_states)
1394 def forward(self, pixel_values: Tensor, output_hidden_states: bool = False) -> Mask2FormerPixelLevelModuleOutput:
1395 backbone_features = self.encoder(pixel_values).feature_maps
-> 1396 decoder_output = self.decoder(backbone_features, output_hidden_states=output_hidden_states)
1398 return Mask2FormerPixelLevelModuleOutput(
1399 encoder_last_hidden_state=backbone_features[-1],
1400 encoder_hidden_states=tuple(backbone_features) if output_hidden_states else None,
1401 decoder_last_hidden_state=decoder_output.mask_features,
1402 decoder_hidden_states=decoder_output.multi_scale_features,
1403 )
File [~/DST/venv/lib/python3.8/site-packages/torch/nn/modules/module.py:1501](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a2248696c6465736865696d2d436c7573746572227d.vscode-resource.vscode-cdn.net/home/asl2hi/DST/fm_semseg/~/DST/venv/lib/python3.8/site-packages/torch/nn/modules/module.py:1501), in Module._call_impl(self, *args, **kwargs)
1496 # If we don't have any hooks, we want to skip the rest of the logic in
1497 # this function, and just call forward.
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
File [~/DST/venv/lib/python3.8/site-packages/transformers/models/mask2former/modeling_mask2former.py:1320](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a2248696c6465736865696d2d436c7573746572227d.vscode-resource.vscode-cdn.net/home/asl2hi/DST/fm_semseg/~/DST/venv/lib/python3.8/site-packages/transformers/models/mask2former/modeling_mask2former.py:1320), in Mask2FormerPixelDecoder.forward(self, features, encoder_outputs, output_attentions, output_hidden_states, return_dict)
1318 # Send input_embeds_flat + masks_flat + level_pos_embed_flat (backbone + proj layer output) through encoder
1319 if encoder_outputs is None:
-> 1320 encoder_outputs = self.encoder(
1321 inputs_embeds=input_embeds_flat,
1322 attention_mask=masks_flat,
1323 position_embeddings=level_pos_embed_flat,
1324 spatial_shapes=spatial_shapes,
1325 level_start_index=level_start_index,
1326 valid_ratios=valid_ratios,
1327 output_attentions=output_attentions,
1328 output_hidden_states=output_hidden_states,
1329 return_dict=return_dict,
1330 )
1332 last_hidden_state = encoder_outputs.last_hidden_state
1333 batch_size = last_hidden_state.shape[0]
File [~/DST/venv/lib/python3.8/site-packages/torch/nn/modules/module.py:1501](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a2248696c6465736865696d2d436c7573746572227d.vscode-resource.vscode-cdn.net/home/asl2hi/DST/fm_semseg/~/DST/venv/lib/python3.8/site-packages/torch/nn/modules/module.py:1501), in Module._call_impl(self, *args, **kwargs)
1496 # If we don't have any hooks, we want to skip the rest of the logic in
1497 # this function, and just call forward.
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
File ~/DST/venv/lib/python3.8/site-packages/transformers/models/mask2former/modeling_mask2former.py:1175, in Mask2FormerPixelDecoderEncoderOnly.forward(self, inputs_embeds, attention_mask, position_embeddings, spatial_shapes, level_start_index, valid_ratios, output_attentions, output_hidden_states, return_dict)
1172 if output_hidden_states:
1173 all_hidden_states += (hidden_states.transpose(1, 0),)
-> 1175 layer_outputs = encoder_layer(
1176 hidden_states,
1177 attention_mask,
1178 position_embeddings=position_embeddings,
1179 reference_points=reference_points,
1180 spatial_shapes=spatial_shapes,
1181 level_start_index=level_start_index,
1182 output_attentions=output_attentions,
1183 )
1185 hidden_states = layer_outputs[0]
1187 if output_attentions:
File ~/DST/venv/lib/python3.8/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
1496 # If we don't have any hooks, we want to skip the rest of the logic in
1497 # this function, and just call forward.
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
File ~/DST/venv/lib/python3.8/site-packages/transformers/models/mask2former/modeling_mask2former.py:1030, in Mask2FormerPixelDecoderEncoderLayer.forward(self, hidden_states, attention_mask, position_embeddings, reference_points, spatial_shapes, level_start_index, output_attentions)
1027 residual = hidden_states
1029 # Apply Multi-scale Deformable Attention Module on the multi-scale feature maps.
-> 1030 hidden_states, attn_weights = self.self_attn(
1031 hidden_states=hidden_states,
1032 attention_mask=attention_mask,
1033 encoder_hidden_states=hidden_states,
1034 encoder_attention_mask=attention_mask,
1035 position_embeddings=position_embeddings,
1036 reference_points=reference_points,
1037 spatial_shapes=spatial_shapes,
1038 level_start_index=level_start_index,
1039 output_attentions=output_attentions,
1040 )
1042 hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training)
1043 hidden_states = residual + hidden_states
File ~/DST/venv/lib/python3.8/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
1496 # If we don't have any hooks, we want to skip the rest of the logic in
1497 # this function, and just call forward.
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
File ~/DST/venv/lib/python3.8/site-packages/transformers/models/mask2former/modeling_mask2former.py:964, in Mask2FormerPixelDecoderEncoderMultiscaleDeformableAttention.forward(self, hidden_states, attention_mask, encoder_hidden_states, encoder_attention_mask, position_embeddings, reference_points, spatial_shapes, level_start_index, output_attentions)
960 if reference_points.shape[-1] == 2:
961 offset_normalizer = torch.stack([spatial_shapes[..., 1], spatial_shapes[..., 0]], -1)
962 sampling_locations = (
963 reference_points[:, :, None, :, None, :]
--> 964 + sampling_offsets / offset_normalizer[None, None, None, :, None, :]
965 )
966 elif reference_points.shape[-1] == 4:
967 sampling_locations = (
968 reference_points[:, :, None, :, None, :2]
969 + sampling_offsets / self.n_points * reference_points[:, :, None, :, None, 2:] * 0.5
970 )
RuntimeError: The size of tensor a (3) must match the size of tensor b (2) at non-singleton dimension 3
```
### Expected behavior
If working properly, the code above should output the model predictions (of class Mask2FormerModelOutput) produced by running the input image through the new backbone and then forwarding the feature maps to the Mask2Former model. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26403/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26403/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26402 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26402/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26402/comments | https://api.github.com/repos/huggingface/transformers/issues/26402/events | https://github.com/huggingface/transformers/pull/26402 | 1,912,824,738 | PR_kwDOCUB6oc5bMc8h | 26,402 | Change past_key_values shape of gpt_bigcode | {
"login": "jiqing-feng",
"id": 107918818,
"node_id": "U_kgDOBm614g",
"avatar_url": "https://avatars.githubusercontent.com/u/107918818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiqing-feng",
"html_url": "https://github.com/jiqing-feng",
"followers_url": "https://api.github.com/users/jiqing-feng/followers",
"following_url": "https://api.github.com/users/jiqing-feng/following{/other_user}",
"gists_url": "https://api.github.com/users/jiqing-feng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiqing-feng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiqing-feng/subscriptions",
"organizations_url": "https://api.github.com/users/jiqing-feng/orgs",
"repos_url": "https://api.github.com/users/jiqing-feng/repos",
"events_url": "https://api.github.com/users/jiqing-feng/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiqing-feng/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"WDYT @younesbelkada @ArthurZucker ?",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26402). All of your documentation changes will be reflected on that endpoint.",
"Sure. I see [bloom](https://github.com/huggingface/transformers/blob/main/src/transformers/models/bloom/modeling_bloom.py#L293-L299) and gpt-neox (see [_attn ](https://github.com/huggingface/transformers/blob/main/src/transformers/models/bloom/modeling_bloom.py#L293-L299)and [forward](https://github.com/huggingface/transformers/blob/main/src/transformers/models/gpt_neox/modeling_gpt_neox.py#L164-L184)) also have this kind of operations. I think these operations only have slightly impact on performance, but it will keep input and output format insistent with others. Besides, I also remove some operations in `generation/util.py` since we use the standard format of `past_key_values`. Would like to hear opinions from BigCode team.",
"It's definitely there for performance reasons. The current format avoids a few unnecessary data copies and makes the kv cache concatenation faster. Not sure if it matters much though because the inference speed is almost always bottlenecked by either cpu or kv cache concatenation (if anything, the proposed changes would make both a bit worse).\r\n\r\nIn any case, I think the proposed changes should be treated as backward incompatible? So the only real option would be to add the kv cache format as an option?",
"> Thanks a lot for your review @jlamypoirier That makes sense, this might be backward in-compatible indeed @jiqing-feng regarding performance issues, would you be able to run some quick comparisons between transformers main branch and yours to determine how much is the overhead? But in any case it seems @jlamypoirier is right, the past key values shape gets changed so this might be backward incompatible, can you confirm that @jiqing-feng ?\r\n\r\nYes, Sure.\r\n\r\nI tested on A100 and did not find any big difference. The script is as follows:\r\n\r\n```python\r\nimport argparse\r\nimport time\r\nimport torch\r\nfrom transformers import pipeline, AutoTokenizer, AutoModelForCausalLM\r\n\r\n\r\nparser = argparse.ArgumentParser()\r\nparser.add_argument(\"--model_id\", default=\"bigcode/gpt_bigcode-santacoder\", type=str, required=False)\r\nparser.add_argument(\"--seq_len\", default=32, type=int, required=False)\r\nargs = parser.parse_args()\r\nmodel_id = args.model_id\r\nseq_len = args.seq_len\r\n\r\nname = model_id\r\nname = \"/home/bitsandbytes/hf_home/hub/models--bigcode--gpt_bigcode-santacoder\"\r\nmodel = AutoModelForCausalLM.from_pretrained(\r\n name,\r\n torch_dtype=torch.float16,\r\n use_cache=True,\r\n low_cpu_mem_usage=True,\r\n device_map=\"auto\",\r\n)\r\n\r\nmodel = model.eval()\r\ntokenizer = AutoTokenizer.from_pretrained(name)\r\ninput_sentence = \"Example input\"\r\n\r\ngeneration_kwargs = dict(max_length=seq_len, min_length=seq_len, use_cache=True)\r\ngenerator = pipeline(\"text-generation\", model=model, tokenizer=tokenizer, **generation_kwargs)\r\nprint(f\"input sentence is {input_sentence}\")\r\nprint(f\"input tokens num is {len(tokenizer(input_sentence)['input_ids'])}\")\r\n\r\nfor i in range(10):\r\n pre = time.time()\r\n out = generator(input_sentence, **generation_kwargs)\r\n print(f\"Generate time costs {time.time()-pre} seconds\")\r\n print(f\"output = {out}\")\r\n print(f\"output token nums = {len(tokenizer(out[0]['generated_text'])['input_ids'])}\")\r\n```\r\nYou can just run `python test.py --seq_len 128` to double-check.\r\n\r\nThe performance results are shown in the following figure:\r\n<img width=\"284\" alt=\"image\" src=\"https://github.com/huggingface/transformers/assets/107918818/e9e69ca4-9bd0-46f8-8f72-ec925a2140ae\">\r\n\r\n\r\nBTW, I am sorry that I can't understand backward incompatible. Does this mean backward broadcast in training?",
"Hi @younesbelkada @jlamypoirier . \r\n\r\nWould you please have a look at the results? Thx!\r\n\r\nBTW, there are no backward incompatible issues since the pkv shape happened inside the codes. I also tested it before and after changing the pkv shape and there are no differences of outputs between them.",
"Hi @jiqing-feng \r\nSorry for the delay replying on this matter\r\n\r\nWhat I meant by a potential backward incompatibility is that the new past key values in this PR will have a different shape than the current model on main branch. If you can confirm that this is not the case then the PR is BC\r\n\r\nI see this adds a ~5% overhead for seq_len = 512 and seems to grow linearly with the sequence length, perhaps with a longer sequence length & batch size the overhead will grow, therefore this PR introduces a regression which can be potentially important for long sequence length. However, given that this fixes some issues with external libs, we might consider merging this PR. I personally would prefer to avoid introducing additional overheads if possible, is there a way we could introduce a patch in optimum or optimum-intel ? Could you file an issue on optimum and tag me there to see what optimum maintainers think about it?",
"> Hi @jiqing-feng Sorry for the delay replying on this matter\r\n> \r\n> What I meant by a potential backward incompatibility is that the new past key values in this PR will have a different shape than the current model on main branch. If you can confirm that this is not the case then the PR is BC\r\n> \r\n> I see this adds a ~5% overhead for seq_len = 512 and seems to grow linearly with the sequence length, perhaps with a longer sequence length & batch size the overhead will grow, therefore this PR introduces a regression which can be potentially important for long sequence length. However, given that this fixes some issues with external libs, we might consider merging this PR. I personally would prefer to avoid introducing additional overheads if possible, is there a way we could introduce a patch in optimum or optimum-intel ? Could you file an issue on optimum and tag me there to see what optimum maintainers think about it?\r\n\r\nHi @younesbelkada . Thanks for your advice.\r\n\r\nFirst, I confirm that there is no backward incompatible issue. Furthermore, we can see that it adds 5% overhead on both seq_len=32 and seq_len=512 so I think it is a fixed overhead and will not increase by the seq_len. Besides, most models have `reshape` or `transpose` operations for `past_key_values`. I think it is for consistency with other models even if it will have an impact on generation speed. For example, [gpt2](https://github.com/huggingface/transformers/blob/main/src/transformers/models/gpt2/modeling_gpt2.py#L240). Maybe we can reshape the `attn_weight` instead of key-value in `_attn` function like gpt2.\r\n\r\nFor optimum, gpt_bigcode has a special onnx config, see [OnnxConfig](https://github.com/huggingface/optimum/blob/main/optimum/exporters/onnx/model_configs.py#L314) and [DummyPastKeyValuesGenerator](https://github.com/huggingface/optimum/blob/main/optimum/utils/input_generators.py#L840).\r\n\r\nFor optimum-intel, we also need a special process for gpt_bigcode in [BaseModelForCausalLM](https://github.com/huggingface/optimum-intel/blob/main/optimum/intel/generation/modeling.py#L281). \r\n\r\nThere might be other external libs that rely on the shape of model inputs.\r\n\r\nWe can see external libs have special processes for gpt_bigcode, but we can avoid these kinds of changes if we can fix the shape of `past_key_values` for all models. It will help with code maintenance. WDYT?",
"hi @jiqing-feng thanks for providing more details!\r\nHmm I think here it is all about tradeoffs, IMO 5% slowdown is quite a lot since this means users that will update transformers will get a sudden slow down.\r\nRegarding the fixes you shared, unless I am mistaken, this also means that those libraries will have to update again their patch for the next transformers version right? With that regard maybe the best is to keep things as they are, but if the bigcode team is happy to merge the PR I am not against it cc @ArthurZucker @jlamypoirier ",
"> hi @jiqing-feng thanks for providing more details! Hmm I think here it is all about tradeoffs, IMO 5% slowdown is quite a lot since this means users that will update transformers will get a sudden slow down. Regarding the fixes you shared, unless I am mistaken, this also means that those libraries will have to update again their patch for the next transformers version right? With that regard maybe the best is to keep things as they are, but if the bigcode team is happy to merge the PR I am not against it cc @ArthurZucker @jlamypoirier\r\n\r\nThanks for your advice. Yes, we have to update again if we change the shape of pkv.\r\n\r\nThe only benefit of keeping pkv shape the same is that it can improve the code maintenance of transformers and other libs like optimum and optimum-intel. It is true that it's all about trade-offs, we can keep things as they are if you think 5% performance is more important. Would like to hear your opinion, @ArthurZucker @jlamypoirier . Thx.",
"Hey @jiqing-feng I think our biggest concern when breaking backward compatibility is the incentive behind the change. In this case, consistency is not a big enough incentive, specifically given that we are also adding an overhead and other libraries out here probably had to update their codebase for this model. We'll be more careful with the cache when reviewing models @Rocketknight1 . Unless the community is super enthusiast, let's not merge this but feel free to keep it open! ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,695 | 1,700 | 1,700 | CONTRIBUTOR | null | Hi @ArthurZucker @younesbelkada
I found that the shape of `past_key_values` in `gpt_bigcode` is different from other models, which causes inconvenience or unexpected errors in `Optimum` and `Optimum-intel`. Therefore, I added some reshaping operations to make the shape consistent with other models. I have tested it on my device and didn't find any performance degradation. Would you please help me review it? Thanks!
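For illustration, here is a minimal sketch of the layout difference being discussed, assuming the multi-query case and made-up sizes (the PR diff is the source of truth for the actual change):

```python
import torch

# Made-up sizes for illustration; with multi-query attention, gpt_bigcode
# caches key and value fused in one tensor of shape (batch, seq, 2 * head_dim).
batch_size, seq_len, head_dim = 2, 16, 64
fused_cache = torch.randn(batch_size, seq_len, 2 * head_dim)

# Standard layout used by most other models: a (key, value) pair, each of
# shape (batch_size, num_heads, seq_len, head_dim); num_heads is 1 for MQA.
key, value = fused_cache.split(head_dim, dim=-1)
key = key.unsqueeze(1)      # (batch_size, 1, seq_len, head_dim)
value = value.unsqueeze(1)  # (batch_size, 1, seq_len, head_dim)
print(key.shape, value.shape)
```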
BTW, could we add a restriction that the shape of `past_key_values` must be (batch_size, num_heads, sequence_length, head_dim)? It could avoid many issues if we had a fixed input and output shape for `past_key_values`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26402/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26402/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26402",
"html_url": "https://github.com/huggingface/transformers/pull/26402",
"diff_url": "https://github.com/huggingface/transformers/pull/26402.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26402.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26401 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26401/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26401/comments | https://api.github.com/repos/huggingface/transformers/issues/26401/events | https://github.com/huggingface/transformers/issues/26401 | 1,912,795,336 | I_kwDOCUB6oc5yAvDI | 26,401 | 🚨 [Bug] Text-to-Audio pipeline is broken for Speech T5 TTS | {
"login": "Vaibhavs10",
"id": 18682411,
"node_id": "MDQ6VXNlcjE4NjgyNDEx",
"avatar_url": "https://avatars.githubusercontent.com/u/18682411?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Vaibhavs10",
"html_url": "https://github.com/Vaibhavs10",
"followers_url": "https://api.github.com/users/Vaibhavs10/followers",
"following_url": "https://api.github.com/users/Vaibhavs10/following{/other_user}",
"gists_url": "https://api.github.com/users/Vaibhavs10/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Vaibhavs10/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Vaibhavs10/subscriptions",
"organizations_url": "https://api.github.com/users/Vaibhavs10/orgs",
"repos_url": "https://api.github.com/users/Vaibhavs10/repos",
"events_url": "https://api.github.com/users/Vaibhavs10/events{/privacy}",
"received_events_url": "https://api.github.com/users/Vaibhavs10/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc: @Narsil for vis.",
"@ylacombe Maybe ?",
"Hey @Narsil and @Vaibhavs10,\r\nAt the moment, you have to pass the speaker embeddings as a forward argument:\r\n\r\n```python\r\nspeaker_embeddings = torch.tensor(embeddings_dataset[7306][\"xvector\"]).unsqueeze(0)\r\nforward_params = {\r\n \"speaker_embeddings\": speaker_embeddings,\r\n}\r\nspeech = speecht5_tts(\"Hello, my dog is cute!\", forward_params = forward_params)\r\n```\r\n\r\nFor reference, see [this snippet](https://huggingface.co/suno/bark-small/discussions/4#64f5c7c11c2c6f58c91c8ad1) for Bark.\r\n\r\nAdding a default speechT5 speaker embedding in the pipeline would imply model-specific code inside the pipeline code, which is [something](https://github.com/huggingface/transformers/pull/24952/#discussion_r1280336582) we'd like to avoid rn\r\n\r\nHappy to hear about your thoughts on this, though.\r\n\r\n",
"Hey @ylacombe - I think we should explicitly raise an exception here asking the user to provide a speaker embedding. Currently, the pipeline would work without any exception or warning - outputting gibberish audio. This results in a significantly degraded Developer Experience.",
"I agree @Vaibhavs10! But instead of adding this to the pipeline, let's add this to [_generate_speech](https://github.com/huggingface/transformers/blob/6da93f5580e109fad5f7b523cf2b6e8a5bafb623/src/transformers/models/speecht5/modeling_speecht5.py#L2532) from `modeling_speecht5.py` since it is an issue specifically related to the model.\r\n\r\nI'll make a PR this afternoon if that's okay with you",
"Agreed! and thanks for the PR 🤗 "
] | 1,695 | 1,697 | 1,697 | MEMBER | null | ### System Info
- `transformers` version: 4.33.2
- Platform: Linux-5.15.120+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.17.2
- Safetensors version: 0.3.3
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu118 (False)
- Tensorflow version (GPU?): 2.13.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.7.2 (cpu)
- Jax version: 0.4.14
- JaxLib version: 0.4.14
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@sanchit-gandhi @ylacombe
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Speech T5 TTS expects speaker embeddings to be passed in order to produce intelligible speech. At the moment the `Text-to-Audio` pipeline just calls the processor and `generate` without any speaker embeddings. This results in garbage audio as output.
Full-repro here: https://github.com/Vaibhavs10/scratchpad/blob/main/transformers_speecht5_tts_repro.ipynb
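A minimal sketch of the current workaround, passing the speaker embeddings explicitly as a forward argument; the checkpoint and x-vector dataset names follow the usual SpeechT5 examples and are assumptions here:

```python
import torch
from datasets import load_dataset
from transformers import pipeline

# Assumed checkpoint and x-vector dataset; adjust to your setup.
synthesiser = pipeline("text-to-audio", model="microsoft/speecht5_tts")

embeddings_dataset = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = torch.tensor(embeddings_dataset[7306]["xvector"]).unsqueeze(0)

# Passing the embeddings explicitly as a forward argument avoids the garbage audio.
speech = synthesiser(
    "Hello, my dog is cute!",
    forward_params={"speaker_embeddings": speaker_embeddings},
)
# `speech` is a dict with "audio" and "sampling_rate" keys.
```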
### Expected behavior
The Text-to-Audio pipeline for SpeechT5 TTS checkpoints outputs intelligible audio. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26401/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26401/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26400 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26400/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26400/comments | https://api.github.com/repos/huggingface/transformers/issues/26400/events | https://github.com/huggingface/transformers/pull/26400 | 1,912,714,898 | PR_kwDOCUB6oc5bMFIo | 26,400 | remove the assumption: source and target vocab should be shared for Marian model | {
"login": "nana-na-nana-na",
"id": 33016235,
"node_id": "MDQ6VXNlcjMzMDE2MjM1",
"avatar_url": "https://avatars.githubusercontent.com/u/33016235?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nana-na-nana-na",
"html_url": "https://github.com/nana-na-nana-na",
"followers_url": "https://api.github.com/users/nana-na-nana-na/followers",
"following_url": "https://api.github.com/users/nana-na-nana-na/following{/other_user}",
"gists_url": "https://api.github.com/users/nana-na-nana-na/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nana-na-nana-na/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nana-na-nana-na/subscriptions",
"organizations_url": "https://api.github.com/users/nana-na-nana-na/orgs",
"repos_url": "https://api.github.com/users/nana-na-nana-na/repos",
"events_url": "https://api.github.com/users/nana-na-nana-na/events{/privacy}",
"received_events_url": "https://api.github.com/users/nana-na-nana-na/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @ArthurZucker I believe you wanted to dive into this in the days/weeks to come",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,695 | 1,703 | 1,703 | NONE | null | # Motivation
I faced below errors when run https://github.com/huggingface/transformers/blob/b880508440f43f80e35a78ccd2a32f3bde91cb23/src/transformers/models/marian/convert_marian_to_pytorch.py
RuntimeError: Error(s) in loading state_dict for MarianMTModel:
**size mismatch** for final_logits_bias: copying a param with shape **torch.Size([1, 52])** from checkpoint, the shape in current model is **torch.Size([1, 36]).**
size mismatch for model.decoder.embed_tokens.weight: copying a param with shape **torch.Size([52, 256])** from checkpoint, the shape in current model is torch.Size(**[36, 256]).**
You may consider adding ignore_mismatched_sizes=True in the model from_pretrained method.
# Root cause
For my C++ Marian model, the source vocab has 36 words and the target vocab has 52 words. The current transformers Marian model code treats the source vocab the same as the target vocab. That's why the error says it is copying a param with shape torch.Size([52, 256]) from the checkpoint while the shape in the current model is torch.Size([36, 256]).
I saw the comment in MarianMTModel that there is an assumption that the source and target vocab should be shared. But that assumption does not hold for all Marian models; at least the model I trained does not share them. (I cannot retrain the model from scratch for practical reasons.)
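A minimal sketch of why the shared-vocab assumption fails here, using only the sizes from the error above (36 source tokens, 52 target tokens, hidden size 256); it is illustrative and not taken from the conversion script:

```python
import torch
import torch.nn as nn

# Sizes taken from the error above.
src_vocab, tgt_vocab, d_model = 36, 52, 256

# Under the shared-vocab assumption the decoder embedding is built with the
# source vocab size...
decoder_embed = nn.Embedding(src_vocab, d_model)

# ...but the exported checkpoint stores a decoder embedding sized for the
# target vocab, so loading it reproduces the size-mismatch error above.
checkpoint = {"weight": torch.randn(tgt_vocab, d_model)}
decoder_embed.load_state_dict(checkpoint)  # raises RuntimeError: size mismatch
```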
# What does this PR do?
remove the assumption: source and target vocab should be shared for Marian model
## Was this discussed/approved via a Github issue?
https://github.com/huggingface/transformers/issues/26338
https://github.com/huggingface/transformers/issues/15109
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@ArthurZucker @younesbelkada @gante | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26400/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26400/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26400",
"html_url": "https://github.com/huggingface/transformers/pull/26400",
"diff_url": "https://github.com/huggingface/transformers/pull/26400.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26400.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26399 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26399/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26399/comments | https://api.github.com/repos/huggingface/transformers/issues/26399/events | https://github.com/huggingface/transformers/pull/26399 | 1,912,691,516 | PR_kwDOCUB6oc5bMAId | 26,399 | Replaced with logger.warning | {
"login": "hegdeadithyak",
"id": 116452077,
"node_id": "U_kgDOBvDq7Q",
"avatar_url": "https://avatars.githubusercontent.com/u/116452077?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hegdeadithyak",
"html_url": "https://github.com/hegdeadithyak",
"followers_url": "https://api.github.com/users/hegdeadithyak/followers",
"following_url": "https://api.github.com/users/hegdeadithyak/following{/other_user}",
"gists_url": "https://api.github.com/users/hegdeadithyak/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hegdeadithyak/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hegdeadithyak/subscriptions",
"organizations_url": "https://api.github.com/users/hegdeadithyak/orgs",
"repos_url": "https://api.github.com/users/hegdeadithyak/repos",
"events_url": "https://api.github.com/users/hegdeadithyak/events{/privacy}",
"received_events_url": "https://api.github.com/users/hegdeadithyak/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,695 | 1,695 | 1,695 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26399/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26399/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26399",
"html_url": "https://github.com/huggingface/transformers/pull/26399",
"diff_url": "https://github.com/huggingface/transformers/pull/26399.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26399.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26398 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26398/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26398/comments | https://api.github.com/repos/huggingface/transformers/issues/26398/events | https://github.com/huggingface/transformers/issues/26398 | 1,912,418,416 | I_kwDOCUB6oc5x_TBw | 26,398 | `resize_token_embeddings` sets `vocab_size` to 0 in Deepspeed mode. | {
"login": "zhouku92",
"id": 144086204,
"node_id": "U_kgDOCJaUvA",
"avatar_url": "https://avatars.githubusercontent.com/u/144086204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhouku92",
"html_url": "https://github.com/zhouku92",
"followers_url": "https://api.github.com/users/zhouku92/followers",
"following_url": "https://api.github.com/users/zhouku92/following{/other_user}",
"gists_url": "https://api.github.com/users/zhouku92/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhouku92/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhouku92/subscriptions",
"organizations_url": "https://api.github.com/users/zhouku92/orgs",
"repos_url": "https://api.github.com/users/zhouku92/repos",
"events_url": "https://api.github.com/users/zhouku92/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhouku92/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Same issue here with same package versions except deepspeed==0.10.3. Downgrading transformers to 4.30.2 temporarily resolved the issue. ",
"Indeed, it seems there is an issue here! Would you mind opening a PR with your proposed fix?",
"Ah actually it's already being fixed here: https://github.com/huggingface/transformers/pull/26387\r\n",
"Hello @weixi-feng, this merged PR https://github.com/huggingface/transformers/pull/26259 should have fixed the issue . Could you retry with the main branch?",
"Yes, the main branch works for me. Thanks for the quick follow-up.",
"Thank you, it works for me."
] | 1,695 | 1,695 | 1,695 | NONE | null | ### System Info
**System Info**
* transformers==4.33.2
* accelerate==0.23.0
* deepspeed==0.9.2
I was running a customized script to train a LLaMA model via Deepspeed and encountered the following exception:
```
File "/opt/conda/lib/python3.9/site-packages/deepspeed/runtime/engine.py", line 1736, in forward
shift_logits = shift_logits.view(-1, self.config.vocab_size)
RuntimeError: shape '[-1, 0]' is invalid for input of size 32705022
```
A quick check directs me to the following block ([link](https://github.com/huggingface/transformers/blob/v4.33.2/src/transformers/modeling_utils.py#L1439)):
```
def resize_token_embeddings(
self, new_num_tokens: Optional[int] = None, pad_to_multiple_of: Optional[int] = None
) -> nn.Embedding:
model_embeds = self._resize_token_embeddings(new_num_tokens, pad_to_multiple_of)
if new_num_tokens is None and pad_to_multiple_of is None:
return model_embeds
# Update base model and current model config
self.config.vocab_size = model_embeds.weight.shape[0]
self.vocab_size = model_embeds.weight.shape[0]
# Tie weights again if needed
self.tie_weights()
return model_embeds
```
In Deepspeed, `model_embeds` is `Embedding(32001, 4096)` but `model_embeds.weight` is `torch.Size([0])` (per my understanding, the model hasn't been initialized yet). Thus, `self.config.vocab_size` is mistakenly set to `0`.
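A small runnable illustration of the proposed substitution (this sketch is illustrative, not the actual patch):

```python
import torch.nn as nn

emb = nn.Embedding(32001, 4096)

# Under DeepSpeed ZeRO-3 the local weight shard can have shape [0], so
# emb.weight.shape[0] is unreliable, while emb.num_embeddings still reports
# the logical vocabulary size. The proposed substitution is therefore:
#     self.config.vocab_size = model_embeds.num_embeddings
# instead of model_embeds.weight.shape[0].
print(emb.weight.shape[0])   # 32001 here; 0 for a ZeRO-3 partitioned parameter
print(emb.num_embeddings)    # 32001 in both cases
```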
**Who can help?**
@pacman100
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```py
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("llama-path")
model.resize_token_embeddings(new_num_tokens=32001, pad_to_multiple_of=None)
```
Run with deepspeed
### Expected behavior
`self.config.vocab_size` and `self.vocab_size` should be set to the correct number. Personally, I think `model_embeds.num_embeddings` is a better choice than `model_embeds.weight.shape[0]`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26398/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26398/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26397 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26397/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26397/comments | https://api.github.com/repos/huggingface/transformers/issues/26397/events | https://github.com/huggingface/transformers/issues/26397 | 1,912,299,853 | I_kwDOCUB6oc5x-2FN | 26,397 | Allow phi 1.5 to accept attention mask as a parameter | {
"login": "bnicholl",
"id": 26211830,
"node_id": "MDQ6VXNlcjI2MjExODMw",
"avatar_url": "https://avatars.githubusercontent.com/u/26211830?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bnicholl",
"html_url": "https://github.com/bnicholl",
"followers_url": "https://api.github.com/users/bnicholl/followers",
"following_url": "https://api.github.com/users/bnicholl/following{/other_user}",
"gists_url": "https://api.github.com/users/bnicholl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bnicholl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bnicholl/subscriptions",
"organizations_url": "https://api.github.com/users/bnicholl/orgs",
"repos_url": "https://api.github.com/users/bnicholl/repos",
"events_url": "https://api.github.com/users/bnicholl/events{/privacy}",
"received_events_url": "https://api.github.com/users/bnicholl/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @bnicholl, phi 1.5 is being added to the library here: https://github.com/huggingface/transformers/pull/26170\r\n\r\nThis will support the attention mask as a parameter.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,695 | 1,698 | 1,698 | NONE | null | ### Feature request
As of right not, phi-1.5 does not accept attention masking for training or running inferences in batches. Could we get phi-1.5 to accept attention mask as a parameter
### Motivation
I would like attention masking for fine tuning
### Your contribution
If this isn't already planned, I could begin incorporating the weights into a torch module that accepts an attention mask myself. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26397/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26397/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26396 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26396/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26396/comments | https://api.github.com/repos/huggingface/transformers/issues/26396/events | https://github.com/huggingface/transformers/pull/26396 | 1,912,289,350 | PR_kwDOCUB6oc5bKqpa | 26,396 | Fix padding for IDEFICS | {
"login": "shauray8",
"id": 39147312,
"node_id": "MDQ6VXNlcjM5MTQ3MzEy",
"avatar_url": "https://avatars.githubusercontent.com/u/39147312?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shauray8",
"html_url": "https://github.com/shauray8",
"followers_url": "https://api.github.com/users/shauray8/followers",
"following_url": "https://api.github.com/users/shauray8/following{/other_user}",
"gists_url": "https://api.github.com/users/shauray8/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shauray8/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shauray8/subscriptions",
"organizations_url": "https://api.github.com/users/shauray8/orgs",
"repos_url": "https://api.github.com/users/shauray8/repos",
"events_url": "https://api.github.com/users/shauray8/events{/privacy}",
"received_events_url": "https://api.github.com/users/shauray8/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"sure!",
"@ArthurZucker CI looks green, PR's ready for another review.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26396). All of your documentation changes will be reflected on that endpoint.",
"thank you @shauray8 !"
] | 1,695 | 1,696 | 1,695 | CONTRIBUTOR | null | # What does this PR do?
Fixes padding issues -
```python
from transformers import AutoProcessor
processor = AutoProcessor.from_pretrained("HuggingFaceM4/idefics-9b", padding_side="right")
sents = [["hello world"], [" this is a longer sentence to testing padding"]]
# This is the correct behaviour:
a = processor(sents, padding="max_length", truncation=True, max_length=20)
print(processor.tokenizer.decode(a['input_ids'][0]))
# => <s> hello world<unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk>
# This is the incorrect behaviour:
b = processor(sents, padding="longest", truncation=True, max_length=30)
print(processor.tokenizer.decode(b['input_ids'][0]))
# => <unk><unk><unk><unk><unk><unk><s> hello world
```
With this fix the results would be
```
# for max_length
<s> hello world<unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk>
# for longest
<s> hello world<unk><unk><unk><unk><unk><unk>
```
Fixes #26354
## Before submitting
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
## Who can review?
@VictorSanh @ArthurZucker @LysandreJik
*let me know if there's anything that needs follow-ups* | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26396/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26396/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26396",
"html_url": "https://github.com/huggingface/transformers/pull/26396",
"diff_url": "https://github.com/huggingface/transformers/pull/26396.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26396.patch",
"merged_at": 1695804967000
} |
https://api.github.com/repos/huggingface/transformers/issues/26395 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26395/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26395/comments | https://api.github.com/repos/huggingface/transformers/issues/26395/events | https://github.com/huggingface/transformers/pull/26395 | 1,912,279,812 | PR_kwDOCUB6oc5bKoj4 | 26,395 | test doc-builder | {
"login": "mishig25",
"id": 11827707,
"node_id": "MDQ6VXNlcjExODI3NzA3",
"avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mishig25",
"html_url": "https://github.com/mishig25",
"followers_url": "https://api.github.com/users/mishig25/followers",
"following_url": "https://api.github.com/users/mishig25/following{/other_user}",
"gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mishig25/subscriptions",
"organizations_url": "https://api.github.com/users/mishig25/orgs",
"repos_url": "https://api.github.com/users/mishig25/repos",
"events_url": "https://api.github.com/users/mishig25/events{/privacy}",
"received_events_url": "https://api.github.com/users/mishig25/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,695 | 1,695 | 1,695 | CONTRIBUTOR | null | testing https://github.com/huggingface/doc-builder/pull/408
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26395/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26395/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26395",
"html_url": "https://github.com/huggingface/transformers/pull/26395",
"diff_url": "https://github.com/huggingface/transformers/pull/26395.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26395.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26394 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26394/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26394/comments | https://api.github.com/repos/huggingface/transformers/issues/26394/events | https://github.com/huggingface/transformers/pull/26394 | 1,912,178,700 | PR_kwDOCUB6oc5bKSTn | 26,394 | Deleted duplicate sentence | {
"login": "titi-devv",
"id": 66329321,
"node_id": "MDQ6VXNlcjY2MzI5MzIx",
"avatar_url": "https://avatars.githubusercontent.com/u/66329321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/titi-devv",
"html_url": "https://github.com/titi-devv",
"followers_url": "https://api.github.com/users/titi-devv/followers",
"following_url": "https://api.github.com/users/titi-devv/following{/other_user}",
"gists_url": "https://api.github.com/users/titi-devv/gists{/gist_id}",
"starred_url": "https://api.github.com/users/titi-devv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/titi-devv/subscriptions",
"organizations_url": "https://api.github.com/users/titi-devv/orgs",
"repos_url": "https://api.github.com/users/titi-devv/repos",
"events_url": "https://api.github.com/users/titi-devv/events{/privacy}",
"received_events_url": "https://api.github.com/users/titi-devv/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,695 | 1,695 | 1,695 | CONTRIBUTOR | null | # What does this PR do?
Deleted duplicate sentence
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26394/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26394/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26394",
"html_url": "https://github.com/huggingface/transformers/pull/26394",
"diff_url": "https://github.com/huggingface/transformers/pull/26394.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26394.patch",
"merged_at": 1695715888000
} |
https://api.github.com/repos/huggingface/transformers/issues/26393 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26393/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26393/comments | https://api.github.com/repos/huggingface/transformers/issues/26393/events | https://github.com/huggingface/transformers/pull/26393 | 1,911,956,022 | PR_kwDOCUB6oc5bJhiN | 26,393 | Fix DeepSpeed issue with Idefics | {
"login": "HugoLaurencon",
"id": 44556846,
"node_id": "MDQ6VXNlcjQ0NTU2ODQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/44556846?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HugoLaurencon",
"html_url": "https://github.com/HugoLaurencon",
"followers_url": "https://api.github.com/users/HugoLaurencon/followers",
"following_url": "https://api.github.com/users/HugoLaurencon/following{/other_user}",
"gists_url": "https://api.github.com/users/HugoLaurencon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HugoLaurencon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HugoLaurencon/subscriptions",
"organizations_url": "https://api.github.com/users/HugoLaurencon/orgs",
"repos_url": "https://api.github.com/users/HugoLaurencon/repos",
"events_url": "https://api.github.com/users/HugoLaurencon/events{/privacy}",
"received_events_url": "https://api.github.com/users/HugoLaurencon/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,695 | 1,695 | 1,695 | MEMBER | null | # What does this PR do?
This PR fixes a bug when trying to do inference with Idefics with DeepSpeed Zero-3.
The model was loaded correctly, but the shapes were found to be incorrect using DS at a step in `IdeficsDecoupledLinear` during inference.
I checked on different prompts that the output is the same without using DeepSpeed.
## Who can review?
@pacman100
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26393/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26393/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26393",
"html_url": "https://github.com/huggingface/transformers/pull/26393",
"diff_url": "https://github.com/huggingface/transformers/pull/26393.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26393.patch",
"merged_at": 1695716340000
} |
https://api.github.com/repos/huggingface/transformers/issues/26392 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26392/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26392/comments | https://api.github.com/repos/huggingface/transformers/issues/26392/events | https://github.com/huggingface/transformers/pull/26392 | 1,911,925,629 | PR_kwDOCUB6oc5bJa47 | 26,392 | replaced warnings.warn with logger.warning | {
"login": "hegdeadithyak",
"id": 116452077,
"node_id": "U_kgDOBvDq7Q",
"avatar_url": "https://avatars.githubusercontent.com/u/116452077?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hegdeadithyak",
"html_url": "https://github.com/hegdeadithyak",
"followers_url": "https://api.github.com/users/hegdeadithyak/followers",
"following_url": "https://api.github.com/users/hegdeadithyak/following{/other_user}",
"gists_url": "https://api.github.com/users/hegdeadithyak/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hegdeadithyak/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hegdeadithyak/subscriptions",
"organizations_url": "https://api.github.com/users/hegdeadithyak/orgs",
"repos_url": "https://api.github.com/users/hegdeadithyak/repos",
"events_url": "https://api.github.com/users/hegdeadithyak/events{/privacy}",
"received_events_url": "https://api.github.com/users/hegdeadithyak/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,695 | 1,695 | 1,695 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26392/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26392/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26392",
"html_url": "https://github.com/huggingface/transformers/pull/26392",
"diff_url": "https://github.com/huggingface/transformers/pull/26392.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26392.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26391 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26391/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26391/comments | https://api.github.com/repos/huggingface/transformers/issues/26391/events | https://github.com/huggingface/transformers/pull/26391 | 1,911,920,109 | PR_kwDOCUB6oc5bJZrd | 26,391 | [WIP]Add ProPainter | {
"login": "shauray8",
"id": 39147312,
"node_id": "MDQ6VXNlcjM5MTQ3MzEy",
"avatar_url": "https://avatars.githubusercontent.com/u/39147312?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shauray8",
"html_url": "https://github.com/shauray8",
"followers_url": "https://api.github.com/users/shauray8/followers",
"following_url": "https://api.github.com/users/shauray8/following{/other_user}",
"gists_url": "https://api.github.com/users/shauray8/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shauray8/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shauray8/subscriptions",
"organizations_url": "https://api.github.com/users/shauray8/orgs",
"repos_url": "https://api.github.com/users/shauray8/repos",
"events_url": "https://api.github.com/users/shauray8/events{/privacy}",
"received_events_url": "https://api.github.com/users/shauray8/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @shauray8 :)\r\n\r\nPlease, let me know when it is ready for review ",
"I'll ping you before the EOD.\r\n\r\n*Update - got caught up with something, Writing tests!*",
"@shauray8 Any update on adding this model?",
"@amyeroberts it's almost done, I'm stuck with tests rn and I might be able to push it for 1st review tomorrow.",
"@shauray8 Great! Let us know if you need any help with the tests.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,695 | 1,702 | 1,702 | CONTRIBUTOR | null | # What does this PR do?
Adds ProPainter to transformers.
Author - https://github.com/sczhou/ProPainter
hub - https://huggingface.co/shauray/ProPainter-hf/
Fixes https://github.com/huggingface/transformers/issues/26360
*issue contains all the other relevant links*
## Before submitting
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@rafaelpadilla @LysandreJik
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26391/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26391/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26391",
"html_url": "https://github.com/huggingface/transformers/pull/26391",
"diff_url": "https://github.com/huggingface/transformers/pull/26391.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26391.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26390 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26390/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26390/comments | https://api.github.com/repos/huggingface/transformers/issues/26390/events | https://github.com/huggingface/transformers/pull/26390 | 1,911,888,302 | PR_kwDOCUB6oc5bJSoQ | 26,390 | Replaced warning.warn with logging.warning | {
"login": "hegdeadithyak",
"id": 116452077,
"node_id": "U_kgDOBvDq7Q",
"avatar_url": "https://avatars.githubusercontent.com/u/116452077?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hegdeadithyak",
"html_url": "https://github.com/hegdeadithyak",
"followers_url": "https://api.github.com/users/hegdeadithyak/followers",
"following_url": "https://api.github.com/users/hegdeadithyak/following{/other_user}",
"gists_url": "https://api.github.com/users/hegdeadithyak/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hegdeadithyak/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hegdeadithyak/subscriptions",
"organizations_url": "https://api.github.com/users/hegdeadithyak/orgs",
"repos_url": "https://api.github.com/users/hegdeadithyak/repos",
"events_url": "https://api.github.com/users/hegdeadithyak/events{/privacy}",
"received_events_url": "https://api.github.com/users/hegdeadithyak/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,695 | 1,695 | 1,695 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26390/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26390/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26390",
"html_url": "https://github.com/huggingface/transformers/pull/26390",
"diff_url": "https://github.com/huggingface/transformers/pull/26390.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26390.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26389 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26389/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26389/comments | https://api.github.com/repos/huggingface/transformers/issues/26389/events | https://github.com/huggingface/transformers/pull/26389 | 1,911,839,398 | PR_kwDOCUB6oc5bJHyj | 26,389 | Replaced every warnings.warn with logging.warning | {
"login": "hegdeadithyak",
"id": 116452077,
"node_id": "U_kgDOBvDq7Q",
"avatar_url": "https://avatars.githubusercontent.com/u/116452077?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hegdeadithyak",
"html_url": "https://github.com/hegdeadithyak",
"followers_url": "https://api.github.com/users/hegdeadithyak/followers",
"following_url": "https://api.github.com/users/hegdeadithyak/following{/other_user}",
"gists_url": "https://api.github.com/users/hegdeadithyak/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hegdeadithyak/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hegdeadithyak/subscriptions",
"organizations_url": "https://api.github.com/users/hegdeadithyak/orgs",
"repos_url": "https://api.github.com/users/hegdeadithyak/repos",
"events_url": "https://api.github.com/users/hegdeadithyak/events{/privacy}",
"received_events_url": "https://api.github.com/users/hegdeadithyak/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,695 | 1,695 | 1,695 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26389/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26389/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26389",
"html_url": "https://github.com/huggingface/transformers/pull/26389",
"diff_url": "https://github.com/huggingface/transformers/pull/26389.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26389.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26388 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26388/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26388/comments | https://api.github.com/repos/huggingface/transformers/issues/26388/events | https://github.com/huggingface/transformers/issues/26388 | 1,911,797,964 | I_kwDOCUB6oc5x87jM | 26,388 | Implementation of Git-Rebasin to Huggingface models | {
"login": "NamburiSrinath",
"id": 40389487,
"node_id": "MDQ6VXNlcjQwMzg5NDg3",
"avatar_url": "https://avatars.githubusercontent.com/u/40389487?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NamburiSrinath",
"html_url": "https://github.com/NamburiSrinath",
"followers_url": "https://api.github.com/users/NamburiSrinath/followers",
"following_url": "https://api.github.com/users/NamburiSrinath/following{/other_user}",
"gists_url": "https://api.github.com/users/NamburiSrinath/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NamburiSrinath/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NamburiSrinath/subscriptions",
"organizations_url": "https://api.github.com/users/NamburiSrinath/orgs",
"repos_url": "https://api.github.com/users/NamburiSrinath/repos",
"events_url": "https://api.github.com/users/NamburiSrinath/events{/privacy}",
"received_events_url": "https://api.github.com/users/NamburiSrinath/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | open | false | null | [] | [] | 1,695 | 1,698 | null | NONE | null | ### Feature request
Hi,
I am not sure whether this is even a feasible/appropriate request for Huggingface. Recently, an ICLR paper titled "Git Re-Basin" (https://arxiv.org/abs/2209.04836) demonstrated a method for effectively merging models.
However, the authors implemented it only for plain feed-forward and ResNet-style architectures. It would be really great if this feature were supported for Huggingface models, since there are many situations where models need to be merged and naively averaging the weights is not the best approach. A minimal sketch of the core idea is shown below.
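To illustrate the core idea only (not the authors' implementation), here is a minimal, hypothetical sketch of permutation-based weight matching for a single linear layer: it finds the permutation of model B's output neurons that best aligns with model A before averaging. The helper name and the single-layer scope are assumptions for illustration; the real method solves the matching jointly across all layers.

```python
import torch
from scipy.optimize import linear_sum_assignment


def match_and_merge_linear(weight_a: torch.Tensor, weight_b: torch.Tensor) -> torch.Tensor:
    """Toy, single-layer version of the weight-matching step from the Git Re-Basin paper.

    Both weights have shape (out_features, in_features): permute B's output neurons to
    align with A, then average the two weight matrices.
    """
    # Similarity between every output neuron of A and every output neuron of B.
    similarity = weight_a @ weight_b.T  # (out_features, out_features)
    # Solve the linear assignment problem to find the best one-to-one neuron matching.
    row_ind, col_ind = linear_sum_assignment(similarity.detach().numpy(), maximize=True)
    permutation = torch.as_tensor(col_ind)
    # Re-order B's neurons and average with A.
    weight_b_permuted = weight_b[permutation]
    return 0.5 * (weight_a + weight_b_permuted)


# Usage example with random weights standing in for two independently trained layers.
wa, wb = torch.randn(8, 4), torch.randn(8, 4)
merged = match_and_merge_linear(wa, wb)
print(merged.shape)  # torch.Size([8, 4])
```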
### Motivation
This feature is especially helpful in federated learning, where two parties train two models separately and would like to combine them.
It is also very useful in academic settings, where there are many situations in which effective model merging is critical!
### Your contribution
I've raised issues in the related GitHub repos:
1. https://github.com/samuela/git-re-basin/issues/13#issue-1910375545
2. https://github.com/themrzmaster/git-re-basin-pytorch/issues/8#issue-1910392068
I am not sure whether this is feasible based on some of the previous discussions: the paper relies on the "Permutation Invariance" of model weights, and an architecture like Llama admits many possible permutations, which makes the method harder to apply.
I would be happy to contribute if there's some help! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26388/reactions",
"total_count": 3,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/26388/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26387 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26387/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26387/comments | https://api.github.com/repos/huggingface/transformers/issues/26387/events | https://github.com/huggingface/transformers/pull/26387 | 1,911,790,924 | PR_kwDOCUB6oc5bI87b | 26,387 | Fix setting vocab_size when resizing embeddings and using deepspeed zero 3 | {
"login": "ppetrushkov",
"id": 39625270,
"node_id": "MDQ6VXNlcjM5NjI1Mjcw",
"avatar_url": "https://avatars.githubusercontent.com/u/39625270?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ppetrushkov",
"html_url": "https://github.com/ppetrushkov",
"followers_url": "https://api.github.com/users/ppetrushkov/followers",
"following_url": "https://api.github.com/users/ppetrushkov/following{/other_user}",
"gists_url": "https://api.github.com/users/ppetrushkov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ppetrushkov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ppetrushkov/subscriptions",
"organizations_url": "https://api.github.com/users/ppetrushkov/orgs",
"repos_url": "https://api.github.com/users/ppetrushkov/repos",
"events_url": "https://api.github.com/users/ppetrushkov/events{/privacy}",
"received_events_url": "https://api.github.com/users/ppetrushkov/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26387). All of your documentation changes will be reflected on that endpoint.",
"Hello @ppetrushkov , this merged PR https://github.com/huggingface/transformers/pull/26259 should have fixed this issue. Could you retry from the main branch?",
"@pacman100 can confirm it works from the main branch. I'll close this PR then."
] | 1,695 | 1,695 | 1,695 | NONE | null | # What does this PR do?
Currently there seems to be an issue when training with deepspeed zero stage 3 and resizing the token embeddings. Here is an example I used:
```python
from transformers import (
    AutoConfig,
    AutoTokenizer,
    AutoModelForCausalLM,
    HfArgumentParser,
    TrainingArguments,
    DataCollatorForSeq2Seq,
    Trainer,
)


def main():
    parser = HfArgumentParser((TrainingArguments))
    training_args, = parser.parse_args_into_dataclasses()

    model_path = 'facebook/opt-125m'
    config = AutoConfig.from_pretrained(model_path)
    model = AutoModelForCausalLM.from_pretrained(
        model_path,
        config=config,
    )
    tokenizer = AutoTokenizer.from_pretrained(
        model_path,
        model_max_length=1024,
    )

    add_new_tokens = True
    if add_new_tokens:
        tokenizer.add_special_tokens({"pad_token": "<pad>"})
        model.resize_token_embeddings(len(tokenizer))
    else:
        tokenizer.pad_token = tokenizer.eos_token

    from datasets import Dataset

    def gen():
        for _ in range(100):
            yield {"input_ids": [1, 2, 3], "labels": [1, 1, 1]}

    datasets = Dataset.from_generator(gen)
    datasets.set_format('pt')

    trainer = Trainer(
        model=model,
        args=training_args,
        tokenizer=tokenizer,
        data_collator=DataCollatorForSeq2Seq(tokenizer=tokenizer, model=model, max_length=tokenizer.model_max_length),
        train_dataset=datasets,
    )
    trainer.train()


if __name__ == "__main__":
    main()
```
Executed on at least 2 GPUs with: `deepspeed minimal.py --output_dir output_dir --deepspeed deepspeed_config.json`
where deepspeed_config.json is:
```json
{
    "bf16": {
        "enabled": "auto"
    },
    "optimizer": {
        "type": "AdamW",
        "params": {
            "lr": "auto",
            "betas": "auto",
            "eps": "auto",
            "weight_decay": "auto"
        }
    },
    "scheduler": {
        "type": "WarmupDecayLR",
        "params": {
            "warmup_min_lr": "auto",
            "warmup_max_lr": "auto",
            "warmup_num_steps": "auto",
            "warmup_type": "linear",
            "total_num_steps": "auto"
        }
    },
    "zero_optimization": {
        "stage": 3,
        "overlap_comm": true,
        "contiguous_gradients": true,
        "sub_group_size": 1e9,
        "reduce_bucket_size": "auto",
        "stage3_prefetch_bucket_size": "auto",
        "stage3_param_persistence_threshold": "auto",
        "stage3_max_live_parameters": 1e9,
        "stage3_max_reuse_distance": 1e9,
        "stage3_gather_16bit_weights_on_model_save": true
    },
    "gradient_accumulation_steps": "auto",
    "gradient_clipping": "auto",
    "steps_per_print": 2000,
    "train_batch_size": "auto",
    "train_micro_batch_size_per_gpu": "auto",
    "wall_clock_breakdown": false
}
```
Currently the `vocab_size` and `config.vocab_size` are set to 0, when using deepspeed zero in this line https://github.com/huggingface/transformers/blob/main/src/transformers/modeling_utils.py#L1537-L1538 . This fails later during training with `RuntimeError: shape '[-1, 0]' is invalid for input of size 804240` when trying to flatten the logits using `vocab_size` shape. Instead it should use approach similar to https://github.com/huggingface/transformers/blob/main/src/transformers/modeling_utils.py#L1555-L1559 to set the correct size.
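As a rough illustration of the proposed fix (a hedged sketch, not the actual patch), the new vocabulary size could be read while the ZeRO-3-partitioned embedding weight is gathered, mirroring the gather-based code the description points to. The helper name below is made up for illustration, and the `is_deepspeed_zero3_enabled` import path has moved between transformers versions:

```python
import deepspeed
from transformers.deepspeed import is_deepspeed_zero3_enabled  # moved to transformers.integrations.deepspeed in newer versions


def get_resized_vocab_size(embeddings) -> int:
    """Hypothetical helper: read the true vocab size of a possibly ZeRO-3-partitioned embedding."""
    if is_deepspeed_zero3_enabled():
        # Under ZeRO-3 the weight is sharded across ranks; gather it (read-only) to see its real shape.
        with deepspeed.zero.GatheredParameters(embeddings.weight, modifier_rank=None):
            return embeddings.weight.shape[0]
    return embeddings.weight.shape[0]


# e.g. after model.resize_token_embeddings(len(tokenizer)):
# config.vocab_size = get_resized_vocab_size(model.get_input_embeddings())
```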
## Who can review?
@ArthurZucker @pacman100
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26387/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26387/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26387",
"html_url": "https://github.com/huggingface/transformers/pull/26387",
"diff_url": "https://github.com/huggingface/transformers/pull/26387.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26387.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26386 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26386/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26386/comments | https://api.github.com/repos/huggingface/transformers/issues/26386/events | https://github.com/huggingface/transformers/pull/26386 | 1,911,624,625 | PR_kwDOCUB6oc5bIYT0 | 26,386 | added support for gradient checkpointing in ESM models | {
"login": "sanjeevk-os",
"id": 73068589,
"node_id": "MDQ6VXNlcjczMDY4NTg5",
"avatar_url": "https://avatars.githubusercontent.com/u/73068589?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanjeevk-os",
"html_url": "https://github.com/sanjeevk-os",
"followers_url": "https://api.github.com/users/sanjeevk-os/followers",
"following_url": "https://api.github.com/users/sanjeevk-os/following{/other_user}",
"gists_url": "https://api.github.com/users/sanjeevk-os/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanjeevk-os/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanjeevk-os/subscriptions",
"organizations_url": "https://api.github.com/users/sanjeevk-os/orgs",
"repos_url": "https://api.github.com/users/sanjeevk-os/repos",
"events_url": "https://api.github.com/users/sanjeevk-os/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanjeevk-os/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26386). All of your documentation changes will be reflected on that endpoint."
] | 1,695 | 1,695 | 1,695 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #24602
Adds gradient checkpointing support for ESM models
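For context, once this is merged the feature should be usable through the standard Transformers API. A minimal sketch (the checkpoint name is just an example):

```python
from transformers import AutoModelForMaskedLM

model = AutoModelForMaskedLM.from_pretrained("facebook/esm2_t6_8M_UR50D")
# Trades compute for memory by re-computing activations during the backward pass.
model.gradient_checkpointing_enable()
```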
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
Issue link: https://github.com/huggingface/transformers/issues/24602
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@Rocketknight1 @amyeroberts : Please review and suggest any required changes, thanks.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26386/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26386/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26386",
"html_url": "https://github.com/huggingface/transformers/pull/26386",
"diff_url": "https://github.com/huggingface/transformers/pull/26386.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26386.patch",
"merged_at": 1695716153000
} |
https://api.github.com/repos/huggingface/transformers/issues/26385 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26385/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26385/comments | https://api.github.com/repos/huggingface/transformers/issues/26385/events | https://github.com/huggingface/transformers/pull/26385 | 1,911,591,101 | PR_kwDOCUB6oc5bIQ80 | 26,385 | Fix num. of minimal calls to the Hub with peft for pipeline | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"> This seems okay to me generally, would like another review as it's touching critical code.\r\n> \r\n> Not too sure how we could do it, but it would also be nice to have tests for this, as it's the kind of thing that will continue to break unless we actively test it.\r\n\r\nWe do have tests: 4 tests named `test_cached_xxx_has_minimum_calls_to_head`. This is why I found the failure to fix. ",
"Merge now."
] | 1,695 | 1,697 | 1,697 | COLLABORATOR | null | # What does this PR do?
This is a continuation of #25715, where @sgugger fixed `test_cached_model_has_minimum_calls_to_head` but not `test_cached_pipeline_has_minimum_calls_to_head` (as I didn't mention the latter to him).
This PR mostly copies the logic from #25715.
The current failing test and the error are
```bash
tests/pipelines/test_pipelines_common.py::CustomPipelineTest::test_cached_pipeline_has_minimum_calls_to_head
(line 905) AssertionError: 2 != 1
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26385/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26385/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26385",
"html_url": "https://github.com/huggingface/transformers/pull/26385",
"diff_url": "https://github.com/huggingface/transformers/pull/26385.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26385.patch",
"merged_at": 1697187794000
} |
https://api.github.com/repos/huggingface/transformers/issues/26384 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26384/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26384/comments | https://api.github.com/repos/huggingface/transformers/issues/26384/events | https://github.com/huggingface/transformers/issues/26384 | 1,911,481,332 | I_kwDOCUB6oc5x7uP0 | 26,384 | 4.33.2 breaks deepspeed_custom_scheduler fixed in 4.33.1 | {
"login": "wizyoung",
"id": 13296106,
"node_id": "MDQ6VXNlcjEzMjk2MTA2",
"avatar_url": "https://avatars.githubusercontent.com/u/13296106?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wizyoung",
"html_url": "https://github.com/wizyoung",
"followers_url": "https://api.github.com/users/wizyoung/followers",
"following_url": "https://api.github.com/users/wizyoung/following{/other_user}",
"gists_url": "https://api.github.com/users/wizyoung/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wizyoung/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wizyoung/subscriptions",
"organizations_url": "https://api.github.com/users/wizyoung/orgs",
"repos_url": "https://api.github.com/users/wizyoung/repos",
"events_url": "https://api.github.com/users/wizyoung/events{/privacy}",
"received_events_url": "https://api.github.com/users/wizyoung/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "pacman100",
"id": 13534540,
"node_id": "MDQ6VXNlcjEzNTM0NTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pacman100",
"html_url": "https://github.com/pacman100",
"followers_url": "https://api.github.com/users/pacman100/followers",
"following_url": "https://api.github.com/users/pacman100/following{/other_user}",
"gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pacman100/subscriptions",
"organizations_url": "https://api.github.com/users/pacman100/orgs",
"repos_url": "https://api.github.com/users/pacman100/repos",
"events_url": "https://api.github.com/users/pacman100/events{/privacy}",
"received_events_url": "https://api.github.com/users/pacman100/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "pacman100",
"id": 13534540,
"node_id": "MDQ6VXNlcjEzNTM0NTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pacman100",
"html_url": "https://github.com/pacman100",
"followers_url": "https://api.github.com/users/pacman100/followers",
"following_url": "https://api.github.com/users/pacman100/following{/other_user}",
"gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pacman100/subscriptions",
"organizations_url": "https://api.github.com/users/pacman100/orgs",
"repos_url": "https://api.github.com/users/pacman100/repos",
"events_url": "https://api.github.com/users/pacman100/events{/privacy}",
"received_events_url": "https://api.github.com/users/pacman100/received_events",
"type": "User",
"site_admin": false
}
] | [
"Weird... The github 4.33.1 tag fixed the bug as I mentioned above, but the pypi release 4.33.1 src code still lies the bug, which is inconsistent.",
"Hello @wizyoung, this should be fixed in https://github.com/huggingface/transformers/releases/tag/v4.33.3"
] | 1,695 | 1,698 | 1,698 | NONE | null | ### System Info
- `transformers` version: 4.33.2
- Platform: Linux-4.14.0_1-0-0-44-x86_64-with-glibc2.27
- Python version: 3.9.17
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.2
- Accelerate version: 0.23.0
- Accelerate config: not found
- PyTorch version (GPU?): 1.13.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@muellerzr @pacman100
4.33.1 fixed `deepspeed_custom_scheduler` in https://github.com/huggingface/transformers/commit/6bc517ccd4a3bcda4d0621d54a37c3e047df223a; however, the 4.33.2 patch release broke it again.
4.33.1:
https://github.com/huggingface/transformers/blob/fa6107c97edf7cf725305a34735a57875b67d85e/src/transformers/trainer.py#L2367-L2380
vs broken 4.33.2:
https://github.com/huggingface/transformers/blob/6da93f5580e109fad5f7b523cf2b6e8a5bafb623/src/transformers/trainer.py#L2364-L2370
4.33.1:
https://github.com/huggingface/transformers/blob/fa6107c97edf7cf725305a34735a57875b67d85e/src/transformers/trainer.py#L2434-L2445
vs broken 4.33.2:
https://github.com/huggingface/transformers/blob/6da93f5580e109fad5f7b523cf2b6e8a5bafb623/src/transformers/trainer.py#L2424-L2431
In version 4.33.2, if we use an HF lr_scheduler with deepspeed enabled, the lr_scheduler cannot be saved and loaded normally. This is a major issue that breaks the large-scale train-pause-resume loop. Considering that 4.33.2 fixes two critical issues, I suggest releasing a new patch, as this bug affects many people.
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
100%
### Expected behavior
A new patch release. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26384/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26384/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26383 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26383/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26383/comments | https://api.github.com/repos/huggingface/transformers/issues/26383/events | https://github.com/huggingface/transformers/issues/26383 | 1,911,414,435 | I_kwDOCUB6oc5x7d6j | 26,383 | RuntimeError: result type Float can't be cast to the desired output type Byte | {
"login": "LuciferianInk",
"id": 94832312,
"node_id": "U_kgDOBacGuA",
"avatar_url": "https://avatars.githubusercontent.com/u/94832312?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LuciferianInk",
"html_url": "https://github.com/LuciferianInk",
"followers_url": "https://api.github.com/users/LuciferianInk/followers",
"following_url": "https://api.github.com/users/LuciferianInk/following{/other_user}",
"gists_url": "https://api.github.com/users/LuciferianInk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LuciferianInk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LuciferianInk/subscriptions",
"organizations_url": "https://api.github.com/users/LuciferianInk/orgs",
"repos_url": "https://api.github.com/users/LuciferianInk/repos",
"events_url": "https://api.github.com/users/LuciferianInk/events{/privacy}",
"received_events_url": "https://api.github.com/users/LuciferianInk/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @younesbelkada as well",
"Hi @LuciferianInk \r\nThanks for the issue, I recently made https://github.com/huggingface/transformers/pull/26134 that should fix all issues related with RWKV and 4bit, please install transformers from source `pip install -U git+https://github.com/huggingface/transformers.git` and let me know if this fixes your issue",
"Thanks for the update! While this build does appear to fix my earlier problem, it breaks the custom CUDA kernel that's supposed to ship with RWKV. Because this results in 10x slower computations in RWKV, I'll have to revert back to 4.32.X for now.\r\n\r\nFor reference, I am installing from pip inside of the `nvcr.io/nvidia/cuda:12.2.0-devel-ubuntu22.04` container. The kernel works great, on the previous build. Feel free to close this issue if you'll track that one elsewhere.",
"As of 4.34.0, this does appear to be resolved. Thanks for the update!"
] | 1,695 | 1,697 | 1,697 | NONE | null | ### System Info
- `transformers` version: 4.33.2
- Platform: Linux-6.5.3-arch1-1-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.17.2
- Safetensors version: 0.3.3
- Accelerate version: 0.23.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@gante @SunMarc
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
This problem occurs when trying to use RWKV with the "bnb_4bit_use_double_quant" argument in a BitsAndBytesConfig. You can fully reproduce the error with the following code:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

model_name = "RWKV/rwkv-4-169m-pile"

quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    llm_int8_threshold=6.0,
    llm_int8_has_fp16_weight=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.bfloat16, quantization_config=quantization_config
)

inputs = tokenizer("A cat sat", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0]))
```
### Expected behavior
I would expect this to work, fail gracefully, or perhaps revert to a supported setting. For now, simply disabling "bnb_4bit_use_double_quant" resolves the issue with RWKV, and I've not seen it happen elsewhere. | {
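For reference, the workaround amounts to the following configuration change, with everything else left as in the reproduction script above (a sketch of the setting described, not a fix of the underlying issue):

```python
import torch
from transformers import BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    llm_int8_threshold=6.0,
    llm_int8_has_fp16_weight=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=False,  # workaround: nested (double) quantization disabled
    bnb_4bit_quant_type="nf4",
)
```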
"url": "https://api.github.com/repos/huggingface/transformers/issues/26383/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26383/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26382 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26382/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26382/comments | https://api.github.com/repos/huggingface/transformers/issues/26382/events | https://github.com/huggingface/transformers/issues/26382 | 1,911,318,223 | I_kwDOCUB6oc5x7GbP | 26,382 | `Helsinki-NLP/opus-mt-it-en` isn't on HuggingFace Hub | {
"login": "KickItLikeShika",
"id": 54319724,
"node_id": "MDQ6VXNlcjU0MzE5NzI0",
"avatar_url": "https://avatars.githubusercontent.com/u/54319724?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KickItLikeShika",
"html_url": "https://github.com/KickItLikeShika",
"followers_url": "https://api.github.com/users/KickItLikeShika/followers",
"following_url": "https://api.github.com/users/KickItLikeShika/following{/other_user}",
"gists_url": "https://api.github.com/users/KickItLikeShika/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KickItLikeShika/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KickItLikeShika/subscriptions",
"organizations_url": "https://api.github.com/users/KickItLikeShika/orgs",
"repos_url": "https://api.github.com/users/KickItLikeShika/repos",
"events_url": "https://api.github.com/users/KickItLikeShika/events{/privacy}",
"received_events_url": "https://api.github.com/users/KickItLikeShika/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | open | false | null | [] | [] | 1,695 | 1,695 | null | CONTRIBUTOR | null | ### Model description
I have found lots of OPUS translation models on the HuggingFace Hub but couldn't find a Portuguese-to-English model, even though the model already exists in the Helsinki repo: https://github.com/Helsinki-NLP/OPUS-MT-train/tree/master/models/pt-en#opus-2019-12-05zip
Is that something that can be added quickly?
### Open source status
- [ ] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
https://github.com/Helsinki-NLP/OPUS-MT-train/tree/master/models/pt-en#opus-2019-12-05zip | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26382/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 2
} | https://api.github.com/repos/huggingface/transformers/issues/26382/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26381 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26381/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26381/comments | https://api.github.com/repos/huggingface/transformers/issues/26381/events | https://github.com/huggingface/transformers/issues/26381 | 1,911,225,961 | I_kwDOCUB6oc5x6v5p | 26,381 | Remove usage of warnings.warn | {
"login": "osanseviero",
"id": 7246357,
"node_id": "MDQ6VXNlcjcyNDYzNTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/osanseviero",
"html_url": "https://github.com/osanseviero",
"followers_url": "https://api.github.com/users/osanseviero/followers",
"following_url": "https://api.github.com/users/osanseviero/following{/other_user}",
"gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}",
"starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions",
"organizations_url": "https://api.github.com/users/osanseviero/orgs",
"repos_url": "https://api.github.com/users/osanseviero/repos",
"events_url": "https://api.github.com/users/osanseviero/events{/privacy}",
"received_events_url": "https://api.github.com/users/osanseviero/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @osanseviero I want to work on this issue. Can you please this issue to me so that I can start working on it?\r\n",
"If I understand correctly, we should use `logger.warning` where `logger = logging.get_logger(__name__)` ? ",
"I can work on this!!\r\n",
"hey @osanseviero, I have started solving the issue. Please assign this issue to me.",
"Hello, Can I work on this issue? Can you please assign me this?",
"Hello, we have worked on a different approach with Omar that will not require us to change all `warnings` to `logging` statements.\r\n\r\nI'm opening a PR here implementing it: https://github.com/huggingface/transformers/pull/26527\r\n\r\nHowever, in order to honor the contributions you have wanted to make (or have already started making!), I have put you all five as co-authors of the commit, @sahilbhosale63, @Adithya4720, @sachinSingh16-09, @riiyaa24.\r\n\r\nI have added the `HACKTOBERFEST-ACCEPTED` tag in case you're participating.\r\n\r\nThanks once again for your contribution, and we're looking forward to the next one :hugs: ",
"Thanks!! ",
"Is this still open? ",
"It will be closed by https://github.com/huggingface/transformers/pull/26527 @AVAniketh0905 "
] | 1,695 | 1,697 | 1,697 | MEMBER | null | Usage of `warnings.warn` has been deprecated in favor of `logging.warning`, which is managed by the `transformers.logging` utility.
* https://github.com/search?q=repo%3Ahuggingface%2Ftransformers+%22warnings.warn%22&type=code
* https://github.com/search?q=repo%3Ahuggingface%2Ftransformers%20UserWarning&type=code
`warnings.warn` may still be leveraged for situations where a single warning per runtime is favored, but this will be deprecated in the future as well. | {
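For contributors picking this up, the replacement typically looks like the following (a sketch of the intended pattern, not a specific file in the repo; the warning message is made up):

```python
# Before: a warning that bypasses the transformers logging controls.
# import warnings
# warnings.warn("`do_foo` is deprecated and will be removed in a future version.", FutureWarning)

# After: route the message through the transformers logging utility instead.
from transformers.utils import logging

logger = logging.get_logger(__name__)
logger.warning("`do_foo` is deprecated and will be removed in a future version.")
# logger.warning_once(...) can be used where a single warning per run is preferred.
```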
"url": "https://api.github.com/repos/huggingface/transformers/issues/26381/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 3,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26381/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26380 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26380/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26380/comments | https://api.github.com/repos/huggingface/transformers/issues/26380/events | https://github.com/huggingface/transformers/issues/26380 | 1,911,208,923 | I_kwDOCUB6oc5x6rvb | 26,380 | Padding changes the output of forward pass in Bloom models | {
"login": "tonitervo",
"id": 93580599,
"node_id": "U_kgDOBZPtNw",
"avatar_url": "https://avatars.githubusercontent.com/u/93580599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tonitervo",
"html_url": "https://github.com/tonitervo",
"followers_url": "https://api.github.com/users/tonitervo/followers",
"following_url": "https://api.github.com/users/tonitervo/following{/other_user}",
"gists_url": "https://api.github.com/users/tonitervo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tonitervo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tonitervo/subscriptions",
"organizations_url": "https://api.github.com/users/tonitervo/orgs",
"repos_url": "https://api.github.com/users/tonitervo/repos",
"events_url": "https://api.github.com/users/tonitervo/events{/privacy}",
"received_events_url": "https://api.github.com/users/tonitervo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The problem is not present in BERT model ('TurkuNLP/bert-base-finnish-cased-v1').",
"I tried to track down the problem and it seems that there is a major change from transformers 4.21.3 to 4.22.0.\r\nEven with these older versions the logits seem to differ a lot when using batch of one. So I modified the script so that the reference batch contains two samples of same length: ['Hello', 'hallo']\r\nWith this setup, 'bigscience/bloom-560m' gives the following printout when using transformers 4.21.3:\r\n\r\n> The reference logits:\r\n> [[-16.792621612548828, -4.263307571411133], [-6.076438903808594, -30.44691276550293]]\r\n> \r\n> Adding longer sentence has en effect on padding the first, and the forward pass of first sample changes:\r\n> [[-16.792625427246094, -4.263302803039551], [4.229606628417969, -34.07563400268555]]\r\n> \r\n> Adding even longer sentence changes both shorter ones in the batch:\r\n> [[-16.792625427246094, -4.263302803039551], [-5.1170654296875, -34.66762161254883], [7.155857086181641, -31.741636276245117]]\r\n> \r\n> By keeping the long sentence the same but modifying the middle sample - no change in other samples outputs:\r\n> [[-16.792625427246094, -4.263302803039551], [7.7057037353515625, -31.835201263427734], [7.155857086181641, -31.741636276245117]]\r\n\r\nBy updating to 4.22.0, the printout changes:\r\n> The reference logits:\r\n> [[26.202877044677734, 30.269332885742188], [-3.121173858642578, -25.74514389038086]]\r\n> \r\n> Adding longer sentence has en effect on padding the first, and the forward pass of first sample changes:\r\n> [[6.480306625366211, 3.541583776473999], [-35.4240608215332, -53.11130142211914]]\r\n> \r\n> Adding even longer sentence changes both shorter ones in the batch:\r\n> [[6.715631484985352, 3.287839889526367], [-35.9682731628418, -50.907257080078125], [-36.078636169433594, -52.57697296142578]]\r\n> \r\n> By keeping the long sentence the same but modifying the middle sample - no change in other samples outputs:\r\n> [[6.715631484985352, 3.287839889526367], [-39.45744323730469, -50.4066162109375], [-36.078636169433594, -52.57697296142578]]\r\n\r\n4.21.3 outputs somewhat the same logits for 'Hello' regardless of what is contained in the same batch. Still, there seem to be a separate problem in passing the 'Hello' as a single sample pass, so I don't know if that's another problem. But 4.22.0 seems to make the output logits unstable.",
"Hey! Thanks for reporting! I think this is related to #25921 and I'll have a look! ",
"It might be related, but I think the difference in my observation is that the length of the padding is the key factor, not the batch size.",
"Hey, if you are padding right, and explanation can be found [here](https://github.com/huggingface/transformers/issues/26569#issuecomment-1768157610)",
"> Hey, if you are padding right, and explanation can be found [here](https://github.com/huggingface/transformers/issues/26569#issuecomment-1768157610)\r\n\r\nWell this is interesting as the problem occurs specifically with left padding (which is the default). Once I switch to right padding with \r\ntokenizer.padding_side = \"right\"\r\nthe problem goes away and the output logits for 'Hello' remain the same regardless of the other samples in the batch.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,695 | 1,700 | 1,700 | NONE | null | ### System Info
- `transformers` version: 4.33.2
- Platform: Linux-4.14.322-246.539.amzn2.x86_64-x86_64-with-glibc2.26
- Python version: 3.9.5
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.3
- Accelerate version: 0.23.0
- Accelerate config: not found
- PyTorch version (GPU?): 1.13.1+cu117 (True)
- Tensorflow version (GPU?): 2.6.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
The longest sample within a batch seems to have an effect on the outputs of the other samples in the batch. Apparently the problem lies in the (size of the) padding used for those samples. This happens at least with Bloom models (tested 'bigscience/bloom-560m' and 'TurkuNLP/gpt3-finnish-small').
Related previous issues: #18809
- However, that issue concentrates on the loss "only", whereas the problem I observed is also present in the actual output of the model. Also, the logits change substantially, so I doubt this is just a rounding error as suggested.
#21080
This one focuses on position_ids, which to my knowledge aren't used at all in Bloom models
Script to reproduce:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
MODEL = 'bigscience/bloom-560m'
#MODEL = 'TurkuNLP/gpt3-finnish-small'
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
print('The reference logits:')
tokenized = dict(tokenizer(['Hello'], padding=True))
logits = model(input_ids=torch.tensor(tokenized['input_ids']), attention_mask=torch.tensor(tokenized['attention_mask'])).logits
print(logits.tolist())
print('\nAdding longer sentence has an effect on padding the first, and the forward pass of first sample changes:')
tokenized = dict(tokenizer(['Hello', 'Longer phrase that triggers padding of the first sample'], padding=True))
logits = model(input_ids=torch.tensor(tokenized['input_ids']), attention_mask=torch.tensor(tokenized['attention_mask'])).logits
print(logits.tolist())
print('\nAdding even longer sentence changes both shorter ones in the batch:')
tokenized = dict(tokenizer(['Hello', 'Longer phrase that triggers padding of the first sample', 'Even longer phrase that triggers padding of the first two samples'], padding=True))
logits = model(input_ids=torch.tensor(tokenized['input_ids']), attention_mask=torch.tensor(tokenized['attention_mask'])).logits
print(logits.tolist())
print('\nBy keeping the long sentence the same but modifying the middle sample - no change in other samples outputs:')
tokenized = dict(tokenizer(['Hello', 'Short phrase that does not impact others', 'Even longer phrase that triggers padding of the first two samples'], padding=True))
logits = model(input_ids=torch.tensor(tokenized['input_ids']), attention_mask=torch.tensor(tokenized['attention_mask'])).logits
print(logits.tolist())
```
> The reference logits:
> [[-22.095169067382812, -32.987308502197266]]
>
> Adding longer sentence has an effect on padding the first, and the forward pass of first sample changes:
> [[-6.184600830078125, -12.313322067260742], [-33.231956481933594, -37.62677764892578]]
>
> Adding even longer sentence changes both shorter ones in the batch:
> [[-6.512286186218262, -12.137149810791016], [-28.01362419128418, -36.64024353027344], [-33.26091003417969, -39.2308349609375]]
>
> By keeping the long sentence the same but modifying the middle sample - no change in other samples outputs:
> [[-6.512286186218262, -12.137149810791016], [-33.248321533203125, -40.82627868652344], [-33.26091003417969, -39.2308349609375]]
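For completeness, a minimal sketch of the same check with right padding (an assumption based on the workaround mentioned in the discussion; it continues the script above and the outputs are not re-verified here):
```python
# switching the tokenizer to right padding reportedly keeps the logits for 'Hello'
# stable regardless of the other samples in the batch
tokenizer.padding_side = "right"
tokenized = dict(tokenizer(['Hello', 'Longer phrase that triggers padding of the first sample'], padding=True))
logits = model(input_ids=torch.tensor(tokenized['input_ids']), attention_mask=torch.tensor(tokenized['attention_mask'])).logits
print(logits.tolist())
```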
### Expected behavior
Logits (or any output of the model) shouldn't change when the padding size of the batch changes. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26380/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26380/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26379 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26379/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26379/comments | https://api.github.com/repos/huggingface/transformers/issues/26379/events | https://github.com/huggingface/transformers/pull/26379 | 1,910,980,666 | PR_kwDOCUB6oc5bGLfk | 26,379 | Add OWLv2 | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I would go for simply adding `config.add_objectness_head` in the existing `OwlViT` model file(s).",
"I think current implementation is better than creating another class for OWLv2. One suggestion is to use more descriptive flag for OWLv2 something like `owlvit_v2=True` which enable the objectness head",
"Sorry for the delay I am reviewing now 😉 ",
"Thanks for the reviewing, opened a new PR that adds the model as a new standalone model.",
"Hi, @NielsRogge,thanks for your great work.\r\nI have a simple question: \r\nHow to convert pytorch model into jax model? Do you have any available tools or scripts?\r\nThank you very much\r\n"
] | 1,695 | 1,696 | 1,696 | CONTRIBUTOR | null | # What does this PR do?
This PR adds the OWLv2 checkpoints, and investigates why results weren't the same as the original Colab for v1 as pointed out in #21206. After investigation, it turns out that 1) the model returns the exact same logits as the original implementation on the same input data, but 2) the image preprocessing wasn't done in exactly the same way as the original Scenic repo, which involves padding, causing a difference. This PR fixes that by introducing a new `Owlv2ImageProcessor` which people can use to get equivalent results.
Fixes #26315 #21206
Question:
- the new OWLv2 models have an extra objectness head, which returns `objectness_logits` => we can either add an entirely new `Owlv2ForObjectDetection` which copies 99% of `OwlViTForObjectDetection`, or do it like it's currently implemented (by adding a `config.add_objectness_head` attribute; a rough sketch of this option is shown below). Happy to read your opinions!
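A rough, illustrative sketch of the flag-based option (the attribute name and head shape here are assumptions, not a final API):
```python
import torch.nn as nn

class OwlViTObjectnessHeadSketch(nn.Module):
    """Illustrative only: shows how a config flag could gate the extra objectness head."""

    def __init__(self, config):
        super().__init__()
        # v1 checkpoints keep working because the head is only created when the flag is set
        self.objectness_head = (
            nn.Linear(config.vision_config.hidden_size, 1)
            if getattr(config, "add_objectness_head", False)
            else None
        )

    def forward(self, image_embeds):
        if self.objectness_head is None:
            return None
        # one objectness logit per image patch/query embedding
        return self.objectness_head(image_embeds).squeeze(-1)
```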
To do:
- [ ] update layer norm eps of LayerNorms, which is 1e-5 in PyTorch but 1e-6 in Flax by default | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26379/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26379/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26379",
"html_url": "https://github.com/huggingface/transformers/pull/26379",
"diff_url": "https://github.com/huggingface/transformers/pull/26379.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26379.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26404 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26404/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26404/comments | https://api.github.com/repos/huggingface/transformers/issues/26404/events | https://github.com/huggingface/transformers/issues/26404 | 1,912,889,095 | I_kwDOCUB6oc5yBF8H | 26,404 | cannot import name 'is_essentia_available' from 'transformers.utils' | {
"login": "philschmid",
"id": 32632186,
"node_id": "MDQ6VXNlcjMyNjMyMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/32632186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/philschmid",
"html_url": "https://github.com/philschmid",
"followers_url": "https://api.github.com/users/philschmid/followers",
"following_url": "https://api.github.com/users/philschmid/following{/other_user}",
"gists_url": "https://api.github.com/users/philschmid/gists{/gist_id}",
"starred_url": "https://api.github.com/users/philschmid/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/philschmid/subscriptions",
"organizations_url": "https://api.github.com/users/philschmid/orgs",
"repos_url": "https://api.github.com/users/philschmid/repos",
"events_url": "https://api.github.com/users/philschmid/events{/privacy}",
"received_events_url": "https://api.github.com/users/philschmid/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @pacman100 ",
"solved ✅",
"Transfering to `transformers` as it's a `transformers` issue underneath",
"> solved ✅\r\n\r\nHi,I got the same error when I import diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion. \r\n`RuntimeError: Failed to import diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion because of the following error (look up to see its traceback):\r\ncannot import name 'is_essentia_available' from 'transformers.utils' (C:\\Users\\Ziz\\AppData\\Roaming\\Python\\Python311\\site-packages\\transformers\\utils\\__init__.py)`\r\n\r\nCan you kindly show how you solved the problem?thank you!\r\nbest wishes."
] | 1,695 | 1,701 | 1,695 | MEMBER | null | ### System Info
```Shell
- `Accelerate` version: 0.23.0
- Platform: macOS-12.5-arm64-arm-64bit
- Python version: 3.8.12
- Numpy version: 1.21.4
- PyTorch version (GPU?): 2.0.0 (False)
- PyTorch XPU available: False
- PyTorch NPU available: False
- System RAM: 32.00 GB
- `Accelerate` default config:
Not found
transformers-cli env:
- `transformers` version: 4.33.2
- Platform: macOS-12.5-arm64-arm-64bit
- Python version: 3.8.12
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.3
- Accelerate version: 0.23.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.0 (False)
```
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)
- [ ] My own task or dataset (give details below)
### Reproduction
1. pip install transformers and accelerate
2. try to run the code below
```python
from accelerate.commands.estimate import estimate_command_parser, estimate_command, gather_data
parser = estimate_command_parser()
args = parser.parse_args(["bert-base-cased", "--dtypes", "float32"])
output = gather_data(args)
```
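A minimal diagnostic sketch, assuming the failure comes from an older `transformers` installation being picked up (the missing symbol exists in recent releases):
```python
import transformers
import transformers.utils

# shows which installation is actually imported and whether it exposes the helper
print(transformers.__version__, transformers.__file__)
print(hasattr(transformers.utils, "is_essentia_available"))
```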
### Expected behavior
The snippet above should run, but instead the following error is raised:
```bash
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
[/var/folders/qt/wszh_95953qc0znj82hjxs5m0000gn/T/ipykernel_53424/1002179948.py](https://file+.vscode-resource.vscode-cdn.net/var/folders/qt/wszh_95953qc0znj82hjxs5m0000gn/T/ipykernel_53424/1002179948.py) in
----> 1 from accelerate.commands.estimate import estimate_command_parser, estimate_command, gather_data
2
3 parser = estimate_command_parser()
4 args = parser.parse_args(["bert-base-cased", "--dtypes", "float32"])
5 output = gather_data(args)
[/opt/homebrew/Caskroom/miniforge/base/envs/hf/lib/python3.8/site-packages/accelerate/commands/estimate.py](https://file+.vscode-resource.vscode-cdn.net/opt/homebrew/Caskroom/miniforge/base/envs/hf/lib/python3.8/site-packages/accelerate/commands/estimate.py) in
19 from huggingface_hub.utils import GatedRepoError, RepositoryNotFoundError
20
---> 21 from accelerate import init_empty_weights
22 from accelerate.utils import (
23 calculate_maximum_sizes,
[/opt/homebrew/Caskroom/miniforge/base/envs/hf/lib/python3.8/site-packages/accelerate/__init__.py](https://file+.vscode-resource.vscode-cdn.net/opt/homebrew/Caskroom/miniforge/base/envs/hf/lib/python3.8/site-packages/accelerate/__init__.py) in
1 __version__ = "0.23.0"
2
----> 3 from .accelerator import Accelerator
4 from .big_modeling import (
5 cpu_offload,
[/opt/homebrew/Caskroom/miniforge/base/envs/hf/lib/python3.8/site-packages/accelerate/accelerator.py](https://file+.vscode-resource.vscode-cdn.net/opt/homebrew/Caskroom/miniforge/base/envs/hf/lib/python3.8/site-packages/accelerate/accelerator.py) in
34 import torch.utils.hooks as hooks
35
---> 36 from .checkpointing import load_accelerator_state, load_custom_state, save_accelerator_state, save_custom_state
37 from .data_loader import DataLoaderDispatcher, prepare_data_loader, skip_first_batches
38 from .logging import get_logger
[/opt/homebrew/Caskroom/miniforge/base/envs/hf/lib/python3.8/site-packages/accelerate/checkpointing.py](https://file+.vscode-resource.vscode-cdn.net/opt/homebrew/Caskroom/miniforge/base/envs/hf/lib/python3.8/site-packages/accelerate/checkpointing.py) in
22 from torch.cuda.amp import GradScaler
23
---> 24 from .utils import (
25 MODEL_NAME,
26 OPTIMIZER_NAME,
[/opt/homebrew/Caskroom/miniforge/base/envs/hf/lib/python3.8/site-packages/accelerate/utils/__init__.py](https://file+.vscode-resource.vscode-cdn.net/opt/homebrew/Caskroom/miniforge/base/envs/hf/lib/python3.8/site-packages/accelerate/utils/__init__.py) in
145 prepare_tpu,
146 )
--> 147 from .megatron_lm import (
148 AbstractTrainStep,
149 BertTrainStep,
[/opt/homebrew/Caskroom/miniforge/base/envs/hf/lib/python3.8/site-packages/accelerate/utils/megatron_lm.py](https://file+.vscode-resource.vscode-cdn.net/opt/homebrew/Caskroom/miniforge/base/envs/hf/lib/python3.8/site-packages/accelerate/utils/megatron_lm.py) in
30
31 if is_transformers_available():
---> 32 from transformers.modeling_outputs import (
33 CausalLMOutputWithCrossAttentions,
34 Seq2SeqLMOutput,
[/opt/homebrew/Caskroom/miniforge/base/envs/hf/lib/python3.8/site-packages/transformers/__init__.py](https://file+.vscode-resource.vscode-cdn.net/opt/homebrew/Caskroom/miniforge/base/envs/hf/lib/python3.8/site-packages/transformers/__init__.py) in
25 # Check the dependencies satisfy the minimal versions required.
26 from . import dependency_versions_check
---> 27 from .utils import (
28 OptionalDependencyNotAvailable,
29 _LazyModule,
ImportError: cannot import name 'is_essentia_available' from 'transformers.utils' (/opt/homebrew/Caskroom/miniforge/base/envs/hf/lib/python3.8/site-packages/transformers/utils/__init__.py)
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26404/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26404/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26378 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26378/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26378/comments | https://api.github.com/repos/huggingface/transformers/issues/26378/events | https://github.com/huggingface/transformers/issues/26378 | 1,910,908,584 | I_kwDOCUB6oc5x5iao | 26,378 | Bug in Trainer for PeftModel multi-gpu training by accelerate | {
"login": "Orion-Zheng",
"id": 62330719,
"node_id": "MDQ6VXNlcjYyMzMwNzE5",
"avatar_url": "https://avatars.githubusercontent.com/u/62330719?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Orion-Zheng",
"html_url": "https://github.com/Orion-Zheng",
"followers_url": "https://api.github.com/users/Orion-Zheng/followers",
"following_url": "https://api.github.com/users/Orion-Zheng/following{/other_user}",
"gists_url": "https://api.github.com/users/Orion-Zheng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Orion-Zheng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Orion-Zheng/subscriptions",
"organizations_url": "https://api.github.com/users/Orion-Zheng/orgs",
"repos_url": "https://api.github.com/users/Orion-Zheng/repos",
"events_url": "https://api.github.com/users/Orion-Zheng/events{/privacy}",
"received_events_url": "https://api.github.com/users/Orion-Zheng/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @pacman100 :)",
"Hey! Would recommend you to update to the latest version of both packages and see if this is fixed! ",
"Hello, \r\n\r\nI think @younesbelkada should have a better idea about this.",
"Thanks! Will look into it",
"Hi @Orion-Zheng \r\nThanks for the issue, I was not able to reproduce the issue with the newest transformers version (from source), it seems the model gets un-wrapped from DDP before the call to `_load_best_model()`. \r\n\r\nHere is the snippet I used:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\nfrom transformers import AutoModelForCausalLM, TrainingArguments\r\nfrom trl import SFTTrainer\r\nfrom accelerate import PartialState\r\n\r\nfrom peft import LoraConfig\r\n\r\ndataset = load_dataset(\"timdettmers/openassistant-guanaco\", split=\"train[:1%]\")\r\n\r\npeft_config = LoraConfig(\r\n r=16,\r\n lora_alpha=32,\r\n lora_dropout=0.05,\r\n bias=\"none\",\r\n task_type=\"CAUSAL_LM\",\r\n)\r\n\r\nargs = TrainingArguments(\r\n output_dir=\"./out-test\",\r\n max_steps=2,\r\n load_best_model_at_end=True,\r\n save_steps=1,\r\n save_strategy=\"steps\",\r\n evaluation_strategy=\"steps\",\r\n eval_steps=1,\r\n per_device_train_batch_size=1,\r\n per_device_eval_batch_size=1\r\n)\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(\r\n \"EleutherAI/gpt-neo-125m\",\r\n load_in_4bit=True,\r\n device_map={\"\": PartialState().process_index},\r\n)\r\n\r\ntrainer = SFTTrainer(\r\n model,\r\n args=args,\r\n train_dataset=dataset,\r\n eval_dataset=dataset,\r\n dataset_text_field=\"text\",\r\n peft_config=peft_config,\r\n)\r\n\r\ntrainer.train()\r\n```\r\nCan you try your script with the latest versions of transformers and accelerate? `pip install -U transformers accelerate`",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,695 | 1,704 | 1,704 | NONE | null | ### System Info
When using the accelerate library to fine-tune LLaMA2 by QLoRA on multiple GPUs, I met an error when I set load_best_model_at_end=True in the TrainingArguments.
<img width="1115" alt="image" src="https://github.com/huggingface/transformers/assets/62330719/a2e7a49b-3627-4249-8fb2-bc8a61f0ede1">
It seems the function '_load_best_model' tries to find the weights of the complete model under the PEFT model's checkpoint dir, but actually only the adapter's weights are saved in this directory.
<img width="350" alt="image" src="https://github.com/huggingface/transformers/assets/62330719/ecbd2f7c-8230-4e63-8265-11ba790e03e7">
I tried to find the reason behind the error. I noticed that the best PEFT model should be loaded by these lines of code in '_load_best_model',
<img width="1273" alt="image" src="https://github.com/huggingface/transformers/assets/62330719/e06fbcee-035d-458a-b348-82f783d1a073">
but in multi-GPU training, the trainer executes a different code path, which is not intended for the PEFT model's checkpoint.
<img width="1110" alt="image" src="https://github.com/huggingface/transformers/assets/62330719/21bf760b-a208-4a91-b718-d6aec13dbf39">
But why is the correct code skipped? I finally found the reason.
In the multi-GPU case, the PEFT model is wrapped in DistributedDataParallel, so the model looks like:
DistributedDataParallel(
(module): PeftModelForCausalLM(
which means 'model' is no longer a PeftModel, so this condition can't be satisfied and the correct code is skipped.
<img width="1113" alt="image" src="https://github.com/huggingface/transformers/assets/62330719/52035c34-5c19-4bfd-a68c-83a7b542ca82">
To solve this problem, I added some code to this function. I know this is not an elegant way, but it works for me.
<img width="1106" alt="image" src="https://github.com/huggingface/transformers/assets/62330719/3bbbb438-4b69-46c8-9c48-a7dbd1dcd956">
Besides, I think the original Trainer doesn't handle training a PeftModel on multiple GPUs well, which also leads to other bugs in Trainer.save_model when doing multi-GPU training for a PEFT model. Because I don't have enough time to read all of the Trainer code, I can't solve this problem by myself. So could anyone work on this? Thank you very much!😃
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Transformer Version: 4.31.0
(sorry, my code consists of several files. I'll attach them to this issue once I've organized them into one.)
### Expected behavior
The PeftModel can be saved and loaded correctly by the Trainer during multi-GPU training. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26378/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26378/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26377 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26377/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26377/comments | https://api.github.com/repos/huggingface/transformers/issues/26377/events | https://github.com/huggingface/transformers/issues/26377 | 1,910,667,809 | I_kwDOCUB6oc5x4noh | 26,377 | RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn | {
"login": "phamkhactu",
"id": 42369268,
"node_id": "MDQ6VXNlcjQyMzY5MjY4",
"avatar_url": "https://avatars.githubusercontent.com/u/42369268?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/phamkhactu",
"html_url": "https://github.com/phamkhactu",
"followers_url": "https://api.github.com/users/phamkhactu/followers",
"following_url": "https://api.github.com/users/phamkhactu/following{/other_user}",
"gists_url": "https://api.github.com/users/phamkhactu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/phamkhactu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/phamkhactu/subscriptions",
"organizations_url": "https://api.github.com/users/phamkhactu/orgs",
"repos_url": "https://api.github.com/users/phamkhactu/repos",
"events_url": "https://api.github.com/users/phamkhactu/events{/privacy}",
"received_events_url": "https://api.github.com/users/phamkhactu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @phamkhactu, we'd love to help but there is a significant amount of code here.\r\n\r\nWould you mind sharing a reproducible code example so that we may investigate the issue? Thanks!",
"Hi @LysandreJik \r\n\r\nI will share with you about my task. I want to fine-tune llama2 with peft. when I load pretrain model from checkpoint [line 610](https://github.com/ymcui/Chinese-LLaMA-Alpaca/blob/main/scripts/training/run_clm_pt_with_peft.py#L610), I get error above.\r\n\r\nWith my task training, create some steps:\r\n1. Create some class, loss, metrics for eval. You don't need to concern about it.\r\n2. Load again checkpoint from pre-trained llama. At this step, I get error.\r\n\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,695 | 1,698 | 1,698 | NONE | null | ### System Info
transformers: 4.32.0
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I have the following code:
```
#!/usr/bin/env python
# coding=utf-8
# Copyright 2020 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Fine-tuning the library models for causal language modeling (GPT, GPT-2, CTRL, ...) on a text file or a dataset.
Here is the full list of checkpoints on the hub that can be fine-tuned by this script:
https://huggingface.co/models?filter=text-generation
"""
# You can also adapt this script on your own causal language modeling task. Pointers for this are left as comments.
import logging
import numpy as np
import math
import os
import sys
from dataclasses import dataclass, field
from itertools import chain
from typing import Optional, List, Dict, Any, Mapping
from pathlib import Path
import datasets
import torch
from datasets import load_dataset, concatenate_datasets
import transformers
from transformers import (
CONFIG_MAPPING,
MODEL_FOR_CAUSAL_LM_MAPPING,
AutoConfig,
AutoModelForCausalLM,
LlamaForCausalLM,
LlamaTokenizer,
AutoTokenizer,
HfArgumentParser,
Trainer,
TrainingArguments,
is_torch_tpu_available,
set_seed,
)
from transformers.testing_utils import CaptureLogger
from transformers.trainer_utils import get_last_checkpoint
from transformers.utils import send_example_telemetry
from transformers.utils.versions import require_version
from sklearn.metrics import accuracy_score
from peft import LoraConfig, TaskType, get_peft_model, PeftModel, get_peft_model_state_dict
from transformers.trainer_utils import PREFIX_CHECKPOINT_DIR
class SavePeftModelCallback(transformers.TrainerCallback):
def save_model(self, args, state, kwargs):
if state.best_model_checkpoint is not None:
checkpoint_folder = os.path.join(state.best_model_checkpoint, "pt_lora_model")
else:
checkpoint_folder = os.path.join(args.output_dir, f"{PREFIX_CHECKPOINT_DIR}-{state.global_step}")
peft_model_path = os.path.join(checkpoint_folder, "pt_lora_model")
kwargs["model"].save_pretrained(peft_model_path)
kwargs["tokenizer"].save_pretrained(peft_model_path)
def on_save(self, args, state, control, **kwargs):
self.save_model(args, state, kwargs)
return control
def on_train_end(self, args, state, control, **kwargs):
peft_model_path = os.path.join(args.output_dir, "pt_lora_model")
kwargs["model"].save_pretrained(peft_model_path)
kwargs["tokenizer"].save_pretrained(peft_model_path)
def accuracy(predictions, references, normalize=True, sample_weight=None):
return {
"accuracy": float(
accuracy_score(references, predictions, normalize=normalize, sample_weight=sample_weight)
)
}
def compute_metrics(eval_preds):
preds, labels = eval_preds
# preds have the same shape as the labels, after the argmax(-1) has been calculated
# by preprocess_logits_for_metrics but we need to shift the labels
labels = labels[:, 1:].reshape(-1)
preds = preds[:, :-1].reshape(-1)
return accuracy(predictions=preds, references=labels)
def preprocess_logits_for_metrics(logits, labels):
if isinstance(logits, tuple):
# Depending on the model and config, logits may contain extra tensors,
# like past_key_values, but logits always come first
logits = logits[0]
return logits.argmax(dim=-1)
def fault_tolerance_data_collator(features: List) -> Dict[str, Any]:
if not isinstance(features[0], Mapping):
features = [vars(f) for f in features]
first = features[0]
batch = {}
# Special handling for labels.
# Ensure that tensor is created with the correct type
# (it should be automatically the case, but let's make sure of it.)
if "label" in first and first["label"] is not None:
label = first["label"].item() if isinstance(first["label"], torch.Tensor) else first["label"]
dtype = torch.long if isinstance(label, int) else torch.float
batch["labels"] = torch.tensor([f["label"] for f in features], dtype=dtype)
elif "label_ids" in first and first["label_ids"] is not None:
if isinstance(first["label_ids"], torch.Tensor):
batch["labels"] = torch.stack([f["label_ids"] for f in features])
else:
dtype = torch.long if isinstance(first["label_ids"][0], int) else torch.float
batch["labels"] = torch.tensor([f["label_ids"] for f in features], dtype=dtype)
# Handling of all other possible keys.
# Again, we will use the first element to figure out which key/values are not None for this model.
try:
for k, v in first.items():
if k not in ("label", "label_ids") and v is not None and not isinstance(v, str):
if isinstance(v, torch.Tensor):
batch[k] = torch.stack([f[k] for f in features])
elif isinstance(v, np.ndarray):
batch[k] = torch.tensor(np.stack([f[k] for f in features]))
else:
batch[k] = torch.tensor([f[k] for f in features])
except ValueError: # quick fix by simply take the first example
for k, v in first.items():
if k not in ("label", "label_ids") and v is not None and not isinstance(v, str):
if isinstance(v, torch.Tensor):
batch[k] = torch.stack([features[0][k]] * len(features))
elif isinstance(v, np.ndarray):
batch[k] = torch.tensor(np.stack([features[0][k]] * len(features)))
else:
batch[k] = torch.tensor([features[0][k]] * len(features))
return batch
MODEL_CONFIG_CLASSES = list(MODEL_FOR_CAUSAL_LM_MAPPING.keys())
MODEL_TYPES = tuple(conf.model_type for conf in MODEL_CONFIG_CLASSES)
@dataclass
class ModelArguments:
"""
Arguments pertaining to which model/config/tokenizer we are going to fine-tune, or train from scratch.
"""
model_name_or_path: Optional[str] = field(
default=None,
metadata={
"help": (
"The model checkpoint for weights initialization.Don't set if you want to train a model from scratch."
)
},
)
tokenizer_name_or_path: Optional[str] = field(
default=None,
metadata={
"help": (
"The tokenizer for weights initialization.Don't set if you want to train a model from scratch."
)
},
)
model_type: Optional[str] = field(
default=None,
metadata={"help": "If training from scratch, pass a model type from the list: " + ", ".join(MODEL_TYPES)},
)
config_overrides: Optional[str] = field(
default=None,
metadata={
"help": (
"Override some existing default config settings when a model is trained from scratch. Example: "
"n_embd=10,resid_pdrop=0.2,scale_attn_weights=false,summary_type=cls_index"
)
},
)
config_name: Optional[str] = field(
default=None, metadata={"help": "Pretrained config name or path if not the same as model_name"}
)
tokenizer_name: Optional[str] = field(
default=None, metadata={"help": "Pretrained tokenizer name or path if not the same as model_name"}
)
cache_dir: Optional[str] = field(
default=None,
metadata={"help": "Where do you want to store the pretrained models downloaded from huggingface.co"},
)
use_fast_tokenizer: bool = field(
default=True,
metadata={"help": "Whether to use one of the fast tokenizer (backed by the tokenizers library) or not."},
)
model_revision: str = field(
default="main",
metadata={"help": "The specific model version to use (can be a branch name, tag name or commit id)."},
)
use_auth_token: bool = field(
default=False,
metadata={
"help": (
"Will use the token generated when running `huggingface-cli login` (necessary to use this script "
"with private models)."
)
},
)
torch_dtype: Optional[str] = field(
default=None,
metadata={
"help": (
"Override the default `torch.dtype` and load the model under this dtype. If `auto` is passed, the "
"dtype will be automatically derived from the model's weights."
),
"choices": ["auto", "bfloat16", "float16", "float32"],
},
)
def __post_init__(self):
if self.config_overrides is not None and (self.config_name is not None or self.model_name_or_path is not None):
raise ValueError(
"--config_overrides can't be used in combination with --config_name or --model_name_or_path"
)
@dataclass
class DataTrainingArguments:
"""
Arguments pertaining to what data we are going to input our model for training and eval.
"""
dataset_dir: Optional[str] = field(
default=None, metadata={"help": "The name of the dataset to use (via the datasets library)."}
)
dataset_config_name: Optional[str] = field(
default=None, metadata={"help": "The configuration name of the dataset to use (via the datasets library)."}
)
train_file: Optional[str] = field(default=None, metadata={"help": "The input training data file (a text file)."})
validation_file: Optional[str] = field(
default=None,
metadata={"help": "An optional input evaluation data file to evaluate the perplexity on (a text file)."},
)
max_train_samples: Optional[int] = field(
default=None,
metadata={
"help": (
"For debugging purposes or quicker training, truncate the number of training examples to this "
"value if set."
)
},
)
max_eval_samples: Optional[int] = field(
default=None,
metadata={
"help": (
"For debugging purposes or quicker training, truncate the number of evaluation examples to this "
"value if set."
)
},
)
streaming: bool = field(default=False, metadata={"help": "Enable streaming mode"})
block_size: Optional[int] = field(
default=None,
metadata={
"help": (
"Optional input sequence length after tokenization. "
"The training dataset will be truncated in block of this size for training. "
"Default to the model max input length for single sentence inputs (take into account special tokens)."
)
},
)
overwrite_cache: bool = field(
default=False, metadata={"help": "Overwrite the cached training and evaluation sets"}
)
validation_split_percentage: Optional[float] = field(
default=0.05,
metadata={
"help": "The percentage of the train set used as validation set in case there's no validation split"
},
)
preprocessing_num_workers: Optional[int] = field(
default=None,
metadata={"help": "The number of processes to use for the preprocessing."},
)
keep_linebreaks: bool = field(
default=True, metadata={"help": "Whether to keep line breaks when using TXT files or not."}
)
data_cache_dir: Optional[str] = field(default="./", metadata={"help": "The datasets processed stored"})
def __post_init__(self):
if self.streaming:
require_version("datasets>=2.0.0", "The streaming feature requires `datasets>=2.0.0`")
@dataclass
class MyTrainingArguments(TrainingArguments):
trainable : Optional[str] = field(default="q_proj,v_proj")
lora_rank : Optional[int] = field(default=8)
lora_dropout : Optional[float] = field(default=0.1)
lora_alpha : Optional[float] = field(default=32.)
modules_to_save : Optional[str] = field(default=None)
debug_mode : Optional[bool] = field(default=False)
peft_path : Optional[str] = field(default=None)
logger = logging.getLogger(__name__)
def main():
parser = HfArgumentParser((ModelArguments, DataTrainingArguments, MyTrainingArguments))
if len(sys.argv) == 2 and sys.argv[1].endswith(".json"):
# If we pass only one argument to the script and it's the path to a json file,
# let's parse it to get our arguments.
model_args, data_args, training_args = parser.parse_json_file(json_file=os.path.abspath(sys.argv[1]))
else:
model_args, data_args, training_args = parser.parse_args_into_dataclasses()
# Sending telemetry. Tracking the example usage helps us better allocate resources to maintain them. The
# information sent is the one passed as arguments along with your Python/PyTorch versions.
send_example_telemetry("run_clm", model_args, data_args)
# Setup logging
logging.basicConfig(format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",datefmt="%m/%d/%Y %H:%M:%S",
level=logging.INFO, # if training_args.local_rank in [-1, 0] else logging.WARN,
handlers=[logging.StreamHandler(sys.stdout)],)
if training_args.should_log:
# The default of training_args.log_level is passive, so we set log level at info here to have that default.
transformers.utils.logging.set_verbosity_info()
log_level = training_args.get_process_log_level()
logger.setLevel(log_level)
datasets.utils.logging.set_verbosity(log_level)
transformers.utils.logging.set_verbosity(log_level)
transformers.utils.logging.enable_default_handler()
transformers.utils.logging.enable_explicit_format()
# transformers.tokenization_utils.logging.set_verbosity_warning()
# Log on each process the small summary:
logger.warning(
f"Process rank: {training_args.local_rank}, device: {training_args.device}, n_gpu: {training_args.n_gpu}"
+ f"distributed training: {bool(training_args.local_rank != -1)}, 16-bits training: {training_args.fp16}"
)
# Detecting last checkpoint.
last_checkpoint = None
if os.path.isdir(training_args.output_dir) and training_args.do_train and not training_args.overwrite_output_dir:
last_checkpoint = get_last_checkpoint(training_args.output_dir)
if last_checkpoint is None and len(os.listdir(training_args.output_dir)) > 0:
raise ValueError(
f"Output directory ({training_args.output_dir}) already exists and is not empty. "
"Use --overwrite_output_dir to overcome."
)
elif last_checkpoint is not None and training_args.resume_from_checkpoint is None:
logger.info(
f"Checkpoint detected, resuming training at {last_checkpoint}. To avoid this behavior, change "
"the `--output_dir` or add `--overwrite_output_dir` to train from scratch."
)
# Set seed before initializing model.
set_seed(training_args.seed)
config_kwargs = {
"cache_dir": model_args.cache_dir,
"revision": model_args.model_revision,
"use_auth_token": True if model_args.use_auth_token else None,
}
if model_args.config_name:
config = AutoConfig.from_pretrained(model_args.config_name, **config_kwargs)
elif model_args.model_name_or_path:
config = AutoConfig.from_pretrained(model_args.model_name_or_path, **config_kwargs)
else:
config = CONFIG_MAPPING[model_args.model_type]()
logger.warning("You are instantiating a new config instance from scratch.")
if model_args.config_overrides is not None:
logger.info(f"Overriding config: {model_args.config_overrides}")
config.update_from_string(model_args.config_overrides)
logger.info(f"New config: {config}")
tokenizer_kwargs = {
"cache_dir": model_args.cache_dir,
"use_fast": model_args.use_fast_tokenizer,
"revision": model_args.model_revision,
"use_auth_token": True if model_args.use_auth_token else None,
}
if model_args.tokenizer_name:
tokenizer = AutoTokenizer.from_pretrained(model_args.tokenizer_name, **tokenizer_kwargs)
elif model_args.tokenizer_name_or_path:
tokenizer = LlamaTokenizer.from_pretrained(model_args.tokenizer_name_or_path, **tokenizer_kwargs)
else:
raise ValueError(
"You are instantiating a new tokenizer from scratch. This is not supported by this script."
"You can do it from another script, save it, and load it from here, using --tokenizer_name."
)
# Preprocessing the datasets.
# First we tokenize all the texts.
# since this will be pickled to avoid _LazyModule error in Hasher force logger loading before tokenize_function
tok_logger = transformers.utils.logging.get_logger("transformers.tokenization_utils_base")
def tokenize_function(examples):
with CaptureLogger(tok_logger) as cl:
output = tokenizer(examples["text"])
# clm input could be much much longer than block_size
if "Token indices sequence length is longer than the" in cl.out:
tok_logger.warning(
"^^^^^^^^^^^^^^^^ Please ignore the warning above - this long input will be chunked into smaller bits"
" before being passed to the model."
)
return output
if data_args.block_size is None:
block_size = tokenizer.model_max_length
if block_size > 1024:
logger.warning(
"The chosen tokenizer supports a `model_max_length` that is longer than the default `block_size` value"
" of 1024. If you would like to use a longer `block_size` up to `tokenizer.model_max_length` you can"
" override this default with `--block_size xxx`."
)
block_size = 1024
else:
if data_args.block_size > tokenizer.model_max_length:
logger.warning(
f"The block_size passed ({data_args.block_size}) is larger than the maximum length for the model"
f"({tokenizer.model_max_length}). Using block_size={tokenizer.model_max_length}."
)
block_size = min(data_args.block_size, tokenizer.model_max_length)
# Main data processing function that will concatenate all texts from our dataset and generate chunks of block_size.
def group_texts(examples):
# Concatenate all texts.
concatenated_examples = {k: list(chain(*examples[k])) for k in examples.keys()}
total_length = len(concatenated_examples[list(examples.keys())[0]])
# We drop the small remainder, we could add padding if the model supported it instead of this drop, you can
# customize this part to your needs.
if total_length >= block_size:
total_length = (total_length // block_size) * block_size
# Split by chunks of max_len.
result = {
k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
for k, t in concatenated_examples.items()
}
result["labels"] = result["input_ids"].copy()
return result
with training_args.main_process_first(desc="dataset map tokenization and grouping"):
lm_datasets = []
path = Path(data_args.dataset_dir)
files = [file.name for file in path.glob("*.txt")]
if training_args.debug_mode is True:
files = [files[0]]
for idx, file in enumerate(files):
data_file = os.path.join(path, file)
filename = ''.join(file.split(".")[:-1])
cache_path = os.path.join(data_args.data_cache_dir, filename)
os.makedirs(cache_path, exist_ok=True)
try:
processed_dataset = datasets.load_from_disk(cache_path, keep_in_memory=False)
logger.info(f'training datasets-{filename} has been loaded from disk')
except Exception:
cache_dir = os.path.join(data_args.data_cache_dir, filename+"_text")
os.makedirs(cache_dir, exist_ok=True)
raw_dataset = load_dataset("text", data_files=data_file, cache_dir=cache_dir, keep_in_memory=False)
logger.info(f"{file} has been loaded")
tokenized_dataset = raw_dataset.map(
tokenize_function,
batched=True,
num_proc=data_args.preprocessing_num_workers,
remove_columns="text",
load_from_cache_file=True,
keep_in_memory=False,
cache_file_names = {k: os.path.join(cache_dir, 'tokenized.arrow') for k in raw_dataset},
desc="Running tokenizer on dataset",
)
grouped_datasets = tokenized_dataset.map(
group_texts,
batched=True,
num_proc=data_args.preprocessing_num_workers,
load_from_cache_file=True,
keep_in_memory=False,
cache_file_names = {k: os.path.join(cache_dir, 'grouped.arrow') for k in tokenized_dataset},
desc=f"Grouping texts in chunks of {block_size}",
)
processed_dataset = grouped_datasets
processed_dataset.save_to_disk(cache_path)
if idx == 0:
lm_datasets = processed_dataset['train']
else:
assert lm_datasets.features.type == processed_dataset["train"].features.type
lm_datasets = concatenate_datasets([lm_datasets, processed_dataset["train"]])
lm_datasets = lm_datasets.train_test_split(test_size = data_args.validation_split_percentage)
if training_args.do_train:
train_dataset = lm_datasets['train']
if data_args.max_train_samples is not None:
max_train_samples = min(len(train_dataset), data_args.max_train_samples)
train_dataset = train_dataset.select(range(max_train_samples))
logger.info(f"Num train_samples {len(train_dataset)}")
logger.info("training example:")
logger.info(tokenizer.decode(train_dataset[0]['input_ids']))
if training_args.do_eval:
eval_dataset = lm_datasets["test"]
if data_args.max_eval_samples is not None:
max_eval_samples = min(len(eval_dataset), data_args.max_eval_samples)
eval_dataset = eval_dataset.select(range(max_eval_samples))
logger.info(f"Num eval_samples {len(eval_dataset)}")
logger.info("training example:")
logger.info(tokenizer.decode(eval_dataset[0]['input_ids']))
if model_args.model_name_or_path:
torch_dtype = (
model_args.torch_dtype
if model_args.torch_dtype in ["auto", None]
else getattr(torch, model_args.torch_dtype)
)
model = LlamaForCausalLM.from_pretrained(
model_args.model_name_or_path,
from_tf=bool(".ckpt" in model_args.model_name_or_path),
config=config,
cache_dir=model_args.cache_dir,
revision=model_args.model_revision,
use_auth_token=True if model_args.use_auth_token else None,
torch_dtype=torch_dtype,
low_cpu_mem_usage=True
)
else:
model = AutoModelForCausalLM.from_config(config)
n_params = sum({p.data_ptr(): p.numel() for p in model.parameters()}.values())
logger.info(f"Training new model from scratch - Total size={n_params/2**20:.2f}M params")
model_vocab_size = model.get_output_embeddings().weight.size(0)
# if not (
# (model_vocab_size==32000 and len(tokenizer)==49953) or \
# (model_vocab_size==32000 and len(tokenizer)==32000) or \
# (model_vocab_size==49953 and len(tokenizer)==49953) or \
# (model_vocab_size==49954 and len(tokenizer)==49954)
# ):
# raise ValueError(
# f"The combination of base model (size: {model_vocab_size}) and tokenizer (size: {len(tokenizer)}) is not a valid configuration. Please check our project wiki for further information. \n"
# "Valid configurations (base model / tokenizer):\n"
# "- Continue pre-training original LLaMA: 32000 / 32000 \n"
# "- Pre-training Chinese LLaMA based on original LLaMA: 32000 / 49953 \n"
# "- Continue pre-training Chinese LLaMA: 49953 / 49953 \n"
# "- Continue pre-training Chinese Alpaca: 49954 / 49954 \n")
model.resize_token_embeddings(len(tokenizer))
if training_args.peft_path is not None:
logger.info("Peft from pre-trained model")
model = PeftModel.from_pretrained(model, training_args.peft_path)
else:
logger.info("Init new peft model")
target_modules = training_args.trainable.split(',')
modules_to_save = training_args.modules_to_save
if modules_to_save is not None:
modules_to_save = modules_to_save.split(',')
lora_rank = training_args.lora_rank
lora_dropout = training_args.lora_dropout
lora_alpha = training_args.lora_alpha
logger.info(f"target_modules: {target_modules}")
logger.info(f"lora_rank: {lora_rank}")
peft_config = LoraConfig(
task_type=TaskType.CAUSAL_LM,
target_modules=target_modules,
inference_mode=False,
r=lora_rank, lora_alpha=lora_alpha,
lora_dropout=lora_dropout,
modules_to_save=modules_to_save)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()
old_state_dict = model.state_dict
model.state_dict = (
lambda self, *_, **__: get_peft_model_state_dict(self, old_state_dict())
).__get__(model, type(model))
# Initialize our Trainer
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_dataset if training_args.do_train else None,
eval_dataset=eval_dataset if training_args.do_eval else None,
tokenizer=tokenizer,
data_collator=fault_tolerance_data_collator,
compute_metrics=compute_metrics if training_args.do_eval and not is_torch_tpu_available() else None,
preprocess_logits_for_metrics=preprocess_logits_for_metrics
if training_args.do_eval and not is_torch_tpu_available()
else None,
)
trainer.add_callback(SavePeftModelCallback)
# Training
if training_args.do_train:
checkpoint = None
if training_args.resume_from_checkpoint is not None:
checkpoint = training_args.resume_from_checkpoint
elif last_checkpoint is not None:
checkpoint = last_checkpoint
train_result = trainer.train(resume_from_checkpoint=checkpoint)
metrics = train_result.metrics
max_train_samples = (
data_args.max_train_samples if data_args.max_train_samples is not None else len(train_dataset)
)
metrics["train_samples"] = min(max_train_samples, len(train_dataset))
trainer.log_metrics("train", metrics)
trainer.save_metrics("train", metrics)
trainer.save_state()
# Evaluation
if training_args.do_eval:
logger.info("*** Evaluate ***")
metrics = trainer.evaluate()
max_eval_samples = data_args.max_eval_samples if data_args.max_eval_samples is not None else len(eval_dataset)
metrics["eval_samples"] = min(max_eval_samples, len(eval_dataset))
try:
perplexity = math.exp(metrics["eval_loss"])
except OverflowError:
perplexity = float("inf")
metrics["perplexity"] = perplexity
trainer.log_metrics("eval", metrics)
trainer.save_metrics("eval", metrics)
if __name__ == "__main__":
main()
```
I get errors:
```
0%| | 0/691168 [00:00<?, ?it/s][WARNING|logging.py:305] 2023-09-25 11:56:23,032 >> `use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`...
/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/torch/utils/checkpoint.py:31: UserWarning: None of the inputs have requires_grad=True. Gradients will be None
warnings.warn("None of the inputs have requires_grad=True. Gradients will be None")
[WARNING|logging.py:305] 2023-09-25 11:56:23,091 >> `use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`...
/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/torch/utils/checkpoint.py:31: UserWarning: None of the inputs have requires_grad=True. Gradients will be None
warnings.warn("None of the inputs have requires_grad=True. Gradients will be None")
Traceback (most recent call last):
File "run_clm_pt_with_peft.py", line 642, in <module>
main()
File "run_clm_pt_with_peft.py", line 610, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/transformers/trainer.py", line 1555, in train
return inner_training_loop(
File "/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/transformers/trainer.py", line 1837, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/transformers/trainer.py", line 2693, in training_step
self.accelerator.backward(loss)
File "/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/accelerate/accelerator.py", line 1838, in backward
Traceback (most recent call last):
File "run_clm_pt_with_peft.py", line 642, in <module>
main()
File "run_clm_pt_with_peft.py", line 610, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/transformers/trainer.py", line 1555, in train
return inner_training_loop(
File "/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/transformers/trainer.py", line 1837, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/transformers/trainer.py", line 2693, in training_step
self.accelerator.backward(loss)
File "/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/accelerate/accelerator.py", line 1838, in backward
self.deepspeed_engine_wrapped.backward(loss, **kwargs)
File "/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/accelerate/utils/deepspeed.py", line 167, in backward
self.engine.backward(loss, **kwargs)
File "/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn
ret_val = func(*args, **kwargs)
File "/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/deepspeed/runtime/engine.py", line 1923, in backward
self.optimizer.backward(loss, retain_graph=retain_graph)
File "/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/deepspeed/runtime/zero/stage_1_and_2.py", line 1958, in backward
self.loss_scaler.backward(loss.float(), retain_graph=retain_graph)
File "/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/deepspeed/runtime/fp16/loss_scaler.py", line 63, in backward
scaled_loss.backward(retain_graph=retain_graph)
File "/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/torch/_tensor.py", line 488, in backward
torch.autograd.backward(
File "/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/torch/autograd/__init__.py", line 197, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
wandb: Waiting for W&B process to finish... (failed 1).
self.deepspeed_engine_wrapped.backward(loss, **kwargs)
File "/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/accelerate/utils/deepspeed.py", line 167, in backward
self.engine.backward(loss, **kwargs)
File "/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn
ret_val = func(*args, **kwargs)
File "/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/deepspeed/runtime/engine.py", line 1923, in backward
self.optimizer.backward(loss, retain_graph=retain_graph)
File "/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/deepspeed/runtime/zero/stage_1_and_2.py", line 1958, in backward
self.loss_scaler.backward(loss.float(), retain_graph=retain_graph)
File "/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/deepspeed/runtime/fp16/loss_scaler.py", line 63, in backward
scaled_loss.backward(retain_graph=retain_graph)
File "/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/torch/_tensor.py", line 488, in backward
torch.autograd.backward(
File "/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/torch/autograd/__init__.py", line 197, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
wandb: You can sync this run to the cloud by running:
wandb: wandb sync /home/tupk/tupk/nlp/Chinese-LLaMA-Alpaca/scripts/training/wandb/offline-run-20230925_115602-v55ehf07
wandb: Find logs at: ./wandb/offline-run-20230925_115602-v55ehf07/logs
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 715019) of binary: /home/tupk/anaconda3/envs/nlp/bin/python
Traceback (most recent call last):
File "/home/tupk/anaconda3/envs/nlp/bin/torchrun", line 8, in <module>
sys.exit(main())
File "/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper
return f(*args, **kwargs)
File "/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/torch/distributed/run.py", line 762, in main
run(args)
File "/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/torch/distributed/run.py", line 753, in run
elastic_launch(
File "/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 132, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 246, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
```
I've searched for similar issues and found some that seemed related to the library version, so I tried several versions from the newest down to 4.32, but all of them give the same error.
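A minimal sketch of a commonly suggested workaround (assumption: the error comes from gradient checkpointing being enabled while only the LoRA parameters require gradients), applied before wrapping the model with PEFT:
```python
# make the embedding outputs require grad so that checkpointed blocks have a grad_fn
model.gradient_checkpointing_enable()
model.enable_input_require_grads()

model = get_peft_model(model, peft_config)
model.print_trainable_parameters()
```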
### Expected behavior
It runs without errors. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26377/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26377/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26376 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26376/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26376/comments | https://api.github.com/repos/huggingface/transformers/issues/26376/events | https://github.com/huggingface/transformers/issues/26376 | 1,910,646,925 | I_kwDOCUB6oc5x4iiN | 26,376 | Early stopping Abnormality | {
"login": "GeorgeBGM",
"id": 26595839,
"node_id": "MDQ6VXNlcjI2NTk1ODM5",
"avatar_url": "https://avatars.githubusercontent.com/u/26595839?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/GeorgeBGM",
"html_url": "https://github.com/GeorgeBGM",
"followers_url": "https://api.github.com/users/GeorgeBGM/followers",
"following_url": "https://api.github.com/users/GeorgeBGM/following{/other_user}",
"gists_url": "https://api.github.com/users/GeorgeBGM/gists{/gist_id}",
"starred_url": "https://api.github.com/users/GeorgeBGM/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GeorgeBGM/subscriptions",
"organizations_url": "https://api.github.com/users/GeorgeBGM/orgs",
"repos_url": "https://api.github.com/users/GeorgeBGM/repos",
"events_url": "https://api.github.com/users/GeorgeBGM/events{/privacy}",
"received_events_url": "https://api.github.com/users/GeorgeBGM/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Maybe cc @pacman100 regarding the `Trainer`",
"Dear @pacman100,\r\n\r\n Thanks in advance, I'm looking forward for your reply.\r\n \r\n Best,Du"
] | 1,695 | 1,708 | null | NONE | null | ### System Info
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.32.1
- Platform: Linux-4.18.0-372.9.1.el8_lustre.x86_64-x86_64-with-glibc2.17
- Python version: 3.11.5
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.2
- Accelerate version: 0.23.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@ArthurZucker and @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
### 1. Function definition

```python
def compute_metrics_binary(eval_preds):
    logits, labels = eval_preds
    prediction_scores = torch.nn.functional.softmax(torch.from_numpy(logits).double(), dim=-1).numpy()
    predictions = np.argmax(prediction_scores, axis=-1)
    # compute the evaluation metrics
    accuracy = accuracy_score(labels, predictions)
    eval_loss = 1 - accuracy
    f1 = f1_score(labels, predictions)
    recall = recall_score(labels, predictions)
    precision = precision_score(labels, predictions)
    roc_auc_macro = roc_auc_score(labels, prediction_scores[:, 1], average='macro')
    roc_auc_weighted = roc_auc_score(labels, prediction_scores[:, 1], average='weighted')
    pr_auc = average_precision_score(labels, prediction_scores[:, 1])
    mcc = matthews_corrcoef(labels, predictions)  # add the MCC metric
    print("+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Hello eval_loss", eval_loss)
    # return the evaluation metrics
    return {
        'loss': eval_loss,
        'accuracy': accuracy,
        'f1': f1,
        'recall': recall,
        'precision': precision,
        'roc_auc_macro': roc_auc_macro,
        'roc_auc_weighted': roc_auc_weighted,
        'pr_auc': pr_auc,
        'mcc': mcc,  # return the MCC metric
    }

compute_metrics = compute_metrics_binary if len(labels) == 2 else compute_metrics_multi
```
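As context (my understanding of `Trainer` behavior, not something stated in the original post): `Trainer.evaluate()` prefixes every key returned by `compute_metrics` with `eval_`, so the `'f1'` key above should surface in the logs as `eval_f1`:

```python
# Minimal illustration of the eval_ prefixing applied to the compute_metrics output.
metrics = {"f1": 0.87, "accuracy": 0.91}
prefixed = {f"eval_{k}": v for k, v in metrics.items()}
print(prefixed)  # {'eval_f1': 0.87, 'eval_accuracy': 0.91}
```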
### 2. Model training parameters

```python
args = TrainingArguments(
    output_dir='outputsN', learning_rate=LEARNING_RATE, warmup_ratio=warmup_ratio,
    lr_scheduler_type='cosine', fp16=True, evaluation_strategy="epoch",
    per_device_train_batch_size=BATCH_SIZE, per_device_eval_batch_size=eval_batch_size,
    gradient_accumulation_steps=ACCUMULATION, num_train_epochs=EPOCHS,
    weight_decay=0.01, save_strategy='epoch', report_to='none', load_best_model_at_end=True,
    seed=seed_val, metric_for_best_model='eval_f1', eval_steps=5,
)  # linear

# early stopping: 5 epochs
callbacks = [EarlyStoppingCallback(early_stopping_patience=5, early_stopping_threshold=0.05), CometCallback()]
```
### 3. Training errors

```
early stopping required metric_for_best_model, but did not find eval_f1 so early stopping is disabled
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Hello eval_loss 0.3282442748091603
```
### Expected behavior
Question 1:
Do you have any suggestions about early stopping? Is there a complete pre-trained model fine-tuning script I could use as a reference? (A sketch of what I mean is shown below.)
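For reference, a minimal sketch of how the early-stopping pieces usually fit together with `Trainer`. This is an illustrative assumption, not the actual training script; `model`, `train_ds`, `eval_ds` and `compute_metrics_binary` are placeholders:

```python
from transformers import Trainer, TrainingArguments, EarlyStoppingCallback

args = TrainingArguments(
    output_dir="outputs",
    evaluation_strategy="epoch",      # evaluate once per epoch
    save_strategy="epoch",            # must match evaluation_strategy when load_best_model_at_end=True
    load_best_model_at_end=True,
    metric_for_best_model="eval_f1",  # 'f1' from compute_metrics is logged as 'eval_f1'
    greater_is_better=True,
    num_train_epochs=10,
)
trainer = Trainer(
    model=model,                       # placeholder: your fine-tuned model
    args=args,
    train_dataset=train_ds,            # placeholder datasets
    eval_dataset=eval_ds,
    compute_metrics=compute_metrics_binary,
    callbacks=[EarlyStoppingCallback(early_stopping_patience=5)],
)
trainer.train()
```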
Question 2:
How can the **Trainer** be modified to make it suitable for multi-class classification problems with class imbalance? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26376/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26376/timeline | reopened | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26375 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26375/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26375/comments | https://api.github.com/repos/huggingface/transformers/issues/26375/events | https://github.com/huggingface/transformers/issues/26375 | 1,910,597,551 | I_kwDOCUB6oc5x4Wev | 26,375 | [LLM Tokenizer] Tokenizer loads too slowly | {
"login": "Sakurakdx",
"id": 48399040,
"node_id": "MDQ6VXNlcjQ4Mzk5MDQw",
"avatar_url": "https://avatars.githubusercontent.com/u/48399040?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Sakurakdx",
"html_url": "https://github.com/Sakurakdx",
"followers_url": "https://api.github.com/users/Sakurakdx/followers",
"following_url": "https://api.github.com/users/Sakurakdx/following{/other_user}",
"gists_url": "https://api.github.com/users/Sakurakdx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Sakurakdx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sakurakdx/subscriptions",
"organizations_url": "https://api.github.com/users/Sakurakdx/orgs",
"repos_url": "https://api.github.com/users/Sakurakdx/repos",
"events_url": "https://api.github.com/users/Sakurakdx/events{/privacy}",
"received_events_url": "https://api.github.com/users/Sakurakdx/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @Sakurakdx, by \"loading speed\", are you talking about the model or the tokenizer? Would you have a reproducible code snippet I could try to see what you mean?",
"Sorry, my statement may not be accurate. The following is a clarification: When I use `AutoTokenizer` to load the tokenzier of LLAMA2, I need to wait for a long time.\r\n\r\n```python\r\ntokenizer = AutoTokenizer.from_pretrained(\"chinese-llama-2-7b\", fast=True, trust_remote_code=True)\r\n```",
"Do you mind trying to install from main to see if it fixes your issue? You can do so with:\r\n```\r\npip uninstall transformers\r\npip install git+https://github.com/huggingface/transformers\r\n```",
"Thank you this is super helpful!"
] | 1,695 | 1,695 | 1,695 | NONE | null | When I use the tokenizer of an LLM such as LLAMA, loading is very slow, sometimes taking half an hour. How long should loading normally take, and is there any way to speed it up? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26375/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26375/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26374 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26374/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26374/comments | https://api.github.com/repos/huggingface/transformers/issues/26374/events | https://github.com/huggingface/transformers/pull/26374 | 1,910,584,899 | PR_kwDOCUB6oc5bE15T | 26,374 | Control first downsample stride in ResNet | {
"login": "jiqing-feng",
"id": 107918818,
"node_id": "U_kgDOBm614g",
"avatar_url": "https://avatars.githubusercontent.com/u/107918818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiqing-feng",
"html_url": "https://github.com/jiqing-feng",
"followers_url": "https://api.github.com/users/jiqing-feng/followers",
"following_url": "https://api.github.com/users/jiqing-feng/following{/other_user}",
"gists_url": "https://api.github.com/users/jiqing-feng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiqing-feng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiqing-feng/subscriptions",
"organizations_url": "https://api.github.com/users/jiqing-feng/orgs",
"repos_url": "https://api.github.com/users/jiqing-feng/repos",
"events_url": "https://api.github.com/users/jiqing-feng/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiqing-feng/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @ArthurZucker @rafaelpadilla .\r\n\r\nThanks for your advice. The resnet backbone which downsamples in bottleneck layer can be found in [resnet](https://huggingface.co/Jiqing/resnet-backbone-downsample_in_bottleneck).",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26374). All of your documentation changes will be reflected on that endpoint.",
"> LGTM appart for the arg that is not used. Let's have explicit and useful args, doing a if else is okay to chose the layer and more understandable\r\n\r\nSure!"
] | 1,695 | 1,700 | 1,696 | CONTRIBUTOR | null | Hi @ArthurZucker
Related to [25856](https://github.com/huggingface/transformers/pull/25856). I added a new parameter to the config to control the stride of the first bottleneck layer in each stage. Would you please help me review it? Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26374/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26374/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26374",
"html_url": "https://github.com/huggingface/transformers/pull/26374",
"diff_url": "https://github.com/huggingface/transformers/pull/26374.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26374.patch",
"merged_at": 1696913124000
} |
https://api.github.com/repos/huggingface/transformers/issues/26373 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26373/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26373/comments | https://api.github.com/repos/huggingface/transformers/issues/26373/events | https://github.com/huggingface/transformers/issues/26373 | 1,910,469,237 | I_kwDOCUB6oc5x33J1 | 26,373 | Use github.com/apssouza22/chatflow as a conversational layer. It would enable actual API requests to be carried out from natural language inputs. | {
"login": "GiovanniSmokes",
"id": 138458840,
"node_id": "U_kgDOCEC22A",
"avatar_url": "https://avatars.githubusercontent.com/u/138458840?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/GiovanniSmokes",
"html_url": "https://github.com/GiovanniSmokes",
"followers_url": "https://api.github.com/users/GiovanniSmokes/followers",
"following_url": "https://api.github.com/users/GiovanniSmokes/following{/other_user}",
"gists_url": "https://api.github.com/users/GiovanniSmokes/gists{/gist_id}",
"starred_url": "https://api.github.com/users/GiovanniSmokes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GiovanniSmokes/subscriptions",
"organizations_url": "https://api.github.com/users/GiovanniSmokes/orgs",
"repos_url": "https://api.github.com/users/GiovanniSmokes/repos",
"events_url": "https://api.github.com/users/GiovanniSmokes/events{/privacy}",
"received_events_url": "https://api.github.com/users/GiovanniSmokes/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @GiovanniSmokes, we're happy to provide support in case you would like to integrate `transformers` within `chatflow`. \r\n\r\nWe have a lot of requests so we prefer to centralize everything in our GitHub issues, feel free to open an issue in case you run into problems! Thank you :hugs: ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,695 | 1,698 | 1,698 | NONE | null | ### Feature request
Adding this conversational UI would enable people to 'talk' directly with the backend and let API requests be carried out more effectively. RAG can help with some of the problems that function calling by language models faces at the moment.
https://youtu.be/r3cegH2kviQ
### Motivation
I'm trying to accelerate the adoption of natural language interfaces.
### Your contribution
We're developing in public in our Discord and we would love for you guys to join us. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26373/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26373/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26372 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26372/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26372/comments | https://api.github.com/repos/huggingface/transformers/issues/26372/events | https://github.com/huggingface/transformers/pull/26372 | 1,910,408,060 | PR_kwDOCUB6oc5bEQbK | 26,372 | testing doc-builder | {
"login": "mishig25",
"id": 11827707,
"node_id": "MDQ6VXNlcjExODI3NzA3",
"avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mishig25",
"html_url": "https://github.com/mishig25",
"followers_url": "https://api.github.com/users/mishig25/followers",
"following_url": "https://api.github.com/users/mishig25/following{/other_user}",
"gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mishig25/subscriptions",
"organizations_url": "https://api.github.com/users/mishig25/orgs",
"repos_url": "https://api.github.com/users/mishig25/repos",
"events_url": "https://api.github.com/users/mishig25/events{/privacy}",
"received_events_url": "https://api.github.com/users/mishig25/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,695 | 1,695 | 1,695 | CONTRIBUTOR | null | testing https://github.com/huggingface/doc-builder/pull/404
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26372/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26372/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26372",
"html_url": "https://github.com/huggingface/transformers/pull/26372",
"diff_url": "https://github.com/huggingface/transformers/pull/26372.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26372.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26371 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26371/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26371/comments | https://api.github.com/repos/huggingface/transformers/issues/26371/events | https://github.com/huggingface/transformers/pull/26371 | 1,910,275,026 | PR_kwDOCUB6oc5bD2e6 | 26,371 | testing doc-builder svelte kit migration | {
"login": "mishig25",
"id": 11827707,
"node_id": "MDQ6VXNlcjExODI3NzA3",
"avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mishig25",
"html_url": "https://github.com/mishig25",
"followers_url": "https://api.github.com/users/mishig25/followers",
"following_url": "https://api.github.com/users/mishig25/following{/other_user}",
"gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mishig25/subscriptions",
"organizations_url": "https://api.github.com/users/mishig25/orgs",
"repos_url": "https://api.github.com/users/mishig25/repos",
"events_url": "https://api.github.com/users/mishig25/events{/privacy}",
"received_events_url": "https://api.github.com/users/mishig25/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,695 | 1,695 | 1,695 | CONTRIBUTOR | null | testing https://github.com/huggingface/doc-builder/pull/396
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26371/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26371/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26371",
"html_url": "https://github.com/huggingface/transformers/pull/26371",
"diff_url": "https://github.com/huggingface/transformers/pull/26371.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26371.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26370 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26370/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26370/comments | https://api.github.com/repos/huggingface/transformers/issues/26370/events | https://github.com/huggingface/transformers/pull/26370 | 1,910,247,787 | PR_kwDOCUB6oc5bDxJ- | 26,370 | Fix MusicGen logging error | {
"login": "osanseviero",
"id": 7246357,
"node_id": "MDQ6VXNlcjcyNDYzNTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/osanseviero",
"html_url": "https://github.com/osanseviero",
"followers_url": "https://api.github.com/users/osanseviero/followers",
"following_url": "https://api.github.com/users/osanseviero/following{/other_user}",
"gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}",
"starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions",
"organizations_url": "https://api.github.com/users/osanseviero/orgs",
"repos_url": "https://api.github.com/users/osanseviero/repos",
"events_url": "https://api.github.com/users/osanseviero/events{/privacy}",
"received_events_url": "https://api.github.com/users/osanseviero/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,695 | 1,695 | 1,695 | MEMBER | null | # What does this PR do?
Issue: #26369
Loading
```python
pipe = pipeline("text-to-audio", model="facebook/musicgen-small")
data = pipe("latin salsa with violins")
```
at the moment, throws a large error
## Who can review?
@ylacombe @sanchit-gandhi
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26370/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26370/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26370",
"html_url": "https://github.com/huggingface/transformers/pull/26370",
"diff_url": "https://github.com/huggingface/transformers/pull/26370.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26370.patch",
"merged_at": 1695640106000
} |
https://api.github.com/repos/huggingface/transformers/issues/26369 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26369/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26369/comments | https://api.github.com/repos/huggingface/transformers/issues/26369/events | https://github.com/huggingface/transformers/issues/26369 | 1,910,247,472 | I_kwDOCUB6oc5x3BAw | 26,369 | MusicGen with TextToAudioPipeline issues | {
"login": "osanseviero",
"id": 7246357,
"node_id": "MDQ6VXNlcjcyNDYzNTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/osanseviero",
"html_url": "https://github.com/osanseviero",
"followers_url": "https://api.github.com/users/osanseviero/followers",
"following_url": "https://api.github.com/users/osanseviero/following{/other_user}",
"gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}",
"starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions",
"organizations_url": "https://api.github.com/users/osanseviero/orgs",
"repos_url": "https://api.github.com/users/osanseviero/repos",
"events_url": "https://api.github.com/users/osanseviero/events{/privacy}",
"received_events_url": "https://api.github.com/users/osanseviero/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @Narsil as well :)",
"Re: the issue with `max_new_tokens` - We should take care of this in the `santize_parameters` function, right?\r\n\r\n`max_new_tokens` is part of the `generate_kwargs` - It should be considered as here: https://github.com/huggingface/transformers/blob/546e7679e7f692ebeefcfc5063cec271a55bae20/src/transformers/pipelines/automatic_speech_recognition.py#L384\r\n\r\nCurrently, `max_new_tokens` works with `forward_params`: https://github.com/Vaibhavs10/scratchpad/blob/main/text_to_audio_pipeline_repro.ipynb\r\n\r\ncc: @sanchit-gandhi @ylacombe ",
"The Automatic Speech Recognition pipeline takes `max_new_tokens` as a standalone argument: https://github.com/huggingface/transformers/blob/033ec57c038bbc1a85a936196c9a63072088d221/src/transformers/pipelines/automatic_speech_recognition.py#L344-L345\r\n\r\nThis is in contrary to the Text To Speech pipeline, which expects it to be passed as a `forward_param`: https://github.com/huggingface/transformers/blob/033ec57c038bbc1a85a936196c9a63072088d221/src/transformers/pipelines/text_to_audio.py#L129-L130\r\n\r\nThis again is different to the Text Generation pipeline, which expects `generate_kwargs`:\r\nhttps://github.com/huggingface/transformers/blob/033ec57c038bbc1a85a936196c9a63072088d221/src/transformers/pipelines/text_generation.py#L193-L195\r\n\r\nIMO we can unify them all to have the same argument for the forward params - WDYT @Narsil? At least for the TTS pipeline, we can accept `generate_kwargs`, since these are used in all the other generation based pipelines (cc @ylacombe)",
"IMHO, we should unify `max_new_tokens` to top-level calls for all the pipelines. Pipelines are supposed to reduce friction for our end-user. Expecting them to know the different ways to do something as basic as controlling the length of generation causes a great deal of friction.\r\n\r\nIf there is disagreement with the above, then I think we should consider unifying all under `generate_kwargs` - FYI I've been recommending `generate_kwargs` for ASR pipeline as it also allows user to play around with other generation strategies directly through there.\r\n\r\nI'm not sure if using `forward_params` in TTS pipeline is such a good idea. Can these be folded into `generate_kwargs`? Would that make much sense?\r\n",
"I'm not sure about the correct naming (`forward_params` or `generate_params`), but keep in mind that the TTS/TTA pipeline is also compatible with models that are not generative, so using `generate_params` or having a `max_new_tokens` param can be misleading in some cases.\r\n\r\nHowever, I would be in favor of adding these naming and common parameters, provided that the documents are clear enough to specify that they will only be used with generative models.\r\n",
"> IMO we can unify them all to have the same argument for the forward params - WDYT @Narsil? At least for the TTS pipeline, we can accept generate_kwargs, since these are used in all the other generation based pipelines (cc @ylacombe)\r\n\r\n`max_new_tokens` is what I call a lifted arg. It's a top-level one because it's very useful one in text-generation (basically to prevent long generations).\r\n\r\nIs is really that useful here ? AFAIK it's not defined in ASR for instance, where you want all the generated text and don't really care how many tokens that is. For TTA, maybe we want to generate audio of a specific length ? Feels here you just tried something because you saw the warning. Maybe the warning is the issue ? (And is fixed by modifying the generate_config.json iiuc).\r\n\r\nIn general for pipeline, which is aimed at non ML users:\r\n\r\n -> No param is best, we provide sane defaults wherever it fits (we expect 80% of calls are this)\r\n -> Simple param control, which are model agnostic (they work, no matter what's underneath, like top_k for fill-mask) (10% of the calls)\r\n -> Advanced param control (everything else including `generate_kwargs`) . This should be quite rare imho, and if anything is missing, dropping down to Model + tokenizer + feature_extractor +... is the actual way to go, not overparametrizing the pipeline.\r\n As long as their support is easy for the pipeline, they are a nice escape hatch for advanced usage, but they are not by any means core intended usage imho).\r\n \r\n Note that I didn't say if `max_new_tokens` is or isn't relevant here, I'm raising the question that maybe it isn't, you can decide if it actually is.\r\n \r\n+1 to making it top-level whereever it might fit -> WITH proper sanitation (which is a bit tedious, it must be defined at top-level, not in generate_kwargs at the same time, and it must actually be do something special if the pipeline is not going to use it, for instance if the model is not generative)\r\n\r\n`{generate|forward}_kwargs` are \"drop-all\" kind of kwargs, enabling powerusers to do all the customization they want. They are an escape hatch, no need to overthink them I think."
] | 1,695 | 1,698 | 1,698 | MEMBER | null | ### System Info
- `transformers` version: 4.33.2
- Platform: Linux-5.15.120+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.17.2
- Safetensors version: 0.3.3
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow version (GPU?): 2.13.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.7.2 (gpu)
- Jax version: 0.4.14
- JaxLib version: 0.4.14
### Who can help?
@sanchit-gandhi @ylacombe @Vaibhavs10
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
#### Issue 1 - logging error
Load musicgen with pipeline will have a logging error
```python
from transformers import pipeline
pipe = pipeline("text-to-audio", model="facebook/musicgen-small")
data = pipe("latin salsa with violins")
```
error
```
--- Logging error ---
Traceback (most recent call last):
File "/usr/lib/python3.10/logging/__init__.py", line 1100, in emit
msg = self.format(record)
File "/usr/lib/python3.10/logging/__init__.py", line 943, in format
return fmt.format(record)
File "/usr/lib/python3.10/logging/__init__.py", line 678, in format
record.message = record.getMessage()
File "/usr/lib/python3.10/logging/__init__.py", line 368, in getMessage
msg = msg % self.args
TypeError: not all arguments converted during string formatting
Call stack:
File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/usr/local/lib/python3.10/dist-packages/colab_kernel_launcher.py", line 37, in <module>
ColabKernelApp.launch_instance()
File "/usr/local/lib/python3.10/dist-packages/traitlets/config/application.py", line 992, in launch_instance
app.start()
File "/usr/local/lib/python3.10/dist-packages/ipykernel/kernelapp.py", line 619, in start
self.io_loop.start()
File "/usr/local/lib/python3.10/dist-packages/tornado/platform/asyncio.py", line 195, in start
self.asyncio_loop.run_forever()
File "/usr/lib/python3.10/asyncio/base_events.py", line 603, in run_forever
self._run_once()
File "/usr/lib/python3.10/asyncio/base_events.py", line 1909, in _run_once
handle._run()
File "/usr/lib/python3.10/asyncio/events.py", line 80, in _run
self._context.run(self._callback, *self._args)
File "/usr/local/lib/python3.10/dist-packages/tornado/ioloop.py", line 685, in <lambda>
lambda f: self._run_callback(functools.partial(callback, future))
File "/usr/local/lib/python3.10/dist-packages/tornado/ioloop.py", line 738, in _run_callback
ret = callback()
File "/usr/local/lib/python3.10/dist-packages/tornado/gen.py", line 825, in inner
self.ctx_run(self.run)
File "/usr/local/lib/python3.10/dist-packages/tornado/gen.py", line 786, in run
yielded = self.gen.send(value)
File "/usr/local/lib/python3.10/dist-packages/ipykernel/kernelbase.py", line 361, in process_one
yield gen.maybe_future(dispatch(*args))
File "/usr/local/lib/python3.10/dist-packages/tornado/gen.py", line 234, in wrapper
yielded = ctx_run(next, result)
File "/usr/local/lib/python3.10/dist-packages/ipykernel/kernelbase.py", line 261, in dispatch_shell
yield gen.maybe_future(handler(stream, idents, msg))
File "/usr/local/lib/python3.10/dist-packages/tornado/gen.py", line 234, in wrapper
yielded = ctx_run(next, result)
File "/usr/local/lib/python3.10/dist-packages/ipykernel/kernelbase.py", line 539, in execute_request
self.do_execute(
File "/usr/local/lib/python3.10/dist-packages/tornado/gen.py", line 234, in wrapper
yielded = ctx_run(next, result)
File "/usr/local/lib/python3.10/dist-packages/ipykernel/ipkernel.py", line 302, in do_execute
res = shell.run_cell(code, store_history=store_history, silent=silent)
File "/usr/local/lib/python3.10/dist-packages/ipykernel/zmqshell.py", line 539, in run_cell
return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/IPython/core/interactiveshell.py", line 2975, in run_cell
result = self._run_cell(
File "/usr/local/lib/python3.10/dist-packages/IPython/core/interactiveshell.py", line 3030, in _run_cell
return runner(coro)
File "/usr/local/lib/python3.10/dist-packages/IPython/core/async_helpers.py", line 78, in _pseudo_sync_runner
coro.send(None)
File "/usr/local/lib/python3.10/dist-packages/IPython/core/interactiveshell.py", line 3257, in run_cell_async
has_raised = await self.run_ast_nodes(code_ast.body, cell_name,
File "/usr/local/lib/python3.10/dist-packages/IPython/core/interactiveshell.py", line 3473, in run_ast_nodes
if (await self.run_code(code, result, async_=asy)):
File "/usr/local/lib/python3.10/dist-packages/IPython/core/interactiveshell.py", line 3553, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-4-d27dc111b0bf>", line 4, in <cell line: 4>
data = pipe("latin salsa with violins")
File "/usr/local/lib/python3.10/dist-packages/transformers/pipelines/text_to_audio.py", line 138, in __call__
return super().__call__(text_inputs, **forward_params)
File "/usr/local/lib/python3.10/dist-packages/transformers/pipelines/base.py", line 1140, in __call__
return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)
File "/usr/local/lib/python3.10/dist-packages/transformers/pipelines/base.py", line 1147, in run_single
model_outputs = self.forward(model_inputs, **forward_params)
File "/usr/local/lib/python3.10/dist-packages/transformers/pipelines/base.py", line 1046, in forward
model_outputs = self._forward(model_inputs, **forward_params)
File "/usr/local/lib/python3.10/dist-packages/transformers/pipelines/text_to_audio.py", line 112, in _forward
output = self.model.generate(**model_inputs, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/transformers/models/musicgen/modeling_musicgen.py", line 2335, in generate
logger.warning(
Message: 'Using the model-agnostic default `max_length` (=1500) to control the generation length. recommend setting `max_new_tokens` to control the maximum length of the generation.'
Arguments: (<class 'UserWarning'>,)
```
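For what it's worth, the `--- Logging error ---` block above looks like a plain `logging` formatting problem rather than anything MusicGen-specific. Below is a minimal reproduction of my reading of the traceback; it is not taken from the MusicGen source:

```python
import logging

logging.basicConfig()
logger = logging.getLogger("demo")

# Extra positional arguments are treated as %-format args for the message string,
# so passing a warning class alongside a message with no placeholders triggers
# "TypeError: not all arguments converted during string formatting" inside logging,
# which is then reported as "--- Logging error ---".
logger.warning("Using the model-agnostic default `max_length` (=1500) ...", UserWarning)

# Without the stray argument, the warning is emitted normally.
logger.warning("Using the model-agnostic default `max_length` (=1500) ...")
```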
#### Issue 2 - Not clear how to specify max_new_tokens
```
data = pipe("latin salsa with violins", max_new_tokens=256)
```
will give an error
```
TypeError: TextToAudioPipeline._sanitize_parameters() got an unexpected keyword argument 'max_new_tokens'
```
I would have expected the kwargs to be passed
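For reference, the workaround mentioned in the comments on this issue is to route generation options through `forward_params`; a sketch based on that comment (not an official fix):

```python
from transformers import pipeline

pipe = pipeline("text-to-audio", model="facebook/musicgen-small")
# forward_params is forwarded to model.generate() by the text-to-audio pipeline,
# so this limits the generated audio length without the TypeError above.
data = pipe("latin salsa with violins", forward_params={"max_new_tokens": 256})
```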
### Expected behavior
Pipeline works for MusicGen | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26369/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26369/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26368 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26368/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26368/comments | https://api.github.com/repos/huggingface/transformers/issues/26368/events | https://github.com/huggingface/transformers/pull/26368 | 1,910,222,187 | PR_kwDOCUB6oc5bDsOM | 26,368 | feat: add enable_gradient_checkpoint for roberta | {
"login": "pphuc25",
"id": 81808312,
"node_id": "MDQ6VXNlcjgxODA4MzEy",
"avatar_url": "https://avatars.githubusercontent.com/u/81808312?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pphuc25",
"html_url": "https://github.com/pphuc25",
"followers_url": "https://api.github.com/users/pphuc25/followers",
"following_url": "https://api.github.com/users/pphuc25/following{/other_user}",
"gists_url": "https://api.github.com/users/pphuc25/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pphuc25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pphuc25/subscriptions",
"organizations_url": "https://api.github.com/users/pphuc25/orgs",
"repos_url": "https://api.github.com/users/pphuc25/repos",
"events_url": "https://api.github.com/users/pphuc25/events{/privacy}",
"received_events_url": "https://api.github.com/users/pphuc25/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @pphuc25 - I believe that the `enable_gradient_checkpointing()` method is working for Flax Roberta. See code snippet below that demonstrates:\r\n```python\r\nfrom transformers import FlaxRobertaForMaskedLM\r\n\r\nmodel = FlaxRobertaForMaskedLM.from_pretrained(\"hf-internal-testing/tiny-random-roberta\", from_pt=True)\r\nprint(\"Gradient checkpointing before enabling: \", model.module.gradient_checkpointing)\r\n\r\nmodel.enable_gradient_checkpointing()\r\nprint(\"Gradient checkpointing after enabling: \", model.module.gradient_checkpointing)\r\n```\r\n**Print Output:**\r\n```\r\nGradient checkpointing before enabling: False\r\nGradient checkpointing after enabling: True\r\n```\r\n\r\nWhat behaviour are you observing that makes you think otherwise?",
"Oh, it my mistake, I have download with not the lastest version of transformers, new update have fix this, sorry very much."
] | 1,695 | 1,695 | 1,695 | CONTRIBUTOR | null | # What does this PR do?
While reviewing the "run_mlm_flax.py" file, I encountered an issue when attempting to execute the "model.enable_gradient_checkpointing()" function. Upon further investigation, I observed that no "enable_gradient_checkpointing" method was available. As a solution, I configured FlaxRobertaForMaskedLM to include the "enable_gradient_checkpointing" feature.
I would like to cc @sanchit-gandhi to review my code | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26368/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26368/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26368",
"html_url": "https://github.com/huggingface/transformers/pull/26368",
"diff_url": "https://github.com/huggingface/transformers/pull/26368.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26368.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26367 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26367/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26367/comments | https://api.github.com/repos/huggingface/transformers/issues/26367/events | https://github.com/huggingface/transformers/pull/26367 | 1,910,217,028 | PR_kwDOCUB6oc5bDrOZ | 26,367 | testing doc-builder | {
"login": "mishig25",
"id": 11827707,
"node_id": "MDQ6VXNlcjExODI3NzA3",
"avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mishig25",
"html_url": "https://github.com/mishig25",
"followers_url": "https://api.github.com/users/mishig25/followers",
"following_url": "https://api.github.com/users/mishig25/following{/other_user}",
"gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mishig25/subscriptions",
"organizations_url": "https://api.github.com/users/mishig25/orgs",
"repos_url": "https://api.github.com/users/mishig25/repos",
"events_url": "https://api.github.com/users/mishig25/events{/privacy}",
"received_events_url": "https://api.github.com/users/mishig25/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,695 | 1,695 | 1,695 | CONTRIBUTOR | null | testing https://github.com/huggingface/doc-builder/pull/401
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26367/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26367/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26367",
"html_url": "https://github.com/huggingface/transformers/pull/26367",
"diff_url": "https://github.com/huggingface/transformers/pull/26367.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26367.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26366 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26366/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26366/comments | https://api.github.com/repos/huggingface/transformers/issues/26366/events | https://github.com/huggingface/transformers/issues/26366 | 1,910,202,237 | I_kwDOCUB6oc5x2199 | 26,366 | add_tokens misses to add some certain words in tokenizer vocab | {
"login": "Kushdesh",
"id": 10446551,
"node_id": "MDQ6VXNlcjEwNDQ2NTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/10446551?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Kushdesh",
"html_url": "https://github.com/Kushdesh",
"followers_url": "https://api.github.com/users/Kushdesh/followers",
"following_url": "https://api.github.com/users/Kushdesh/following{/other_user}",
"gists_url": "https://api.github.com/users/Kushdesh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Kushdesh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Kushdesh/subscriptions",
"organizations_url": "https://api.github.com/users/Kushdesh/orgs",
"repos_url": "https://api.github.com/users/Kushdesh/repos",
"events_url": "https://api.github.com/users/Kushdesh/events{/privacy}",
"received_events_url": "https://api.github.com/users/Kushdesh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I am really sorry I found that 'abs' is already in T5Tokenizer.from_pretrained(\"t5-small\", legacy=False). I checked by running following\r\n```python\r\nfrom transformers import T5Tokenizer\r\ntokenizer = T5Tokenizer.from_pretrained(\"t5-small\", legacy=False)\r\ntokenizer.get_vocab()['abs']\r\n```\r\nwhich prints 14623\r\n\r\nProbably when we call tokenizer('abs', add_special_tokens=False) it is being tokenized using other smaller tokens which have token_ids 703 and 7 rather than tokenized by 'abs' token146323\r\n"
] | 1,695 | 1,695 | 1,695 | NONE | null | ### System Info
transformers : 4.33.2
torch: 2.1.0a0+29c30b1
Python: 3.10.12
OS: Ubuntu 22.04.3 LTS
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I wanted to add a few words in tokenizer vocabulary so that encoding it result in a single integer. This I want to use in my custom head. But for some reason I couldn't create it for few words. For example
```python
from transformers import T5Tokenizer
tokenizer = T5Tokenizer.from_pretrained("t5-small", legacy=False)
tokenizer.add_tokens(['abs'])
tokenizer('abs', add_special_tokens=False)
```
prints
`{'input_ids': [703, 7], 'attention_mask': [1, 1]}`
The word 'abs' is not in the tokenizer's vocab, and it is not being added.
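A small check that helps explain what is going on (this mirrors the follow-up comment on this issue; `14623` is the id reported there):

```python
from transformers import T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small", legacy=False)

# 'abs' is already a single entry in the SentencePiece vocab, so add_tokens(['abs'])
# does not create a new id for it.
print("abs" in tokenizer.get_vocab())   # True
print(tokenizer.get_vocab()["abs"])     # 14623, per the follow-up comment

# The encoder may still split 'abs' into smaller pieces depending on context,
# which is why tokenizer('abs') can return [703, 7] rather than the single id above.
print(tokenizer("abs", add_special_tokens=False))
```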
### Expected behavior
'abs' should have been added to the tokenizer vocab, and tokenizing 'abs' should produce a single integer in `input_ids`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26366/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26366/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26365 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26365/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26365/comments | https://api.github.com/repos/huggingface/transformers/issues/26365/events | https://github.com/huggingface/transformers/pull/26365 | 1,910,150,705 | PR_kwDOCUB6oc5bDec7 | 26,365 | Update add_new_model.md | {
"login": "NinoRisteski",
"id": 95188570,
"node_id": "U_kgDOBax2Wg",
"avatar_url": "https://avatars.githubusercontent.com/u/95188570?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NinoRisteski",
"html_url": "https://github.com/NinoRisteski",
"followers_url": "https://api.github.com/users/NinoRisteski/followers",
"following_url": "https://api.github.com/users/NinoRisteski/following{/other_user}",
"gists_url": "https://api.github.com/users/NinoRisteski/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NinoRisteski/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NinoRisteski/subscriptions",
"organizations_url": "https://api.github.com/users/NinoRisteski/orgs",
"repos_url": "https://api.github.com/users/NinoRisteski/repos",
"events_url": "https://api.github.com/users/NinoRisteski/events{/privacy}",
"received_events_url": "https://api.github.com/users/NinoRisteski/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,695 | 1,695 | 1,695 | CONTRIBUTOR | null | fixed typos
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26365/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26365/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26365",
"html_url": "https://github.com/huggingface/transformers/pull/26365",
"diff_url": "https://github.com/huggingface/transformers/pull/26365.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26365.patch",
"merged_at": 1695639492000
} |
https://api.github.com/repos/huggingface/transformers/issues/26364 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26364/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26364/comments | https://api.github.com/repos/huggingface/transformers/issues/26364/events | https://github.com/huggingface/transformers/pull/26364 | 1,910,150,575 | PR_kwDOCUB6oc5bDebX | 26,364 | supporting different position embeddings | {
"login": "michalozeryflato",
"id": 104420142,
"node_id": "U_kgDOBjlTLg",
"avatar_url": "https://avatars.githubusercontent.com/u/104420142?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/michalozeryflato",
"html_url": "https://github.com/michalozeryflato",
"followers_url": "https://api.github.com/users/michalozeryflato/followers",
"following_url": "https://api.github.com/users/michalozeryflato/following{/other_user}",
"gists_url": "https://api.github.com/users/michalozeryflato/gists{/gist_id}",
"starred_url": "https://api.github.com/users/michalozeryflato/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/michalozeryflato/subscriptions",
"organizations_url": "https://api.github.com/users/michalozeryflato/orgs",
"repos_url": "https://api.github.com/users/michalozeryflato/repos",
"events_url": "https://api.github.com/users/michalozeryflato/events{/privacy}",
"received_events_url": "https://api.github.com/users/michalozeryflato/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@ArthurZucker @zhangjunyi111 I forked transformers for investigating different positional embeddings in a bioinformatics study. I created the PR to highlight code changes to my colleagues. I did not intend to suggest merging my code changes into transformers. I changed the PR to a draft mode. Let me know if I need to delete it.",
"Closing this PR - was created by mistake",
"No worries thanks for telling me! "
] | 1,695 | 1,698 | 1,698 | NONE | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26364/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26364/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26364",
"html_url": "https://github.com/huggingface/transformers/pull/26364",
"diff_url": "https://github.com/huggingface/transformers/pull/26364.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26364.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26363 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26363/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26363/comments | https://api.github.com/repos/huggingface/transformers/issues/26363/events | https://github.com/huggingface/transformers/issues/26363 | 1,910,150,197 | I_kwDOCUB6oc5x2pQ1 | 26,363 | training t5-11b on 32GB GPU V100 | {
"login": "Arij-Aladel",
"id": 68355048,
"node_id": "MDQ6VXNlcjY4MzU1MDQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/68355048?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Arij-Aladel",
"html_url": "https://github.com/Arij-Aladel",
"followers_url": "https://api.github.com/users/Arij-Aladel/followers",
"following_url": "https://api.github.com/users/Arij-Aladel/following{/other_user}",
"gists_url": "https://api.github.com/users/Arij-Aladel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Arij-Aladel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Arij-Aladel/subscriptions",
"organizations_url": "https://api.github.com/users/Arij-Aladel/orgs",
"repos_url": "https://api.github.com/users/Arij-Aladel/repos",
"events_url": "https://api.github.com/users/Arij-Aladel/events{/privacy}",
"received_events_url": "https://api.github.com/users/Arij-Aladel/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@stas00 Just quick question please: could we use the same steps to finetune t5-11b or ul2 on single gpu with 32GB memory V100? Or I need 40GB memory? \r\nif yes, what is maximim input length which can I use?\r\nToday I have tried to finetune t5-11b using this yaml config\r\n\r\n`\r\ncompute_environment: LOCAL_MACHINE\r\ndeepspeed_config:\r\n offload_optimizer_device: cpu\r\n zero_stage: 2\r\ndistributed_type: DEEPSPEE\r\ndowncast_bf16: 'no'\r\nfsdp_config: {}\r\nmachine_rank: 0\r\nmain_training_function: main\r\nmain_process_port: 20683\r\nmixed_precision: 'no'\r\nnum_machines: 1\r\nnum_processes: 1\r\nrdzv_backend: static\r\nsame_network: true\r\nuse_cpu: false`\r\n\r\nfor model I have loaded it as folloing:\r\n\r\n`\r\n # Load model\r\n peft_config = LoraConfig(\r\n task_type=TaskType.SEQ_2_SEQ_LM,\r\n inference_mode=False,\r\n r=8,\r\n lora_alpha=32,\r\n target_modules=[\"q\", \"v\"],\r\n lora_dropout=0.1,\r\n )\r\n\r\n model = AutoModelForSeq2SeqLM.from_pretrained(\r\n args.model_name_or_path, # t5-11b\r\n load_in_8bit=True,\r\n device_map=\"auto\",\r\n )\r\n`\r\n\r\nthis is my run script\r\n\r\n`export CUDA_LAUNCH_BLOCKING=0 CUDA_VISIBLE_DEVICES=5 WANDB_CONSOLE=off TORCH_DISTRIBUTED_DEBUG=INFO\r\naccelerate launch --config_file='./accelerate.yaml' train_seqtoseq_PEFT.py --seed=42 --preprocessing_num_workers=1 --weight_decay='0.001' --output_dir=\"PEFT_t5_11b_kv\" --per_device_train_batch_size=1 --per_device_eval_batch_size=1 --num_train_epochs=10 --model_name_or_path='t5-11b' --num_beams=5 --with_tracking --report_to='wandb' --checkpointing_steps='epoch' --dataset_name=\"databricks-dolly-15k\" --dataset_config \"3.0.0\" --gradient_accumulation_steps=1 --tokenizer_name='t5-11b' --max_target_length=1024`\r\n\r\nI got this error:\r\n\r\ntrainable params: 5,898,240 || all params: 2,857,496,576 || trainable%: 0.20641284575943442\r\ntrainable params: 5,898,240 || all params: 2,857,496,576 || trainable%: 0.20641284575943442\r\n09/24/2023 11:48:35 - INFO - root - model.print_trainable_parameters():\r\nns key of the tokenizer\r\ndolly before add text columns: ['instruction', 'context', 'response', 'category']\r\ndolly after add text columns: ['input', 'target']\r\nraw_datasets[0]: {'input': \"### Instruction: \\nWhen did Virgin Australia start operating?\\n### Context: \\nVirgin Australia, the trading name of Virgin Australia Airlines Pty Ltd, is an Australian-base\r\nRunning tokenizer on dataset: 100%|█████████████████████| 1502/1502 [00:01<00:00, 1409.67 examples/s]\r\n validation: Dataset({\r\n09/24/2023 11:48:48 - INFO - root - Val dataset length :1501\r\n19, 3, 9, 11820, 40, 11417, 58, 1], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], 'labels': [1255\r\n5146, 13, 331, 6, 11, 79, 54, 92, 560, 650, 42, 1561, 3, 31761, 2865, 147, 97, 5, 12558, 13154, 661, \r\ns to 8 from 1.\r\nUsing /home/arij/.cache/torch_extensions/py39_cu117 as PyTorch extensions root...\r\nEmitting ninja build file /home/arij/.cache/torch_extensions/py39_cu117/cpu_adam/build.ninja...\r\ner for key:store_based_barrier_key:2 with 1 nodes.\r\n File \"/mnt/ssd/arij/NeurIPS/NeurIPSS/lib/python3.9/site-packages/accelerate/accelerator.py\", line 1\r\n in log_wrapper\r\n return func(*args, **kwargs)\r\n File \"/mnt/ssd/arij/NeurIPS/NeurIPSS/lib/python3.9/site-packages/deepspeed/comm/comm.py\", line 224, in broadcast\r\n return cdb.broadcast(tensor=tensor, src=src, group=group, async_op=async_op)\r\n File \"/mnt/ssd/arij/NeurIPS/NeurIPSS/lib/python3.9/site-packages/deepspeed/comm/torch.py\", line 192, in broadcast\r\n return 
torch.distributed.broadcast(tensor=tensor, src=src, group=group, async_op=async_op)\r\n File \"/mnt/ssd/arij/NeurIPS/NeurIPSS/lib/python3.9/site-packages/torch/distributed/distributed_c10d.py\", line 1451, in wrapper\r\n return func(*args, **kwargs)\r\n File \"/mnt/ssd/arij/NeurIPS/NeurIPSS/lib/python3.9/site-packages/torch/distributed/distributed_c10d.py\", line 1570, in broadcast\r\n work = group.broadcast([tensor], opts)\r\nRuntimeError: Tensors must be contiguous\r\n\r\nWhat had I missed?\r\n\r\n",
"tagging @pacman100 who is the current maintainer of the Accelerate/Deepspeed integration.",
"@pacman100 I have tried ith t5-3b loss is nan after first epoch and accuracy metric 0 :/",
"@stas00 who can help?",
"Hello @Arij-Aladel, DeepSpeed and BitsandBytes aren't compatible with each other. Could you please try to use only BitsandBytes + PEFT for training 11B on a single 32GB V100 GPU."
] | 1,695 | 1,695 | 1,695 | NONE | null | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26363/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26363/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26362 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26362/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26362/comments | https://api.github.com/repos/huggingface/transformers/issues/26362/events | https://github.com/huggingface/transformers/issues/26362 | 1,910,119,139 | I_kwDOCUB6oc5x2hrj | 26,362 | [CLAP] Acc drop after converting "HTSAT-base" type from origin model to huggingface model | {
"login": "happylittlecat2333",
"id": 43379755,
"node_id": "MDQ6VXNlcjQzMzc5NzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43379755?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/happylittlecat2333",
"html_url": "https://github.com/happylittlecat2333",
"followers_url": "https://api.github.com/users/happylittlecat2333/followers",
"following_url": "https://api.github.com/users/happylittlecat2333/following{/other_user}",
"gists_url": "https://api.github.com/users/happylittlecat2333/gists{/gist_id}",
"starred_url": "https://api.github.com/users/happylittlecat2333/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/happylittlecat2333/subscriptions",
"organizations_url": "https://api.github.com/users/happylittlecat2333/orgs",
"repos_url": "https://api.github.com/users/happylittlecat2333/repos",
"events_url": "https://api.github.com/users/happylittlecat2333/events{/privacy}",
"received_events_url": "https://api.github.com/users/happylittlecat2333/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @younesbelkada ",
"Hi @happylittlecat2333, thanks for the very thorough analysis here!\r\n\r\nI've opened a PR (#27153) to convert the weights from the new clap checkpoints. I believe that you missed some parameters when you converted the weights!\r\n\r\nYou can find the converted weights ([here](https://huggingface.co/ylacombe/larger_clap_music), [here](https://huggingface.co/ylacombe/larger_clap_general) and [here](https://huggingface.co/ylacombe/larger_clap_music_and_speech) - yet to be moved to laion organization). Would you mind running your benchmark on it again ? Thanks!\r\n",
"Great Job!!! The converted models have the similar results with the new clap checkpoints! \r\n\r\nBelow is my result after converting the models.\r\n\r\n### Evaluate Result\r\n- 630k-audioset-best **before** convert, HTSAT-tiny type\r\n\r\n> Zeroshot Classification Results: mean_rank: 1.1450 median_rank: 1.0000 R@1: 0.9275 R@5: 0.9975 R@10: 1.0000 mAP@10: 0.9556\r\n\r\n- 630k-audioset-best (**after** convert, HTSAT-tiny type)\r\n\r\n> Zeroshot Classification Results: mean_rank: 1.1850 median_rank: 1.0000 R@1: 0.9000 R@5: 0.9975 R@10: 1.0000 mAP@10: 0.9400\r\n\r\n- music_audioset_epoch_15_esc_90.14 (**before** convert, HTSAT-base type)\r\n\r\n> Zeroshot Classification Results: mean_rank: 1.1850 median_rank: 1.0000 R@1: 0.9175 R@5: 0.9950 R@10: 0.9975 mAP@10: 0.9513\r\n\r\n- music_audioset_epoch_15_esc_90.14 (**after** convert, HTSAT-base type)\r\n\r\n> Zeroshot Classification Results: mean_rank: 1.2325 median_rank: 1.0000 R@1: 0.9100 R@5: 0.9900 R@10: 0.9950 mAP@10: 0.9467\r\n\r\n- music_speech_audioset_epoch_15_esc_89.98 (**before** convert, HTSAT-base type)\r\n\r\n> Zeroshot Classification Results: mean_rank: 1.1450 median_rank: 1.0000 R@1: 0.9275 R@5: 0.9900 R@10: 1.0000 mAP@10: 0.9568\r\n\r\n- music_speech_audioset_epoch_15_esc_89.98 (**after** convert, HTSAT-base type)\r\n> Zeroshot Classification Results: mean_rank: 1.1100 median_rank: 1.0000 R@1: 0.9350 R@5: 0.9975 R@10: 1.0000 mAP@10: 0.9622\r\n\r\nPS: I converted the models using PR (https://github.com/huggingface/transformers/pull/27153), and the converted model work great! But I find preprocessor config and tokenizer config are not saved, including `preprocessor_config.json`, `special_tokens_map.json`, `tokenizer_config.json`, `tokenizer.json` and `vocab.json`. It will be perfect if the converting code incude the whole saving process!\r\n\r\nThanks for your wonderful work!!",
"Hey @happylittlecat2333, \r\nmany thanks for running the benchmark so promptly! Happy to see that it fixed the benchmark ! I\r\n will merge the PR asap!\r\n\r\n> PS: I converted the models using PR (https://github.com/huggingface/transformers/pull/27153), and the converted model work great! But I find preprocessor config and tokenizer config are not saved, including preprocessor_config.json, special_tokens_map.json, tokenizer_config.json, tokenizer.json and vocab.json. It will be perfect if the converting code incude the whole saving process!\r\n\r\nI've manually added the processor (feature extractor and tokenizer) to the repos, as it was the same than the previous checkpoints! For now, I'll leave the PR as it is, but I keep that in mind if the issue appears again! \r\n\r\nBTW, you can now find the weights (including the processor configs) in the LAION organization on the hub - [here](https://huggingface.co/laion/larger_clap_general), [here](https://huggingface.co/laion/larger_clap_music) and [here](https://huggingface.co/laion/larger_clap_music_and_speech). Feel free to use these checkpoints if you use them again!\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,695 | 1,701 | 1,701 | NONE | null | ### System Info
### Question Description
I want to use CLAP in the Hugging Face model style, but I can only find ["laion/clap-htsat-unfused"](https://huggingface.co/laion/clap-htsat-unfused) and ["laion/clap-htsat-fused"](https://huggingface.co/laion/clap-htsat-fused) among the Hub models. However, I wish to use the music CLAP models that were recently added to https://github.com/LAION-AI/CLAP, such as [music_speech_epoch_15_esc_89.25.pt](https://huggingface.co/lukewys/laion_clap/blob/main/music_speech_epoch_15_esc_89.25.pt), so I used [convert_clap_original_pytorch_to_hf.py](https://github.com/huggingface/transformers/blob/main/src/transformers/models/clap/convert_clap_original_pytorch_to_hf.py) to convert them. The newly released models (like [music_speech_audioset_epoch_15_esc_89.98.pt](https://huggingface.co/lukewys/laion_clap/blob/main/music_speech_audioset_epoch_15_esc_89.98.pt)) are based on the `HTSAT-base` audio encoder, whose `hidden_size` and `patch_embeds_hidden_size` differ from `HTSAT-tiny`, so I revised [convert_clap_original_pytorch_to_hf.py](https://github.com/huggingface/transformers/blob/main/src/transformers/models/clap/convert_clap_original_pytorch_to_hf.py) as shown below. After testing three models (covering both `HTSAT-base` and `HTSAT-tiny`), I see an accuracy drop for the `HTSAT-base` models. Could you please help me find the problem, perhaps upload Hugging Face versions of the newly updated CLAP checkpoints from the original repo, and maybe open a new PR so the conversion script is compatible with both `HTSAT-base` and `HTSAT-tiny`?
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
#### My revised `convert_clap_original_pytorch_to_hf.py`
```python
import argparse
import re
import torch
# from CLAP import create_model
from laion_clap.clap_module import create_model
from transformers import AutoFeatureExtractor, ClapConfig, ClapModel, ClapAudioConfig, ClapProcessor
KEYS_TO_MODIFY_MAPPING = {
"text_branch": "text_model",
"audio_branch": "audio_model.audio_encoder",
"attn": "attention.self",
"self.proj": "output.dense",
"attention.self_mask": "attn_mask",
"mlp.fc1": "intermediate.dense",
"mlp.fc2": "output.dense",
"norm1": "layernorm_before",
"norm2": "layernorm_after",
"bn0": "batch_norm",
}
processor = ClapProcessor.from_pretrained("laion/clap-htsat-unfused", truncation="rand_trunc")
# ADDED
CLAP_AUDIO_CONFIG_DICT = {
"HTSAT-tiny": {},
"HTSAT-base": {
"hidden_size": 1024,
"patch_embeds_hidden_size": 128,
}
}
def init_clap(checkpoint_path, amodel="HTSAT-tiny", enable_fusion=False):
model, model_cfg = create_model(
amodel,
"roberta",
checkpoint_path,
precision="fp32",
device="cuda:0" if torch.cuda.is_available() else "cpu",
enable_fusion=enable_fusion,
fusion_type="aff_2d" if enable_fusion else None,
)
return model, model_cfg
def rename_state_dict(state_dict):
model_state_dict = {}
sequential_layers_pattern = r".*sequential.(\d+).*"
text_projection_pattern = r".*_projection.(\d+).*"
for key, value in state_dict.items():
# check if any key needs to be modified
for key_to_modify, new_key in KEYS_TO_MODIFY_MAPPING.items():
if key_to_modify in key:
key = key.replace(key_to_modify, new_key)
if re.match(sequential_layers_pattern, key):
# replace sequential layers with list
sequential_layer = re.match(sequential_layers_pattern, key).group(1)
key = key.replace(f"sequential.{sequential_layer}.", f"layers.{int(sequential_layer)//3}.linear.")
elif re.match(text_projection_pattern, key):
projecton_layer = int(re.match(text_projection_pattern, key).group(1))
# Because in CLAP they use `nn.Sequential`...
transformers_projection_layer = 1 if projecton_layer == 0 else 2
key = key.replace(f"_projection.{projecton_layer}.", f"_projection.linear{transformers_projection_layer}.")
if "audio" and "qkv" in key:
# split qkv into query key and value
mixed_qkv = value
qkv_dim = mixed_qkv.size(0) // 3
query_layer = mixed_qkv[:qkv_dim]
key_layer = mixed_qkv[qkv_dim : qkv_dim * 2]
value_layer = mixed_qkv[qkv_dim * 2 :]
model_state_dict[key.replace("qkv", "query")] = query_layer
model_state_dict[key.replace("qkv", "key")] = key_layer
model_state_dict[key.replace("qkv", "value")] = value_layer
else:
model_state_dict[key] = value
return model_state_dict
def convert_clap_checkpoint(checkpoint_path, pytorch_dump_folder_path, config_path, amodel, enable_fusion=False):
clap_model, clap_model_cfg = init_clap(checkpoint_path, amodel=amodel, enable_fusion=enable_fusion)
clap_model.eval()
state_dict = clap_model.state_dict()
state_dict = rename_state_dict(state_dict)
# ADDED
clap_audio_config = CLAP_AUDIO_CONFIG_DICT[amodel]
transformers_config = ClapConfig(audio_config=clap_audio_config)
transformers_config.audio_config.enable_fusion = enable_fusion
model = ClapModel(transformers_config)
# ignore the spectrogram embedding layer
model.load_state_dict(state_dict, strict=False)
model.save_pretrained(pytorch_dump_folder_path)
transformers_config.save_pretrained(pytorch_dump_folder_path)
processor.save_pretrained(pytorch_dump_folder_path)
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("--pytorch_dump_folder_path", default=None, type=str, help="Path to the output PyTorch model.")
parser.add_argument("--checkpoint_path", default=None, type=str, help="Path to fairseq checkpoint")
parser.add_argument("--config_path", default=None, type=str, help="Path to hf config.json of model to convert")
parser.add_argument("--amodel", default="HTSAT-tiny", type=str, help="Audio model type: HTSAT-tiny or HTSAT-base")
parser.add_argument("--enable_fusion", action="store_true", help="Whether to enable fusion or not")
args = parser.parse_args()
convert_clap_checkpoint(args.checkpoint_path, args.pytorch_dump_folder_path, args.config_path, args.amodel, args.enable_fusion)
```
#### convert script:
```bash
python convert_clap_original_pytorch_to_hf.py \
--pytorch_dump_folder_path ./clap-htsat-base-unfused-music-audioset \
--checkpoint_path ./pretrained_model/music_audioset_epoch_15_esc_90.14.pt \
--config_path ./clap-htsat-base-unfused-music-audioset/config.json \
--amodel HTSAT-base
python convert_clap_original_pytorch_to_hf.py \
--pytorch_dump_folder_path ./clap-htsat-base-unfused-music-speech-audioset \
--checkpoint_path ./pretrained_model/music_speech_audioset_epoch_15_esc_89.98.pt \
--config_path ./clap-htsat-base-unfused-music-speech-audioset/config.json \
--amodel HTSAT-base
python convert_clap_original_pytorch_to_hf.py \
--pytorch_dump_folder_path ./630k-audioset-best \
--checkpoint_path ./pretrained_model/630k-audioset-best.pt \
--config_path ./630k-audioset-best/config.json \
--amodel HTSAT-tiny
```
#### My evaluation on ESC50, adapted from the CLAP eval code in the original repo: [esc50_api.py](https://github.com/LAION-AI/CLAP/blob/main/experiment_scripts/esc50_api.py)
```python
import glob
import json
import torch
import numpy as np
from transformers import ClapModel, ClapProcessor
import librosa
device = torch.device('cuda:0')
# download https://drive.google.com/drive/folders/1scyH43eQAcrBz-5fAw44C6RNBhC3ejvX?usp=sharing and extract ./ESC50_1/test/0.tar to ./ESC50_1/test/
esc50_test_dir = './ESC50_1/test/*/'
class_index_dict_path = './class_labels/ESC50_class_labels_indices_space.json'
# Load the model (for different converted model)
pretrained_model_path = "./clap-htsat-base-unfused-music-speech-audioset"
# pretrained_model_path = "./clap-htsat-base-unfused-music-audioset"
# pretrained_model_path = "./630k-audioset-best"
# pretrained_model_path = "laion/clap-htsat-unfused"
processor = ClapProcessor.from_pretrained(pretrained_model_path)
model = ClapModel.from_pretrained(pretrained_model_path)
# Get the class index dict
class_index_dict = {v: k for v, k in json.load(open(class_index_dict_path)).items()}
# Get all the data
audio_files = sorted(glob.glob(esc50_test_dir + '**/*.flac', recursive=True))
json_files = sorted(glob.glob(esc50_test_dir + '**/*.json', recursive=True))
print("audio_files: ", len(audio_files))
print("json_files: ", len(json_files))
ground_truth_idx = [class_index_dict[json.load(open(jf))['tag'][0]] for jf in json_files]
with torch.no_grad():
ground_truth = torch.tensor(ground_truth_idx).view(-1, 1)
# Get text features
all_texts = ["This is a sound of " + t for t in class_index_dict.keys()]
inputs = processor(text=all_texts, return_tensors="pt", padding=True)
text_embed = model.get_text_features(**inputs)
print("text_embed: ", text_embed.shape)
audio_input = []
for audio_file in audio_files:
audio_waveform, _ = librosa.load(audio_file, sr=48000)
audio_input.append(audio_waveform)
inputs = processor(audios=audio_input, return_tensors="pt", padding=True, sampling_rate=48000)
audio_embed = model.get_audio_features(**inputs)
print("audio_embed: ", audio_embed.shape)
# audio_embed = model.get_audio_embedding_from_filelist(x=audio_files)
ranking = torch.argsort(torch.tensor(audio_embed) @ torch.tensor(text_embed).t(), descending=True)
preds = torch.where(ranking == ground_truth)[1]
preds = preds.cpu().numpy()
metrics = {}
metrics[f"mean_rank"] = preds.mean() + 1
metrics[f"median_rank"] = np.floor(np.median(preds)) + 1
for k in [1, 5, 10]:
metrics[f"R@{k}"] = np.mean(preds < k)
# map@10
metrics[f"mAP@10"] = np.mean(np.where(preds < 10, 1 / (preds + 1), 0.0))
print(
f"Zeroshot Classification Results: "
+ "\t".join([f"{k}: {round(v, 4):.4f}" for k, v in metrics.items()])
)
```
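(Side note for later readers: converted `HTSAT-base` checkpoints were eventually published on the Hub, as noted in the comments on this issue, and the evaluation above can simply be pointed at them by changing `pretrained_model_path`. A minimal loading sketch, with repository names taken from those comments; this is illustrative only:)

```python
# Minimal sketch: load one of the converted HTSAT-base checkpoints from the Hub
# and embed a couple of class prompts, mirroring the evaluation script above.
import torch
from transformers import ClapModel, ClapProcessor

pretrained_model_path = "laion/larger_clap_music_and_speech"  # or larger_clap_music / larger_clap_general
processor = ClapProcessor.from_pretrained(pretrained_model_path)
model = ClapModel.from_pretrained(pretrained_model_path)

texts = ["This is a sound of a dog barking", "This is a sound of rain"]
inputs = processor(text=texts, return_tensors="pt", padding=True)
with torch.no_grad():
    text_embed = model.get_text_features(**inputs)
print(text_embed.shape)
```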
### Expected behavior
#### Evaluation Results
- 630k-audioset-best (before convert, HTSAT-tiny type)
>Zeroshot Classification Results: mean_rank: 1.1450 median_rank: 1.0000 R@1: 0.9275 R@5: 0.9975 R@10: 1.0000 mAP@10: 0.9556
- 630k-audioset-best (after convert, HTSAT-tiny type)
>Zeroshot Classification Results: mean_rank: 1.1850 median_rank: 1.0000 R@1: 0.9000 R@5: 0.9975 R@10: 1.0000 mAP@10: 0.9400
- music_audioset_epoch_15_esc_90.14 (before convert, HTSAT-base type)
>Zeroshot Classification Results: mean_rank: 1.1850 median_rank: 1.0000 R@1: 0.9175 R@5: 0.9950 R@10: 0.9975 mAP@10: 0.9513
- music_audioset_epoch_15_esc_90.14 (after convert, HTSAT-base type)
>Zeroshot Classification Results: mean_rank: 3.2800 median_rank: 2.0000 R@1: 0.4700 R@5: 0.8425 R@10: 0.9300 mAP@10: 0.6312
- music_speech_audioset_epoch_15_esc_89.98 (before convert, HTSAT-base type)
>Zeroshot Classification Results: mean_rank: 1.1450 median_rank: 1.0000 R@1: 0.9275 R@5: 0.9900 R@10: 1.0000 mAP@10: 0.9568
- music_speech_audioset_epoch_15_esc_89.98 (after convert, HTSAT-base type)
>Zeroshot Classification Results: mean_rank: 3.3575 median_rank: 1.0000 R@1: 0.5200 R@5: 0.8375 R@10: 0.9325 mAP@10: 0.6491
Therefore, we can see that the `HTSAT-base` models suffer an accuracy drop after being converted to the Hugging Face format. Could you please help us figure out this bug, perhaps upload Hugging Face versions of the CLAP checkpoints [music_speech_epoch_15_esc_89.25.pt](https://huggingface.co/lukewys/laion_clap/blob/main/music_speech_epoch_15_esc_89.25.pt) and [music_speech_audioset_epoch_15_esc_89.98.pt](https://huggingface.co/lukewys/laion_clap/blob/main/music_speech_audioset_epoch_15_esc_89.98.pt), and maybe open a new PR so the conversion script is compatible with both `HTSAT-base` and `HTSAT-tiny`? Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26362/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26362/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26361 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26361/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26361/comments | https://api.github.com/repos/huggingface/transformers/issues/26361/events | https://github.com/huggingface/transformers/pull/26361 | 1,910,111,426 | PR_kwDOCUB6oc5bDW6n | 26,361 | docs: change dataset name | {
"login": "pphuc25",
"id": 81808312,
"node_id": "MDQ6VXNlcjgxODA4MzEy",
"avatar_url": "https://avatars.githubusercontent.com/u/81808312?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pphuc25",
"html_url": "https://github.com/pphuc25",
"followers_url": "https://api.github.com/users/pphuc25/followers",
"following_url": "https://api.github.com/users/pphuc25/following{/other_user}",
"gists_url": "https://api.github.com/users/pphuc25/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pphuc25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pphuc25/subscriptions",
"organizations_url": "https://api.github.com/users/pphuc25/orgs",
"repos_url": "https://api.github.com/users/pphuc25/repos",
"events_url": "https://api.github.com/users/pphuc25/events{/privacy}",
"received_events_url": "https://api.github.com/users/pphuc25/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,695 | 1,695 | 1,695 | CONTRIBUTOR | null | # What does this PR do?
In the sample code for running a language model in the Flax framework, I believe that "unshuffled_deduplicated_no" is not a suitable example due to its size (3.18M samples). Loading such a large dataset can be time-consuming, especially for beginners, so I suggest switching to a smaller dataset (around 1M samples) for testing purposes. I have therefore changed the data source to "unshuffled_deduplicated_vi".
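For readers trying the example, the change amounts to swapping the OSCAR config name passed to `datasets`; a quick sketch (sample counts as quoted above):

```python
# Sketch of the change: only the OSCAR config name differs.
from datasets import load_dataset

# original docs example (unshuffled_deduplicated_no, ~3.18M samples, slow to fetch)
# dataset = load_dataset("oscar", "unshuffled_deduplicated_no", split="train")

# proposed smaller config for quick test runs
dataset = load_dataset("oscar", "unshuffled_deduplicated_vi", split="train")
print(dataset)
```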
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26361/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26361/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26361",
"html_url": "https://github.com/huggingface/transformers/pull/26361",
"diff_url": "https://github.com/huggingface/transformers/pull/26361.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26361.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26360 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26360/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26360/comments | https://api.github.com/repos/huggingface/transformers/issues/26360/events | https://github.com/huggingface/transformers/issues/26360 | 1,909,970,070 | I_kwDOCUB6oc5x19SW | 26,360 | Add ProPainter to transformers | {
"login": "shauray8",
"id": 39147312,
"node_id": "MDQ6VXNlcjM5MTQ3MzEy",
"avatar_url": "https://avatars.githubusercontent.com/u/39147312?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shauray8",
"html_url": "https://github.com/shauray8",
"followers_url": "https://api.github.com/users/shauray8/followers",
"following_url": "https://api.github.com/users/shauray8/following{/other_user}",
"gists_url": "https://api.github.com/users/shauray8/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shauray8/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shauray8/subscriptions",
"organizations_url": "https://api.github.com/users/shauray8/orgs",
"repos_url": "https://api.github.com/users/shauray8/repos",
"events_url": "https://api.github.com/users/shauray8/events{/privacy}",
"received_events_url": "https://api.github.com/users/shauray8/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | open | false | null | [] | [
"LGTM, I would also like to be a part this contrib... @shauray8 ",
"@mahimairaja of course, let's wait for approval from a core maintainer.",
"Looks good :)\r\n\r\nWhat do you think @rafaelpadilla ?",
"Great suggestion @shauray8! \r\nThe original repo has 1.4k stars in 3 weeks. I thiink it is a great model to add! :) ",
"Okay then, working on it!",
"Hi @shauray8, I would like to connect with you...\r\nregard this issue!",
"@mahimairaja what is your preferred mode of conversation?",
"Yeah, shall we connect on discord? my username - mahimairaja",
"I find this model and idea awesome\r\nI would like to work on this issue, can you please assign it to me?",
"Glad to see that you have been working on ProPainter together :) \r\nWhen it's ready, please, ping me for the first review. "
] | 1,695 | 1,697 | null | CONTRIBUTOR | null | ### Model description
ProPainter can be an excellent addition to Transformers given its impressive inpainting capabilities.
*Relevant Links*
Paper - https://arxiv.org/abs/2309.03897
Project Page - https://shangchenzhou.com/projects/ProPainter/
Original Code - https://github.com/sczhou/ProPainter
Weights - https://github.com/sczhou/ProPainter/releases/tag/v0.1.0 (Needs to be merged into a unified one)
Author - @sczhou
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
@amyeroberts @ArthurZucker
If you think this is a valuable addition I'm more than happy to work on it. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26360/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26360/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26359 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26359/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26359/comments | https://api.github.com/repos/huggingface/transformers/issues/26359/events | https://github.com/huggingface/transformers/pull/26359 | 1,909,932,186 | PR_kwDOCUB6oc5bCz5S | 26,359 | Adding a new model AugViT | {
"login": "ushareng",
"id": 34335028,
"node_id": "MDQ6VXNlcjM0MzM1MDI4",
"avatar_url": "https://avatars.githubusercontent.com/u/34335028?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ushareng",
"html_url": "https://github.com/ushareng",
"followers_url": "https://api.github.com/users/ushareng/followers",
"following_url": "https://api.github.com/users/ushareng/following{/other_user}",
"gists_url": "https://api.github.com/users/ushareng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ushareng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ushareng/subscriptions",
"organizations_url": "https://api.github.com/users/ushareng/orgs",
"repos_url": "https://api.github.com/users/ushareng/repos",
"events_url": "https://api.github.com/users/ushareng/events{/privacy}",
"received_events_url": "https://api.github.com/users/ushareng/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5724035499,
"node_id": "LA_kwDOCUB6oc8AAAABVS3Zqw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Model%20on%20the%20Hub",
"name": "Model on the Hub",
"color": "9CA0E9",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
".",
"Hey! I think you should share this model [on the hub](https://huggingface.co/docs/transformers/custom_models)! Will be a lot easier to share and a good way to see if the community is riled up for this new model! ",
"> Hey! I think you should share this model [on the hub](https://huggingface.co/docs/transformers/custom_models)! Will be a lot easier to share and a good way to see if the community is riled up for this new model!\r\n\r\n@ArthurZucker It's uploaded to the hub already. Link: https://huggingface.co/tensorgirl/TFaugvit",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,695 | 1,701 | 1,701 | NONE | null | ### Model description
I would like to contribute the AugViT TensorFlow implementation to the Transformers library. You can find the TensorFlow implementation here: https://github.com/ushareng/AugViT
I have created a model card here: https://huggingface.co/tensorgirl/TFaugvit/tree/main
Kindly let me know if the above is correct.
### Open source status
- [X] The model implementation is available
- [x] The model weights are available
### Provide useful links for the implementation
TensorFlow implementation of AugViT https://github.com/ushareng/AugViT | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26359/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26359/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26359",
"html_url": "https://github.com/huggingface/transformers/pull/26359",
"diff_url": "https://github.com/huggingface/transformers/pull/26359.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26359.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26358 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26358/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26358/comments | https://api.github.com/repos/huggingface/transformers/issues/26358/events | https://github.com/huggingface/transformers/issues/26358 | 1,909,910,316 | I_kwDOCUB6oc5x1uss | 26,358 | [Flax][BART Example] Always do map to generate cache files each run when running examples/flax/language-modeling/run_bart_dlm_flax.py | {
"login": "jameszhouyi",
"id": 7697925,
"node_id": "MDQ6VXNlcjc2OTc5MjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7697925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jameszhouyi",
"html_url": "https://github.com/jameszhouyi",
"followers_url": "https://api.github.com/users/jameszhouyi/followers",
"following_url": "https://api.github.com/users/jameszhouyi/following{/other_user}",
"gists_url": "https://api.github.com/users/jameszhouyi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jameszhouyi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jameszhouyi/subscriptions",
"organizations_url": "https://api.github.com/users/jameszhouyi/orgs",
"repos_url": "https://api.github.com/users/jameszhouyi/repos",
"events_url": "https://api.github.com/users/jameszhouyi/events{/privacy}",
"received_events_url": "https://api.github.com/users/jameszhouyi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hi,\r\n\r\nCould anyone please have a look this issue ? Thanks",
"Hey! Sorry we might have missed this one. I think this is expected given that you have to process examples that you don't have. Once they are processed once I expected them to be cached. You need enough space for the full dataset you are using ",
"hi @ArthurZucker \r\nThanks for your response. This issue is that the more cache files will be stored in local disk, the more test case run repeatedly. This will lead to occupy a lot of local disk space :-( . BTW, in other test case(e.g.,examples/flax/language-modeling/run_clm_flax.py ), there is no new cache files to be generated if the cache files has been existed. I am not sure why the case of run_bart_dlm_flax.py has different behavior. Please help me have a look this issue. Thanks in advance.",
"Thanks for checking that. I'll ping @sanchit-gandhi if he has time! 🤗 ",
"Hey @jameszhouyi - are you setting the flag `overwrite_cache=True`? Could you make sure you're setting this to `False` so that we load from cache:\r\nhttps://github.com/huggingface/transformers/blob/c832bcb812fc962830c11ea64c5ff623240a3d6d/examples/flax/language-modeling/run_clm_flax.py#L578",
"Hi @sanchit-gandhi , thanks for your support!\r\n\r\nI have set load_from_cache_file=True like below code snippet in https://github.com/huggingface/transformers/blob/main/examples/flax/language-modeling/run_bart_dlm_flax.py#L701C15 \r\n\r\n tokenized_datasets = tokenized_datasets.map(\r\n group_texts,\r\n batched=True,\r\n num_proc=data_args.preprocessing_num_workers,\r\n load_from_cache_file=True,\r\n )\r\n\r\nBut i found run the BART model and it still do mapping and generate the cache files (e.g., cache-697b211c6e710ef2.arrow) even if above setting:\r\npython run_bart_dlm_flax.py \\\r\n> --output_dir=\"./norwegian-bart-base\" \\\r\n> --model_type=\"bart\" \\\r\n> --config_name=\"./norwegian-bart-base\" \\\r\n> --tokenizer_name=\"./norwegian-bart-base\" \\\r\n> --dataset_name=\"/mnt/disk1/hf_datasets/oscar\" \\\r\n> --do_train \\\r\n> --dataset_config_name=\"unshuffled_deduplicated_no\" \\\r\n> --max_seq_length=\"1024\" \\\r\n> --per_device_train_batch_size=\"16\" \\\r\n> --per_device_eval_batch_size=\"16\" \\\r\n> --learning_rate=\"1e-4\" \\\r\n> --warmup_steps=\"5\" \\\r\n> --overwrite_output_dir \\\r\n> --num_train_epochs=\"3\" \\\r\n> --logging_steps=\"500\" \\\r\n> --save_steps=\"2000\" \\\r\n> --eval_steps=\"2000\" \\\r\n> --dtype=\"bfloat16\" \\\r\n> --num_iterations=\"100\" \\\r\n> --cache_dir=\"/mnt/disk1/hf_datasets/oscar_bart\"\r\n[14:39:34] - INFO - __main__ - Training/evaluation parameters TrainingArguments(output_dir='./norwegian-bart-base', overwrite_output_dir=True, do_train=True, do_eval=False, per_device_train_batch_size=16, per_device_eval_batch_size=16, learning_rate=0.0001, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, adafactor=False, num_train_epochs=3.0, warmup_steps=5, logging_steps=500, save_steps=2000, eval_steps=2000, seed=42, push_to_hub=False, hub_model_id=None, hub_token=None, num_iterations=100)\r\n/mnt/disk1/miniconda3/envs/test/lib/python3.10/site-packages/datasets/table.py:1421: FutureWarning: promote has been superseded by mode='default'.\r\n table = cls._concat_blocks(blocks, axis=0)\r\n[nltk_data] Downloading package punkt to /root/nltk_data...\r\n[nltk_data] Package punkt is already up-to-date!\r\nMap: 70%|████████████████████████████████████████████████▉ | 2145625/3068443 [09:37<04:02, 3800.19 examples/s",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,695 | 1,705 | 1,705 | NONE | null | ### System Info
Hi,
I am running the Flax BART example on Ubuntu 22.04 with run_bart_dlm_flax.py (commit ID: 0afa5071bd84e44301750fdc594e33db102cf374). I found that it always re-runs the `Map` step (shown below) before each training run, even if `cache-*.arrow` files already exist on the local disk. This causes more and more cache files to accumulate and occupy disk space. Could you please help me check this issue? Thanks in advance.
Map: 100%| 3068443/3068443 [14:21<00:00, 3561.40 examples/s]
Map: 100%| 161497/161497 [00:44<00:00, 3641.38 examples/s]
Map: 11%| 338000/3068443 [01:29<13:25, 3388.28 examples/s
-rw-r--r-- 1 root root 5062419936 Sep 21 10:49 cache-d06d4112eceed0a0.arrow
-rw-r--r-- 1 root root 265334832 Sep 21 10:50 cache-e28e13db394c2fd6.arrow
-rw-r--r-- 1 root root 13110709912 Sep 21 11:05 cache-bf0c9463974edb34.arrow
-rw-r--r-- 1 root root 687149936 Sep 21 11:06 cache-101c2523a1342475.arrow
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Running examples/flax/language-modeling/run_bart_dlm_flax.py for reproducing.
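For reference, the `datasets` caching the script relies on can be pinned down by giving `map` an explicit cache file, so repeated runs reuse it instead of writing new `cache-*.arrow` files each time. A small, self-contained illustration (the file path is made up, and this is not the script's actual fix):

```python
# Self-contained illustration of deterministic datasets.map caching.
from datasets import Dataset

ds = Dataset.from_dict({"text": ["hello world"] * 1000})

def add_len(batch):
    return {"n_chars": [len(t) for t in batch["text"]]}

ds = ds.map(
    add_len,
    batched=True,
    load_from_cache_file=True,                        # reuse the cache if it exists
    cache_file_name="/tmp/oscar_bart_add_len.arrow",  # fixed name -> no new cache-*.arrow per run
)
print(ds)
```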
### Expected behavior
If the cache files already exist on the local disk, running the example with run_bart_dlm_flax.py should not re-run the map step and should load from the cache instead. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26358/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26358/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26357 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26357/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26357/comments | https://api.github.com/repos/huggingface/transformers/issues/26357/events | https://github.com/huggingface/transformers/issues/26357 | 1,909,795,083 | I_kwDOCUB6oc5x1SkL | 26,357 | DataCollatorForLanguageModeling ignores manual -100 labels for loss masking | {
"login": "kabachuha",
"id": 14872007,
"node_id": "MDQ6VXNlcjE0ODcyMDA3",
"avatar_url": "https://avatars.githubusercontent.com/u/14872007?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kabachuha",
"html_url": "https://github.com/kabachuha",
"followers_url": "https://api.github.com/users/kabachuha/followers",
"following_url": "https://api.github.com/users/kabachuha/following{/other_user}",
"gists_url": "https://api.github.com/users/kabachuha/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kabachuha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kabachuha/subscriptions",
"organizations_url": "https://api.github.com/users/kabachuha/orgs",
"repos_url": "https://api.github.com/users/kabachuha/repos",
"events_url": "https://api.github.com/users/kabachuha/events{/privacy}",
"received_events_url": "https://api.github.com/users/kabachuha/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @kabachuha, this seems like a reasonable feature request! Would you like to open a PR to offer your fix?",
"Thanks for your interest! Sure, I can, it is quite short.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,695 | 1,698 | 1,698 | NONE | null | ### System Info
transformers version https://github.com/huggingface/transformers/commit/5936c8c57ccb2bda3b3f28856a7ef992c5c9f451
Kubuntu 22.04 LTS
NVIDIA 4090
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Info from https://github.com/oobabooga/text-generation-webui/issues/4031
Consider oobabooga's training script
https://github.com/oobabooga/text-generation-webui/blob/ee7bf49804bfb53775424d4c12294445497b610f/modules/training.py#L341-L358
It uses -100 labels in an attempt to mask the loss for the parts we don't need to train on, for example `## RESPONSE` masking when training an Alpaca LoRA, so the model is not retrained on the beginning part of the dialogue.
---
To reproduce just put prints to DataCollatorForLanguageModeling's `torch_call` function:
```python
labels = batch["input_ids"].clone()
print('INPUT LABELS')
print(labels)
if self.tokenizer.pad_token_id is not None:
labels[labels == self.tokenizer.pad_token_id] = -100
print('OVERWRITTEN LABELS')
batch["labels"] = labels
print(batch["labels"])
```
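Until this is addressed, one hedged workaround sketch (illustrative only; it assumes dict-style examples that already carry a `labels` column and right padding, and it is not a fix merged into transformers) is to subclass the collator so user-provided labels survive:

```python
# Illustrative workaround: keep user-provided labels (and their -100 masks)
# instead of letting the parent collator overwrite them with a copy of input_ids.
import torch
from transformers import DataCollatorForLanguageModeling

class KeepLabelsCollator(DataCollatorForLanguageModeling):
    def torch_call(self, examples):
        user_labels = None
        if isinstance(examples[0], dict) and "labels" in examples[0]:
            # pull labels out before the parent runs (it would overwrite them)
            user_labels = [torch.as_tensor(e.pop("labels")) for e in examples]
        batch = super().torch_call(examples)
        if user_labels is not None:
            # pad with -100 so the padded positions stay masked in the loss
            batch["labels"] = torch.nn.utils.rnn.pad_sequence(
                user_labels, batch_first=True, padding_value=-100
            )
        return batch
```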
### Expected behavior
The collator should not overwrite manually set masks for loss calculation; it creates confusion when users set labels to -100 only to find out that the loss isn't actually masked. This is extremely non-obvious (you need to read the base transformers code to see it) and has already cost me (and countless other users) days of wasted high-end GPU time. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26357/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26357/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26356 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26356/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26356/comments | https://api.github.com/repos/huggingface/transformers/issues/26356/events | https://github.com/huggingface/transformers/issues/26356 | 1,909,768,155 | I_kwDOCUB6oc5x1L_b | 26,356 | During distributed training, there are unused parameters | {
"login": "aihao2000",
"id": 51043929,
"node_id": "MDQ6VXNlcjUxMDQzOTI5",
"avatar_url": "https://avatars.githubusercontent.com/u/51043929?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aihao2000",
"html_url": "https://github.com/aihao2000",
"followers_url": "https://api.github.com/users/aihao2000/followers",
"following_url": "https://api.github.com/users/aihao2000/following{/other_user}",
"gists_url": "https://api.github.com/users/aihao2000/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aihao2000/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aihao2000/subscriptions",
"organizations_url": "https://api.github.com/users/aihao2000/orgs",
"repos_url": "https://api.github.com/users/aihao2000/repos",
"events_url": "https://api.github.com/users/aihao2000/events{/privacy}",
"received_events_url": "https://api.github.com/users/aihao2000/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,695 | 1,696 | 1,696 | NONE | null | discard | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26356/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26356/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26355 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26355/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26355/comments | https://api.github.com/repos/huggingface/transformers/issues/26355/events | https://github.com/huggingface/transformers/issues/26355 | 1,909,754,755 | I_kwDOCUB6oc5x1IuD | 26,355 | Tranformers Entire documentation translation to Japanese 🇯🇵 | {
"login": "rajveer43",
"id": 64583161,
"node_id": "MDQ6VXNlcjY0NTgzMTYx",
"avatar_url": "https://avatars.githubusercontent.com/u/64583161?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rajveer43",
"html_url": "https://github.com/rajveer43",
"followers_url": "https://api.github.com/users/rajveer43/followers",
"following_url": "https://api.github.com/users/rajveer43/following{/other_user}",
"gists_url": "https://api.github.com/users/rajveer43/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rajveer43/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rajveer43/subscriptions",
"organizations_url": "https://api.github.com/users/rajveer43/orgs",
"repos_url": "https://api.github.com/users/rajveer43/repos",
"events_url": "https://api.github.com/users/rajveer43/events{/privacy}",
"received_events_url": "https://api.github.com/users/rajveer43/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | closed | false | null | [] | [] | 1,695 | 1,697 | 1,697 | CONTRIBUTOR | null | <!--
Note: Please search to see if an issue already exists for the language you are trying to translate.
-->
Hi!
Let's bring the documentation to all the Japanese-speaking community 🌐
Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list.
Some notes:
* Please translate using an informal tone (imagine you are talking with a friend about transformers 🤗).
* Please translate in a gender-neutral way.
* Add your translations to the folder called `<languageCode>` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source).
* Register your translation in `<languageCode>/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml) (an illustrative entry is shown after this list).
* Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue.
* 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/).
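To make the registration step above concrete, here is an illustrative `docs/source/ja/_toctree.yml` entry; the exact field layout should mirror the English `_toctree.yml`, so treat the structure below as an assumption rather than the definitive format:

```yaml
# docs/source/ja/_toctree.yml -- illustrative entry only; mirror the structure
# of the English docs/source/en/_toctree.yml for the real file
- sections:
  - local: index
    title: 🤗 Transformers
  - local: quicktour
    title: クイックツアー
  title: はじめに
```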
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26355/reactions",
"total_count": 4,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 4,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26355/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26354 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26354/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26354/comments | https://api.github.com/repos/huggingface/transformers/issues/26354/events | https://github.com/huggingface/transformers/issues/26354 | 1,909,716,290 | I_kwDOCUB6oc5x0_VC | 26,354 | IDEFICS Processor will not respect `padding_side="right"` when `padding="longest"` | {
"login": "xhluca",
"id": 21180505,
"node_id": "MDQ6VXNlcjIxMTgwNTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/21180505?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xhluca",
"html_url": "https://github.com/xhluca",
"followers_url": "https://api.github.com/users/xhluca/followers",
"following_url": "https://api.github.com/users/xhluca/following{/other_user}",
"gists_url": "https://api.github.com/users/xhluca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xhluca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xhluca/subscriptions",
"organizations_url": "https://api.github.com/users/xhluca/orgs",
"repos_url": "https://api.github.com/users/xhluca/repos",
"events_url": "https://api.github.com/users/xhluca/events{/privacy}",
"received_events_url": "https://api.github.com/users/xhluca/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@VictorSanh @ArthurZucker Not sure, but maybe this is because idefics processor forces the tokenizer to take the prompts one by one and for `padding=\"longest\"` to work it has to take all the prompts at once.\r\n\r\nhttps://github.com/huggingface/transformers/blob/main/src/transformers/models/idefics/processing_idefics.py#L285-L333\r\n\r\n*a potential fix would be to get the text encoding part of it out of the loop and some changes to prompt preprocessing, it should be a fairly quick fix (if there are no border conditions following this)*\r\n",
"This indeed looks like a bug cc @leot13 @ArthurZucker ",
"This is definitely a bug. with a quick dive into the code, i think what @shauray8 makes sense.\r\nIt's mostly about moving things around: unindenting the block 324-330, let the tokenizer handle the padding logic (block 346-352).\r\n(i might be wrong, just having a quick look)\r\nIf you feel like it, feel free to open a PR @shauray8 / @xhluca ",
"On it!\r\n"
] | 1,695 | 1,695 | 1,695 | CONTRIBUTOR | null | ### System Info
- `transformers` version: 4.33.2
- Platform: Linux-5.15.0-56-generic-x86_64-with-glibc2.34
- Python version: 3.8.16
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.3
- Accelerate version: 0.22.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@VictorSanh
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoProcessor
processor = AutoProcessor.from_pretrained("HuggingFaceM4/idefics-9b", padding_side="right")
sents = [["hello world"], [" this is a longer sentence to testing padding"]]
# This is the correct behavior:
a = processor(sents, padding="max_length", truncation=True, max_length=20)
print(processor.tokenizer.decode(a['input_ids'][0]))
# => <s> hello world<unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk>
# This is the incorrect behavior:
b = processor(sents, padding="longest", truncation=True, max_length=30)
print(processor.tokenizer.decode(b['input_ids'][0]))
# => <unk><unk><unk><unk><unk><unk><s> hello world
```
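A hedged stopgap until the processor is fixed, building on the repro above: pad to a fixed `max_length` (which does respect `padding_side="right"`) and then drop the columns that are padding for every sample to recover a "longest"-style batch. This is illustrative only, not the eventual fix in the processor:

```python
# Illustrative stopgap: "max_length" padding respects padding_side="right",
# so pad to a fixed length and trim the all-padding tail columns.
import torch
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("HuggingFaceM4/idefics-9b", padding_side="right")
sents = [["hello world"], [" this is a longer sentence to testing padding"]]

batch = processor(sents, padding="max_length", truncation=True, max_length=64)
input_ids = torch.tensor(batch["input_ids"])
attention_mask = torch.tensor(batch["attention_mask"])

keep = attention_mask.bool().any(dim=0)          # columns with at least one real token
last_real = int(keep.nonzero().max()) + 1
input_ids = input_ids[:, :last_real]
attention_mask = attention_mask[:, :last_real]
print(processor.tokenizer.decode(input_ids[0]))
```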
### Expected behavior
It should pad on the right side. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26354/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26354/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26353 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26353/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26353/comments | https://api.github.com/repos/huggingface/transformers/issues/26353/events | https://github.com/huggingface/transformers/pull/26353 | 1,909,545,452 | PR_kwDOCUB6oc5bBltI | 26,353 | Timm to ViT Model conversion fix #26219 | {
"login": "prabhuteja12",
"id": 11191577,
"node_id": "MDQ6VXNlcjExMTkxNTc3",
"avatar_url": "https://avatars.githubusercontent.com/u/11191577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/prabhuteja12",
"html_url": "https://github.com/prabhuteja12",
"followers_url": "https://api.github.com/users/prabhuteja12/followers",
"following_url": "https://api.github.com/users/prabhuteja12/following{/other_user}",
"gists_url": "https://api.github.com/users/prabhuteja12/gists{/gist_id}",
"starred_url": "https://api.github.com/users/prabhuteja12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/prabhuteja12/subscriptions",
"organizations_url": "https://api.github.com/users/prabhuteja12/orgs",
"repos_url": "https://api.github.com/users/prabhuteja12/repos",
"events_url": "https://api.github.com/users/prabhuteja12/events{/privacy}",
"received_events_url": "https://api.github.com/users/prabhuteja12/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@rwightman do you agree with this change?",
"Just wanted to check in if a review has been possible. ",
"It looks like an improvement on the original w/ better coverage, but there are less 'hard coded' ways of grabbing the needed dims and params from the timm model, avoiding model dependent ifs, etc",
"Can you please give me more details on the less hard coded ways? The simplest way I could think of was to use regex. ",
"@prabhuteja12 all of the values you need should be extracted without relying on the model names, use combination of model/submodule attributes and parameter shapes. All needed widths, depths, patch/img sizes, can be extracted that way.\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,695 | 1,699 | 1,699 | NONE | null | A problem with the timm-to-ViT converter code was identified in #26219.
It is caused by a rather strict, index-based parsing of the timm model identifier, which fails for several cases. This PR does two things:
1. Fixes that issue by using a regex parser for patch and image size.
2. Fixes ViT configuration differences (ViT-Small, the missing ViT-Tiny, and ViT-giant/gigantic).
@amyeroberts, @ArthurZucker
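A hedged sketch of the attribute-based approach suggested in the review comments, reading the dimensions from the instantiated timm model instead of parsing its name; the attribute names below follow timm's `VisionTransformer` but this is an illustration, not the patch itself:

```python
# Illustrative sketch of attribute-based extraction from a timm ViT model.
import timm

timm_model = timm.create_model("vit_small_patch16_224", pretrained=False)

hidden_size = timm_model.embed_dim                        # e.g. 384 for ViT-Small
num_layers = len(timm_model.blocks)                       # transformer depth
num_heads = timm_model.blocks[0].attn.num_heads           # attention heads
intermediate = timm_model.blocks[0].mlp.fc1.out_features  # MLP hidden dim
patch_size = timm_model.patch_embed.patch_size[0]         # from the patch embedding module
image_size = timm_model.patch_embed.img_size[0]           # input resolution

print(hidden_size, num_layers, num_heads, intermediate, patch_size, image_size)
```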
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26353/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26353/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26353",
"html_url": "https://github.com/huggingface/transformers/pull/26353",
"diff_url": "https://github.com/huggingface/transformers/pull/26353.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26353.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26352 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26352/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26352/comments | https://api.github.com/repos/huggingface/transformers/issues/26352/events | https://github.com/huggingface/transformers/issues/26352 | 1,909,489,197 | I_kwDOCUB6oc5x0H4t | 26,352 | Failed to import transformers.trainer, cannot import name 'TypedDict' from 'huggingface_hub.utils._typing' | {
"login": "texasdave2",
"id": 34406740,
"node_id": "MDQ6VXNlcjM0NDA2NzQw",
"avatar_url": "https://avatars.githubusercontent.com/u/34406740?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/texasdave2",
"html_url": "https://github.com/texasdave2",
"followers_url": "https://api.github.com/users/texasdave2/followers",
"following_url": "https://api.github.com/users/texasdave2/following{/other_user}",
"gists_url": "https://api.github.com/users/texasdave2/gists{/gist_id}",
"starred_url": "https://api.github.com/users/texasdave2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/texasdave2/subscriptions",
"organizations_url": "https://api.github.com/users/texasdave2/orgs",
"repos_url": "https://api.github.com/users/texasdave2/repos",
"events_url": "https://api.github.com/users/texasdave2/events{/privacy}",
"received_events_url": "https://api.github.com/users/texasdave2/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"having same issue did you find the solution???",
"Hello both! Please try upgrading the `huggingface_hub` library and let us know if this fixes your issue?\r\n\r\nYou can do so with the following command:\r\n```\r\npip install -U huggingface_hub\r\n```\r\n\r\nIf not, please share the contents of your environment by doing `transformers-cli env` and pasting the results here. Thank you both! ",
"Upgrading `huggingface_hub` package works for me.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,695 | 1,699 | 1,699 | NONE | null | ### System Info
Hello, thanks again for looking over this; I always appreciate the help.
I was running a notebook that uses PEFT and LoRA.
@muellerzr @pacman100
Transformers version:
4.34.0.dev0
Python version:
3.9.17
```
!pip install bitsandbytes datasets accelerate loralib
!pip install git+https://github.com/huggingface/peft.git
!pip install git+https://github.com/huggingface/transformers
```
model:
```
import os
os.environ["CUDA_VISIBLE_DEVICES"]="0"
import torch
import torch.nn as nn
import bitsandbytes as bnb
from transformers import AutoTokenizer, AutoConfig, AutoModelForCausalLM, BitsAndBytesConfig
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
"facebook/opt-1.3b",
quantization_config=bnb_config,
device_map='auto',
trust_remote_code=True,
)
model.config.use_cache = False
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-1.3b")
```
Trainer:
```
import transformers
from datasets import load_dataset
data = load_dataset("Abirate/english_quotes")
data = data.map(lambda samples: tokenizer(samples['quote']), batched=True)
trainer = transformers.Trainer(
model=model,
train_dataset=data['train'],
args=transformers.TrainingArguments(
per_device_train_batch_size=4,
gradient_accumulation_steps=4,
warmup_steps=100,
max_steps=200,
learning_rate=2e-4,
fp16=True,
logging_steps=25,
output_dir='outputs'
),
data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False)
)
model.config.use_cache = False # silence the warnings. Please re-enable for inference!
trainer.train()
```
Throws this error:
```
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
File ~/miniconda3/envs/pytorch/lib/python3.9/site-packages/transformers/utils/import_utils.py:1240, in _LazyModule._get_module(self, module_name)
1239 try:
-> 1240 return importlib.import_module("." + module_name, self.__name__)
1241 except Exception as e:
File ~/miniconda3/envs/pytorch/lib/python3.9/importlib/__init__.py:127, in import_module(name, package)
126 level += 1
--> 127 return _bootstrap._gcd_import(name[level:], package, level)
File <frozen importlib._bootstrap>:1030, in _gcd_import(name, package, level)
File <frozen importlib._bootstrap>:1007, in _find_and_load(name, import_)
File <frozen importlib._bootstrap>:986, in _find_and_load_unlocked(name, import_)
File <frozen importlib._bootstrap>:680, in _load_unlocked(spec)
File <frozen importlib._bootstrap_external>:850, in exec_module(self, module)
File <frozen importlib._bootstrap>:228, in _call_with_frames_removed(f, *args, **kwds)
File ~/miniconda3/envs/pytorch/lib/python3.9/site-packages/transformers/trainer.py:52
51 import torch.distributed as dist
---> 52 from huggingface_hub import Repository, create_repo, upload_folder
53 from packaging import version
File ~/miniconda3/envs/pytorch/lib/python3.9/site-packages/huggingface_hub/__init__.py:332, in __getattr__(name)
330 pkg.__dict__[name] = attr
--> 332 return attr
333 else:
File ~/miniconda3/envs/pytorch/lib/python3.9/importlib/__init__.py:127, in import_module(name, package)
126 level += 1
--> 127 return _bootstrap._gcd_import(name[level:], package, level)
File ~/miniconda3/envs/pytorch/lib/python3.9/site-packages/huggingface_hub/repository.py:25
17 from .utils import (
18 HfFolder,
19 SoftTemporaryDirectory,
(...)
23 validate_hf_hub_args,
24 )
---> 25 from .utils._typing import TypedDict
28 logger = logging.get_logger(__name__)
ImportError: cannot import name 'TypedDict' from 'huggingface_hub.utils._typing' (/home/demouser/miniconda3/envs/pytorch/lib/python3.9/site-packages/huggingface_hub/utils/_typing.py)
The above exception was the direct cause of the following exception:
RuntimeError Traceback (most recent call last)
Cell In[7], line 6
3 data = load_dataset("Abirate/english_quotes")
4 data = data.map(lambda samples: tokenizer(samples['quote']), batched=True)
----> 6 trainer = transformers.Trainer(
7 model=model,
8 train_dataset=data['train'],
9 args=transformers.TrainingArguments(
10 per_device_train_batch_size=4,
11 gradient_accumulation_steps=4,
12 warmup_steps=100,
13 max_steps=200,
14 learning_rate=2e-4,
15 fp16=True,
16 logging_steps=25,
17 output_dir='outputs'
18 ),
19 data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False)
20 )
21 model.config.use_cache = False # silence the warnings. Please re-enable for inference!
22 trainer.train()
File ~/miniconda3/envs/pytorch/lib/python3.9/site-packages/transformers/utils/import_utils.py:1230, in _LazyModule.__getattr__(self, name)
1228 value = self._get_module(name)
1229 elif name in self._class_to_module.keys():
-> 1230 module = self._get_module(self._class_to_module[name])
1231 value = getattr(module, name)
1232 else:
File ~/miniconda3/envs/pytorch/lib/python3.9/site-packages/transformers/utils/import_utils.py:1242, in _LazyModule._get_module(self, module_name)
1240 return importlib.import_module("." + module_name, self.__name__)
1241 except Exception as e:
-> 1242 raise RuntimeError(
1243 f"Failed to import {self.__name__}.{module_name} because of the following error (look up to see its"
1244 f" traceback):\n{e}"
1245 ) from e
RuntimeError: Failed to import transformers.trainer because of the following error (look up to see its traceback):
cannot import name 'TypedDict' from 'huggingface_hub.utils._typing' (/home/demouser/miniconda3/envs/pytorch/lib/python3.9/site-packages/huggingface_hub/utils/_typing.py)
```
The notebook code is located here (I'm using this same notebook):
https://github.com/ashishpatel26/LLM-Finetuning/blob/main/5.Finetune_Meta_OPT-6-1b_Model_bnb_peft.ipynb
I have tried to downgrade the transformers version but that causes other dependency errors further up.
Thanks so much for any help!
Dave
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
After running the notebook and several other notebooks, this mixture of libraries and functions causes the trainer to fail.
I have run many public-domain notebooks, so it's safe to assume the versions of many libraries are mixed up and potentially incompatible.
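For anyone hitting the same `TypedDict` import error: upgrading `huggingface_hub` is what resolved it for other users in this thread. A minimal, hedged sanity check you can run before retrying the Trainer (no specific minimum versions are documented here, so the "right" numbers are an assumption):
```python
# Print installed versions; if huggingface_hub is stale, `pip install -U huggingface_hub`
# was reported to fix the failing `transformers.trainer` import.
import huggingface_hub
import transformers

print("huggingface_hub:", huggingface_hub.__version__)
print("transformers:", transformers.__version__)
```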
### Expected behavior
It has worked in the past, but I'm trying to find the right versions or a compatible mixture. Does anyone know which version mix this would work best with? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26352/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26352/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26351 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26351/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26351/comments | https://api.github.com/repos/huggingface/transformers/issues/26351/events | https://github.com/huggingface/transformers/pull/26351 | 1,909,462,019 | PR_kwDOCUB6oc5bBTrp | 26,351 | Updated convert_llama_weights_to_hf.py to support llama2 chat models and default llama director name | {
"login": "varunfb",
"id": 21091406,
"node_id": "MDQ6VXNlcjIxMDkxNDA2",
"avatar_url": "https://avatars.githubusercontent.com/u/21091406?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/varunfb",
"html_url": "https://github.com/varunfb",
"followers_url": "https://api.github.com/users/varunfb/followers",
"following_url": "https://api.github.com/users/varunfb/following{/other_user}",
"gists_url": "https://api.github.com/users/varunfb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/varunfb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/varunfb/subscriptions",
"organizations_url": "https://api.github.com/users/varunfb/orgs",
"repos_url": "https://api.github.com/users/varunfb/repos",
"events_url": "https://api.github.com/users/varunfb/events{/privacy}",
"received_events_url": "https://api.github.com/users/varunfb/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Closing the request due to errors. Will create a new request post fixing the issue. "
] | 1,695 | 1,695 | 1,695 | NONE | null | # What does this PR do?
When running ./download.sh, the available options are ("7B,13B,70B,7B-chat,13B-chat,70B-chat"). The download.sh script creates the folder name from the chosen model as 'llama-2' + model_size.lower().
The Hugging Face conversion script does not support this directory naming convention, which forces users to rename the folders to match the options it expects ("7B", "7Bf", "13B", "13Bf", "30B", "34B", "65B", "70B", "70Bf", "tokenizer_only").
Also, the existing options do not support 7B-chat, 13B-chat and 70B-chat.
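For context, a rough sketch of the mismatch described above (folder names and the separator are illustrative assumptions, not an exact listing of what either script produces):
```python
# Hypothetical illustration: download.sh derives the folder name from the chosen model,
# while the converter only recognises a fixed set of --model_size keys.
def download_folder(model_size: str) -> str:
    return "llama-2-" + model_size.lower()  # separator assumed for readability

converter_keys = {"7B", "7Bf", "13B", "13Bf", "30B", "34B", "65B", "70B", "70Bf", "tokenizer_only"}

name = download_folder("7B-chat")      # e.g. "llama-2-7b-chat"
print(name, name in converter_keys)    # False: hence the manual renaming today
```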
This script adds support for the following:
1. llama-chat models
2. New directory naming format
## Before submitting
- [] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@chauhang @HamidShojanazeri | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26351/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26351/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26351",
"html_url": "https://github.com/huggingface/transformers/pull/26351",
"diff_url": "https://github.com/huggingface/transformers/pull/26351.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26351.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26350 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26350/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26350/comments | https://api.github.com/repos/huggingface/transformers/issues/26350/events | https://github.com/huggingface/transformers/issues/26350 | 1,909,152,925 | I_kwDOCUB6oc5xy1yd | 26,350 | Community contribution: Adding Flash Attention 2 support for more architectures | {
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2392046359,
"node_id": "MDU6TGFiZWwyMzkyMDQ2MzU5",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Second%20Issue",
"name": "Good Second Issue",
"color": "dd935a",
"default": false,
"description": "Issues that are more difficult to do than \"Good First\" issues - give it a try if you want!"
}
] | open | false | null | [] | [
"Hi @younesbelkada - I want to work on adding Flash Attention 2 support for GPTBigCode (Starcoder). Can I take this task? Can you please assign this task to me?\r\n",
"Will definitely take a look next week \r\nGreat to see it merged now 💪",
"I would like to work on `MPT` @younesbelkada ",
"I would like to work on `OPT`.",
"Is it possible to add FlashAttention2 to GPT2 models?",
"@sahilbhosale63 @flozi00 @rajveer43 @susnato thanks very much for your interest! Indeed it would be great if you could help us!\r\nBefore assigning you to this issue can you confirm you have access to a GPU that does support Flash Attention 2: https://github.com/Dao-AILab/flash-attention#installation-and-features in order to be able to run the tests ?\r\n@ZeusFSX , yes I think that it is possible, I'll update the list accodingly",
"@younesbelkada Yes I have",
"OK perfect, I will assign you to MPT ! Feel free to let me know if you need any help or if you have any question, as a starting point, I would recommend to have a look at #25598 and see if you can replicate the PR for MPT. For running flash attention tests you can just run (once PR is ready):\r\n```bash\r\nRUN_SLOW=1 pytest -m flash_attn_test tests/models/mpt/\r\n```",
"@younesbelkada yes I have.",
"Thanks @susnato , perfect then, let me know whenever you start the PR and if you have any question ! Check out my instructions above for more details",
"@younesbelkada Unfortunately, My GPU is not supported",
"> OK perfect, I will assign you to MPT ! Feel free to let me know if you need any help or if you have any question, as a starting point, I would recommend to have a look at #25598 and see if you can replicate the PR for MPT. For running flash attention tests you can just run (once PR is ready):\r\n> \r\n> ```shell\r\n> RUN_SLOW=1 pytest -m flash_attn_tests tests/models/mpt/\r\n> ```\r\n\r\nSure I will work on it!",
"@younesbelkada Would like to work on Persimmon. I have access to A4000, A5000, and A6000, which I believe should be compatible with FA2.",
"Perfect sounds great, thanks for your help, I will assign you to Persimmon !",
"Since @sahilbhosale63 is not working on ` GPTBigCode (Starcoder)`(as he said [here](https://github.com/huggingface/transformers/issues/26350#issuecomment-1733685691)) can I take that @younesbelkada?",
"Yes no problem, thanks very much for proposing your help on this ! As a starting point you can have a look at @pacman100 's implementation here: https://github.com/pacman100/DHS-LLM-Workshop/blob/main/personal_copilot/training/starcoder_flash_attn_monkey_patch.py",
"@younesbelkada I would like to implement it for BERT if it hasn't already been done? A lot of the models topping MTEB are still relying on this architecture! I have tested that i can run flash attention 2 on my nvidia geforce RTX 3060 TI.",
"Awesome, thanks a lot for your help, ok I will assign you to BERT then!",
"Hi everyone, I would like to help implement this with GPT2 if you want.",
"@younesbelkada\r\n\r\nI have a working version for `Persimmon` that passes the `flash_attn_v2` tests except for `generate_padding_right` as the original `PersimmonFlashAttention` does not have `padding_mask` as a kw input (as opposed to the `Llama` and `Falcon` flash implementations). Is this something that needs to be changed in both Persimmon Flash v1 and v2? \r\n\r\nAlso, any plans on incorporating additional optimizations, e.g., `Flash Attention` repo has fused layers for `dense`, `rotary`, and `layer norm` for[ faster training](https://github.com/Dao-AILab/flash-attention/blob/main/README.md#full-model-code-and-training-script); and [Triton](https://github.com/Dao-AILab/flash-attention/blob/main/README.md#triton-implementation-of-flashattention) kernels, more generally? Happy to investigate more!\r\n\r\nAlso, would like to help with `Mistral-7b` (just released). They use `xformers` `memory efficient attention` in their released implementation but also mention Tri Dao's FA in the blogpost. ",
"Hi @DougTrajano \r\nAwesome! Can you confirm you have access to a hardware that is supported by FA-2? \r\n\r\n\r\n\r\n@jeromeku awesome thanks! Can you move forward for Persimmon by opening a PR so that I can have a look?\r\n\r\n> Also, any plans on incorporating additional optimizations, e.g., Flash Attention repo has fused layers for dense, rotary, and layer norm for[ faster training](https://github.com/Dao-AILab/flash-attention/blob/main/README.md#full-model-code-and-training-script); and [Triton](https://github.com/Dao-AILab/flash-attention/blob/main/README.md#triton-implementation-of-flashattention) kernels, more generally? Happy to investigate more!\r\n\r\nIf that is something that can nicely fit into the API without any breaking behaviour that would be great !\r\n\r\n> Also, would like to help with Mistral-7b (just released). They use xformers memory efficient attention in their released implementation but also mention Tri Dao's FA in the blogpost.\r\n\r\nI think Mistral's attention has been released in the latest version of FA-2 --> Would you be happy to open a PoC PR so that I can play with it and see what we can do?\r\n\r\nAgain thanks a lot!",
"Hi @jeromeku \r\nI had to check internally for Mistral, given the very recent release and the urgency, we'll take this over (https://github.com/huggingface/transformers/pull/26464); if you have started a PR, I'm very happy to start from it or to add you as a co-author to the PR ! \r\nWe might also refactor things a bit to support Local attention introduced by Mistral, so that needs further investigation, I'll keep you posted",
"@younesbelkada what is the expected deadline to complete `MPT`, I have other issues to tackle on so I can plan accordingly",
"Hi @younesbelkada , I am talking this up for `GPT-neo`.",
"Awesome @susnato ! Thanks ! \r\n@rajveer43 thanks for taking up MPT, will check it out!",
"> Hi @DougTrajano Awesome! Can you confirm you have access to a hardware that is supported by FA-2?\r\n> \r\n> \r\n> \r\n\r\nYes, I'll work on AWS SageMaker.\r\n\r\n",
"Would love to take on GPT2!",
"Thanks for confirming @DougTrajano !\r\n@marcasty thanks a lot for your interest, @DougTrajano has taken up GPT2, would be happy taking another model? 🙏 \r\nCan you also confirm you have access to a hardware that support FA-2 ?",
"Hi @younesbelkada, I am taking this up for `DistillBERT`.",
"@younesbelkada what about T5? I have access to compatible hardware "
] | 1,695 | 1,707 | null | CONTRIBUTOR | null | ### Feature request
Flash Attention 2 is a library that provides attention operation kernels for faster and more memory efficient inference and training: https://github.com/Dao-AILab/flash-attention

Let's try to add Flash Attention 2 support for more architectures! Currently supported architectures are
- [x] Llama
- [x] Falcon
It would be great to add support for more architectures, such as
- [x] Bark
- [x] Bart
- [ ] BERT | @sorenmc
- [x] DistilBERT
- [ ] GPT-2
- [ ] GPT-J
- [x] GPTBigCode (Starcoder) | @susnato
- [x] GPT-neo
- [x] GPT-neo-x | @younesbelkada #26463
- [x] OPT | @susnato #26414
- [x] Llava
- [x] VipLlava
- [x] mBART
- [x] Mistral
- [x] Mixtral
- [ ] MPT | @rajveer43
- [ ] T5
- [ ] Persimmon | @jeromeku
- [x] Phi
- [x] Whisper
- [x] Qwen2
... and many more
Adding this feature requires following the same protocol as in https://github.com/huggingface/transformers/pull/25598. First create a new module inside the corresponding modeling file, termed `xxxFlashAttention`, that inherits from `xxxAttention` and overrides the forward method to use the public methods from `flash-attn`. Make sure to have access to a GPU that supports Flash Attention 2.
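To make the expected shape of such a contribution concrete, here is a minimal, self-contained sketch. All names (`ToyAttention`, the projection attributes, the shapes) are illustrative assumptions and not the actual `transformers` modules; real PRs also have to handle attention masks/padding, dropout, dtype checks and kv caching, and require a supported CUDA GPU:
```python
import torch
import torch.nn as nn
from flash_attn import flash_attn_func  # pip install flash-attn (>=2.0)


class ToyAttention(nn.Module):
    """Stand-in for an existing `xxxAttention` module."""

    def __init__(self, hidden_size: int = 64, num_heads: int = 4):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = hidden_size // num_heads
        self.q_proj = nn.Linear(hidden_size, hidden_size)
        self.k_proj = nn.Linear(hidden_size, hidden_size)
        self.v_proj = nn.Linear(hidden_size, hidden_size)
        self.o_proj = nn.Linear(hidden_size, hidden_size)


class ToyFlashAttention2(ToyAttention):
    """Same weights as the parent class, but the forward pass calls the flash-attn kernel."""

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        bsz, seq_len, _ = hidden_states.shape
        # flash_attn_func expects (batch, seq_len, num_heads, head_dim) tensors in fp16/bf16
        q = self.q_proj(hidden_states).view(bsz, seq_len, self.num_heads, self.head_dim)
        k = self.k_proj(hidden_states).view(bsz, seq_len, self.num_heads, self.head_dim)
        v = self.v_proj(hidden_states).view(bsz, seq_len, self.num_heads, self.head_dim)
        attn_output = flash_attn_func(q, k, v, causal=True)
        return self.o_proj(attn_output.reshape(bsz, seq_len, -1))
```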
Given the slight challenge of the issue, labelling it as a good second issue!
If you are interested in taking up the challenge, comment below with the architecture name you want to integrate and open a PR!
Once you open a PR, feel free to ping @LysandreJik @ArthurZucker @amyeroberts @younesbelkada @fxmarty @SunMarc @pacman100 for a review
### Motivation
Making LLMs more memory efficient and faster !
### Your contribution
Reviewing PRs and possibly adding the support for more models | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26350/reactions",
"total_count": 25,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 25,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26350/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26349 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26349/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26349/comments | https://api.github.com/repos/huggingface/transformers/issues/26349/events | https://github.com/huggingface/transformers/pull/26349 | 1,909,096,258 | PR_kwDOCUB6oc5bAGEm | 26,349 | [Wav2Vec2] Fix tokenizer set lang | {
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Sorry to hear that @andergisomon - note that running `transformers` on the latest PyPi version should bypass this error. It's only on `main` at the moment:\r\n```\r\npip uninstall transformers -y\r\npip install --upgrade transformers\r\n```",
"Thanks for the review! Applied the suggestions from your comment - look good to you?"
] | 1,695 | 1,696 | 1,696 | CONTRIBUTOR | null | # What does this PR do?
The PR #23909 removed `unique_no_split_tokens` as an attribute of the Wav2Vec2 tokenizer; however, the attribute is still required for setting the language in the `.set_target_lang` method:
https://github.com/huggingface/transformers/blob/dcbfd93d7aeb14f8ff08a48866d2a68950d4c69a/src/transformers/models/wav2vec2/tokenization_wav2vec2.py#L237
Thus, calling `.set_target_lang` currently throws an error:
```python
from transformers import Wav2Vec2CTCTokenizer
tokenizer = Wav2Vec2CTCTokenizer.from_pretrained("facebook/mms-1b-all")
tokenizer.set_target_lang("spa")
```
**Output:**
```
Traceback (most recent call last):
File "/Users/sanchitgandhi/transformers/debug_tokenizer.py", line 4, in <module>
tokenizer.set_target_lang("spa")
File "/Users/sanchitgandhi/transformers/src/transformers/models/wav2vec2/tokenization_wav2vec2.py", line 237, in set_target_lang
self.unique_no_split_tokens.append(token)
AttributeError: 'Wav2Vec2CTCTokenizer' object has no attribute 'unique_no_split_tokens'
```
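Until a release containing the fix is out, a hedged stop-gap on affected versions is to restore the attribute by hand before switching languages. This only avoids the `AttributeError`; it is not the change this PR makes, and the token-splitting behaviour of older releases is not necessarily restored:
```python
from transformers import Wav2Vec2CTCTokenizer

tokenizer = Wav2Vec2CTCTokenizer.from_pretrained("facebook/mms-1b-all")
# set_target_lang appends to this list internally, so an empty list is enough to avoid the crash
if not hasattr(tokenizer, "unique_no_split_tokens"):
    tokenizer.unique_no_split_tokens = []
tokenizer.set_target_lang("spa")
```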
This PR re-instates `unique_no_split_tokens` as an attribute of the tokenizer. Note that this should already be tested for in the following test: https://github.com/huggingface/transformers/blob/dcbfd93d7aeb14f8ff08a48866d2a68950d4c69a/tests/models/wav2vec2/test_tokenization_wav2vec2.py#L798 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26349/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26349/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26349",
"html_url": "https://github.com/huggingface/transformers/pull/26349",
"diff_url": "https://github.com/huggingface/transformers/pull/26349.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26349.patch",
"merged_at": 1696435930000
} |
https://api.github.com/repos/huggingface/transformers/issues/26348 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26348/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26348/comments | https://api.github.com/repos/huggingface/transformers/issues/26348/events | https://github.com/huggingface/transformers/pull/26348 | 1,909,086,323 | PR_kwDOCUB6oc5bAD7l | 26,348 | [TTA Pipeline] Fix MusicGen test | {
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,695 | 1,695 | 1,695 | CONTRIBUTOR | null | # What does this PR do?
The PR #26136 added the `sampling_rate` as a **property** to the MusicGen config, such that it is compatible with the TTA pipeline. However, using a property means it is not registered when we call `config.to_dict()`, since it is not an attribute of the config.
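A minimal illustration of that point (a toy class, not the actual MusicGen config): values exposed only through a `@property` are reachable as attribute lookups but never end up in a dict built from the instance's attributes.
```python
class ToyConfig:
    def __init__(self):
        self.hidden_size = 16  # regular attribute -> shows up in to_dict()

    @property
    def sampling_rate(self):   # property -> never stored on the instance
        return 32_000

    def to_dict(self):
        return dict(self.__dict__)


cfg = ToyConfig()
print(cfg.sampling_rate)  # 32000
print(cfg.to_dict())      # {'hidden_size': 16} -- no 'sampling_rate' key
```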
This PR updates the TTA pipeline to avoid converting the config to a dict, thus keeping compatibility with MusicGen. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26348/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26348/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26348",
"html_url": "https://github.com/huggingface/transformers/pull/26348",
"diff_url": "https://github.com/huggingface/transformers/pull/26348.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26348.patch",
"merged_at": 1695398155000
} |
https://api.github.com/repos/huggingface/transformers/issues/26347 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26347/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26347/comments | https://api.github.com/repos/huggingface/transformers/issues/26347/events | https://github.com/huggingface/transformers/pull/26347 | 1,909,081,504 | PR_kwDOCUB6oc5bAC4g | 26,347 | [docs] removed MaskFormerSwin and TimmBackbone from the table on index.md | {
"login": "MKhalusova",
"id": 1065417,
"node_id": "MDQ6VXNlcjEwNjU0MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1065417?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MKhalusova",
"html_url": "https://github.com/MKhalusova",
"followers_url": "https://api.github.com/users/MKhalusova/followers",
"following_url": "https://api.github.com/users/MKhalusova/following{/other_user}",
"gists_url": "https://api.github.com/users/MKhalusova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MKhalusova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MKhalusova/subscriptions",
"organizations_url": "https://api.github.com/users/MKhalusova/orgs",
"repos_url": "https://api.github.com/users/MKhalusova/repos",
"events_url": "https://api.github.com/users/MKhalusova/events{/privacy}",
"received_events_url": "https://api.github.com/users/MKhalusova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,695 | 1,695 | 1,695 | CONTRIBUTOR | null | # What does this PR do?
The PR addresses #25967:
`MaskFormerSwin` and `TimmBackbone` should not be listed in the framework support table on the index.md page because these models are backbones and so not meant to be loaded and used on their own. Instead, they define architectures which can be loaded using the AutoBackbone API. See the [comment](https://github.com/huggingface/transformers/issues/25967#issuecomment-1717818649) in the issue.
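For reference, a hedged sketch of how a backbone is meant to be consumed through the AutoBackbone API (the checkpoint and `out_features` are arbitrary examples picked for illustration, not taken from this PR):
```python
from transformers import AutoBackbone

# Load a backbone and request only the feature map of its last stage.
backbone = AutoBackbone.from_pretrained("microsoft/resnet-50", out_features=["stage4"])
print(backbone.channels)  # channel dimension(s) of the requested feature maps
```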
This PR updates the table, and the script that builds the table so that these models are excluded when building the table.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26347/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26347/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26347",
"html_url": "https://github.com/huggingface/transformers/pull/26347",
"diff_url": "https://github.com/huggingface/transformers/pull/26347.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26347.patch",
"merged_at": 1695649319000
} |
https://api.github.com/repos/huggingface/transformers/issues/26346 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26346/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26346/comments | https://api.github.com/repos/huggingface/transformers/issues/26346/events | https://github.com/huggingface/transformers/pull/26346 | 1,909,073,055 | PR_kwDOCUB6oc5bABDV | 26,346 | [AMD] Add initial version for run_tests_multi_gpu | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,695 | 1,696 | 1,696 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26346/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26346/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26346",
"html_url": "https://github.com/huggingface/transformers/pull/26346",
"diff_url": "https://github.com/huggingface/transformers/pull/26346.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26346.patch",
"merged_at": 1696324426000
} |
https://api.github.com/repos/huggingface/transformers/issues/26345 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26345/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26345/comments | https://api.github.com/repos/huggingface/transformers/issues/26345/events | https://github.com/huggingface/transformers/pull/26345 | 1,909,070,242 | PR_kwDOCUB6oc5bAAcI | 26,345 | Add initial version for run_tests_multi_gpu for AMDGPU | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,695 | 1,695 | 1,695 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26345/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26345/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26345",
"html_url": "https://github.com/huggingface/transformers/pull/26345",
"diff_url": "https://github.com/huggingface/transformers/pull/26345.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26345.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26344 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26344/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26344/comments | https://api.github.com/repos/huggingface/transformers/issues/26344/events | https://github.com/huggingface/transformers/issues/26344 | 1,908,955,342 | I_kwDOCUB6oc5xyFjO | 26,344 | usage of past_key_values produces different output than the whole sequence at once | {
"login": "IvanSedykh",
"id": 46825716,
"node_id": "MDQ6VXNlcjQ2ODI1NzE2",
"avatar_url": "https://avatars.githubusercontent.com/u/46825716?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/IvanSedykh",
"html_url": "https://github.com/IvanSedykh",
"followers_url": "https://api.github.com/users/IvanSedykh/followers",
"following_url": "https://api.github.com/users/IvanSedykh/following{/other_user}",
"gists_url": "https://api.github.com/users/IvanSedykh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/IvanSedykh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/IvanSedykh/subscriptions",
"organizations_url": "https://api.github.com/users/IvanSedykh/orgs",
"repos_url": "https://api.github.com/users/IvanSedykh/repos",
"events_url": "https://api.github.com/users/IvanSedykh/events{/privacy}",
"received_events_url": "https://api.github.com/users/IvanSedykh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey! Thanks for opening and issue. This is pretty much a duplicate of #25420, where we deep dive into this!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hey @IvanSedykh 👋 \r\n\r\nAs Arthur wrote, this is a duplicate of #25420 -- you can find a detailed answer [here](https://github.com/huggingface/transformers/issues/25420#issuecomment-1775317535)",
"Hi @gante !\nThan you for this investigation, it's much more clear now. 🤗",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,695 | 1,700 | 1,700 | CONTRIBUTOR | null | ### System Info
transformers 4.33.1
### Who can help?
@ArthurZucker @younesbelkada @gan
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
When I use `past_key_values`, the model does not produce the same logits as when I input the whole sequence at once.
Please see the code snippet below for more details.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model_name = "codellama/CodeLlama-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name, torch_dtype=torch.float16, low_cpu_mem_usage=True, device_map="auto"
)
prompt = """
import json
fname = 'some_file.json'
with open(fname) as f:
data = json."""
all_input_ids = tokenizer([prompt], return_tensors='pt').input_ids
# process the whole sequence
with torch.no_grad():
all_outputs = model(all_input_ids)
# get logits for the last token
last_token_logits = all_outputs.logits[0][-1:]
with torch.no_grad():
# process the sequence except the last token
kv = model(all_input_ids[:, :-1]).past_key_values
# input only the last token with previous kv_cache
new_output = model(all_input_ids[:, -1:], past_key_values=kv)
# extract the last token logits
new_last_token_logits = new_output.logits[0][-1:]
# theese two distributions should be equal, but they are not.
print(torch.dist(last_token_logits, new_last_token_logits))
# tensor(0.4462)
assert torch.allclose(last_token_logits, new_last_token_logits) #fails
```
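For what it's worth, part of the discrepancy can come from float16 arithmetic alone: the cached and non-cached paths sum the same values in different orders, and half-precision addition is not associative. A tiny, model-independent illustration (this does not prove the cache path is bug-free, it only shows that bit-exact equality is too strict a check):
```python
import torch

a = torch.tensor([1.0, 1e-4, -1.0, 1e-4], dtype=torch.float16)
pairs_in_order = (a[0] + a[1]) + (a[2] + a[3])  # rounds to 0.0 in float16
regrouped = (a[0] + a[2]) + (a[1] + a[3])       # rounds to ~2e-4 in float16
print(pairs_in_order, regrouped)                # same inputs, different results
```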
### Expected behavior
If I understand KV caching correctly, the outputs should be exactly the same. This is important because the `generate` method relies heavily on `past_key_values`, so a bug here would affect a lot of applications. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26344/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26344/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26343 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26343/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26343/comments | https://api.github.com/repos/huggingface/transformers/issues/26343/events | https://github.com/huggingface/transformers/pull/26343 | 1,908,907,439 | PR_kwDOCUB6oc5a_dYW | 26,343 | [doc] fixed indices in obj detection example | {
"login": "MKhalusova",
"id": 1065417,
"node_id": "MDQ6VXNlcjEwNjU0MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1065417?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MKhalusova",
"html_url": "https://github.com/MKhalusova",
"followers_url": "https://api.github.com/users/MKhalusova/followers",
"following_url": "https://api.github.com/users/MKhalusova/following{/other_user}",
"gists_url": "https://api.github.com/users/MKhalusova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MKhalusova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MKhalusova/subscriptions",
"organizations_url": "https://api.github.com/users/MKhalusova/orgs",
"repos_url": "https://api.github.com/users/MKhalusova/repos",
"events_url": "https://api.github.com/users/MKhalusova/events{/privacy}",
"received_events_url": "https://api.github.com/users/MKhalusova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,695 | 1,695 | 1,695 | CONTRIBUTOR | null | # What does this PR do?
PR addresses #25662, and fixes the indices in the doc example | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26343/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26343/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26343",
"html_url": "https://github.com/huggingface/transformers/pull/26343",
"diff_url": "https://github.com/huggingface/transformers/pull/26343.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26343.patch",
"merged_at": 1695392967000
} |
https://api.github.com/repos/huggingface/transformers/issues/26342 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26342/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26342/comments | https://api.github.com/repos/huggingface/transformers/issues/26342/events | https://github.com/huggingface/transformers/pull/26342 | 1,908,893,566 | PR_kwDOCUB6oc5a_aX- | 26,342 | Update __init__.py | {
"login": "zhangjunyi111",
"id": 67990935,
"node_id": "MDQ6VXNlcjY3OTkwOTM1",
"avatar_url": "https://avatars.githubusercontent.com/u/67990935?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhangjunyi111",
"html_url": "https://github.com/zhangjunyi111",
"followers_url": "https://api.github.com/users/zhangjunyi111/followers",
"following_url": "https://api.github.com/users/zhangjunyi111/following{/other_user}",
"gists_url": "https://api.github.com/users/zhangjunyi111/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhangjunyi111/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhangjunyi111/subscriptions",
"organizations_url": "https://api.github.com/users/zhangjunyi111/orgs",
"repos_url": "https://api.github.com/users/zhangjunyi111/repos",
"events_url": "https://api.github.com/users/zhangjunyi111/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhangjunyi111/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,695 | 1,700 | 1,700 | NONE | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
test
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26342/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26342/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26342",
"html_url": "https://github.com/huggingface/transformers/pull/26342",
"diff_url": "https://github.com/huggingface/transformers/pull/26342.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26342.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26341 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26341/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26341/comments | https://api.github.com/repos/huggingface/transformers/issues/26341/events | https://github.com/huggingface/transformers/pull/26341 | 1,908,813,538 | PR_kwDOCUB6oc5a_I_Y | 26,341 | Add AMD daily CI workflow file | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,695 | 1,698 | 1,698 | COLLABORATOR | null | # What does this PR do?
AMD daily CI workflow file. There are something to do with the runners and report channels, but nothing is big. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26341/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26341/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26341",
"html_url": "https://github.com/huggingface/transformers/pull/26341",
"diff_url": "https://github.com/huggingface/transformers/pull/26341.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26341.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26340 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26340/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26340/comments | https://api.github.com/repos/huggingface/transformers/issues/26340/events | https://github.com/huggingface/transformers/issues/26340 | 1,908,731,962 | I_kwDOCUB6oc5xxPA6 | 26,340 | AttributeError: 'InternLMTokenizer' object has no attribute 'sp_model' | {
"login": "KnutJaegersberg",
"id": 17965169,
"node_id": "MDQ6VXNlcjE3OTY1MTY5",
"avatar_url": "https://avatars.githubusercontent.com/u/17965169?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KnutJaegersberg",
"html_url": "https://github.com/KnutJaegersberg",
"followers_url": "https://api.github.com/users/KnutJaegersberg/followers",
"following_url": "https://api.github.com/users/KnutJaegersberg/following{/other_user}",
"gists_url": "https://api.github.com/users/KnutJaegersberg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KnutJaegersberg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KnutJaegersberg/subscriptions",
"organizations_url": "https://api.github.com/users/KnutJaegersberg/orgs",
"repos_url": "https://api.github.com/users/KnutJaegersberg/repos",
"events_url": "https://api.github.com/users/KnutJaegersberg/events{/privacy}",
"received_events_url": "https://api.github.com/users/KnutJaegersberg/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"same error",
"Hey! You are probably using `main` which introduced #23909. The InternLM repo relies on `\"transformers_version\": \"4.33.1\",` as you can see from the `config.json`! Don't worry, internLM will be add to transformers super soon see #26302 ",
"Thank you. After installing 4.33.1, I get another exception. Perhaps it is because autotrain setup updates to the newest versions? I already tried an older peft version, but that did not help. There seems to be a new target_modules argument in autotrain advanced. Wonder how to use that. \r\n\r\n\r\n> ERROR train has failed due to an exception:\r\n> ERROR Traceback (most recent call last):\r\n File \"/home/knut/transformers/lib/python3.9/site-packages/autotrain/utils.py\", line 280, in wrapper\r\n return func(*args, **kwargs)\r\n File \"/home/knut/transformers/lib/python3.9/site-packages/autotrain/trainers/clm/__main__.py\", line 152, in train\r\n model = get_peft_model(model, peft_config)\r\n File \"/home/knut/transformers/lib/python3.9/site-packages/peft/mapping.py\", line 98, in get_peft_model\r\n return MODEL_TYPE_TO_PEFT_MODEL_MAPPING[peft_config.task_type](model, peft_config, adapter_name=adapter_name)\r\n File \"/home/knut/transformers/lib/python3.9/site-packages/peft/peft_model.py\", line 893, in __init__\r\n super().__init__(model, peft_config, adapter_name)\r\n File \"/home/knut/transformers/lib/python3.9/site-packages/peft/peft_model.py\", line 112, in __init__\r\n self.base_model = PEFT_TYPE_TO_MODEL_MAPPING[peft_config.peft_type](\r\n File \"/home/knut/transformers/lib/python3.9/site-packages/peft/tuners/lora.py\", line 180, in __init__\r\n self.add_adapter(adapter_name, self.peft_config[adapter_name])\r\n File \"/home/knut/transformers/lib/python3.9/site-packages/peft/tuners/lora.py\", line 192, in add_adapter\r\n config = self._prepare_lora_config(config, model_config)\r\n File \"/home/knut/transformers/lib/python3.9/site-packages/peft/tuners/lora.py\", line 434, in _prepare_lora_config\r\n raise ValueError(\"Please specify `target_modules` in `peft_config`\")\r\nValueError: Please specify `target_modules` in `peft_config`",
"check init function of InternLMTokenizer, then, move super init code to the end of the function.",
"It works! I used the pull request and moved the init function. Also manually specified the target modules, didn't know how to do that before. I guess I just have to grab the linear layers from print(model)",
"> Hey! You are probably using `main` which introduced #23909. The InternLM repo relies on `\"transformers_version\": \"4.33.1\",` as you can see from the `config.json`! Don't worry, internLM will be add to transformers super soon see #26302\r\n\r\nAny chance you could give me a hint as to what changes I need to make to InternLM to make it work with main branch? Sorry to be greedy but I want the best of both worlds!",
"Hi @jph00! We're working on making proper library ports of InternLM (by converting them to LLaMA checkpoints, the model their code is based on). These should be available in a couple of days.\r\n\r\nIf you really want to get it working on `main` right now, though, the underlying problem is caused by the base tokenizer `__init__()` looking for `self.sp_model` before the child class has created that attribute. Moving the call to `super().__init__()` to a line after the creation of `self.sp_model` in `tokenization_internlm.py` should resolve the issue.",
"Message ID: ***@***.***>Awesome - thanks so much I'll try that. :D",
"the port internlm as llama branch has been deleted. \r\nI'm not sure what the 'canon' way to use or fine tune in autotrain-advanced this model is now. \r\nGot newest transformers version, but I seem to have to use the adjusted tokenization_internlm file in the folder of internlm. I tried to fine tune the model before but it kept stopping at 11 %, weirdly merging the model and acting as if it finished, but it could never have been as fast as that. ",
"This is my trainer_state.json. I have the feeling it does not count the epochs i a right way. \r\nThe model that comes out is not useless, also those from earlier runs. It stops prematurely it seems. the run I share first apparently almost finished, it had a larger batch size, but also a smaller dataset. It's like your training has to outspeed the bug. ;) \r\n\r\n{\r\n \"best_metric\": null,\r\n \"best_model_checkpoint\": null,\r\n \"epoch\": 4.5464684014869885,\r\n \"eval_steps\": 500,\r\n \"global_step\": 735,\r\n \"is_hyper_param_search\": false,\r\n \"is_local_process_zero\": true,\r\n \"is_world_process_zero\": true,\r\n \"log_history\": [\r\n {\r\n \"epoch\": 0.2,\r\n \"learning_rate\": 1.1777777777777778e-05,\r\n \"loss\": 1.6026,\r\n \"step\": 53\r\n },\r\n {\r\n \"epoch\": 0.39,\r\n \"learning_rate\": 2.3555555555555556e-05,\r\n \"loss\": 1.4208,\r\n \"step\": 106\r\n },\r\n {\r\n \"epoch\": 1.04,\r\n \"learning_rate\": 2.9404958677685954e-05,\r\n \"loss\": 1.392,\r\n \"step\": 159\r\n },\r\n {\r\n \"epoch\": 1.24,\r\n \"learning_rate\": 2.8090909090909092e-05,\r\n \"loss\": 1.2228,\r\n \"step\": 212\r\n },\r\n {\r\n \"epoch\": 1.44,\r\n \"learning_rate\": 2.6776859504132234e-05,\r\n \"loss\": 1.3286,\r\n \"step\": 265\r\n },\r\n {\r\n \"epoch\": 2.09,\r\n \"learning_rate\": 2.546280991735537e-05,\r\n \"loss\": 1.2465,\r\n \"step\": 318\r\n },\r\n {\r\n \"epoch\": 2.29,\r\n \"learning_rate\": 2.4148760330578513e-05,\r\n \"loss\": 1.1794,\r\n \"step\": 371\r\n },\r\n {\r\n \"epoch\": 2.48,\r\n \"learning_rate\": 2.2834710743801655e-05,\r\n \"loss\": 1.2161,\r\n \"step\": 424\r\n },\r\n {\r\n \"epoch\": 3.13,\r\n \"learning_rate\": 2.1520661157024793e-05,\r\n \"loss\": 1.2076,\r\n \"step\": 477\r\n },\r\n {\r\n \"epoch\": 3.33,\r\n \"learning_rate\": 2.0206611570247934e-05,\r\n \"loss\": 1.1163,\r\n \"step\": 530\r\n },\r\n {\r\n \"epoch\": 3.53,\r\n \"learning_rate\": 1.8892561983471072e-05,\r\n \"loss\": 1.2432,\r\n \"step\": 583\r\n },\r\n {\r\n \"epoch\": 4.18,\r\n \"learning_rate\": 1.7578512396694214e-05,\r\n \"loss\": 1.2346,\r\n \"step\": 636\r\n },\r\n {\r\n \"epoch\": 4.38,\r\n \"learning_rate\": 1.6264462809917355e-05,\r\n \"loss\": 1.2157,\r\n \"step\": 689\r\n }\r\n ],\r\n \"logging_steps\": 53,\r\n \"max_steps\": 1345,\r\n \"num_train_epochs\": 5,\r\n \"save_steps\": 500,\r\n \"total_flos\": 3.528343138861056e+17,\r\n \"trial_name\": null,\r\n \"trial_params\": null\r\n}\r\n\r\n\r\nAnother run, different dataset: \r\n\r\n{\r\n \"best_metric\": null,\r\n \"best_model_checkpoint\": null,\r\n \"epoch\": 2.108743232346504,\r\n \"eval_steps\": 500,\r\n \"global_step\": 2832,\r\n \"is_hyper_param_search\": false,\r\n \"is_local_process_zero\": true,\r\n \"is_world_process_zero\": true,\r\n \"log_history\": [\r\n {\r\n \"epoch\": 1.09,\r\n \"learning_rate\": 1.9992322456813822e-05,\r\n \"loss\": 1.1724,\r\n \"step\": 1736\r\n }\r\n ],\r\n \"logging_steps\": 1736,\r\n \"max_steps\": 26043,\r\n \"num_train_epochs\": 3,\r\n \"save_steps\": 500,\r\n \"total_flos\": 1.265552258531328e+18,\r\n \"trial_name\": null,\r\n \"trial_params\": null\r\n}\r\n\r\nYet another: \r\n\r\n{\r\n \"best_metric\": null,\r\n \"best_model_checkpoint\": null,\r\n \"epoch\": 4.11,\r\n \"eval_steps\": 500,\r\n \"global_step\": 660,\r\n \"is_hyper_param_search\": false,\r\n \"is_local_process_zero\": true,\r\n \"is_world_process_zero\": true,\r\n \"log_history\": [\r\n {\r\n \"epoch\": 1.09,\r\n \"learning_rate\": 1.2e-05,\r\n \"loss\": 1.3913,\r\n \"step\": 240\r\n },\r\n {\r\n \"epoch\": 3.07,\r\n 
\"learning_rate\": 2.4e-05,\r\n \"loss\": 1.1288,\r\n \"step\": 480\r\n }\r\n ],\r\n \"logging_steps\": 240,\r\n \"max_steps\": 6000,\r\n \"num_train_epochs\": 5,\r\n \"save_steps\": 500,\r\n \"total_flos\": 2.9493802635264e+17,\r\n \"trial_name\": null,\r\n \"trial_params\": null\r\n}\r\n",
"but it merges the model, it finishes the script, despite apparently it does not finish the fine tuning process. ",
"the trainer state files are from the last checkpoint. the saved, merged model files are a few minutes younger than that. I dont think it finished. ",
"I found an unmerged pull request with files that seem to load internlm-7b as llama. \r\n\r\n\r\nhttps://huggingface.co/internlm/internlm-7b/discussions/4/files#d2h-846292\r\n\r\nI copied the files to my internlm20b local copy, but I get this exception:\r\n\r\n2023-10-15 09:16:02 ERROR:Failed to load the model.\r\nTraceback (most recent call last):\r\n File \"/run/media/knut/HD/text-generation-webui/modules/ui_model_menu.py\", line 201, in load_model_wrapper\r\n shared.model, shared.tokenizer = load_model(shared.model_name, loader)\r\n File \"/run/media/knut/HD/text-generation-webui/modules/models.py\", line 87, in load_model\r\n tokenizer = load_tokenizer(model_name, model)\r\n File \"/run/media/knut/HD/text-generation-webui/modules/models.py\", line 106, in load_tokenizer\r\n tokenizer = AutoTokenizer.from_pretrained(\r\n File \"/home/knut/miniconda3/envs/textgen/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py\", line 735, in from_pretrained\r\n tokenizer_class = get_class_from_dynamic_module(class_ref, pretrained_model_name_or_path, **kwargs)\r\n File \"/home/knut/miniconda3/envs/textgen/lib/python3.10/site-packages/transformers/dynamic_module_utils.py\", line 497, in get_class_from_dynamic_module\r\n return get_class_in_module(class_name, final_module.replace(\".py\", \"\"))\r\n File \"/home/knut/miniconda3/envs/textgen/lib/python3.10/site-packages/transformers/dynamic_module_utils.py\", line 199, in get_class_in_module\r\n module = importlib.import_module(module_path)\r\n File \"/home/knut/miniconda3/envs/textgen/lib/python3.10/importlib/__init__.py\", line 126, in import_module\r\n return _bootstrap._gcd_import(name[level:], package, level)\r\n File \"<frozen importlib._bootstrap>\", line 1050, in _gcd_import\r\n File \"<frozen importlib._bootstrap>\", line 1027, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 1006, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 688, in _load_unlocked\r\n File \"<frozen importlib._bootstrap_external>\", line 883, in exec_module\r\n File \"<frozen importlib._bootstrap>\", line 241, in _call_with_frames_removed\r\n File \"/home/knut/.cache/huggingface/modules/transformers_modules/deacon-20b2/tokenization_internlm.py\", line 18, in <module>\r\n from transformers.tokenization_utils import LlamaTokenizer\r\n",
"Same for baichuan2 model: AttributeError: 'BaichuanTokenizer' object has no attribute 'sp_model'",
"Hi @lucasjinreal - in all cases this is caused by the custom modelling code not being compatible with the latest versions of `transformers`. You should file an issue on the model repos and tell them to rearrange the tokenizer `__init__` so that `self.sp_model` is created before calling `super().__init__()`!",
"@Rocketknight1 tansformers should consider back compatible I think, lots of code are broke by this changes, and if they apply new changes, might not compatible to old ones too.\r\n\r\nA good lib should always consider back compatible, this is why tensorflow dying.",
"Hey, backward compatibility with remote code is not really possible, it's one of the downside of having code on the hub, you have to maintain it as well",
 transformers_version">
"> transformers_version\": \"4.33.1\r\n\r\n`pip install transformers==4.33.1` fixed my issue.",
"This should be fixed by https://github.com/InternLM/InternLM/pull/419",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,695 | 1,702 | 1,702 | NONE | null | ### System Info
- `transformers` version: 4.34.0.dev0
- Platform: Linux-5.10.194-1-MANJARO-x86_64-with-glibc2.38
- Python version: 3.9.12
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- Accelerate version: 0.21.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): 2.10.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.6.9 (cpu)
- Jax version: 0.4.8
- JaxLib version: 0.4.7
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@abhishekkrthakur
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Update autotrain-advanced and run autotrain-setup
2. Try to fine tune the new internlm-20b with peft in 8 bit
3. Using the command line tool

### Expected behavior
Fine tune the model. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26340/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26340/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26339 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26339/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26339/comments | https://api.github.com/repos/huggingface/transformers/issues/26339/events | https://github.com/huggingface/transformers/pull/26339 | 1,908,712,752 | PR_kwDOCUB6oc5a-y6O | 26,339 | Add numpy alternative to FE using torchaudio | {
"login": "ylacombe",
"id": 52246514,
"node_id": "MDQ6VXNlcjUyMjQ2NTE0",
"avatar_url": "https://avatars.githubusercontent.com/u/52246514?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ylacombe",
"html_url": "https://github.com/ylacombe",
"followers_url": "https://api.github.com/users/ylacombe/followers",
"following_url": "https://api.github.com/users/ylacombe/following{/other_user}",
"gists_url": "https://api.github.com/users/ylacombe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ylacombe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ylacombe/subscriptions",
"organizations_url": "https://api.github.com/users/ylacombe/orgs",
"repos_url": "https://api.github.com/users/ylacombe/repos",
"events_url": "https://api.github.com/users/ylacombe/events{/privacy}",
"received_events_url": "https://api.github.com/users/ylacombe/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Hey @ArthurZucker and @sanchit-gandhi, thanks for the review!\r\n\r\nHowever, I'm not sure about what you meant [here](https://github.com/huggingface/transformers/pull/26339#pullrequestreview-1643752078):\r\n\r\n> Looks good to me, aligned with @sanchit-gandhi on not adding the np support\r\n\r\nAnd [here](https://github.com/huggingface/transformers/pull/26339#discussion_r1336885695):\r\n\r\n> I'm fine with adding a comment somewhere or a section in the doc to not lose the info on how to use numpy to get the same results as torchaudio for futur references when we'll improve or numpy port!\r\n\r\n@sanchit-gandhi seems to be in favor of removing torchaudio support to only focus on the numpy port here, whereas @ArthurZucker seems to be in favor on not adding the numpy support.\r\n\r\nMaybe I misunderstood the comments here! Thanks for your help!\r\n",
"Sorry I was confused! I agree that we should remove the old code, but worried about the performance issue, since we had to re introduce torch STFT for Whisper for example. (Performance wise backward compatible)",
"I've made a quick benchmark, on AST, with results here:\r\n\r\n\r\nBasically, torchaudio is at least 19 faster than the numpy porting. If I haven't made any mistake in my benchmark, I'll be strongly in favor of keeping torchaudio compatibility.\r\n\r\nWDYT @ArthurZucker and @sanchit-gandhi ? Can you also take a quick look at the benchmark code to make sure that my results are correct (or redirect me to an expert at HF haha) ? \r\n\r\nFor reference, here is the benchmark code:\r\n```python\r\nfrom datasets import load_dataset\r\nimport pytest\r\nfrom transformers import ASTFeatureExtractor\r\n\r\nds = load_dataset(\"hf-internal-testing/librispeech_asr_dummy\", \"clean\", split=\"validation\")\r\nspeech_samples = ds.sort(\"id\").select(range(64))[:64][\"audio\"]\r\nspeech_samples = [x[\"array\"] for x in speech_samples]\r\n\r\n\r\ndef torchaudio_unbatch():\r\n fe = ASTFeatureExtractor(use_torchaudio=True)\r\n \r\n for sample in speech_samples:\r\n input_features = fe(sample, padding=True, return_tensors=\"pt\")\r\n\r\ndef np_unbatch():\r\n fe = ASTFeatureExtractor(use_torchaudio=False)\r\n \r\n for sample in speech_samples:\r\n input_features = fe(sample, padding=True, return_tensors=\"pt\")\r\n\r\ndef torchaudio_batch_8():\r\n fe = ASTFeatureExtractor(use_torchaudio=True)\r\n \r\n for i in range(0,len(speech_samples),8):\r\n samples = speech_samples[i:i+8]\r\n input_features = fe(samples, padding=True, return_tensors=\"pt\")\r\n\r\ndef np_batch_8():\r\n fe = ASTFeatureExtractor(use_torchaudio=False)\r\n \r\n for i in range(0,len(speech_samples),8):\r\n samples = speech_samples[i:i+8]\r\n input_features = fe(samples, padding=True, return_tensors=\"pt\")\r\n\r\[email protected](\r\n min_rounds=5, disable_gc=True, warmup=False\r\n)\r\ndef test_torchaudio_unbatch(benchmark):\r\n benchmark(torchaudio_unbatch)\r\n\r\[email protected](\r\n min_rounds=5, disable_gc=True, warmup=False\r\n)\r\ndef test_torchaudio_batch_8(benchmark):\r\n benchmark(torchaudio_batch_8)\r\n\r\n\r\[email protected](\r\n min_rounds=5, disable_gc=True, warmup=False\r\n)\r\ndef test_np_unbatch(benchmark):\r\n benchmark(np_unbatch)\r\n\r\[email protected](\r\n min_rounds=5, disable_gc=True, warmup=False\r\n)\r\ndef test_np_batch_8(benchmark):\r\n benchmark(np_batch_8)\r\n\r\n```\r\n",
"For future reference, here is the same benchmark with `Speech2TextFeatureExtractor`:\r\nPrevious conclusions still hold:\r\n\r\n",
"It's also possible that we can optimize our `audio_utils.py`, WDYT?",
"Alright that's quite a significant difference - this probably requires overhauling the `audio_utils` file as you've suggested (use `torch`/`torchaudio` if available, **or** see where our numpy implementation is bottlenecked and try to improve it here).",
"Hey @sanchit-gandhi, thanks for the review here!\r\n\r\n> My only thought is that maybe we should overhaul audio_utils.py with these changes, rather than do the if/else in the feature extraction code? \r\n\r\nWe'd have to create a `fbank` method to `audio_utils` which would create `mel_filters` and `window` on-the-fly in that case right ? (with hindsight, it doesn't matter much since creating `mel_filters` and `window isn't the bottleneck here)\r\n \r\nIn any case, I'd rather refactor that in another PR, which would maybe add the `torch` correspondence for every possible case in `audio_utils` ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,695 | 1,699 | 1,699 | COLLABORATOR | null | # What does this PR do?
Following on from #26182, which ported `torchaudio.compliance.kaldi.fbank` to `numpy` in `audio_utils`, this PR aims to enable the use of numpy porting in previous Feature Extractors (AST and SpeechToText) that used `torchaudio`. It was discussed [here](https://github.com/huggingface/transformers/pull/26182#pullrequestreview-1631301297).
This serves two purposes:
1. to give some examples of how to use `audio_utils` instead of `torchaudio` for future Feature Extractors
2. the possibility of removing torchaudio altogether in the future.
A next step would be to port `audio_utils` to torch, which might be faster (cc @sanchit-gandhi), but this is still open to discussion. Is this really relevant? And will it be really faster?
cc @ArthurZucker and @sanchit-gandhi | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26339/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26339/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26339",
"html_url": "https://github.com/huggingface/transformers/pull/26339",
"diff_url": "https://github.com/huggingface/transformers/pull/26339.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26339.patch",
"merged_at": 1699429178000
} |
https://api.github.com/repos/huggingface/transformers/issues/26338 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26338/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26338/comments | https://api.github.com/repos/huggingface/transformers/issues/26338/events | https://github.com/huggingface/transformers/issues/26338 | 1,908,606,413 | I_kwDOCUB6oc5xwwXN | 26,338 | RuntimeError: Error(s) in loading state_dict for the converted MarianMTModel due to the source vocab not different with target vocab | {
"login": "nana-na-nana-na",
"id": 33016235,
"node_id": "MDQ6VXNlcjMzMDE2MjM1",
"avatar_url": "https://avatars.githubusercontent.com/u/33016235?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nana-na-nana-na",
"html_url": "https://github.com/nana-na-nana-na",
"followers_url": "https://api.github.com/users/nana-na-nana-na/followers",
"following_url": "https://api.github.com/users/nana-na-nana-na/following{/other_user}",
"gists_url": "https://api.github.com/users/nana-na-nana-na/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nana-na-nana-na/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nana-na-nana-na/subscriptions",
"organizations_url": "https://api.github.com/users/nana-na-nana-na/orgs",
"repos_url": "https://api.github.com/users/nana-na-nana-na/repos",
"events_url": "https://api.github.com/users/nana-na-nana-na/events{/privacy}",
"received_events_url": "https://api.github.com/users/nana-na-nana-na/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey! I'll have a look 😉 ",
"> Hey! I'll have a look 😉\r\n\r\nHi, Arthur, I create a PR to fix the issue. Could you please help me review it? \r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"I reviewed the PR which is stale, if there is a strong need from the community I can take it over 🤗 ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,695 | 1,707 | 1,707 | NONE | null | ### System Info
WARNING:tensorflow:From d:\git\transformers\src\transformers\commands\env.py:100: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.config.list_physical_devices('GPU')` instead.
2023-09-22 17:41:23.252507: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical opera
tions: AVX AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.34.0.dev0
- Platform: Windows-10-10.0.19045-SP0
- Python version: 3.9.10
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.3
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 1.8.1+cpu (False)
- Tensorflow version (GPU?): 2.9.2 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@ArthurZucker @younesbelkada @gante
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
### **Errors I faced:**
RuntimeError: Error(s) in loading state_dict for MarianMTModel:
size mismatch for final_logits_bias: copying a param with shape torch.Size([1, 52]) from checkpoint, the shape in current model is torch.Size([1, 36]).
size mismatch for model.decoder.embed_tokens.weight: copying a param with shape torch.Size([52, 256]) from checkpoint, the shape in current model is torch.Size([36, 256]).
You may consider adding `ignore_mismatched_sizes=True` in the model `from_pretrained` method.
### **How to repro the issue:**
run https://github.com/huggingface/transformers/blob/b880508440f43f80e35a78ccd2a32f3bde91cb23/src/transformers/models/marian/convert_marian_to_pytorch.py
The model I want to convert is one I trained myself (`tied-embeddings-src` is set to false, which indicates that the **source vocab and target vocab are not shared**).
### **Root cause:**
For my model, the source vocab has 36 words and the target vocab has 52 words. The current Python code treats the source vocab as the target vocab, which is why the error reports copying a param with shape torch.Size(**[52,** 256]) from the checkpoint while the shape in the current model is torch.Size(**[36,** 256]).
I saw the comment in MarianMTModel stating the assumption that the source and target vocab are shared, but that assumption does not hold for all Marian models; at least the model I trained does not share them. (I cannot retrain the model from scratch for various reasons.)
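As a quick sanity check, the mismatch is visible directly in the converted checkpoint (a sketch; the parameter names come from the error message above, the encoder key is inferred by analogy, and the checkpoint file name is an assumption):

```python
import torch

# Inspect the state dict written by convert_marian_to_pytorch.py
sd = torch.load("pytorch_model.bin", map_location="cpu")

print(sd["model.decoder.embed_tokens.weight"].shape)  # [52, 256] for my model (target vocab)
print(sd["final_logits_bias"].shape)                   # [1, 52]
# encoder key inferred by analogy; should show the source vocab size
print(sd["model.encoder.embed_tokens.weight"].shape)
```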
### Expected behavior
Update MarianMTModel to support a source vocab and target vocab of different sizes. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26338/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26338/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26337 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26337/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26337/comments | https://api.github.com/repos/huggingface/transformers/issues/26337/events | https://github.com/huggingface/transformers/issues/26337 | 1,908,381,606 | I_kwDOCUB6oc5xv5em | 26,337 | transformer bark error | {
"login": "wanglg007",
"id": 40556406,
"node_id": "MDQ6VXNlcjQwNTU2NDA2",
"avatar_url": "https://avatars.githubusercontent.com/u/40556406?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wanglg007",
"html_url": "https://github.com/wanglg007",
"followers_url": "https://api.github.com/users/wanglg007/followers",
"following_url": "https://api.github.com/users/wanglg007/following{/other_user}",
"gists_url": "https://api.github.com/users/wanglg007/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wanglg007/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wanglg007/subscriptions",
"organizations_url": "https://api.github.com/users/wanglg007/orgs",
"repos_url": "https://api.github.com/users/wanglg007/repos",
"events_url": "https://api.github.com/users/wanglg007/events{/privacy}",
"received_events_url": "https://api.github.com/users/wanglg007/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey! According to the contribution guidelines we need a full reproducer to be able to help you! ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,695 | 1,698 | 1,698 | NONE | null | ### System Info
transformer bark error
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. use transformer's bark
2. python 3.8
### Expected behavior

| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26337/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26337/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26336 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26336/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26336/comments | https://api.github.com/repos/huggingface/transformers/issues/26336/events | https://github.com/huggingface/transformers/issues/26336 | 1,908,320,921 | I_kwDOCUB6oc5xvqqZ | 26,336 | TabR: Retrieval-Augmented Tabular Deep Learning | {
"login": "Samoed",
"id": 36135455,
"node_id": "MDQ6VXNlcjM2MTM1NDU1",
"avatar_url": "https://avatars.githubusercontent.com/u/36135455?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Samoed",
"html_url": "https://github.com/Samoed",
"followers_url": "https://api.github.com/users/Samoed/followers",
"following_url": "https://api.github.com/users/Samoed/following{/other_user}",
"gists_url": "https://api.github.com/users/Samoed/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Samoed/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Samoed/subscriptions",
"organizations_url": "https://api.github.com/users/Samoed/orgs",
"repos_url": "https://api.github.com/users/Samoed/repos",
"events_url": "https://api.github.com/users/Samoed/events{/privacy}",
"received_events_url": "https://api.github.com/users/Samoed/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | [] | 1,695 | 1,707 | 1,707 | NONE | null | ### Model description
TabR is a retrieval-augmented transformer model specifically designed for tabular data tasks.
I would like to add this model, but I'm not sure if it would fit in Transformers or if it should be a separate library.
### Open source status
- [X] The model implementation is available
- [ ] The model weights are available
### Provide useful links for the implementation
[Paper](https://arxiv.org/abs/2307.14338)
[Author's implementation](https://github.com/yandex-research/tabular-dl-tabr) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26336/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26336/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26335 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26335/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26335/comments | https://api.github.com/repos/huggingface/transformers/issues/26335/events | https://github.com/huggingface/transformers/issues/26335 | 1,908,237,477 | I_kwDOCUB6oc5xvWSl | 26,335 | Slow tokenizer decode | {
"login": "peregilk",
"id": 9079808,
"node_id": "MDQ6VXNlcjkwNzk4MDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/9079808?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/peregilk",
"html_url": "https://github.com/peregilk",
"followers_url": "https://api.github.com/users/peregilk/followers",
"following_url": "https://api.github.com/users/peregilk/following{/other_user}",
"gists_url": "https://api.github.com/users/peregilk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/peregilk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/peregilk/subscriptions",
"organizations_url": "https://api.github.com/users/peregilk/orgs",
"repos_url": "https://api.github.com/users/peregilk/repos",
"events_url": "https://api.github.com/users/peregilk/events{/privacy}",
"received_events_url": "https://api.github.com/users/peregilk/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey! That is indeed a long decoding time! I believe that #26299 fixes this! ",
"Running on a local MacBook with `transformers` on main now that https://github.com/huggingface/transformers/pull/26299 is merged:\r\n```\r\nTime taken for encoding: 0.6956140995025635 seconds\r\nTime taken for decoding: 0.010346174240112305 seconds\r\n```\r\nLooks like we've recovered performance! Hope this is good for you @peregilk.",
"Yes. It absolutely did!"
] | 1,695 | 1,695 | 1,695 | CONTRIBUTOR | null | ### System Info
transformers 4.34.0.dev0. Running this on TPU v4-8. Might happen on other platforms as well.
### Who can help?
@ArthurZucker
### Reproduction
Decoding is extremely slow using Transformers 4.34.0.dev0.
A small script to reproduce:
```
import argparse, time
from transformers import AutoTokenizer
def measure_tokenization_speed(tokenizer, sentences):
start_time = time.time()
outputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
end_time = time.time()
print(f"Time taken for encoding: {end_time - start_time} seconds")
return outputs["input_ids"]
def measure_detokenization_speed(tokenizer, input_ids):
start_time = time.time()
decoded_sentences = tokenizer.batch_decode(input_ids)
end_time = time.time()
print(f"Time taken for decoding: {end_time - start_time} seconds")
def main(args):
tokenizer = AutoTokenizer.from_pretrained("openai/whisper-medium", use_fast=True)
# Create an array of 1000 sentences
sentences = ["This is a sample sentence."] * 1000
input_ids = measure_tokenization_speed(tokenizer, sentences)
measure_detokenization_speed(tokenizer, input_ids)
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="Measure the speed of HuggingFace tokenizer.")
args = parser.parse_args()
main(args)
```
tpu v4-8 (transformers 4.34.0.dev0)
Time taken for encoding: 1.1659502983093262 seconds
Time taken for decoding: 39.807389974594116 seconds
tpu v4-8 (transformers 4.30.1)
Time taken for encoding: 1.2527313232421875 seconds
Time taken for decoding: 1.8215229511260986 seconds
### Expected behavior
Decoding should take approximately as long as encoding. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26335/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26335/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26334 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26334/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26334/comments | https://api.github.com/repos/huggingface/transformers/issues/26334/events | https://github.com/huggingface/transformers/issues/26334 | 1,908,007,649 | I_kwDOCUB6oc5xueLh | 26,334 | [peft] model will have no weight with `require_grads=True` if gradient checkpointing is enabled after PeftModel is created | {
"login": "yundai424",
"id": 43726198,
"node_id": "MDQ6VXNlcjQzNzI2MTk4",
"avatar_url": "https://avatars.githubusercontent.com/u/43726198?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yundai424",
"html_url": "https://github.com/yundai424",
"followers_url": "https://api.github.com/users/yundai424/followers",
"following_url": "https://api.github.com/users/yundai424/following{/other_user}",
"gists_url": "https://api.github.com/users/yundai424/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yundai424/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yundai424/subscriptions",
"organizations_url": "https://api.github.com/users/yundai424/orgs",
"repos_url": "https://api.github.com/users/yundai424/repos",
"events_url": "https://api.github.com/users/yundai424/events{/privacy}",
"received_events_url": "https://api.github.com/users/yundai424/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"should be fixed by https://github.com/huggingface/transformers/pull/25846/ already"
] | 1,695 | 1,697 | 1,697 | CONTRIBUTOR | null | ### System Info
peft==0.5.0
transformers==4.33.2
torch==2.0.1
### Who can help?
trainer: @muellerzr and @pacman100
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
code snippet:
```python
import transformers
import datasets
import trl
from dataclasses import dataclass, field
import peft
import torch
import time
import callbacks
@dataclass
class CustomArguments():
model_path: str
data_path: str = field(default="alpaca_data.json")
max_seq_length: int = field(default=512)
lora: bool = field(default=False)
lora_r: int = field(default=32)
lora_alpha: int = field(default=16)
lora_dropout: float = field(default=0.1)
def formatting_func(example):
output_texts = []
for i in range(len(example['instruction'])):
output_texts.append(f"""Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
{example["instruction"][i]}
### Input:
{example["input"][i]}
### Response:
{example["output"][i]}""")
return output_texts
def main():
parser = transformers.HfArgumentParser((transformers.TrainingArguments, CustomArguments))
training_args, custom_args = parser.parse_args_into_dataclasses()
dataset = datasets.load_dataset("json", data_files=custom_args.data_path, split=[
'train'])[0].train_test_split(test_size=0.2, shuffle=True)
dataset_train, dataset_eval = dataset['train'], dataset['test']
if torch.distributed.get_rank() == 0:
print(custom_args, training_args)
model = transformers.AutoModelForCausalLM.from_pretrained(custom_args.model_path,
trust_remote_code=True,
use_cache=False)
peft_config = peft.LoraConfig(task_type=peft.TaskType.CAUSAL_LM,
inference_mode=False,
r=custom_args.lora_r,
lora_alpha=custom_args.lora_alpha,
lora_dropout=custom_args.lora_dropout) if custom_args.lora else None
tokenizer = transformers.AutoTokenizer.from_pretrained(custom_args.model_path)
tokenizer.pad_token = tokenizer.eos_token
trainer = trl.SFTTrainer(
model=model,
tokenizer=tokenizer,
train_dataset=dataset_train,
eval_dataset=dataset_eval,
formatting_func=formatting_func,
max_seq_length=custom_args.max_seq_length,
peft_config=peft_config,
args=training_args,
)
if torch.distributed.get_rank() == 0:
for name, param in trainer.model.named_parameters():
print(name, param.requires_grad)
trainer.train()
trainer.evaluate()
eval_results = trainer.evaluate()
print(f"Evaluation result", eval_results)
tokenizer.save_pretrained(training_args.output_dir)
trainer.save_model()
if __name__ == "__main__":
main()
```
command:
```
torchrun --nnodes=1 --nproc-per-node=$NUM_GPUS src/training.py \
--model_path <any decoder model such as falcon> \
--data_path src/alpaca_data.json \
--bf16 True \
--num_train_epochs 1 \
--per_device_train_batch_size 32 \
--per_device_eval_batch_size 32 \
--gradient_accumulation_steps 8 \
--gradient_checkpointing \
--save_strategy "no" \
--learning_rate 2e-5 \
--logging_steps 1 \
--lora True \
--output_dir "$OUTPUT_FULL_PATH" \
--deepspeed src/ds_config_no_offload.json
```
This results in the error [RuntimeError: element 0 of variables does not require grad and does not have a grad_fn](https://discuss.pytorch.org/t/runtimeerror-element-0-of-variables-does-not-require-grad-and-does-not-have-a-grad-fn/11074). The reason is that `peft` only calls `model.enable_input_require_grads` [here](https://github.com/huggingface/peft/blob/v0.5.0/src/peft/peft_model.py#L118) if it finds that gradient checkpointing is already enabled on the model. However, **this requires the model to be MANUALLY set with `model.gradient_checkpointing_enable()` before wrapping it with `get_peft_model`**. If the user does not do this explicitly and instead delegates enabling activation checkpointing to `transformers.Trainer` [here](https://github.com/huggingface/transformers/blob/v4.33.1/src/transformers/trainer.py#L1660), then none of the model's parameters will require grad.
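A minimal workaround sketch (the model name and LoRA hyperparameters are placeholders): enable gradient checkpointing on the base model yourself, before any PEFT wrapping, so `peft` sees it and hooks the input embeddings instead of relying on `transformers.Trainer` to turn checkpointing on later.

```python
import peft
import transformers

model = transformers.AutoModelForCausalLM.from_pretrained("some/base-model", use_cache=False)

# Enable gradient checkpointing *before* wrapping with PEFT so that
# get_peft_model detects it and calls enable_input_require_grads itself.
model.gradient_checkpointing_enable()
# Calling it explicitly as well is harmless and makes the intent obvious.
model.enable_input_require_grads()

peft_config = peft.LoraConfig(
    task_type=peft.TaskType.CAUSAL_LM, r=32, lora_alpha=16, lora_dropout=0.1
)
model = peft.get_peft_model(model, peft_config)
```

In the reproduction script above, the equivalent change would be to call `model.gradient_checkpointing_enable()` on the base model before constructing the `SFTTrainer`, assuming `SFTTrainer` performs the `get_peft_model` wrapping internally.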
### Expected behavior
If the model is a `PeftModel` and `args.gradient_checkpointing` is True, `transformers.Trainer` should check whether the model was already configured with gradient checkpointing enabled. If not, it should raise an error with an explicit message.
i.e. something like
```python
if args.gradient_checkpointing and not getattr(model, "is_gradient_checkpointing", True):
    if isinstance(model, PeftModel):
        raise RuntimeError(...)
```
It would be better if this could also be handled gracefully and automatically. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26334/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26334/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26333 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26333/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26333/comments | https://api.github.com/repos/huggingface/transformers/issues/26333/events | https://github.com/huggingface/transformers/pull/26333 | 1,907,958,230 | PR_kwDOCUB6oc5a8Qta | 26,333 | Simplify getting the number of embedding tokens. | {
"login": "kwonmha",
"id": 8953934,
"node_id": "MDQ6VXNlcjg5NTM5MzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/8953934?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kwonmha",
"html_url": "https://github.com/kwonmha",
"followers_url": "https://api.github.com/users/kwonmha/followers",
"following_url": "https://api.github.com/users/kwonmha/following{/other_user}",
"gists_url": "https://api.github.com/users/kwonmha/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kwonmha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kwonmha/subscriptions",
"organizations_url": "https://api.github.com/users/kwonmha/orgs",
"repos_url": "https://api.github.com/users/kwonmha/repos",
"events_url": "https://api.github.com/users/kwonmha/events{/privacy}",
"received_events_url": "https://api.github.com/users/kwonmha/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26333). All of your documentation changes will be reflected on that endpoint.",
"@pacman100 \r\nhello, please have a look at this PR.\r\nIt's been a while since I created this PR.",
"From #26112, I understood that `weight.shape[0]` always has accurate info and `num_embedding` doesn't.\r\nBut I can say that in this case, right after embedding parameter has been created, it looks OK.\r\n\r\nAnyway, it's up to you.\r\nFeel free to close this PR.",
"Hello @kwonmha , yes, as per that comment, this PR doesn't look like a good fix as you are changing the code which get the embedding size post resizing of the embedding layer. Closing this PR, thank you!"
] | 1,695 | 1,700 | 1,700 | CONTRIBUTOR | null | Simplifies the way to get the number of embedding tokens.
Following additional discussions on PR #26024
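For reference, a sketch of the two ways of reading the embedding count that the comments above compare (`model` stands for any loaded `PreTrainedModel`):

```python
embeddings = model.get_input_embeddings()

n_from_attr = embeddings.num_embeddings     # the nn.Embedding attribute
n_from_weight = embeddings.weight.shape[0]  # read from the weight tensor itself
```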
## Who can review?
@ArthurZucker
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26333/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26333/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26333",
"html_url": "https://github.com/huggingface/transformers/pull/26333",
"diff_url": "https://github.com/huggingface/transformers/pull/26333.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26333.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26332 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26332/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26332/comments | https://api.github.com/repos/huggingface/transformers/issues/26332/events | https://github.com/huggingface/transformers/issues/26332 | 1,907,778,994 | I_kwDOCUB6oc5xtmWy | 26,332 | Beam Search Fails for Llama 70b | {
"login": "jconley-deloitte",
"id": 116034543,
"node_id": "U_kgDOBuqL7w",
"avatar_url": "https://avatars.githubusercontent.com/u/116034543?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jconley-deloitte",
"html_url": "https://github.com/jconley-deloitte",
"followers_url": "https://api.github.com/users/jconley-deloitte/followers",
"following_url": "https://api.github.com/users/jconley-deloitte/following{/other_user}",
"gists_url": "https://api.github.com/users/jconley-deloitte/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jconley-deloitte/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jconley-deloitte/subscriptions",
"organizations_url": "https://api.github.com/users/jconley-deloitte/orgs",
"repos_url": "https://api.github.com/users/jconley-deloitte/repos",
"events_url": "https://api.github.com/users/jconley-deloitte/events{/privacy}",
"received_events_url": "https://api.github.com/users/jconley-deloitte/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I also tried installing from github and received the same error. See the update environment below (same script/error)\r\nCopy-and-paste the text below in your GitHub issue and FILL OUT the two last points.\r\n\r\n- `transformers` version: 4.34.0.dev0\r\n- Platform: Linux-5.15.0-1041-aws-x86_64-with-glibc2.35\r\n- Python version: 3.10.12\r\n- Huggingface_hub version: 0.16.4\r\n- Safetensors version: 0.3.3\r\n- Accelerate version: 0.23.0\r\n- Accelerate config: \tnot found\r\n- PyTorch version (GPU?): 2.0.1+cu117 (True)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?: Yes, 6 A100\r\n- Using distributed or parallel set-up in script?: It is device_map=\"auto\" which I believe is distributing the layers between GPUs",
"It's also worth mentioning, I noticed that for lower numbers of tokens (10 tokens generated) this error did not occur. It only happened for longer generations, such as the up to 512 token runs above.",
"cc @Rocketknight1 an example of how to produce the `nan`.\r\nThis is somewhat expected, we have quite a few issues relating to `Llama` and `nan` with batch generation. @gante is OOO but we might merge a fix similar to #25284 ",
"Got it! I'll see if I can reproduce this and push a fix to LLaMA (which might also help bringing the code into line with the InternLM code)",
"I made a much shorter reproduction script for the issue that doesn't need `llama-70b` - debugging is easier when I don't need to spin up 8 A100s!\r\n\r\n```\r\nimport torch\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\r\n\r\npath = \"meta-llama/Llama-2-7b-chat-hf\"\r\n# Setup Tokenizer\r\nmodel = AutoModelForCausalLM.from_pretrained(path)\r\ntokenizer = AutoTokenizer.from_pretrained(path)\r\n\r\nprompts = [\r\n \"<s>[INST]Hi, how are you?[/INST]\",\r\n \"<s>[INST]Who is the president?[/INST]\",\r\n \"<s>[INST]What continent is Ireland in?[/INST]\",\r\n \"<s>[INST]How fast can a chicken run?[/INST]\",\r\n \r\n]\r\n\r\nfor prompt in prompts:\r\n input_ids = tokenizer(prompt, return_tensors=\"pt\", truncation=True, max_length=4096, add_special_tokens=False).input_ids\r\n with torch.no_grad():\r\n output = model.generate(input_ids, num_beams=4, max_new_tokens=512, temperature=0.5, top_p=0.9)\r\n gen_text = tokenizer.batch_decode(output)\r\n print(gen_text[0])\r\n```\r\n\r\nThe issue occurs on GPU and CPU, in float16/bfloat16/float32. It is only triggered by beam search, and doesn't occur with standard generation. Working on it!",
"Further update: This issue only occurs in 'beam sample' decoding, not 'beam search'. As a temporary workaround @jconley-deloitte , you can add `do_sample=False` to the `generate` arguments to use beam search instead.",
"Got it: This is nothing to do with LLaMA's code at all! The cause is that LLaMA's `generation_config` specifies a combination of options `(temperature=0.6, top_p=0.9)` that interact badly with beam search. \r\n\r\nThe reason seems to be that `temperature=0.6` produces very sharp distributions, which means that `top_p` removes all or almost all of the tokens that can be selected in each iteration. As generation continues, this eventually results in **all** of the possible choices being removed by `top_p`. As a result, [next_token_scores](https://github.com/huggingface/transformers/blob/main/src/transformers/generation/utils.py#L3374) contains only masked `-inf` values, and so the [softmax](https://github.com/huggingface/transformers/blob/main/src/transformers/generation/utils.py#L3398) over this creates NaN outputs due to division by zero (because `exp(-inf) == 0`).\r\n\r\nPossible solutions include tweaking the `top_p` warper to always return at least 1 logit, or altering beam search somehow when these warpers are present. Since it touches core generation code, I'm going to leave this fix until I can discuss it with @gante next week!\r\n\r\nIn the meantime @jconley-deloitte, you can either use `do_sample=False`, or call generate with different values for those arguments (e.g. `temperature=1.0, top_p=1.0`)",
"@Rocketknight1 thank you for diving in!\r\n\r\n> Possible solutions include tweaking the top_p warper to always return at least 1 logit\r\n\r\nWe already do this. The root of the issue is that in `beam_sample` we apply the logits processors before adding the beam scores, as usual in beam methods. However, for [legacy reasons](https://github.com/huggingface/transformers/pull/5420#discussion_r449779867), we apply the logits warpers after adding the beam scores, which causes the scores to explode with temperatures below `1.0`. \r\n\r\nThere has been a [similar issue in the past ](https://github.com/huggingface/transformers/issues/22914), and, regardless of being a bug that causes crashes, I think that it makes more sense to apply the logits warpers before adding the scores:\r\n1 - the most important ones, like top_p and top_k, keep the same tokens regardless of where the operation is applied :)\r\n2 - we also apply the logits processors before adding the scores\r\n\r\nAll this to say that I'm going to open a PR to break the legacy behavior, as it is a recurrent issue that up takes significant time every time it pops up :) I've tested locally, and changing this detail fixes the crashing snippets!",
"➕ on breaking this as we have had quite a lot of issues. Having a self.legacy flag might be ok to have a deprecation cycle / just keep both for"
] | 1,695 | 1,697 | 1,697 | NONE | null | ### System Info
- `transformers` version: 4.33.2
- Platform: Linux-5.15.0-1041-aws-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.17.2
- Safetensors version: 0.3.3
- Accelerate version: 0.23.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes, 6 A100 GPUs
- Using distributed or parallel set-up in script?: It is device_map="auto" which I believe is distributing the layers between GPUs
### Who can help?
@gante appears to be the relevant developer because this is an issue with model.generate
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
The script below only needs TOKEN and CACHE_DIR changed and can be run to generate the error.
I have tried with and without autocast, and it does not affect this.
I have also verified that the GPUs/Machine are not memory constrained.
Greedy generation works as expected, only beam-search is failing.
```
import os
import sys
from copy import deepcopy
import pandas as pd
import torch
from tqdm import tqdm
sys.path.append("/app")
from src.secrets import TOKEN
from src.constants import CACHE_DIR
from transformers import AutoModelForCausalLM, AutoTokenizer
tqdm.pandas()
# Use 6/8 A100 GPUs
os.environ["CUDA_VISIBLE_DEVICES"] = "2,3,4,5,6,7"
# Load the model
path = "meta-llama/Llama-2-70b-chat-hf"
model = AutoModelForCausalLM.from_pretrained(path, cache_dir=CACHE_DIR, device_map="auto", torch_dtype=torch.float16, use_auth_token=TOKEN, use_cache=True)
# Setup Tokenizer
tokenizer = AutoTokenizer.from_pretrained(path, truncation_side="left", token=TOKEN)
prompts = [
"<s>[INST]Hi, how are you?[/INST]",
"<s>[INST]Who is the president?[/INST]",
"<s>[INST]What continent is Ireland in?[/INST]",
"<s>[INST]How fast can a chicken run?[/INST]",
]
for prompt in prompts:
input_ids = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=4096, add_special_tokens=False).input_ids
# Move tokens to GPU
input_ids = input_ids.to("cuda")
with torch.no_grad():
with torch.cuda.amp.autocast():
output = model.generate(input_ids, num_beams=4, max_new_tokens=512, temperature=0.5, top_p=0.9)
gen_text = tokenizer.batch_decode(output)
print(gen_text[0])
```
The resulting error
```
RuntimeError Traceback (most recent call last)
Cell In[5], line 7
4 input_ids = input_ids.to("cuda")
5 with torch.no_grad():
6 #with torch.cuda.amp.autocast():
----> 7 output = model.generate(input_ids, num_beams=4, max_new_tokens=512, temperature=0.5, top_p=0.9)
8 gen_text = tokenizer.batch_decode(output)
9 print(gen_text[0])
File /usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py:115, in context_decorator.<locals>.decorate_context(*args, **kwargs)
112 @functools.wraps(func)
113 def decorate_context(*args, **kwargs):
114 with ctx_factory():
--> 115 return func(*args, **kwargs)
File /usr/local/lib/python3.10/dist-packages/transformers/generation/utils.py:1718, in GenerationMixin.generate(self, inputs, generation_config, logits_processor, stopping_criteria, prefix_allowed_tokens_fn, synced_gpus, assistant_model, streamer, negative_prompt_ids, negative_prompt_attention_mask, **kwargs)
1710 input_ids, model_kwargs = self._expand_inputs_for_generation(
1711 input_ids=input_ids,
1712 expand_size=generation_config.num_beams,
1713 is_encoder_decoder=self.config.is_encoder_decoder,
1714 **model_kwargs,
1715 )
1717 # 14. run beam sample
-> 1718 return self.beam_sample(
1719 input_ids,
1720 beam_scorer,
1721 logits_processor=logits_processor,
1722 logits_warper=logits_warper,
1723 stopping_criteria=stopping_criteria,
1724 pad_token_id=generation_config.pad_token_id,
1725 eos_token_id=generation_config.eos_token_id,
1726 output_scores=generation_config.output_scores,
1727 return_dict_in_generate=generation_config.return_dict_in_generate,
1728 synced_gpus=synced_gpus,
1729 **model_kwargs,
1730 )
1732 elif generation_mode == GenerationMode.GROUP_BEAM_SEARCH:
1733 # 11. prepare beam search scorer
1734 beam_scorer = BeamSearchScorer(
1735 batch_size=batch_size,
1736 num_beams=generation_config.num_beams,
(...)
1742 max_length=generation_config.max_length,
1743 )
File /usr/local/lib/python3.10/dist-packages/transformers/generation/utils.py:3392, in GenerationMixin.beam_sample(self, input_ids, beam_scorer, logits_processor, stopping_criteria, logits_warper, max_length, pad_token_id, eos_token_id, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, synced_gpus, **model_kwargs)
3388 next_token_scores = next_token_scores.view(batch_size, num_beams * vocab_size)
3390 probs = nn.functional.softmax(next_token_scores, dim=-1)
-> 3392 next_tokens = torch.multinomial(probs, num_samples=2 * num_beams)
3393 next_token_scores = torch.gather(next_token_scores, -1, next_tokens)
3395 next_token_scores, _indices = torch.sort(next_token_scores, descending=True, dim=1)
RuntimeError: probability tensor contains either `inf`, `nan` or element < 0
```
### Expected behavior
The model generates tokens using beam search.
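As noted in the comments above, two temporary workarounds are available until the beam-sample warper handling is fixed (a sketch reusing the generate call from the reproduction script):

```python
# Workaround 1: plain beam search without sampling, which avoids the NaN path entirely
output = model.generate(input_ids, num_beams=4, max_new_tokens=512, do_sample=False)

# Workaround 2: keep beam sampling, but use neutral warper values so top_p
# cannot mask away every candidate token
output = model.generate(input_ids, num_beams=4, max_new_tokens=512, temperature=1.0, top_p=1.0)
```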
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26332/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26332/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26331 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26331/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26331/comments | https://api.github.com/repos/huggingface/transformers/issues/26331/events | https://github.com/huggingface/transformers/pull/26331 | 1,907,749,984 | PR_kwDOCUB6oc5a7ktl | 26,331 | add option to use peft for training | {
"login": "prathikr",
"id": 31260940,
"node_id": "MDQ6VXNlcjMxMjYwOTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/31260940?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/prathikr",
"html_url": "https://github.com/prathikr",
"followers_url": "https://api.github.com/users/prathikr/followers",
"following_url": "https://api.github.com/users/prathikr/following{/other_user}",
"gists_url": "https://api.github.com/users/prathikr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/prathikr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/prathikr/subscriptions",
"organizations_url": "https://api.github.com/users/prathikr/orgs",
"repos_url": "https://api.github.com/users/prathikr/repos",
"events_url": "https://api.github.com/users/prathikr/events{/privacy}",
"received_events_url": "https://api.github.com/users/prathikr/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey! I'm afraid we can't do this! As the doc for our exemple folder mentions:\r\n\r\n> While we strive to present as many use cases as possible, the scripts in this folder are just examples. It is expected that they won’t work out-of-the box on your specific problem and that you will be required to change a few lines of code to adapt them to your needs. To help you with that, most of the examples fully expose the preprocessing of the data. This way, you can easily tweak them. \r\n\r\n",
"@ArthurZucker given we are adding an optional feature, can you explain what is the concern here? Thanks!",
"Hey @prathikr, @askhade, thank you for your contribution, we appreciate you opening a PR!\r\n\r\nArthur references this specific README for the examples: https://github.com/huggingface/transformers/tree/main/examples#examples\r\n\r\nAnd more specifically:\r\n\r\n> Please discuss on the [forum](https://discuss.huggingface.co/) or in an [issue](https://github.com/huggingface/transformers/issues) a feature you would like to implement in an example before submitting a PR; we welcome bug fixes, **but since we want to keep the examples as simple as possible it's unlikely that we will merge a pull request adding more functionality at the cost of readability.**\r\n\r\n\r\nWe are aligned with you that adding examples leveraging PEFT and parameter efficient fine-tuning would be cool, but this kind of feature can very easily blow up the size of our examples which are arguably already way too long. \r\n\r\nWhile the feature you add is optional and therefore doesn't add \"bloat\" to the service itself, the examples' goal is really readability and adaptability; adding too many optional features unfortunately reduces this readability.\r\n\r\nWe appreciate your understanding! "
] | 1,695 | 1,695 | 1,695 | CONTRIBUTOR | null | This PR adds an option to use peft for finetuning text-classification task. It is required to run meta-llama/Llama-2-7b-hf which we would like to add to our internal benchmarking pipelines. Also adds an explicit padding token which resolves padding issues. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26331/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26331/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26331",
"html_url": "https://github.com/huggingface/transformers/pull/26331",
"diff_url": "https://github.com/huggingface/transformers/pull/26331.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26331.patch",
"merged_at": null
} |