Dataset schema (column: type, value range):
url: string (lengths 62 to 66)
repository_url: string (1 class)
labels_url: string (lengths 76 to 80)
comments_url: string (lengths 71 to 75)
events_url: string (lengths 69 to 73)
html_url: string (lengths 50 to 56)
id: int64 (377M to 2.15B)
node_id: string (lengths 18 to 32)
number: int64 (1 to 29.2k)
title: string (lengths 1 to 487)
user: dict
labels: list
state: string (2 classes)
locked: bool (2 classes)
assignee: dict
assignees: list
comments: sequence
created_at: int64 (1.54k to 1.71k)
updated_at: int64 (1.54k to 1.71k)
closed_at: int64 (1.54k to 1.71k, nullable)
author_association: string (4 classes)
active_lock_reason: string (2 classes)
body: string (lengths 0 to 234k, nullable)
reactions: dict
timeline_url: string (lengths 71 to 75)
state_reason: string (3 classes)
draft: bool (2 classes)
pull_request: dict
https://api.github.com/repos/huggingface/transformers/issues/27545
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27545/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27545/comments
https://api.github.com/repos/huggingface/transformers/issues/27545/events
https://github.com/huggingface/transformers/issues/27545
1,997,599,093
I_kwDOCUB6oc53EPF1
27,545
runtime shape mismatch for llama2-70
{ "login": "ZhangShiyue", "id": 11383558, "node_id": "MDQ6VXNlcjExMzgzNTU4", "avatar_url": "https://avatars.githubusercontent.com/u/11383558?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ZhangShiyue", "html_url": "https://github.com/ZhangShiyue", "followers_url": "https://api.github.com/users/ZhangShiyue/followers", "following_url": "https://api.github.com/users/ZhangShiyue/following{/other_user}", "gists_url": "https://api.github.com/users/ZhangShiyue/gists{/gist_id}", "starred_url": "https://api.github.com/users/ZhangShiyue/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ZhangShiyue/subscriptions", "organizations_url": "https://api.github.com/users/ZhangShiyue/orgs", "repos_url": "https://api.github.com/users/ZhangShiyue/repos", "events_url": "https://api.github.com/users/ZhangShiyue/events{/privacy}", "received_events_url": "https://api.github.com/users/ZhangShiyue/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I tested flash-attn-2.3.1. It does not work either.", "Hi @ZhangShiyue \r\nI suspect the issue comes from the fact that Llama-2-70b uses GQA (Grouped Query Attention). I created a dummy llama-2 model that uses GQA [here](https://huggingface.co/ybelkada/tiny-random-LlamaForCausalLM-GQA) (same config as llama-2-70B except for the number of layers) and I ran:\r\n\r\n```python\r\nimport torch\r\nfrom transformers import AutoModelForCausalLM\r\n\r\nmodel_id = \"ybelkada/tiny-random-LlamaForCausalLM-GQA\"\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(\r\n model_id, use_flash_attention_2=True, torch_dtype=torch.float16, low_cpu_mem_usage=True\r\n).to(0)\r\n\r\ndummy_input = torch.LongTensor([[0, 1, 0], [0, 1, 0]]).to(0)\r\nattention_mask = torch.LongTensor([[1, 1, 1], [0, 1, 1]]).to(0)\r\nprint(model.generate(dummy_input, attention_mask=attention_mask, max_new_tokens=100))\r\n```\r\n\r\nAnd the script seemed to work on transformers main. Can you help me create a small reproducible snippet for your issue?", "Thank you @younesbelkada! generate also works for me. What does not work is getting logits.\r\nHere is a snippet of code I just tried and it gave me: RuntimeError: shape '[6, 8, 128]' is invalid for input of size 49152\r\n\r\n```\r\nimport torch\r\nfrom transformers import AutoTokenizer, AutoModel\r\n\r\nloading_kwargs = {\r\n \"device_map\": \"balanced_low_0\",\r\n \"use_flash_attention_2\": True,\r\n \"torch_dtype\": torch.float16\r\n}\r\nmodel = AutoModel.from_pretrained(\"llama-2-70b/pytorch/\", **loading_kwargs)\r\nmodel.eval()\r\n\r\ndummy_input = torch.LongTensor([[0, 1, 0], [0, 1, 0]]).cuda()\r\nattention_mask = torch.LongTensor([[1, 1, 1], [0, 1, 1]]).cuda()\r\n\r\nprint(model(input_ids=dummy_input, attention_mask=attention_mask).logits)\r\n```\r\n\r\n", "Hi @ZhangShiyue , I still was not able to reproduce, can you run this snippet and confirm it fails on your end?\r\n\r\n```python\r\nimport torch\r\nfrom transformers import AutoModelForCausalLM, AutoModel\r\n\r\nmodel_id = \"ybelkada/tiny-random-LlamaForCausalLM-GQA\"\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(\r\n model_id, use_flash_attention_2=True, torch_dtype=torch.float16, low_cpu_mem_usage=True\r\n).to(0)\r\n\r\ndummy_input = torch.LongTensor([[0, 1, 0], [0, 1, 0]]).to(0)\r\nattention_mask = torch.LongTensor([[1, 1, 1], [0, 1, 1]]).to(0)\r\n\r\nprint(model(input_ids=dummy_input, attention_mask=attention_mask).logits)\r\n```\r\nThe main difference between my setup and yours is that I am using transformers main, can you try to update your transformers package?\r\n```\r\npip install -U transformers\r\n```", "hey @younesbelkada, thank you! This issue has been resolved after I updated transformers. " ]
1,700
1,700
1,700
NONE
null
### System Info transformers-4.34.0.dev0 torch-2.1.0.dev20230711+cu121 accelerate-0.23.0.dev0 flash-attn-2.1.1 cuda 12.1 H100 GPUs python-3.10 ### Who can help? @ArthurZucker @younesbelkada ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I'm trying to get model logits from llama2-70b for some text. The following issue only occurs when batch size > 1 + use_flash_attention_2=True + llama2-70. In other words, if I use batch=1 or use_flash_attention_2=False or llama2-7/13, no error will be thrown out. FYI, dtype=torch.float16 ``` Traceback (most recent call last): ... File "/workspace/llm_evaluator/tasks/evaluator_base.py", line 41, in compute_logits return model(input_ids=input_ids, attention_mask=attn_mask).logits File "/layers/buildpacks/requirements/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1522, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/layers/buildpacks/requirements/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1531, in _call_impl return forward_call(*args, **kwargs) File "/layers/buildpacks/requirements/lib/python3.10/site-packages/accelerate/hooks.py", line 165, in new_forward output = old_forward(*args, **kwargs) File "/layers/buildpacks/requirements/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 1035, in forward outputs = self.model( File "/layers/buildpacks/requirements/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1522, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/layers/buildpacks/requirements/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1531, in _call_impl return forward_call(*args, **kwargs) File "/layers/buildpacks/requirements/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 922, in forward layer_outputs = decoder_layer( File "/layers/buildpacks/requirements/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1522, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/layers/buildpacks/requirements/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1531, in _call_impl return forward_call(*args, **kwargs) File "/layers/buildpacks/requirements/lib/python3.10/site-packages/accelerate/hooks.py", line 165, in new_forward output = old_forward(*args, **kwargs) File "/layers/buildpacks/requirements/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 632, in forward hidden_states, self_attn_weights, present_key_value = self.self_attn( File "/layers/buildpacks/requirements/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1522, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/layers/buildpacks/requirements/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1531, in _call_impl return forward_call(*args, **kwargs) File "/layers/buildpacks/requirements/lib/python3.10/site-packages/accelerate/hooks.py", line 165, in new_forward output = old_forward(*args, **kwargs) File "/layers/buildpacks/requirements/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 490, in forward attn_output = self._flash_attention_forward( File "/layers/buildpacks/requirements/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 527, in _flash_attention_forward query_states, 
key_states, value_states, indices_q, cu_seq_lens, max_seq_lens = self._upad_input( File "/layers/buildpacks/requirements/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 563, in _upad_input query_layer.reshape(batch_size * kv_seq_len, num_heads, head_dim), indices_k RuntimeError: shape '[1400, 8, 128]' is invalid for input of size 11468800 ``` note that the shape difference is always a factor of 8: 11468800 / 8 = 1400 * 8 * 128. I noticed [this relevant issue](https://github.com/facebookresearch/llama/issues/423). I confirmed that in config.json, pretraining_tp=1, and I did not use bitsandbytes. ### Expected behavior Shouldn't throw out this error.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27545/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27545/timeline
completed
null
null
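The factor-of-8 pattern noted at the end of the report above can be checked against llama-2-70b's grouped-query attention configuration. A small arithmetic sketch; the head counts below are taken from the published llama-2-70b config (64 query heads, 8 key/value heads, head dim 128), not from this issue:

```python
# Shape arithmetic behind "shape '[1400, 8, 128]' is invalid for input of size 11468800".
num_attention_heads = 64   # query heads in llama-2-70b (assumed from the public config)
num_key_value_heads = 8    # grouped-query key/value heads
head_dim = 128
unpadded_tokens = 1400     # batch_size * kv_seq_len from the traceback

query_numel = unpadded_tokens * num_attention_heads * head_dim   # 11_468_800, the reported input size
target_numel = unpadded_tokens * num_key_value_heads * head_dim  # 1_433_600, what [1400, 8, 128] can hold

assert query_numel == 11_468_800
# The mismatch is exactly num_attention_heads / num_key_value_heads = 8, i.e. the query
# tensor was reshaped with the KV-head count; per the thread, updating transformers fixed it.
assert query_numel // target_numel == 8
```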
https://api.github.com/repos/huggingface/transformers/issues/27544
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27544/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27544/comments
https://api.github.com/repos/huggingface/transformers/issues/27544/events
https://github.com/huggingface/transformers/issues/27544
1,997,513,585
I_kwDOCUB6oc53D6Nx
27,544
Empty string token gets added for certain numbers with T5 and LLama for fast and slow tokenizers
{ "login": "SumanthRH", "id": 39546518, "node_id": "MDQ6VXNlcjM5NTQ2NTE4", "avatar_url": "https://avatars.githubusercontent.com/u/39546518?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SumanthRH", "html_url": "https://github.com/SumanthRH", "followers_url": "https://api.github.com/users/SumanthRH/followers", "following_url": "https://api.github.com/users/SumanthRH/following{/other_user}", "gists_url": "https://api.github.com/users/SumanthRH/gists{/gist_id}", "starred_url": "https://api.github.com/users/SumanthRH/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SumanthRH/subscriptions", "organizations_url": "https://api.github.com/users/SumanthRH/orgs", "repos_url": "https://api.github.com/users/SumanthRH/repos", "events_url": "https://api.github.com/users/SumanthRH/events{/privacy}", "received_events_url": "https://api.github.com/users/SumanthRH/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey! As you can read[ from the documentation](https://huggingface.co/docs/transformers/main/model_doc/llama#transformers.LlamaTokenizer.legacy), if you do not set the flag `legacy` to `False`, you won't get the fix for the slow tokenizers. For Fast tokenizers, this can already be fixed by doing:\r\n```python \r\nfrom tokenizers import Metaspace\r\ntokenizer._tokenizer.pre_tokenizer = Metaspace(add_prefix_space = True, replacement = \"▁\", prepend_scheme = \"first\")\r\n```\r\nfor T5 and for Llama the normnalizer should be set to None. \r\nThe proper fix is coming in #26678. πŸ€— ", "Ah I got my mistake! I was actually referring to the empty string token that gets added in the middle, not the whitespace that gets added in the beginning. I always saw the empty string token being present in `tok.encode`, and I naively assumed that meant that `tok.decode` would add an extra whitespace (turns out to NOT be the case). My main confusion stemmed from the fact that tokenization for numbers differs across T5 and Llama. It looks like for Llama, the tokenizer implicitly adds whitespaces between tokens that are words, but doesn't do so with numbers!\r\n```\r\ntok = AutoTokenizer.from_pretrained(\"meta-llama/LLama-2-7b-hf\", use_fast=False)\r\nprint(tok.encode(\"there are 4 people\"), add_special_tokens=False) # [727, 526, 29871, 29946, 2305]\r\nprint(tok.batch_decode([727, 526, 29871, 29946, 2305])) # ['there', 'are', '', '4', 'people']\r\nprint(tok.decode([727, 526, 29871, 29946, 2305])) # orig string \"there are 4 people\"\r\n```\r\nNone of the tokens have whitespace, unlike GPT2 (which gives `['there', ' are', ' 4', ' people']`). And if you remove the whitespace token (29871 ID) you get:\r\n```\r\nprint(tok.decode([727, 526, 29946, 2305])) # \"there are4 people\"\r\n```\r\nLooks like this difference in the pretokenization (and detokenizer) in adding a whitespace is causing this behaviour. Please correct me if I'm wrong!", "No both Llama and T5 add a prefix token to the sentence they have as an input. Main differences:\r\n- different training \r\n- different algorithm (unigram vs BPE) \r\n- different stripping mechanism (T5 replaces any occurences of spaces to a single space)\r\nπŸ˜‰ ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,700
1,703
1,703
NONE
null
### System Info `transformers` 4.35.0, `python` 3.11.5 Can also reproduce on the latest dev version 4.36.0.dev0 ### Who can help? @ArthurZucker ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction An empty string token (or, equivalently, a whitespace after decoding) gets added for T5 and Llama tokenizers (both slow and fast tokenizers). This seems to be related to the fix #26678 - however the problem is seen in *both* slow and fast tokenizers. Here's a code snippet to reproduce : ``` tok = AutoTokenizer.from_pretrained("meta-llama/LLama-2-7b-hf", use_fast=True) print(tok.batch_decode(tok.encode("there are 410 people"))) # ['<s>', 'there', 'are', '', '4', '1', '0', 'people'] tok = AutoTokenizer.from_pretrained("meta-llama/LLama-2-7b-hf", use_fast=False) print(tok.batch_decode(tok.encode("there are 410 people"))) # ['<s>', 'there', 'are', '', '4', '1', '0', 'people'] ``` Notice the extra token (I'm using `batch_decode` to see the individual tokens, but if you use `decode`, you'll see an extra whitespace). I see the same issue with `t5-base`, but the issue goes away for certain numbers with t5 (400, for example). Also, I used the `transformers` fork in PR #26678 (after building from source), but even with the `legacy` flag, the problem remains, so I believe this is a separate issue. ### Expected behavior No extra empty string token/whitespace in decoded output
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27544/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27544/timeline
completed
null
null
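The token ids quoted in the comments above can be inspected directly to see where the standalone '▁' piece (id 29871) sits. A small inspection sketch, assuming access to the same meta-llama/Llama-2-7b-hf slow tokenizer used in the report:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf", use_fast=False)

ids = tok.encode("there are 4 people", add_special_tokens=False)
print(ids)                             # [727, 526, 29871, 29946, 2305] per the thread
print(tok.convert_ids_to_tokens(ids))  # ['▁there', '▁are', '▁', '4', '▁people']; the bare '▁' decodes to ''
print(tok.decode(ids))                 # "there are 4 people"

# Dropping the standalone '▁' (id 29871) glues the number onto the previous word.
print(tok.decode([727, 526, 29946, 2305]))  # "there are4 people"
```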
https://api.github.com/repos/huggingface/transformers/issues/27543
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27543/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27543/comments
https://api.github.com/repos/huggingface/transformers/issues/27543/events
https://github.com/huggingface/transformers/pull/27543
1,997,359,560
PR_kwDOCUB6oc5fqCjm
27,543
Generate: fix flaky tests
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,700
1,700
1,700
MEMBER
null
# What does this PR do? Fixes two flaky tests: 1. `tests/models/clvp/test_modeling_clvp.py::ClvpDecoderTest::test_constrained_beam_search_generate` was failing at a ~3% rate because the range for which we were sampling tokens to force in constrained beam search was based on the random input_ids and sometimes invalid. πŸ‘‰ fix = hard code the valid range based on `config.vocab_size` 2. `tests/models/marian/test_modeling_marian.py::MarianStandaloneDecoderModelTest::test_sample_generate` was failing at a ~0.1% rate because of a combination of poor logits processor setup + random input_ids πŸ‘‰ fix = don't set up the logits processor with a combination that may lead to no valid continuation being possible
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27543/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27543/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27543", "html_url": "https://github.com/huggingface/transformers/pull/27543", "diff_url": "https://github.com/huggingface/transformers/pull/27543.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27543.patch", "merged_at": 1700216101000 }
https://api.github.com/repos/huggingface/transformers/issues/27542
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27542/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27542/comments
https://api.github.com/repos/huggingface/transformers/issues/27542/events
https://github.com/huggingface/transformers/issues/27542
1,997,191,846
I_kwDOCUB6oc53Crqm
27,542
trainer fails when using torchrun for distributed run of transformer model wrapped with PEFT without using device_map
{ "login": "Ahmed-Roushdy", "id": 68569076, "node_id": "MDQ6VXNlcjY4NTY5MDc2", "avatar_url": "https://avatars.githubusercontent.com/u/68569076?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Ahmed-Roushdy", "html_url": "https://github.com/Ahmed-Roushdy", "followers_url": "https://api.github.com/users/Ahmed-Roushdy/followers", "following_url": "https://api.github.com/users/Ahmed-Roushdy/following{/other_user}", "gists_url": "https://api.github.com/users/Ahmed-Roushdy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Ahmed-Roushdy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ahmed-Roushdy/subscriptions", "organizations_url": "https://api.github.com/users/Ahmed-Roushdy/orgs", "repos_url": "https://api.github.com/users/Ahmed-Roushdy/repos", "events_url": "https://api.github.com/users/Ahmed-Roushdy/events{/privacy}", "received_events_url": "https://api.github.com/users/Ahmed-Roushdy/received_events", "type": "User", "site_admin": false }
[]
open
false
{ "login": "pacman100", "id": 13534540, "node_id": "MDQ6VXNlcjEzNTM0NTQw", "avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pacman100", "html_url": "https://github.com/pacman100", "followers_url": "https://api.github.com/users/pacman100/followers", "following_url": "https://api.github.com/users/pacman100/following{/other_user}", "gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}", "starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pacman100/subscriptions", "organizations_url": "https://api.github.com/users/pacman100/orgs", "repos_url": "https://api.github.com/users/pacman100/repos", "events_url": "https://api.github.com/users/pacman100/events{/privacy}", "received_events_url": "https://api.github.com/users/pacman100/received_events", "type": "User", "site_admin": false }
[ { "login": "pacman100", "id": 13534540, "node_id": "MDQ6VXNlcjEzNTM0NTQw", "avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pacman100", "html_url": "https://github.com/pacman100", "followers_url": "https://api.github.com/users/pacman100/followers", "following_url": "https://api.github.com/users/pacman100/following{/other_user}", "gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}", "starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pacman100/subscriptions", "organizations_url": "https://api.github.com/users/pacman100/orgs", "repos_url": "https://api.github.com/users/pacman100/repos", "events_url": "https://api.github.com/users/pacman100/events{/privacy}", "received_events_url": "https://api.github.com/users/pacman100/received_events", "type": "User", "site_admin": false } ]
[ "@muellerz @pacman100, could you help me with above issue?. I would appreciate your help", "Gentle ping @pacman100 @muellerzr ", "Hello, when using FSDP+PEFT, you need to have a different Auto wrap policy. See these lines on how to do that: https://github.com/pacman100/DHS-LLM-Workshop/blob/3c34a1215d7b5f73bb4a1ec122011b1de1470559/chat_assistant/sft/training/train.py#L168-L172" ]
1,700
1,707
null
NONE
null
### System Info - `transformers` version: 4.33.3 [1/1854] - Platform: Linux-5.8.0-59-generic-x86_64-with-glibc2.31 - Python version: 3.9.18 - Huggingface_hub version: 0.19.2 - Safetensors version: 0.4.0 - Accelerate version: 0.24.1 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: yes ### Who can help? @pacman100 ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Background about my target: I want to do distributed training of new architecture of LLaMA model wrapped with peft. Basically, I have compressed llama2 and modified its architecture. For saving the new model I am using torch.save and torch.load for loading the model. Hence, I am not able to use .pretrained_method for loading the model and setting device_map ='auto' for further distributed fine-tuning. I am not sure if there is a still a way for automatic using the device map. What i did, I tried to use torchrn for the distribuited run of the original LLaMA2 model wrapped with LoRA to see if i can use the same approach to my modified model. Steps: The code snippet ```ruby model_args, data_args, training_args = parser.parse_args_into_dataclasses() print(training_args) print('Start Loading Model') model = transformers.AutoModelForCausalLM.from_pretrained( model_args.model_name_or_path, cache_dir=training_args.cache_dir, ) config = LoraConfig( r=8, lora_alpha=16, lora_dropout=0.1, bias="none", task_type="CAUSAL_LM", ) model = get_peft_model(model, config) model.print_trainable_parameters() trainer = Trainer(model=model, tokenizer=tokenizer, args=training_args, **data_module) ``` scripet to run ``` torchrun \ --standalone \ --nnodes=1 \ --nproc-per-node=7 \ train.py \ --model_name_or_path "meta-llama/Llama-2-7b-hf" \ --bf16 True \ --output_dir checkpoints/dist-LLaMa-7B \ --num_train_epochs 3 \ --per_device_train_batch_size 1 \ --per_device_eval_batch_size 1 \ --gradient_accumulation_steps 8 \ --fsdp "full_shard auto_wrap" \ --fsdp_transformer_layer_cls_to_wrap 'LlamaDecoderLayer' \ --tf32 True ``` I got the following error ``` Start building the trainer module Traceback (most recent call last): File "/vault/aelkordy/NLP_projects/pruning/MAmmoTH/train.py", line 285, in <module> train() File "/vault/aelkordy/NLP_projects/pruning/MAmmoTH/train.py", line 278, in train trainer.train() File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/transformers/trainer.py", line 1556, in train return inner_training_loop( File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/transformers/trainer.py", line 1675, in _inner_training_loop self.model = self.accelerator.prepare(self.model) File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/accelerate/accelerator.py", line 1288, in prepare result = tuple( File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/accelerate/accelerator.py", line 1289, in <genexpr> self._prepare_one(obj, first_pass=True, device_placement=d) for obj, d in zip(args, device_placement) File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/accelerate/accelerator.py", line 1094, in _prepare_one return 
self.prepare_model(obj, device_placement=device_placement) File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/accelerate/accelerator.py", line 1464, in prepare_model model = FSDP(model, **kwargs) File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 391, in __init__ _auto_wrap(auto_wrap_kwargs, fsdp_kwargs, FullyShardedDataParallel) File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/fsdp/_wrap_utils.py", line 73, in _auto_wrap _recursive_wrap(**auto_wrap_kwargs, **fsdp_kwargs) File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/fsdp/wrap.py", line 370, in _recursive_wrap wrapped_child, num_wrapped_params = _recursive_wrap( File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/fsdp/wrap.py", line 370, in _recursive_wrap wrapped_child, num_wrapped_params = _recursive_wrap( File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/fsdp/wrap.py", line 370, in _recursive_wrap wrapped_child, num_wrapped_params = _recursive_wrap( [Previous line repeated 2 more times] File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/fsdp/wrap.py", line 388, in _recursive_wrap return _wrap(module, wrapper_cls, **kwargs), nonwrapped_numel File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/fsdp/wrap.py", line 317, in _wrap return wrapper_cls(module, **kwargs) File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 408, in __init__ _init_param_handle_from_module( File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/fsdp/_init_utils.py", line 429, in _init_param_handle_from_module _init_param_handle_from_params(state, managed_params, fully_sharded_module) File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/fsdp/_init_utils.py", line 525, in _init_param_handle_from_params handle = FlatParamHandle( File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/fsdp/flat_param.py", line 366, in __init__ self._init_flat_param(params, fully_sharded_module, use_orig_params) File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/fsdp/flat_param.py", line 440, in _init_flat_param raise ValueError( ValueError: `FlatParameter` requires uniform `requires_grad` Traceback (most recent call last): File "/vault/aelkordy/NLP_projects/pruning/MAmmoTH/train.py", line 285, in <module> Traceback (most recent call last): train() File "/vault/aelkordy/NLP_projects/pruning/MAmmoTH/train.py", line 285, in <module> File "/vault/aelkordy/NLP_projects/pruning/MAmmoTH/train.py", line 278, in train trainer.train() File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/transformers/trainer.py", line 1556, in train train() File "/vault/aelkordy/NLP_projects/pruning/MAmmoTH/train.py", line 278, in train trainer.train() File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/transformers/trainer.py", line 1556, in train return inner_training_loop( File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/transformers/trainer.py", line 1675, in _inner_training_loop return inner_training_loop( File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/transformers/trainer.py", line 1675, in _inner_training_loop self.model = 
self.accelerator.prepare(self.model) File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/accelerate/accelerator.py", line 1288, in prepare result = tuple( File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/accelerate/accelerator.py", line 1289, in <genexpr> self.model = self.accelerator.prepare(self.model) File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/accelerate/accelerator.py", line 1288, in prepare result = tuple( File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/accelerate/accelerator.py", line 1289, in <genexpr> self._prepare_one(obj, first_pass=True, device_placement=d) for obj, d in zip(args, device_placement) File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/accelerate/accelerator.py", line 1094, in _prepare_one self._prepare_one(obj, first_pass=True, device_placement=d) for obj, d in zip(args, device_placement) File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/accelerate/accelerator.py", line 1094, in _prepare_one return self.prepare_model(obj, device_placement=device_placement) File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/accelerate/accelerator.py", line 1464, in prepare_model return self.prepare_model(obj, device_placement=device_placement) File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/accelerate/accelerator.py", line 1464, in prepare_model model = FSDP(model, **kwargs) File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 391, in __init__ _auto_wrap(auto_wrap_kwargs, fsdp_kwargs, FullyShardedDataParallel) File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/fsdp/_wrap_utils.py", line 73, in _auto_wrap model = FSDP(model, **kwargs)_recursive_wrap(**auto_wrap_kwargs, **fsdp_kwargs) File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 391, in __init__ File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/fsdp/wrap.py", line 370, in _recursive_wrap wrapped_child, num_wrapped_params = _recursive_wrap( File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/fsdp/wrap.py", line 370, in _recursive_wrap _auto_wrap(auto_wrap_kwargs, fsdp_kwargs, FullyShardedDataParallel) File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/fsdp/_wrap_utils.py", line 73, in _auto_wrap _recursive_wrap(**auto_wrap_kwargs, **fsdp_kwargs) File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/fsdp/wrap.py", line 370, in _recursive_wrap wrapped_child, num_wrapped_params = _recursive_wrap( File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/fsdp/wrap.py", line 370, in _recursive_wrap wrapped_child, num_wrapped_params = _recursive_wrap( File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/fsdp/wrap.py", line 370, in _recursive_wrap wrapped_child, num_wrapped_params = _recursive_wrap( [Previous line repeated 2 more times] File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/fsdp/wrap.py", line 388, in _recursive_wrap wrapped_child, num_wrapped_params = _recursive_wrap( File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/fsdp/wrap.py", line 370, in _recursive_wrap return _wrap(module, wrapper_cls, **kwargs), nonwrapped_numel 
File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/fsdp/wrap.py", line 317, in _wrap wrapped_child, num_wrapped_params = _recursive_wrap( [Previous line repeated 2 more times] File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/fsdp/wrap.py", line 388, in _recursive_wrap return wrapper_cls(module, **kwargs) File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 408, in __init__ return _wrap(module, wrapper_cls, **kwargs), nonwrapped_numel File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/fsdp/wrap.py", line 317, in _wrap _init_param_handle_from_module( File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/fsdp/_init_utils.py", line 429, in _init_param_handle_from_module return wrapper_cls(module, **kwargs) File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 408, in __init__ _init_param_handle_from_params(state, managed_params, fully_sharded_module) File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/fsdp/_init_utils.py", line 525, in _init_param_handle_from_params _init_param_handle_from_module( File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/fsdp/_init_utils.py", line 429, in _init_param_handle_from_module handle = FlatParamHandle( File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/fsdp/flat_param.py", line 366, in __init__ _init_param_handle_from_params(state, managed_params, fully_sharded_module) File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/fsdp/_init_utils.py", line 525, in _init_param_handle_from_params self._init_flat_param(params, fully_sharded_module, use_orig_params) File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/fsdp/flat_param.py", line 440, in _init_flat_param handle = FlatParamHandle( File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/fsdp/flat_param.py", line 366, in __init__ raise ValueError( ValueError: `FlatParameter` requires uniform `requires_grad` self._init_flat_param(params, fully_sharded_module, use_orig_params) File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/fsdp/flat_param.py", line 440, in _init_flat_param raise ValueError( ValueError: `FlatParameter` requires uniform `requires_grad` ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 1854106) of binary: /home/aelkordy/.conda/envs/mamoth/bin/python Traceback (most recent call last): File "/home/aelkordy/.conda/envs/mamoth/bin/torchrun", line 8, in <module> sys.exit(main()) File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper return f(*args, **kwargs) File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/run.py", line 794, in main run(args) File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/run.py", line 785, in run elastic_launch( File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/launcher/api.py", line 134, in __call__ return launch_agent(self._config, self._entrypoint, list(args)) File 
"/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/launcher/api.py", line 250, in launch_agent raise ChildFailedError( torch.distributed.elastic.multiprocessing.errors.ChildFailedError: ============================================================ train.py FAILED ------------------------------------------------------------ Failures: [1]: time : 2023-11-16_10:52:13 host : g1lmd1 rank : 1 (local_rank: 1) exitcode : 1 (pid: 1854107) error_file: <N/A> traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html [2]: time : 2023-11-16_10:52:13 host : g1lmd1 rank : 2 (local_rank: 2) exitcode : 1 (pid: 1854108) error_file: <N/A> traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html ------------------------------------------------------------ Root Cause (first observed failure): [0]: time : 2023-11-16_10:52:13 host : g1lmd1 rank : 0 (local_rank: 0) exitcode : 1 (pid: 1854106) error_file: <N/A> traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html ``` ### Expected behavior I was expecting successful run of the distribuited training of LLaMA2 with PEFT similar to LLaMA2 without parameter efficient finetuning methods
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27542/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27542/timeline
null
null
null
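The workaround referenced in the last comment above (a PEFT-aware FSDP auto wrap policy) can be applied after building the Trainer. A sketch adapted from the linked training script, continuing the code snippet from the report; it assumes the installed peft and accelerate versions expose fsdp_auto_wrap_policy and Trainer.accelerator:

```python
from peft.utils.other import fsdp_auto_wrap_policy

trainer = Trainer(model=model, tokenizer=tokenizer, args=training_args, **data_module)

fsdp_plugin = getattr(trainer.accelerator.state, "fsdp_plugin", None)
if fsdp_plugin is not None:
    # Wrap the trainable LoRA modules separately from the frozen base layers so each
    # FSDP FlatParameter has uniform requires_grad, avoiding the ValueError above.
    fsdp_plugin.auto_wrap_policy = fsdp_auto_wrap_policy(model)
```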
https://api.github.com/repos/huggingface/transformers/issues/27541
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27541/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27541/comments
https://api.github.com/repos/huggingface/transformers/issues/27541/events
https://github.com/huggingface/transformers/pull/27541
1,997,060,916
PR_kwDOCUB6oc5fpAZm
27,541
Disable docker image build job `latest-pytorch-amd` for now
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @echarlaix (FYI)." ]
1,700
1,700
1,700
COLLABORATOR
null
# What does this PR do? This is currently failing, and we need Guillaume's help from infra team. Currently, if `setup.py` is modified, this job would be triggered and then fail.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27541/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27541/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27541", "html_url": "https://github.com/huggingface/transformers/pull/27541", "diff_url": "https://github.com/huggingface/transformers/pull/27541.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27541.patch", "merged_at": 1700150446000 }
https://api.github.com/repos/huggingface/transformers/issues/27540
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27540/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27540/comments
https://api.github.com/repos/huggingface/transformers/issues/27540/events
https://github.com/huggingface/transformers/pull/27540
1,997,046,195
PR_kwDOCUB6oc5fo9KV
27,540
Generate: improve assisted generation tests
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@amyeroberts `is_decoder` is a poorly named flag πŸ˜… contrarily to `is_encoder_decoder`, which controls many aspects in generation, `is_decoder` only controls one thing AFAIK -- whether to enable `use_cache` ([example](https://github.com/huggingface/transformers/blob/651408a077f842e76e75bfc7d02b8ac38eeb6480/src/transformers/models/bert/modeling_bert.py#L953)) and pipe the cache around in encoders with a LM Head.\r\n\r\nIt is also not mutually exclusive with `is_encoder_decoder` (it should be IMO πŸ‘€)\r\n\r\nAll tests that require caching, such as the assisted generation ones, have to set `model.config.is_decoder = True`. Otherwise, the tests will fail in the encoder with LM Heads (see image below)\r\n![Screenshot 2023-11-16 at 18 07 27](https://github.com/huggingface/transformers/assets/12240844/b57ab460-32b1-41c4-b7a6-94853db0a9ea)\r\n\r\n", "@gante Thanks for explaining! I thought they were mutually exclusive " ]
1,700
1,700
1,700
MEMBER
null
# What does this PR do? Strengthens the test suite for assisted generation. With these modifications, previously found API problems will be properly caught in advance. ## Post mortem ### Why weren't API problems caught before? Assisted generation has two loops: the loop to obtain the candidate tokens from the assistant model (inner loop), and the loop to generate the final tokens from the main model (outer loop). Both loops are slightly different depending on whether the main model accepts the matches or not -- there are different code paths depending on whether `n_matches > 0` or not. The following cases were being tested and had no API issues: 1. `n_matches == 0` 2. `n_matches > 0`, but we only run 1 iteration of the outer loop πŸ‘‰ We weren't explicitly testing the case where `n_matches > 0` AND we ran more than 1 outer loop iteration. ### If we weren't testing that case, why was the CI randomly red? Each individual test had a ~97% chance of being green. The (random) assistant model was building the candidate sequence from the most likely tokens from its vocabulary (size = 99), and the main model was comparing the candidate sequence against sampling from its logits. Most of the times, `n_matches == 0`, so the test passed. However, sometimes we had `n_matches > 0`, but not to the point where it was enough to complete assisted generation in 1 outer loop. πŸ‘‰ There was a low chance (per test) of hitting the failing case, resulting in inconsistent CI failures
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27540/reactions", "total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27540/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27540", "html_url": "https://github.com/huggingface/transformers/pull/27540", "diff_url": "https://github.com/huggingface/transformers/pull/27540.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27540.patch", "merged_at": 1700160860000 }
https://api.github.com/repos/huggingface/transformers/issues/27539
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27539/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27539/comments
https://api.github.com/repos/huggingface/transformers/issues/27539/events
https://github.com/huggingface/transformers/pull/27539
1,996,944,063
PR_kwDOCUB6oc5fomfX
27,539
4D `attention_mask` support
{ "login": "poedator", "id": 24738311, "node_id": "MDQ6VXNlcjI0NzM4MzEx", "avatar_url": "https://avatars.githubusercontent.com/u/24738311?v=4", "gravatar_id": "", "url": "https://api.github.com/users/poedator", "html_url": "https://github.com/poedator", "followers_url": "https://api.github.com/users/poedator/followers", "following_url": "https://api.github.com/users/poedator/following{/other_user}", "gists_url": "https://api.github.com/users/poedator/gists{/gist_id}", "starred_url": "https://api.github.com/users/poedator/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/poedator/subscriptions", "organizations_url": "https://api.github.com/users/poedator/orgs", "repos_url": "https://api.github.com/users/poedator/repos", "events_url": "https://api.github.com/users/poedator/events{/privacy}", "received_events_url": "https://api.github.com/users/poedator/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Generally, I don't have a problem with allowing to pass 4D attention masks! @poedator can you explain your use case a little bit for why you want to pass 4d attention masks? ", "@patrickvonplaten \r\nhere is a use example:\r\nSuppose one does beam search and has a starting prefix with tokens `11 22 33` in 4 beams. Now he needs to check candidates with tokens 44, 55, 66, and 77. Present code would pack the beams into a batch of shape (4, 4):\r\n\r\n```\r\n11 22 33 44\r\n11 22 33 55\r\n11 22 33 66\r\n11 22 33 77\r\n```\r\nand run it with mask of all ones, passing such mask in 2D which gets expanded internally to 4D.\r\n\r\nThe proposed way would be to have a batch shaped (1, 7):\r\n`11 22 33 44 55 66 77`\r\nand the 4d mask would have a shape (1, 1, 4, 7) and look like this:\r\n```\r\n1 1 1 1 0 0 0 \r\n1 1 1 0 1 0 0 \r\n1 1 1 0 0 1 0 \r\n1 1 1 0 0 0 1\r\n\r\nwith a positions tensor of [0, 1, 2, 3, 3, 3, 3]\r\n```\r\n\r\nAt subsequent beam search iterations the mask will reflect which past tokens should the new tokens attend to.\r\nSuch mask needs to pass intact. \r\nThis saves memory for past_key_values cache and thus allows beam search and other similar inference (like SpecInfer) of longer sequences with limited VRAM.\r\n\r\nAnother use case is kindly proposed by @UniverseFly below.", "Very interesting PR! Would this feature also enable SFT packing as mentioned in https://github.com/huggingface/trl/issues/805?\r\n\r\n\r\n![](https://user-images.githubusercontent.com/26831266/272305004-93c690a8-7e9b-40ad-885f-d530996aa109.png)\r\n", "> Very interesting PR! Would this feature also enable SFT packing as mentioned in [huggingface/trl#805](https://github.com/huggingface/trl/issues/805)?\r\nSure it would. Just have a separate packing function somewhere - it is beyond the scope of this PR. \r\nBesides, one should be able to pack multiple series of sequences into a batch this way. \r\n", "I tried this branch and the `model.forward` seems to work fairly well, but `model.generate` raises errors with the 4D attention mask (with Llama). After some checking, it might be due to the missing logic here:\r\n\r\nhttps://github.com/huggingface/transformers/blob/53a7e7750ff088ffbd7d96c5aeed122cc96b6866/src/transformers/models/llama/modeling_llama.py#L1087-L1124", "Generate looks like a harder challenge for your methods - each individual sequence will be expanding, thus you'd need to reorder past_kv and mask at each step. I believe that to implement it, you'd need to write custom `prepare_inputs_for_generation()`, and possibly some more logic. \r\nI'll be happy to test drive it. \r\nOn my side I intend to write a PR for more efficient beam search after this PR merges.", "Hi, @ArthurZucker \r\nI limited this PR only to the mask code, proceeding with the tests. \r\n\r\nSo far I have demo in [Colab with monkey patch based on this PR.](https://colab.research.google.com/drive/1PMcLNjjjK0Zwgg6rQJdutwolfIVCZfs6?usp=sharing) It shows a negligible difference in logits obtained the old and new ways. I dent to believe that this is a rounding error somewhere. Would you support it as the basis for the tests? \r\nBTW, where to put this new test?\r\n\r\nHi, @UniverseFly ,\r\nTry the monkey patch from the [Colab notebook](https://colab.research.google.com/drive/1PMcLNjjjK0Zwgg6rQJdutwolfIVCZfs6?usp=sharing) - see if it works to implement your idea. ", "Thanks for this PR and the demo. It is very helpful in trying the [SpecInfer paper](https://arxiv.org/abs/2305.09781). 
Also in another recent progress on speculative decoding [look ahead decoding](https://lmsys.org/blog/2023-11-21-lookahead-decoding/) Fig 5, this PR will also be useful.", "Reviewing now πŸ˜‰ ", "- squashed all earlier commits into one\r\n- added tests. Made a separate class to test with full model loading.\r\n- added support for sdpa (following https://github.com/huggingface/transformers/pull/26572)\r\n- `test_modeling_utils.py::AttentionMaskTester` and `::TestAttentionImplementation` tests pass\r\n- new tests pass\r\n\r\n@ArthurZucker, please review. Hopefully it is ready to merge.", "@ArthurZucker, pls give me a hint about `NameError: name 'torch' is not defined` error. Apparently a decorator or import is missing, but can't figure it out. The import and decorators seem in place...", "> Hi, @ArthurZucker I limited this PR only to the mask code, proceeding with the tests.\r\n> \r\n> So far I have demo in [Colab with monkey patch based on this PR.](https://colab.research.google.com/drive/1PMcLNjjjK0Zwgg6rQJdutwolfIVCZfs6?usp=sharing) It shows a negligible difference in logits obtained the old and new ways. I dent to believe that this is a rounding error somewhere. Would you support it as the basis for the tests? BTW, where to put this new test?\r\n> \r\n> Hi, @UniverseFly , Try the monkey patch from the [Colab notebook](https://colab.research.google.com/drive/1PMcLNjjjK0Zwgg6rQJdutwolfIVCZfs6?usp=sharing) - see if it works to implement your idea.\r\n\r\n@poedator, Did you ever get the 4D beam search working with the monkey patch in the Colab notebook? I would be very interested if you were able to get this working already! (especially depth first kv cache updating)", "> @poedator, Did you ever get the 4D beam search working with the monkey patch in the Colab notebook? I would be very interested if you were able to get this working already! (especially depth first kv cache updating)\r\nHi, @Codys12! Thank you for the interest to the PR.\r\nI made a working implementation of memory-efficient beam search using this 4D mask. Got as high as 256 beams of 32 tokens with Llama 7 on A100 and this is far from the limit. There is no complete demo to share, but [this gist ](https://gist.github.com/poedator/c754247f3dca8f70b710186c9bc37032)has the beam search part of my code. Hope that you find it useful.\r\n", "having a look!", "> Sorry checking the test there are duplicate markers πŸ˜… not sure they are needed no?\r\n\r\nEarlier, I got frustrated with failing commits and added decorators everywhere. Now most of them are gone and it still passes CI checks.", "Thanks for the contribution! πŸ€— ", "@ArthurZucker , would you want to publish a blog post in HF blog with 4d attention use cases?\r\nI propose to include:\r\n- memory efficient beam search (my example, from tests)\r\n- SFT packing as mentioned in https://github.com/huggingface/trl/issues/805, suggested by @UniverseFly \r\n- [look ahead decoding](https://lmsys.org/blog/2023-11-21-lookahead-decoding/), suggested by @KexinFeng \r\n", "If you want feel free to do so! 
πŸ€— ", "Note that not all paths of this can be `torch.compile`d:\r\n\r\nThe following fails due to `torch.all(attention_mask == 1)`.\r\n\r\n```\r\nimport torch\r\nimport torch.nn as nn\r\nfrom transformers.modeling_attn_mask_utils import _prepare_4d_causal_attention_mask_for_sdpa\r\n\r\nclass Model(nn.Module):\r\n def forward(self, inputs_embeds):\r\n batch_size, seq_length, _ = inputs_embeds.shape\r\n past_key_values_length = 10\r\n attention_mask = torch.tensor([1.])\r\n attention_mask = _prepare_4d_causal_attention_mask_for_sdpa(\r\n attention_mask, (batch_size, seq_length), inputs_embeds, past_key_values_length\r\n )\r\n return attention_mask\r\n\r\nmodel = Model()\r\nmodel = torch.compile(model, fullgraph=True)\r\nmodel(torch.ones([1,5, 32]))\r\n```", "@PhilJd,\r\n`torch.all(attention_mask == 1)` was present even before this PR.\r\n[see this line](https://github.com/huggingface/transformers/pull/27539/files#diff-b14be70e49c04e876d7cd745948bf1bee279bc0d6f2a71b1a18e3b5aff293bd1L343)\r\nit comes form https://github.com/huggingface/transformers/pull/26572\r\n\r\nhave you tested the preceding commit?", "Ah sorry, just looked at the blame - yeah, the previous commit fails as well @fxmarty .", "`_prepare_4d_causal_attention_mask` is applied only if `self._use_flash_attention_2` is False (https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/modeling_llama.py#L1039). Is it because 4D attention mask and flash attention 2 are not compatible?", "The function description should be updated to avoid confusion as `attention_mask` is not necessarily 2D now (https://github.com/huggingface/transformers/blob/main/src/transformers/modeling_attn_mask_utils.py#L290)", "@shentianxiao , thank you for your attention to the 4D attention!\r\n\r\n> `_prepare_4d_causal_attention_mask` is applied only if `self._use_flash_attention_2` is False (https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/modeling_llama.py#L1039). Is it because 4D attention mask and flash attention 2 are not compatible?\r\n\r\nit is not about compatibility, rather the flash_attention_2 code contrasted original mask vs modified mask coming from `_prepare_4d_causal_attention_mask()`\r\n\r\n> The function description should be updated to avoid confusion as attention_mask is not necessarily 2D now (https://github.com/huggingface/transformers/blob/main/src/transformers/modeling_attn_mask_utils.py#L290)\r\n\r\nI agree, that the original mask may also be 4d-shaped now. I just started PR https://github.com/huggingface/transformers/pull/28151 with documentation updates - will make edits there. Hopefully the maintainers responsible for `flash_attention_2` will verify it.", "**IMPORTANT: this PR makes changes that can only used by few classes of models**\r\nrequirements to use:\r\n- have `position_ids` argument in `.forward()` method\r\n- use `modeling_attn_mask_utils.py::_prepare_4d_attention_mask()` function for 4d mask generation\r\n- \r\n\r\nas of 20.12.2023, only a handful (under 20) of transformers model classes meet these criteria. Most of these classes are multimodal, which may require their own use cases for 4D masks. The pure language modelling classes fit to use the 4D mask changes from this PR are only `LlamaModel`, `FalconModel` and `XGLMModel`.", "I made a small blog post based on this PR. \r\nhttps://huggingface.co/blog/poedator/4d-masks\r\nBig thanks to everyone who contributed and commented!", "Thanks for the amazing addition!! 
This is a great new feature.\r\n\r\nJust wanted to ask a question to make sure I am using it properly. In the code [here](https://github.com/poedator/transformers/blob/d80cb9823cc8b774bb4f41ac59579edca8f79ff0/src/transformers/modeling_attn_mask_utils.py#L357), it looks like the 4D masks are expected to have shape `[batch_size, 1, seq_len, seq_len].` (I am inferring that the `1` in the `expected_shape` is the heads dimension so that the same mask is broadcast to all heads.) In the [blog post](https://huggingface.co/blog/poedator/4d-masks), it describes the attention masks as having shape `[heads, batch_size, input_ids_length, total_sequence_length]`.\r\n\r\nMy question is: **are the `heads` and `batch_size` dimensions transposed in the blog post**? It seems like we are actually expected to provide 4D masks where the first axis is batch size, the second is heads. The blog post implies the reverse. Since I am sometimes using a batch size of 1 in testing, this works either way, but I want to use it correctly and don't see the \"proper\" shape documented anywhere (perhaps it is documented somewhere and I missed it!).\r\n\r\nThanks!", "@jpgard ,\r\nyou are correct, there was an error in my blog post.\r\nChanged it to `[batch_size, heads, input_ids_length, total_sequence_length]`\r\nthank you for raising this!", "Great, thanks for the quick reply and for your hard work on this @poedator !!", "Has this been tested with flash attention 2? Works great for me without flash attention 2, but when using flash attention I get lots of messages of the form `../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [202,0,0], thread: [105,0,0] Assertion idx_dim >= 0 && idx_dim < index_size && \"index out of bounds\" failed.`\r\n\r\nLower chunk of the stack trace posted below.\r\n\r\n```\r\n File \"/admin/home-jpgard/miniconda3/envs/rtfm/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py\", line 798, in forward\r\n hidden_states, self_attn_weights, present_key_value = self.self_attn(\r\n File \"/admin/home-jpgard/miniconda3/envs/rtfm/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1511, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n File \"/admin/home-jpgard/miniconda3/envs/rtfm/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1520, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/admin/home-jpgard/miniconda3/envs/rtfm/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py\", line 549, in forward\r\n attn_output = self._flash_attention_forward(\r\n File \"/admin/home-jpgard/miniconda3/envs/rtfm/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py\", line 592, in _flash_attention_forward\r\n query_states, key_states, value_states, indices_q, cu_seq_lens, max_seq_lens = self._upad_input(\r\n File \"/admin/home-jpgard/miniconda3/envs/rtfm/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py\", line 631, in _upad_input\r\n query_layer = index_first_axis(\r\n File \"/admin/home-jpgard/miniconda3/envs/rtfm/lib/python3.8/site-packages/torch/autograd/function.py\", line 553, in apply\r\n return super().apply(*args, **kwargs) # type: ignore[misc]\r\n File \"/admin/home-jpgard/miniconda3/envs/rtfm/lib/python3.8/site-packages/flash_attn/bert_padding.py\", line 17, in forward\r\n return torch.gather(\r\nRuntimeError: CUDA error: device-side assert triggered\r\n```\r\n\r\nWould be great to be able to use FA2 with this PR as the speedups are much larger as 
sequence length grows -- so FA2 seems like the perfect accompaniment to e.g. \"packed\" training sequences enabled by this PR." ]
1,700
1,708
1,702
CONTRIBUTOR
null
This is implementation for feature request from #27493 [custom 4d attention_mask as transformers .forward() argument](https://github.com/huggingface/transformers/issues/27493). 1) Allowing 4d attention masks to pass thru `_prepare_4d_causal_attention_mask()` intact 2) support in OPT (need to build custom `positions` tensor) 3) support in Llama (while Llama can accept custom `position_ids`, I added code to generate them internally) The benefits of the code are to enable more memory-efficient text generation with tree-based parallel decoding as described in [SpecInfer paper](https://arxiv.org/abs/2305.09781) Tagging: @gante (generate) @patrickvonplaten (masks) @younesbelkada @ArthurZucker (text models) This PR is WiP: - Will add tests - Need advice on how to handle models beyond covered Llama and OPT - May add example for memory-efficient generation **IMPORTANT: this PR makes changes that can only used by few classes of models** requirements to use: - have `position_ids` argument in `.forward()` method - use `modeling_attn_mask_utils.py::_prepare_4d_attention_mask()` function for 4d mask generation - as of 20.12.2023, only a handful (under 20) of transformers model classes meet these criteria. Most of these classes are multimodal, which may require their own use cases for 4D masks. The pure language modelling classes fit to use the 4D mask changes from this PR are only `LlamaModel`, `FalconModel` and `XGLMModel`.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27539/reactions", "total_count": 4, "+1": 0, "-1": 0, "laugh": 0, "hooray": 2, "confused": 0, "heart": 0, "rocket": 1, "eyes": 1 }
https://api.github.com/repos/huggingface/transformers/issues/27539/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27539", "html_url": "https://github.com/huggingface/transformers/pull/27539", "diff_url": "https://github.com/huggingface/transformers/pull/27539.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27539.patch", "merged_at": 1702807684000 }
https://api.github.com/repos/huggingface/transformers/issues/27538
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27538/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27538/comments
https://api.github.com/repos/huggingface/transformers/issues/27538/events
https://github.com/huggingface/transformers/issues/27538
1,996,781,817
I_kwDOCUB6oc53BHj5
27,538
New Model Architecture
{ "login": "LegallyCoder", "id": 119312866, "node_id": "U_kgDOBxyR4g", "avatar_url": "https://avatars.githubusercontent.com/u/119312866?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LegallyCoder", "html_url": "https://github.com/LegallyCoder", "followers_url": "https://api.github.com/users/LegallyCoder/followers", "following_url": "https://api.github.com/users/LegallyCoder/following{/other_user}", "gists_url": "https://api.github.com/users/LegallyCoder/gists{/gist_id}", "starred_url": "https://api.github.com/users/LegallyCoder/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LegallyCoder/subscriptions", "organizations_url": "https://api.github.com/users/LegallyCoder/orgs", "repos_url": "https://api.github.com/users/LegallyCoder/repos", "events_url": "https://api.github.com/users/LegallyCoder/events{/privacy}", "received_events_url": "https://api.github.com/users/LegallyCoder/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
closed
false
null
[]
[ "And i completed the architecture . But i must edit the code. And i have few questions. \r\nDo we write the task classes or does someone else?\r\nWhat features should we pay attention to in the files?\r\nSorry, this will be my first PR.", "Hi @LegallyCoder, thanks for raising an issue! \r\n\r\nThis is a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.\r\n\r\nFor instructions on how to add a model and use a model on the hub, please refer to our documentation: \r\n* https://huggingface.co/docs/transformers/model_sharing\r\n* https://huggingface.co/docs/transformers/custom_models\r\n\r\nIf you've followed these instructions and there is an error, please raise an issue and follow the [issue template](https://github.com/huggingface/transformers/blob/main/.github/ISSUE_TEMPLATE/bug-report.yml). ", "Sorry, i didnt know this. You are right. I closed this isue." ]
1,700
1,700
1,700
NONE
null
### Model description I am writing a new model architecture. Its implementation is almost complete. I'm wondering how to integrate it into the hub. I forked the transformers lib and tried copying a model architecture, but it didn't work and it gave too many errors. How are things going here? ### Open source status - [ ] The model implementation is available - [ ] The model weights are available ### Provide useful links for the implementation _No response_
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27538/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27538/timeline
completed
null
null
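The replies in the record above point to the model-sharing and custom-models guides. A condensed, hypothetical sketch of the pattern those guides describe; `MyConfig`/`MyModel` stand in for the user's own classes and are not real transformers classes.

```python
import torch.nn as nn
from transformers import PretrainedConfig, PreTrainedModel, AutoConfig, AutoModel

class MyConfig(PretrainedConfig):
    model_type = "my-model"

    def __init__(self, hidden_size=64, **kwargs):
        self.hidden_size = hidden_size
        super().__init__(**kwargs)

class MyModel(PreTrainedModel):
    config_class = MyConfig

    def __init__(self, config):
        super().__init__(config)
        self.layer = nn.Linear(config.hidden_size, config.hidden_size)

    def forward(self, inputs_embeds):
        return self.layer(inputs_embeds)

# Make the architecture loadable through the Auto* API locally.
AutoConfig.register("my-model", MyConfig)
AutoModel.register(MyConfig, MyModel)

model = MyModel(MyConfig())
model.save_pretrained("my-model-checkpoint")  # or model.push_to_hub("user/my-model")
```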
https://api.github.com/repos/huggingface/transformers/issues/27537
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27537/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27537/comments
https://api.github.com/repos/huggingface/transformers/issues/27537/events
https://github.com/huggingface/transformers/issues/27537
1,996,716,822
I_kwDOCUB6oc53A3sW
27,537
Allow script tracing DINOv2
{ "login": "Danil328", "id": 11178882, "node_id": "MDQ6VXNlcjExMTc4ODgy", "avatar_url": "https://avatars.githubusercontent.com/u/11178882?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Danil328", "html_url": "https://github.com/Danil328", "followers_url": "https://api.github.com/users/Danil328/followers", "following_url": "https://api.github.com/users/Danil328/following{/other_user}", "gists_url": "https://api.github.com/users/Danil328/gists{/gist_id}", "starred_url": "https://api.github.com/users/Danil328/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Danil328/subscriptions", "organizations_url": "https://api.github.com/users/Danil328/orgs", "repos_url": "https://api.github.com/users/Danil328/repos", "events_url": "https://api.github.com/users/Danil328/events{/privacy}", "received_events_url": "https://api.github.com/users/Danil328/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I have exception now:\r\n<img width=\"1153\" alt=\"image\" src=\"https://github.com/huggingface/transformers/assets/11178882/ce61c11a-9247-4045-8da4-5fdd9d3bb899\">\r\n", "Hi @Danil328, thanks for raising this issue! \r\n\r\nCould you make sure to follow the [issue template](https://github.com/huggingface/transformers/blob/main/.github/ISSUE_TEMPLATE/bug-report.yml) and include details of your running environment and a minimal reproducible snippet? \r\n\r\nFrom the error it looks like the `scale_factor` values being passed to `interpolate` is a NoneType.", "Same problem in facebookresearch - https://github.com/facebookresearch/dinov2/issues/102\r\n### Reproduction\r\n```python\r\nimport torch\r\nfrom transformers import AutoImageProcessor, AutoModel\r\nfrom PIL import Image\r\nimport requests\r\n\r\nurl = 'http://images.cocodataset.org/val2017/000000039769.jpg'\r\nimage = Image.open(requests.get(url, stream=True).raw)\r\n\r\nprocessor = AutoImageProcessor.from_pretrained('facebook/dinov2-base')\r\nmodel = AutoModel.from_pretrained('facebook/dinov2-base')\r\n\r\ninputs = processor(images=image, return_tensors=\"pt\")\r\noutputs = model(**inputs)\r\nlast_hidden_states = outputs.last_hidden_state\r\n\r\nwith torch.no_grad():\r\n example_input = torch.rand(1, 3, 224, 224, dtype=torch.float32, device=\"cuda\")\r\n traced_model = torch.jit.trace(model.cuda(), example_input) # fails here\r\n```\r\n### Error\r\n<img width=\"1162\" alt=\"image\" src=\"https://github.com/huggingface/transformers/assets/11178882/50aba4d4-5ad4-4398-9a26-5e63d337c61f\">\r\n\r\n### Expected behavior\r\nSuccess\r\n\r\n### Enviroment\r\n`bash\r\npython=3.8\r\ntorch==2.0.1\r\ntransformers==4.35.0\r\n`", "@Danil328 - thanks for providing the snippet! I've opened a PR which should resolve the issue " ]
1,700
1,700
1,700
NONE
null
I found a PR to dinov2: "Pass scale factor as a tuple of floats to F.interpolate() to allow tracing." https://github.com/facebookresearch/dinov2/pull/247 https://github.com/huggingface/transformers/blob/85fde09c97213bf7e8625f83096bb2a9e183f987/src/transformers/models/dinov2/modeling_dinov2.py#L104C19-L104C19
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27537/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27537/timeline
completed
null
null
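A simplified illustration of the idea in the dinov2 PR linked above: cast the interpolation scale factors to plain Python floats so `torch.jit.trace` does not turn them into tensors. Variable and function names are simplified and do not match `modeling_dinov2.py` exactly.

```python
import math
import torch
import torch.nn.functional as F

def resize_patch_pos_embed(patch_pos_embed, height, width, num_positions, patch_size=14):
    # patch_pos_embed: [1, num_positions, dim] -> interpolate on a 2D grid
    dim = patch_pos_embed.shape[-1]
    grid = int(math.sqrt(num_positions))
    patch_pos_embed = patch_pos_embed.reshape(1, grid, grid, dim).permute(0, 3, 1, 2)

    scale = (
        float(height // patch_size / grid),  # plain floats keep jit.trace happy
        float(width // patch_size / grid),
    )
    patch_pos_embed = F.interpolate(
        patch_pos_embed, scale_factor=scale, mode="bicubic", align_corners=False
    )
    return patch_pos_embed.permute(0, 2, 3, 1).reshape(1, -1, dim)
```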
https://api.github.com/repos/huggingface/transformers/issues/27536
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27536/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27536/comments
https://api.github.com/repos/huggingface/transformers/issues/27536/events
https://github.com/huggingface/transformers/issues/27536
1,996,618,828
I_kwDOCUB6oc53AfxM
27,536
Adding a model
{ "login": "Ranitbag007", "id": 133197492, "node_id": "U_kgDOB_ButA", "avatar_url": "https://avatars.githubusercontent.com/u/133197492?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Ranitbag007", "html_url": "https://github.com/Ranitbag007", "followers_url": "https://api.github.com/users/Ranitbag007/followers", "following_url": "https://api.github.com/users/Ranitbag007/following{/other_user}", "gists_url": "https://api.github.com/users/Ranitbag007/gists{/gist_id}", "starred_url": "https://api.github.com/users/Ranitbag007/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ranitbag007/subscriptions", "organizations_url": "https://api.github.com/users/Ranitbag007/orgs", "repos_url": "https://api.github.com/users/Ranitbag007/repos", "events_url": "https://api.github.com/users/Ranitbag007/events{/privacy}", "received_events_url": "https://api.github.com/users/Ranitbag007/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
open
false
null
[]
[ "Hi @Ranitbag007, \r\n\r\nIf you're wanting general instructions, you can find the information in our documentation. [This section details](https://huggingface.co/docs/transformers/v4.15.0/model_sharing#add-new-files-to-your-model-repo) adding a tokenizer to the hub. [This page](https://huggingface.co/docs/transformers/v4.35.2/en/custom_models#writing-a-custom-configuration) details how to add a model to be loaded through the `AutoModel` API. The equivalent for tokenizers is having the tokenizer inherit from `PretrainedTokenizer`. \r\n\r\nIf you've tried this and there's an issue, you'll need to open an issue, following the template with full details about what you've tried and the error messages occurred. ", "I am using sentencepiece tokenizer model and unable to register the tokenizer ", "As per [my comment here](https://github.com/huggingface/transformers/issues/27427#issuecomment-1805735559), you need to follow the issue template, providing full details of your running environment, a reproducible code snippet and details of the error encountered. " ]
1,700
1,700
null
NONE
null
### Model description I have built an LLM; can you tell me how to register my own tokenizer with the Hugging Face Hub? ### Open source status - [x] The model implementation is available - [ ] The model weights are available ### Provide useful links for the implementation _No response_
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27536/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27536/timeline
null
null
null
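The reply in the record above says the tokenizer should inherit from `PreTrainedTokenizer`. A rough, incomplete sketch of wrapping a SentencePiece model that way; file and class names are hypothetical, and pieces such as `save_vocabulary` and Auto registration are omitted.

```python
import sentencepiece as spm
from transformers import PreTrainedTokenizer

class MySPTokenizer(PreTrainedTokenizer):
    vocab_files_names = {"vocab_file": "tokenizer.model"}

    def __init__(self, vocab_file, **kwargs):
        # Load the SentencePiece model before calling the parent constructor,
        # which may already need the vocab for special-token handling.
        self.sp = spm.SentencePieceProcessor(model_file=vocab_file)
        super().__init__(**kwargs)

    @property
    def vocab_size(self):
        return self.sp.get_piece_size()

    def get_vocab(self):
        return {self.sp.id_to_piece(i): i for i in range(self.vocab_size)}

    def _tokenize(self, text):
        return self.sp.encode(text, out_type=str)

    def _convert_token_to_id(self, token):
        return self.sp.piece_to_id(token)

    def _convert_id_to_token(self, index):
        return self.sp.id_to_piece(index)

tok = MySPTokenizer("tokenizer.model")
print(tok("hello world").input_ids)
# For full save_pretrained / Hub support, save_vocabulary and (optionally)
# AutoTokenizer.register(...) would also be needed.
```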
https://api.github.com/repos/huggingface/transformers/issues/27535
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27535/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27535/comments
https://api.github.com/repos/huggingface/transformers/issues/27535/events
https://github.com/huggingface/transformers/issues/27535
1,996,545,559
I_kwDOCUB6oc53AN4X
27,535
Contrastive Image Text example deletes `image_column` because it doesn't have a corresponding argument in `VisionTextDualEncoderModel.forward`
{ "login": "luisblanche-mirakl", "id": 144011644, "node_id": "U_kgDOCJVxfA", "avatar_url": "https://avatars.githubusercontent.com/u/144011644?v=4", "gravatar_id": "", "url": "https://api.github.com/users/luisblanche-mirakl", "html_url": "https://github.com/luisblanche-mirakl", "followers_url": "https://api.github.com/users/luisblanche-mirakl/followers", "following_url": "https://api.github.com/users/luisblanche-mirakl/following{/other_user}", "gists_url": "https://api.github.com/users/luisblanche-mirakl/gists{/gist_id}", "starred_url": "https://api.github.com/users/luisblanche-mirakl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/luisblanche-mirakl/subscriptions", "organizations_url": "https://api.github.com/users/luisblanche-mirakl/orgs", "repos_url": "https://api.github.com/users/luisblanche-mirakl/repos", "events_url": "https://api.github.com/users/luisblanche-mirakl/events{/privacy}", "received_events_url": "https://api.github.com/users/luisblanche-mirakl/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @luisblanche-mirakl, thanks for raising this issue! \r\n\r\nCould you confirm the command line command being used to run the script - is it with the default values [here](https://github.com/huggingface/transformers/tree/main/examples/pytorch/contrastive-image-text#train-the-model)?", "Hi @amyeroberts thansk for answering.\r\n\r\nI am actually using a json parameter file : \r\n```json\r\n{\"output_dir\": \"/{mypath}/clip/test\",\r\n\"model_name_or_path\": \"{mypath}/clip-roberta\",\r\n\"data_dir\": \"{mypath}/data/coco\",\r\n\"dataset_name\": \"ydshieh/coco_dataset_script\",\r\n\"dataset_config_name\": \"2017\",\r\n\"image_column\":\"image_path\" ,\r\n\"caption_column\": \"caption\" ,\r\n\"max_seq_length\": 256,\r\n\"max_train_samples\": 100, \r\n\"max_eval_samples\": 100,\r\n\"preprocessing_num_workers\": 4,\r\n\"do_train\": true,\r\n\"do_eval\": false,\r\n\"per_device_train_batch_size\": 8,\r\n\"per_device_eval_batch_size\": 8,\r\n\"learning_rate\": 5e-5, \r\n\"warmup_steps\": 0, \r\n\"weight_decay\": 0.1,\r\n\"token\": \"***\",\r\n\"report_to\": [\"tensorboard\"],\r\n\"overwrite_output_dir\": true\r\n}\r\n```", "@luisblanche-mirakl OK, thanks for sharing! You can set `remove_unused_columns=False` in the TrainingArguments which should prevent this from happening. \r\n\r\ncc @ydshieh to check if the script needs updating", "Hi,\r\n\r\nThe [code example](https://github.com/huggingface/transformers/tree/main/examples/pytorch/contrastive-image-text#train-the-model) does use `remove_unused_columns=False`.\r\n\r\nI don't think there is something to be updated here :-), simply following the example is good.\r\n\r\n```bash\r\npython examples/pytorch/contrastive-image-text/run_clip.py \\\r\n --output_dir ./clip-roberta-finetuned \\\r\n --model_name_or_path ./clip-roberta \\\r\n --data_dir $PWD/data \\\r\n --dataset_name ydshieh/coco_dataset_script \\\r\n --dataset_config_name=2017 \\\r\n --image_column image_path \\\r\n --caption_column caption \\\r\n --remove_unused_columns=False \\\r\n --do_train --do_eval \\\r\n --per_device_train_batch_size=\"64\" \\\r\n --per_device_eval_batch_size=\"64\" \\\r\n --learning_rate=\"5e-5\" --warmup_steps=\"0\" --weight_decay 0.1 \\\r\n --overwrite_output_dir \\\r\n --push_to_hub\r\n``` ", "Ok thanks for checking, I will try with this argument. There is still something I don't understand: when you set `--image_column image_path` why does it still counts as an unused column ? ", "As you already found, the used/unused are done by looking at the model's forward signature\r\n\r\nhttps://github.com/huggingface/transformers/blob/1394e08cf099d16515c1889ab9507946489f5afe/src/transformers/trainer.py#L729", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,700
1,703
1,703
NONE
null
## Configuration Using `transformers==4.35.0` and `datasets==2.14.7` ## Problem Statement: Running the example in [examples/pytorch/contrastive-image-text/run_clip.py](examples/pytorch/contrastive-image-text/run_clip.py) (tried with the COCO dataset from the example as well as with my own dataset) The `image_column` is removed by : https://github.com/huggingface/transformers/blob/1394e08cf099d16515c1889ab9507946489f5afe/src/transformers/trainer.py#L729 I think this is because [`VisionTextDualEncoderModel.forward`](VisionTextDualEncoderModel.forward) expects `pixel_values` and not a path to an image so `image_path` is not in the signature. In the example code there is this line : https://github.com/huggingface/transformers/blob/1394e08cf099d16515c1889ab9507946489f5afe/examples/pytorch/contrastive-image-text/run_clip.py#L505-L506 Which i think maybe is the problem: we expect to process the images files on the fly but the column is deleted before we can do it because the model only accepts pixel values. This causes the error downstream when calling : https://github.com/huggingface/transformers/blob/1394e08cf099d16515c1889ab9507946489f5afe/examples/pytorch/contrastive-image-text/run_clip.py#L446-L449 We end up with `KeyError` for `image_path`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27535/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27535/timeline
completed
null
null
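A small sketch of the workaround given in the comments of the record above: keep `image_path` by turning off column pruning. Values are placeholders; in the JSON parameter file from the report the equivalent entry would be `"remove_unused_columns": false`.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="clip-roberta-finetuned",
    remove_unused_columns=False,  # Trainer will no longer drop image_path
    per_device_train_batch_size=8,
    do_train=True,
)
```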
https://api.github.com/repos/huggingface/transformers/issues/27534
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27534/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27534/comments
https://api.github.com/repos/huggingface/transformers/issues/27534/events
https://github.com/huggingface/transformers/issues/27534
1,996,544,899
I_kwDOCUB6oc53ANuD
27,534
transformers import_utils.py module 'torch' has no attribute 'fx'
{ "login": "R0k1e", "id": 105703383, "node_id": "U_kgDOBkzn1w", "avatar_url": "https://avatars.githubusercontent.com/u/105703383?v=4", "gravatar_id": "", "url": "https://api.github.com/users/R0k1e", "html_url": "https://github.com/R0k1e", "followers_url": "https://api.github.com/users/R0k1e/followers", "following_url": "https://api.github.com/users/R0k1e/following{/other_user}", "gists_url": "https://api.github.com/users/R0k1e/gists{/gist_id}", "starred_url": "https://api.github.com/users/R0k1e/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/R0k1e/subscriptions", "organizations_url": "https://api.github.com/users/R0k1e/orgs", "repos_url": "https://api.github.com/users/R0k1e/repos", "events_url": "https://api.github.com/users/R0k1e/events{/privacy}", "received_events_url": "https://api.github.com/users/R0k1e/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @R0k1e, thanks for raising this issue! \r\n\r\nSo that we can best help you could you provide some more information: \r\n\r\n* The running environment: run `transformers-cli env` in the terminal and copy-paste the output\r\n\r\n> install the llama according to its readme\r\n\r\n* Could you link to the README? Which llama is being referred to here?\r\n\r\n> I inspect the doc of torch version and found there is fx module in this version. And when I use the console to import torch.fx,\r\n\r\nCould you open a python session, run the following and reply with what is printed out?:\r\n\r\n```py\r\nprint(\"Importing torch\")\r\nimport torch\r\nprint(torch.__version__)\r\n\r\nprint(\"Importing torch.fx\")\r\nimport torch.fx\r\n\r\nprint(\"Importing transformers\")\r\nimport transformers\r\nprint(transformers.__version__)\r\n\r\nprint(\"Importing transformers models\")\r\nfrom transformers import LlamaForCausalLM, LlamaTokenizer, AutoTokenizer, AutoModelForCausalLM\r\n```", "> transformers-cli env\r\n\r\n- `transformers` version: 4.35.2\r\n- Platform: Linux-5.4.0-144-generic-x86_64-with-glibc2.31\r\n- Python version: 3.10.0\r\n- Huggingface_hub version: 0.19.3\r\n- Safetensors version: 0.4.0\r\n- Accelerate version: 0.24.1\r\n- Accelerate config: not found\r\n- PyTorch version (GPU?): 1.12.1 (True)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?: Yes\r\n- Using distributed or parallel set-up in script?: Yes, using tensor_parallel to fuifil this function. To be more precise, the script is like the following format:\r\n```\r\nimport tensor_parallel as tp\r\nmodel = LlamaForCausalLM.from_pretrained(ckpt_dir, low_cpu_mem_usage = True, torch_dtype=torch.float16)\r\nmodel = tp.tensor_parallel(model, [i for i in range(n_gpus)])\r\n```\r\n\r\n> llama README\r\n\r\nhttps://github.com/facebookresearch/llama\r\nAccording to this README, I create a conda env with python3.10, and then use the pip install -e . to build up the env. And then replace the torch 2.1.1 to torch 1.12.1 to suit the cuda version.\r\n\r\n> python session\r\n\r\n``` \r\n>>> print(\"Importing torch\")\r\nImporting torch\r\n>>> import torch\r\n>>> print(torch.__version__)\r\n1.12.1\r\n>>> \r\n>>> print(\"Importing torch.fx\")\r\nImporting torch.fx\r\n>>> import torch.fx\r\n>>> \r\n>>> print(\"Importing transformers\")\r\nImporting transformers\r\n>>> import transformers\r\n>>> print(transformers.__version__)\r\n4.35.2\r\n>>> \r\n>>> print(\"Importing transformers models\")\r\nImporting transformers models\r\n>>> from transformers import LlamaForCausalLM, LlamaTokenizer, AutoTokenizer, AutoModelForCausalLM\r\n>>> \r\n```\r\n\r\nIf I ignore the import torch.fx, the error occurs as before\r\n ", "same here, temporarily solved by downgrading transformers 4.35.2 to 4.34.0", "more tests: \r\n4.35.0-4.35.2: same error\r\n4.34.0-4.34.1: succeed", "@jessyford @R0k1e OK - thanks for testing and reporting back. I've open a PR which should resolve this issue for you. \r\n\r\nAs a side note - you don't need to follow installation instructions from the original Llama repo to use the model. You can just install `transformers` and use the model directly with the latest pytorch versions. 
", "Hi, I'm getting the same error.\r\n```\r\nTraceback (most recent call last):\r\n File \"/scratch/project_mnt/S0066/unlimiformer-11-dec-kg/src/run.py\", line 25, in <module>\r\n from unlimiformer import Unlimiformer\r\n File \"/scratch/project_mnt/S0066/unlimiformer-11-dec-kg/src/unlimiformer.py\", line 6, in <module>\r\n from transformers import BartModel, BartForConditionalGeneration, \\\r\n File \"<frozen importlib._bootstrap>\", line 1075, in _handle_fromlist\r\n File \"/home/uqpocall/micromamba/envs/unlimiformer11/lib/python3.10/site-packages/transformers/utils/import_utils.py\", line 1344, in __getattr__\r\n value = getattr(module, name)\r\n File \"/home/uqpocall/micromamba/envs/unlimiformer11/lib/python3.10/site-packages/transformers/utils/import_utils.py\", line 1343, in __getattr__\r\n module = self._get_module(self._class_to_module[name])\r\n File \"/home/uqpocall/micromamba/envs/unlimiformer11/lib/python3.10/site-packages/transformers/utils/import_utils.py\", line 1355, in _get_module\r\n raise RuntimeError(\r\nRuntimeError: Failed to import transformers.models.llama.modeling_llama because of the following error (look up to see its traceback):\r\nmodule 'torch' has no attribute 'fx'\r\n```\r\nHere is the output from `transformers-cli env`:\r\n- `transformers` version: 4.35.2\r\n- Platform: Linux-4.18.0-477.27.1.el8_8.x86_64-x86_64-with-glibc2.28\r\n- Python version: 3.10.13\r\n- Huggingface_hub version: 0.17.3\r\n- Safetensors version: 0.3.3\r\n- Accelerate version: 0.25.0\r\n- Accelerate config: \tnot found\r\n- PyTorch version (GPU?): 1.12.1.post200 (False)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?: Yes\r\n- Using distributed or parallel set-up in script?: No\r\n\r\nI am trying to install this particular version as these are the instructions I found here: https://github.com/abertsch72/unlimiformer and I am trying to reproduce their results. Also, I got very different results just by changing environments (sequence of sequences <s> ... </s><s> ...</s> were treated differently by bart-base in 4.34.1 vs 4.35.2)", "fixed it by adding `from torch.fx import symbolic_trace` to the relevant module.", "Hi @patrickocal, the error would have still be present if running from 4.35.x as the fix had been merged but was not yet part of a release i.e. to have it requires installing from source. We've just (1 hour ago!) released 4.36 which contains the fix. If after installing 4.36 the issue persists, please let us know in a comment. " ]
1,700
1,702
1,700
NONE
null
### System Info env: transformers=4.35.2, torch = 1.12.1+cu113, Ubuntu 20.04.6 LTS python=3.10 when I write this sentence "from transformers import LlamaForCausalLM, LlamaTokenizer, AutoTokenizer, AutoModelForCausalLM", It tell me it cant find fx in torch. I inspect the doc of torch version and found there is fx module in this version. And when I use the console to import torch.fx, there is no fault happened. And the I add the import torch.fx before the "from transformers import LlamaForCausalLM, LlamaTokenizer, AutoTokenizer, AutoModelForCausalLM", the error misses. ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction 1. install the llama according to its readme 2. install the torch = 1.12.1+cu113 3. write a footprint including following sentences import argparse import json import os import time import pandas as pd import tensor_parallel as tp import torch from tqdm import tqdm from transformers import LlamaForCausalLM, LlamaTokenizer, AutoTokenizer, AutoModelForCausalLM and then the error occur from transformers import LlamaForCausalLM, LlamaTokenizer, AutoTokenizer, AutoModelForCausalLM File "<frozen importlib._bootstrap>", line 1075, in _handle_fromlist File "/miniconda3/envs/llama-7b/lib/python3.10/site-packages/transformers/utils/import_utils.py", line 1344, in __getattr__ value = getattr(module, name) File "/miniconda3/envs/llama-7b/lib/python3.10/site-packages/transformers/utils/import_utils.py", line 1343, in __getattr__ module = self._get_module(self._class_to_module[name]) File "/miniconda3/envs/llama-7b/lib/python3.10/site-packages/transformers/utils/import_utils.py", line 1355, in _get_module raise RuntimeError( RuntimeError: Failed to import transformers.models.llama.modeling_llama because of the following error (look up to see its traceback): module 'torch' has no attribute 'fx' ### Expected behavior no error occur
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27534/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/transformers/issues/27534/timeline
completed
null
null
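A sketch of the stop-gap described in the report above for torch 1.12 + transformers 4.35.x (the underlying fix shipped later in 4.36): import `torch.fx` explicitly before importing the model classes, or pin transformers to 4.34.x. The checkpoint path is a placeholder.

```python
import torch
import torch.fx  # explicit import so transformers' torch.fx lookups resolve

from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("path/to/llama-checkpoint")
tokenizer = AutoTokenizer.from_pretrained("path/to/llama-checkpoint")
```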
https://api.github.com/repos/huggingface/transformers/issues/27533
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27533/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27533/comments
https://api.github.com/repos/huggingface/transformers/issues/27533/events
https://github.com/huggingface/transformers/pull/27533
1,996,214,229
PR_kwDOCUB6oc5fmGRW
27,533
Add random seed for generation
{ "login": "mokeyish", "id": 16131917, "node_id": "MDQ6VXNlcjE2MTMxOTE3", "avatar_url": "https://avatars.githubusercontent.com/u/16131917?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mokeyish", "html_url": "https://github.com/mokeyish", "followers_url": "https://api.github.com/users/mokeyish/followers", "following_url": "https://api.github.com/users/mokeyish/following{/other_user}", "gists_url": "https://api.github.com/users/mokeyish/gists{/gist_id}", "starred_url": "https://api.github.com/users/mokeyish/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mokeyish/subscriptions", "organizations_url": "https://api.github.com/users/mokeyish/orgs", "repos_url": "https://api.github.com/users/mokeyish/repos", "events_url": "https://api.github.com/users/mokeyish/events{/privacy}", "received_events_url": "https://api.github.com/users/mokeyish/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @mokeyish πŸ‘‹ \r\n\r\nWe already have a `set_seed` function in `transformers` ([doc](https://huggingface.co/docs/transformers/v4.35.2/en/internal/trainer_utils#transformers.set_seed)), which you can call as follows for complete reproducibility:\r\n\r\n```py\r\nfrom transformers import set_seed\r\n# code to load the model\r\n\r\nset_seed(0)\r\n# generate code here\r\n```\r\n\r\nWould this solve your use case? πŸ€— We want to avoid adding flags and further complexity to generate, unless strictly needed.", "@gante This seed is passed in through the http RestAPIs, and your solution is a global setting, which is the same as before to make model training reproducible. But global settings, different API calls should conflict, right?", "In theory, yes. In practice, we're not interested in adding complexity to `generate` unless widely requested by the community -- our ability to maintain features is limited :) \r\n\r\nI haven't seen request for it, but I'll do my standard bargain: if this comment gets 10 reactions or more, it means users are actively looking for it. In that case, I'd be happy to include it in `transformers` πŸ€— \r\n\r\n(the person that does the 10th reaction: please ping me)", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,700
1,703
1,703
NONE
null
# What does this PR do? Add a random seed to GenerationConfig, so that calls with the same `seed` and parameters return the same result. OpenAI added a `seed` parameter to make responses reproducible, which makes it easier to reproduce results and troubleshoot problems. https://github.com/openai/openai-openapi/blob/master/openapi.yaml#L5392 I would like to cc @ArthurZucker and @younesbelkada to review my PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27533/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27533/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27533", "html_url": "https://github.com/huggingface/transformers/pull/27533", "diff_url": "https://github.com/huggingface/transformers/pull/27533.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27533.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/27532
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27532/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27532/comments
https://api.github.com/repos/huggingface/transformers/issues/27532/events
https://github.com/huggingface/transformers/issues/27532
1,996,204,556
I_kwDOCUB6oc52-6oM
27,532
Model implementation with Transformers and Hugging face hub.
{ "login": "sudhakar071", "id": 84635136, "node_id": "MDQ6VXNlcjg0NjM1MTM2", "avatar_url": "https://avatars.githubusercontent.com/u/84635136?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sudhakar071", "html_url": "https://github.com/sudhakar071", "followers_url": "https://api.github.com/users/sudhakar071/followers", "following_url": "https://api.github.com/users/sudhakar071/following{/other_user}", "gists_url": "https://api.github.com/users/sudhakar071/gists{/gist_id}", "starred_url": "https://api.github.com/users/sudhakar071/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sudhakar071/subscriptions", "organizations_url": "https://api.github.com/users/sudhakar071/orgs", "repos_url": "https://api.github.com/users/sudhakar071/repos", "events_url": "https://api.github.com/users/sudhakar071/events{/privacy}", "received_events_url": "https://api.github.com/users/sudhakar071/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
open
false
null
[]
[ "Hi @sudhakar071, \r\n\r\nPlease refer to our documentation to see how to push models to the hub: https://huggingface.co/docs/transformers/model_sharing\r\n\r\nAnd make your model accessible using the `AutoModel` API: https://huggingface.co/docs/transformers/custom_models" ]
1,700
1,700
null
NONE
null
### Model description I created a model and, for ease of training and fine-tuning, I need to integrate it with the Hugging Face Hub. The model's tokenizer is a SentencePiece tokenizer. Here is the model repository: https://github.com/sudhakar-71/scratch-ai ### Open source status - [X] The model implementation is available - [ ] The model weights are available ### Provide useful links for the implementation _No response_
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27532/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27532/timeline
null
null
null
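Following the custom-models guide linked in the reply above, a hedged sketch of pushing a from-scratch architecture together with its code. `ScratchConfig`/`ScratchModel` and the repo name are placeholders, and the classes should live in their own `.py` file so the code can be uploaded alongside the weights.

```python
import torch.nn as nn
from transformers import PretrainedConfig, PreTrainedModel

class ScratchConfig(PretrainedConfig):
    model_type = "scratch-ai"

    def __init__(self, hidden_size=128, **kwargs):
        self.hidden_size = hidden_size
        super().__init__(**kwargs)

class ScratchModel(PreTrainedModel):
    config_class = ScratchConfig

    def __init__(self, config):
        super().__init__(config)
        self.proj = nn.Linear(config.hidden_size, config.hidden_size)

    def forward(self, inputs_embeds):
        return self.proj(inputs_embeds)

# Register so the uploaded repo advertises Auto* support with custom code.
ScratchConfig.register_for_auto_class()
ScratchModel.register_for_auto_class("AutoModel")

ScratchModel(ScratchConfig()).push_to_hub("your-username/scratch-ai")
# Later: AutoModel.from_pretrained("your-username/scratch-ai", trust_remote_code=True)
```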
https://api.github.com/repos/huggingface/transformers/issues/27531
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27531/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27531/comments
https://api.github.com/repos/huggingface/transformers/issues/27531/events
https://github.com/huggingface/transformers/issues/27531
1,996,189,021
I_kwDOCUB6oc52-21d
27,531
Automatic mask generation with fine-tuned SAM model
{ "login": "roboticsbrian", "id": 47238439, "node_id": "MDQ6VXNlcjQ3MjM4NDM5", "avatar_url": "https://avatars.githubusercontent.com/u/47238439?v=4", "gravatar_id": "", "url": "https://api.github.com/users/roboticsbrian", "html_url": "https://github.com/roboticsbrian", "followers_url": "https://api.github.com/users/roboticsbrian/followers", "following_url": "https://api.github.com/users/roboticsbrian/following{/other_user}", "gists_url": "https://api.github.com/users/roboticsbrian/gists{/gist_id}", "starred_url": "https://api.github.com/users/roboticsbrian/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/roboticsbrian/subscriptions", "organizations_url": "https://api.github.com/users/roboticsbrian/orgs", "repos_url": "https://api.github.com/users/roboticsbrian/repos", "events_url": "https://api.github.com/users/roboticsbrian/events{/privacy}", "received_events_url": "https://api.github.com/users/roboticsbrian/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi,\r\n\r\nIt seems that you only saved the model files, but not the preprocessor file in the \"/tmp/finetuned.wt\" folder. Hence you also need to run the following:\r\n```\r\nfrom transformers import SamImageProcessor\r\n\r\n# assuming you fine-tuned a based-sized SAM model\r\nprocessor = SamImageProcessor.from_pretrained(\"facebook/sam-vit-base\")\r\nprocessor.save_pretrained(\"/tmp/finetuned.wt\")\r\n```\r\nin order to save the `preprocessor_config.json` file in the same folder.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,700
1,703
1,703
NONE
null
### System Info - `transformers` version: 4.33.1 - Platform: Linux-5.15.0-88-generic-x86_64-with-glibc2.31 - Python version: 3.9.18 - Huggingface_hub version: 0.19.0 - Safetensors version: 0.3.3 - Accelerate version: not installed - Accelerate config: not found - PyTorch version (GPU?): 2.1.0 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help? @younesbelkada @rafaelpadilla ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction 1. Fine-tuned a `SamModel` and saved its weights using `.save_pretrained()` 2. Try to create a new `MaskGenerationPipeline` instance using the `pipeline("mask-generation")` factory and fail: ``` >>> generator = pipeline("mask-generation", model="/tmp/finetuned.wt") loading configuration file /tmp/finetuned.wt/config.json Model config SamConfig { "_name_or_path": "/tmp/finetuned.wt", "architectures": [ "SamModel" ], "initializer_range": 0.02, "mask_decoder_config": { "model_type": "" }, "model_type": "sam", "prompt_encoder_config": { "model_type": "" }, "torch_dtype": "float32", "transformers_version": "4.33.1", "vision_config": { "dropout": 0.0, "initializer_factor": 1.0, "intermediate_size": 6144, "model_type": "", "projection_dim": 512 } } loading configuration file /tmp/finetuned.wt/config.json Model config SamConfig { "_name_or_path": "/tmp/finetuned.wt", "architectures": [ "SamModel" ], "initializer_range": 0.02, "mask_decoder_config": { "model_type": "" }, "model_type": "sam", "prompt_encoder_config": { "model_type": "" }, "torch_dtype": "float32", "transformers_version": "4.33.1", "vision_config": { "dropout": 0.0, "initializer_factor": 1.0, "intermediate_size": 6144, "model_type": "", "projection_dim": 512 } } loading weights file /tmp/finetuned.wt/pytorch_model.bin All model checkpoint weights were used when initializing SamModel. All the weights of SamModel were initialized from the model checkpoint at /tmp/finetuned.wt. If your task is similar to the task the model of the checkpoint was trained on, you can already use SamModel for predictions without further training. 
--------------------------------------------------------------------------- OSError Traceback (most recent call last) /tmp/ipykernel_24/577173567.py in <module> ----> 1 generator = pipeline("mask-generation", model="/tmp/finetuned.wt") /opt/conda/lib/python3.8/site-packages/transformers/pipelines/__init__.py in pipeline(task, model, config, tokenizer, feature_extractor, image_processor, framework, revision, use_fast, token, device, device_map, torch_dtype, trust_remote_code, model_kwargs, pipeline_class, **kwargs) 944 # Instantiate image_processor if needed 945 if isinstance(image_processor, (str, tuple)): --> 946 image_processor = AutoImageProcessor.from_pretrained( 947 image_processor, _from_pipeline=task, **hub_kwargs, **model_kwargs 948 ) /opt/conda/lib/python3.8/site-packages/transformers/models/auto/image_processing_auto.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs) 342 kwargs["_from_auto"] = True 343 --> 344 config_dict, _ = ImageProcessingMixin.get_image_processor_dict(pretrained_model_name_or_path, **kwargs) 345 image_processor_class = config_dict.get("image_processor_type", None) 346 image_processor_auto_map = None /opt/conda/lib/python3.8/site-packages/transformers/image_processing_utils.py in get_image_processor_dict(cls, pretrained_model_name_or_path, **kwargs) 327 try: 328 # Load from local folder or from cache or download from model Hub and cache --> 329 resolved_image_processor_file = cached_file( 330 pretrained_model_name_or_path, 331 image_processor_file, /opt/conda/lib/python3.8/site-packages/transformers/utils/hub.py in cached_file(path_or_repo_id, filename, cache_dir, force_download, resume_download, proxies, token, revision, local_files_only, subfolder, repo_type, user_agent, _raise_exceptions_for_missing_entries, _raise_exceptions_for_connection_errors, _commit_hash, **deprecated_kwargs) 398 if not os.path.isfile(resolved_file): 399 if _raise_exceptions_for_missing_entries: --> 400 raise EnvironmentError( 401 f"{path_or_repo_id} does not appear to have a file named {full_filename}. Checkout " 402 f"'https://huggingface.co/{path_or_repo_id}/{revision}' for available files." OSError: /tmp/finetuned.wt does not appear to have a file named preprocessor_config.json. Checkout 'https://huggingface.co//tmp/finetuned.wt/None' for available files. ``` ### Expected behavior I was expecting to get a working mask generator instance to use, similar to what you get with `pipeline("mask-generation", model="facebook/sam-vit-base")` I realize I can get a mask generator instance from something like: ``` generator = pipeline( "mask-generation", model="/tmp/finetuned.wt", image_processor=SamImageProcessor.from_pretrained(pretrained_model_name) ) ``` Is this how it should be done?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27531/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27531/timeline
completed
null
null
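A compact sketch combining the fix from the reply with the pipeline call from the report above: saving the image processor into the fine-tuned folder makes the plain `pipeline(...)` call work without passing `image_processor` explicitly.

```python
from transformers import SamImageProcessor, pipeline

# Write preprocessor_config.json next to the fine-tuned weights
# (assuming the fine-tune started from the base-sized SAM checkpoint).
processor = SamImageProcessor.from_pretrained("facebook/sam-vit-base")
processor.save_pretrained("/tmp/finetuned.wt")

generator = pipeline("mask-generation", model="/tmp/finetuned.wt")
```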
https://api.github.com/repos/huggingface/transformers/issues/27530
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27530/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27530/comments
https://api.github.com/repos/huggingface/transformers/issues/27530/events
https://github.com/huggingface/transformers/issues/27530
1,996,168,301
I_kwDOCUB6oc52-xxt
27,530
Error when setting pad_token
{ "login": "AndreWanga", "id": 114132971, "node_id": "U_kgDOBs2H6w", "avatar_url": "https://avatars.githubusercontent.com/u/114132971?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AndreWanga", "html_url": "https://github.com/AndreWanga", "followers_url": "https://api.github.com/users/AndreWanga/followers", "following_url": "https://api.github.com/users/AndreWanga/following{/other_user}", "gists_url": "https://api.github.com/users/AndreWanga/gists{/gist_id}", "starred_url": "https://api.github.com/users/AndreWanga/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AndreWanga/subscriptions", "organizations_url": "https://api.github.com/users/AndreWanga/orgs", "repos_url": "https://api.github.com/users/AndreWanga/repos", "events_url": "https://api.github.com/users/AndreWanga/events{/privacy}", "received_events_url": "https://api.github.com/users/AndreWanga/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey! Would you mind upgrading to the latest version of transformers and provide me with a reproducer of how you are initializing the model? πŸ€— ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,700
1,703
1,703
NONE
null
### System Info transformers version: 4.24.0 python version: 3.9.13 ### Who can help? @ArthurZucker ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction When I use transformers with [chatglm2-6b](https://huggingface.co/THUDM/chatglm2-6b), I try to run this code: `tokenizer.pad_token = tokenizer.eos_token` but it raises an AttributeError: can't set attribute ### Expected behavior The code runs successfully
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27530/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27530/timeline
completed
null
null
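A hedged sketch of why the assignment in the report above fails, plus a guard around it: the remote ChatGLM2 tokenizer (at the time of this report) exposes `pad_token` as a read-only property, but it already defines a pad token internally, so the assignment is only needed for tokenizers where it is missing.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True)

# Only standard tokenizers without a pad token need the eos_token fallback;
# forcing it onto a read-only property raises "can't set attribute".
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

print(tokenizer.pad_token, tokenizer.pad_token_id)
batch = tokenizer(["hello", "a longer example"], padding=True, return_tensors="pt")
```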
https://api.github.com/repos/huggingface/transformers/issues/27529
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27529/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27529/comments
https://api.github.com/repos/huggingface/transformers/issues/27529/events
https://github.com/huggingface/transformers/pull/27529
1,996,132,743
PR_kwDOCUB6oc5fl0qE
27,529
docs: fix 404 link
{ "login": "panpan0000", "id": 14049268, "node_id": "MDQ6VXNlcjE0MDQ5MjY4", "avatar_url": "https://avatars.githubusercontent.com/u/14049268?v=4", "gravatar_id": "", "url": "https://api.github.com/users/panpan0000", "html_url": "https://github.com/panpan0000", "followers_url": "https://api.github.com/users/panpan0000/followers", "following_url": "https://api.github.com/users/panpan0000/following{/other_user}", "gists_url": "https://api.github.com/users/panpan0000/gists{/gist_id}", "starred_url": "https://api.github.com/users/panpan0000/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/panpan0000/subscriptions", "organizations_url": "https://api.github.com/users/panpan0000/orgs", "repos_url": "https://api.github.com/users/panpan0000/repos", "events_url": "https://api.github.com/users/panpan0000/events{/privacy}", "received_events_url": "https://api.github.com/users/panpan0000/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@amyeroberts , Many thanks to your kindly review and help.\r\n\r\nBut the check was unsuccessful and I was lost.\r\n![image](https://github.com/huggingface/transformers/assets/14049268/cb4c2ccb-c6d2-4b7f-8493-ee64eeecd369)\r\n\r\ndetail log as below\r\n![image](https://github.com/huggingface/transformers/assets/14049268/98c34d7a-860d-4c0a-8c6f-8e0ec3ecb30e)\r\nbut even I ran `make quality` on main head 06343b06335a1f8417bd32d3ffc7cf2cca9a24ac , it failed \r\n```\r\nOh no! πŸ’₯ πŸ’” πŸ’₯\r\n3 files would be reformatted, 2678 files would be left unchanged.\r\n```\r\n\r\n or run on my PR with command `doc-builder style src/transformers docs/source --max_len 119 --check_only --path_to_docs docs/source`, it was succesful.\r\n", "@panpan0000 Unfortunately we had some package dependency nightmares this week. The fix was pushed to main - I can see you've rebased to get the updates and it's all working now ❀️ \r\n\r\nThank you for your patience and for looking into the issue. Once the final build PR documentation run has finished we can merge :) ", "awesome @amyeroberts , nice community here!" ]
1,700
1,700
1,700
CONTRIBUTOR
null
# What does this PR do? ISSUE: when you go to doc : https://huggingface.co/docs/transformers/main/en/main_classes/trainer#specific-gpus-selection ![image](https://github.com/huggingface/transformers/assets/14049268/359c2a4f-95e0-45cd-8db6-31fccea48617) the link is 404 due to URL is case sentitive. ![image](https://github.com/huggingface/transformers/assets/14049268/06d2c6f9-7cd8-48ad-ad8f-03b32063f563) <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. 
- TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27529/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27529/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27529", "html_url": "https://github.com/huggingface/transformers/pull/27529", "diff_url": "https://github.com/huggingface/transformers/pull/27529.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27529.patch", "merged_at": 1700483079000 }
https://api.github.com/repos/huggingface/transformers/issues/27528
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27528/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27528/comments
https://api.github.com/repos/huggingface/transformers/issues/27528/events
https://github.com/huggingface/transformers/pull/27528
1,996,113,067
PR_kwDOCUB6oc5flwWU
27,528
docs: replace torch.distributed.run by torchrun
{ "login": "panpan0000", "id": 14049268, "node_id": "MDQ6VXNlcjE0MDQ5MjY4", "avatar_url": "https://avatars.githubusercontent.com/u/14049268?v=4", "gravatar_id": "", "url": "https://api.github.com/users/panpan0000", "html_url": "https://github.com/panpan0000", "followers_url": "https://api.github.com/users/panpan0000/followers", "following_url": "https://api.github.com/users/panpan0000/following{/other_user}", "gists_url": "https://api.github.com/users/panpan0000/gists{/gist_id}", "starred_url": "https://api.github.com/users/panpan0000/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/panpan0000/subscriptions", "organizations_url": "https://api.github.com/users/panpan0000/orgs", "repos_url": "https://api.github.com/users/panpan0000/repos", "events_url": "https://api.github.com/users/panpan0000/events{/privacy}", "received_events_url": "https://api.github.com/users/panpan0000/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I just found an old PR which did the similar work https://github.com/huggingface/transformers/pull/21780/files but out of sync.\r\nbut my PR just focus on DOC", "> Thanks for adding this!\r\n> \r\n> Just a few comments on removing the changes for the research examples.\r\n> \r\n> For any future readers who stumble on this PR: the previously closed PR requested [comments in the docs](https://github.com/huggingface/transformers/pull/21780#pullrequestreview-1315022415) for older pytorch versions. We now officially support pytorch >= 1.10. The entrypoint `torchrun` is present from [1.10 onwards](https://github.com/pytorch/pytorch/commit/65e6194aeb3269a182cfe2c05c122159da12770f).\r\n\r\ncomments already addressed, now PR can be merged now." ]
1,700
1,701
1,701
CONTRIBUTOR
null
# What does this PR do? The old way will be deprecated , move to new torchrun. `FutureWarning: The module torch.distributed.launch is deprecated and will be removed in future. Use torchrun.` This PR only addresses the doc part, not touching the unit-test to limit the impact. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27528/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27528/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27528", "html_url": "https://github.com/huggingface/transformers/pull/27528", "diff_url": "https://github.com/huggingface/transformers/pull/27528.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27528.patch", "merged_at": 1701102393000 }
https://api.github.com/repos/huggingface/transformers/issues/27527
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27527/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27527/comments
https://api.github.com/repos/huggingface/transformers/issues/27527/events
https://github.com/huggingface/transformers/pull/27527
1,996,111,770
PR_kwDOCUB6oc5flwEL
27,527
translate Trainer.md to chinese
{ "login": "jiaqiw09", "id": 60021713, "node_id": "MDQ6VXNlcjYwMDIxNzEz", "avatar_url": "https://avatars.githubusercontent.com/u/60021713?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jiaqiw09", "html_url": "https://github.com/jiaqiw09", "followers_url": "https://api.github.com/users/jiaqiw09/followers", "following_url": "https://api.github.com/users/jiaqiw09/following{/other_user}", "gists_url": "https://api.github.com/users/jiaqiw09/gists{/gist_id}", "starred_url": "https://api.github.com/users/jiaqiw09/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jiaqiw09/subscriptions", "organizations_url": "https://api.github.com/users/jiaqiw09/orgs", "repos_url": "https://api.github.com/users/jiaqiw09/repos", "events_url": "https://api.github.com/users/jiaqiw09/events{/privacy}", "received_events_url": "https://api.github.com/users/jiaqiw09/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@stevhliu,\r\n\r\nHere is another PR of trainer.md, I will fix merge conflict later.\r\n\r\nAnd @statelesshz, I find the @stevhliu has assigned a pr review work for you of deepspeed translation work, thansk for your help. And You can check this one too.\r\n\r\nBest" ]
1,700
1,700
1,700
CONTRIBUTOR
null
# What does this PR do? Part of #26803 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? _not necessary_ ## Who can review? @stevhliu
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27527/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27527/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27527", "html_url": "https://github.com/huggingface/transformers/pull/27527", "diff_url": "https://github.com/huggingface/transformers/pull/27527.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27527.patch", "merged_at": 1700165236000 }
https://api.github.com/repos/huggingface/transformers/issues/27526
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27526/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27526/comments
https://api.github.com/repos/huggingface/transformers/issues/27526/events
https://github.com/huggingface/transformers/issues/27526
1,995,949,383
I_kwDOCUB6oc5298VH
27,526
How to pre-upgrade the Transformers cache and build the upgraded cache into a Docker image?
{ "login": "lanyusan", "id": 56706512, "node_id": "MDQ6VXNlcjU2NzA2NTEy", "avatar_url": "https://avatars.githubusercontent.com/u/56706512?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lanyusan", "html_url": "https://github.com/lanyusan", "followers_url": "https://api.github.com/users/lanyusan/followers", "following_url": "https://api.github.com/users/lanyusan/following{/other_user}", "gists_url": "https://api.github.com/users/lanyusan/gists{/gist_id}", "starred_url": "https://api.github.com/users/lanyusan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lanyusan/subscriptions", "organizations_url": "https://api.github.com/users/lanyusan/orgs", "repos_url": "https://api.github.com/users/lanyusan/repos", "events_url": "https://api.github.com/users/lanyusan/events{/privacy}", "received_events_url": "https://api.github.com/users/lanyusan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @lanyusan, thanks for raising this issue! \r\n\r\nCould you confirm which version of transformers you're running? \r\n\r\n> The cache for model files is preupgraded and built into container image to avoid upgrade each time a container is launched.\r\n\r\nThis is something you'd have to handle on your side in terms of image setup etc. Here's the doc page about cache management: https://huggingface.co/docs/transformers/installation#cache-setup. To select where you want your cache, you'll want to set the `TRANSFORMERS_CACHE` env var.", "@amyeroberts \r\n\r\nThanks for the reply. It is exactly what I needed. ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,700
1,703
1,703
NONE
null
### System Info Linux ubuntu 22.04 Docker 24.05 I am not sure if this is the right place for this issue. Apologies if it isn't; please direct me to the right place. I have been using transformers in Docker images that are deployed at runpod/replicate. The containers of the images can go cold and be relaunched again and again. Each time, the container wastes 20 to 40 seconds on the cache upgrade below. ``` The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling `transformers.utils.move_cache()`. ``` It takes around 20 to 40 seconds, which is a significant waste of our GPU time and container startup time. I have tried to find out how to pre-upgrade the cache and build the upgraded cache into the Docker image by googling, but I couldn't find a way to do it. Please advise how to pre-upgrade the cache and build the upgraded cache into the Docker image. Many thanks. ### Expected behavior The cache for model files is pre-upgraded and built into the container image to avoid the upgrade each time a container is launched.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27526/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27526/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27525
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27525/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27525/comments
https://api.github.com/repos/huggingface/transformers/issues/27525/events
https://github.com/huggingface/transformers/issues/27525
1,995,941,895
I_kwDOCUB6oc5296gH
27,525
Multi-thread inference failed when load_in_8bit with chatglm2
{ "login": "MickeyJ1002", "id": 50672267, "node_id": "MDQ6VXNlcjUwNjcyMjY3", "avatar_url": "https://avatars.githubusercontent.com/u/50672267?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MickeyJ1002", "html_url": "https://github.com/MickeyJ1002", "followers_url": "https://api.github.com/users/MickeyJ1002/followers", "following_url": "https://api.github.com/users/MickeyJ1002/following{/other_user}", "gists_url": "https://api.github.com/users/MickeyJ1002/gists{/gist_id}", "starred_url": "https://api.github.com/users/MickeyJ1002/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MickeyJ1002/subscriptions", "organizations_url": "https://api.github.com/users/MickeyJ1002/orgs", "repos_url": "https://api.github.com/users/MickeyJ1002/repos", "events_url": "https://api.github.com/users/MickeyJ1002/events{/privacy}", "received_events_url": "https://api.github.com/users/MickeyJ1002/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Hi @MickeyJ1002, thanks for raising this issue! \r\n\r\nCould you make sure to follow the issue template, and provide a minimal code snippet we can reproduce the error and the running environment it occurs in? Are my statements about the respective bugs [here](https://github.com/huggingface/transformers/issues/25197#issuecomment-1665352607) still correct?\r\n\r\nThe linked issues are useful. However to be able to debug we really need to know precisely the error your encountering, with a detailed description of the bug or unexpected behaviour, any observations and error messages, as well as what you've tried so far. ", "system info:\r\n\r\nPython version: 3.8.10\r\ngpus_num: 4\r\n\r\n(transformers==4.33.1 accelerate==0.23.0 bitsandbytes==0.42.1)\r\n(transformers==4.33.1 accelerate==0.21.0 bitsandbytes==0.37.1)\r\n(transformers==4.30.2 accelerate==0.21.0 bitsandbytes==0.37.1)\r\n(transformers==4.32.0 accelerate==0.21.0 bitsandbytes==0.37.1)\r\n..... I've tried all of the above combinations, and the same error info arised (Multi-thread inference failed).\r\n\r\nMy test code:\r\n```\r\nimport os\r\nos.environ['CUDA_VISIBLE_DEVICES'] = '0,1,2,3'\r\nfrom transformers import AutoTokenizer, AutoModel, TextIteratorStreamer\r\nfrom threading import Thread, currentThread\r\n\r\nmodel_path = \"THUDM/chatglm2-6b\"\r\n\r\ndevice_map = {'transformer.embedding': 0, 'transformer.encoder.final_layernorm': 0, \r\n 'transformer.output_layer': 0, 'transformer.rotary_pos_emb': 0, \r\n 'lm_head': 0, \r\n 'transformer.encoder.layers.0': 0, 'transformer.encoder.layers.1': 0, \r\n 'transformer.encoder.layers.2': 0, 'transformer.encoder.layers.3': 1, \r\n 'transformer.encoder.layers.4': 1, 'transformer.encoder.layers.5': 1, \r\n 'transformer.encoder.layers.6': 1, 'transformer.encoder.layers.7': 1, \r\n 'transformer.encoder.layers.8': 1, 'transformer.encoder.layers.9': 1, \r\n 'transformer.encoder.layers.10': 1, 'transformer.encoder.layers.11': 1, \r\n 'transformer.encoder.layers.12': 2, 'transformer.encoder.layers.13': 2, \r\n 'transformer.encoder.layers.14': 2, 'transformer.encoder.layers.15': 2, \r\n 'transformer.encoder.layers.16': 2, 'transformer.encoder.layers.17': 2, \r\n 'transformer.encoder.layers.18': 2, 'transformer.encoder.layers.19': 2, \r\n 'transformer.encoder.layers.20': 2, 'transformer.encoder.layers.21': 3, \r\n 'transformer.encoder.layers.22': 3, 'transformer.encoder.layers.23': 3, \r\n 'transformer.encoder.layers.24': 3, 'transformer.encoder.layers.25': 3, \r\n 'transformer.encoder.layers.26': 3, 'transformer.encoder.layers.27': 3}\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)\r\nmodel = AutoModel.from_pretrained(model_path, device_map='auto', trust_remote_code=True, load_in_8bit=True)\r\n\r\ndef infer(prompt):\r\n inputs = tokenizer(prompt, return_tensors='pt')\r\n inputs = inputs.to(model.device)\r\n print('before generate')\r\n t = currentThread()\r\n streamer = TextIteratorStreamer(tokenizer)\r\n generation_kwargs = dict(inputs, streamer=streamer, max_length=2048)\r\n thread = Thread(target=model.generate, kwargs=generation_kwargs)\r\n thread.start()\r\n print(\"------******-------\")\r\n for new_text in streamer:\r\n print(\"thread id:\", t.ident ,\"new text:\",new_text)\r\n print(\"------******-------\")\r\n\r\nif __name__ == '__main__':\r\n prompt1 = 'ε†™δΈ€η―‡ε…³δΊŽι»„ιΉ€ζ₯Όηš„800ε­—δ½œζ–‡'\r\n prompt2 = 'Describe each state in the United States in detail'\r\n t1 = Thread(target=infer, args=(prompt1,))\r\n t2 = Thread(target=infer, 
args=(prompt2,))\r\n t1.start()\r\n # time.sleep(5)\r\n t2.start()\r\n t1.join()\r\n t2.join()\r\n```\r\n\r\nerror info:\r\n```\r\nTraceback (most recent call last):\r\n File \"/data/qinbaoshuai/software/anaconda3/envs/qlora_infer/lib/python3.8/threading.py\", line 932, in _bootstrap_inner\r\n self.run()\r\n File \"/data/qinbaoshuai/software/anaconda3/envs/qlora_infer/lib/python3.8/threading.py\", line 870, in run\r\n self._target(*self._args, **self._kwargs)\r\n File \"/data/qinbaoshuai/software/anaconda3/envs/qlora_infer/lib/python3.8/site-packages/torch/utils/_contextlib.py\", line 115, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"/data/qinbaoshuai/software/anaconda3/envs/qlora_infer/lib/python3.8/site-packages/transformers/generation/utils.py\", line 1602, in generate\r\n return self.greedy_search(\r\n File \"/data/qinbaoshuai/software/anaconda3/envs/qlora_infer/lib/python3.8/site-packages/transformers/generation/utils.py\", line 2450, in greedy_search\r\n outputs = self(\r\n File \"/data/qinbaoshuai/software/anaconda3/envs/qlora_infer/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/data/qinbaoshuai/software/anaconda3/envs/qlora_infer/lib/python3.8/site-packages/accelerate/hooks.py\", line 165, in new_forward\r\n output = old_forward(*args, **kwargs)\r\n File \"/home/wujianjian/.cache/huggingface/modules/transformers_modules/base/modeling_chatglm.py\", line 928, in forward\r\n transformer_outputs = self.transformer(\r\n File \"/data/qinbaoshuai/software/anaconda3/envs/qlora_infer/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/home/wujianjian/.cache/huggingface/modules/transformers_modules/base/modeling_chatglm.py\", line 824, in forward\r\n hidden_states, presents, all_hidden_states, all_self_attentions = self.encoder(\r\n File \"/data/qinbaoshuai/software/anaconda3/envs/qlora_infer/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/home/wujianjian/.cache/huggingface/modules/transformers_modules/base/modeling_chatglm.py\", line 637, in forward\r\n layer_ret = layer(\r\n File \"/data/qinbaoshuai/software/anaconda3/envs/qlora_infer/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/data/qinbaoshuai/software/anaconda3/envs/qlora_infer/lib/python3.8/site-packages/accelerate/hooks.py\", line 165, in new_forward\r\n output = old_forward(*args, **kwargs)\r\n File \"/home/wujianjian/.cache/huggingface/modules/transformers_modules/base/modeling_chatglm.py\", line 562, in forward\r\n mlp_output = self.mlp(layernorm_output)\r\n File \"/data/qinbaoshuai/software/anaconda3/envs/qlora_infer/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/data/qinbaoshuai/software/anaconda3/envs/qlora_infer/lib/python3.8/site-packages/accelerate/hooks.py\", line 165, in new_forward\r\n output = old_forward(*args, **kwargs)\r\n File \"/home/wujianjian/.cache/huggingface/modules/transformers_modules/base/modeling_chatglm.py\", line 498, in forward\r\n output = self.dense_4h_to_h(intermediate_parallel)\r\n File \"/data/qinbaoshuai/software/anaconda3/envs/qlora_infer/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\r\n return 
forward_call(*args, **kwargs)\r\n File \"/data/qinbaoshuai/software/anaconda3/envs/qlora_infer/lib/python3.8/site-packages/accelerate/hooks.py\", line 165, in new_forward\r\n output = old_forward(*args, **kwargs)\r\n File \"/data/qinbaoshuai/software/anaconda3/envs/qlora_infer/lib/python3.8/site-packages/bitsandbytes/nn/modules.py\", line 242, in forward\r\n out = bnb.matmul(x, self.weight, bias=self.bias, state=self.state)\r\n File \"/data/qinbaoshuai/software/anaconda3/envs/qlora_infer/lib/python3.8/site-packages/bitsandbytes/autograd/_functions.py\", line 488, in matmul\r\n return MatMul8bitLt.apply(A, B, out, bias, state)\r\n File \"/data/qinbaoshuai/software/anaconda3/envs/qlora_infer/lib/python3.8/site-packages/torch/autograd/function.py\", line 506, in apply\r\n return super().apply(*args, **kwargs) # type: ignore[misc]\r\n File \"/data/qinbaoshuai/software/anaconda3/envs/qlora_infer/lib/python3.8/site-packages/bitsandbytes/autograd/_functions.py\", line 397, in forward\r\n output += torch.matmul(subA, state.subB)\r\nRuntimeError: mat1 and mat2 shapes cannot be multiplied (1x15 and 6x4096)\r\n```\r\n\r\n@amyeroberts ", "> Hi @MickeyJ1002, thanks for raising this issue!\r\n> \r\n> Could you make sure to follow the issue template, and provide a minimal code snippet we can reproduce the error and the running environment it occurs in? Are my statements about the respective bugs [here](https://github.com/huggingface/transformers/issues/25197#issuecomment-1665352607) still correct?\r\n> \r\n> The linked issues are useful. However to be able to debug we really need to know precisely the error your encountering, with a detailed description of the bug or unexpected behaviour, any observations and error messages, as well as what you've tried so far.\r\n\r\nExactly, the statements about the respective bugs [here](https://github.com/huggingface/transformers/issues/25197#issuecomment-1665352607) are still correct", "In this case - I'm going to hand over to @younesbelkada who was on the case last time ", "Hi @MickeyJ1002 \r\nApologies for the delay !\r\nTo load 8-bit / 4-bit models with multi-thread you need to load entirely the model on each GPU. 
Can you try out the snippet below with latest bitsandbytes / accelerate / transformers (`pip install -U accelerate transformers bitsandbytes`)\r\n\r\n```python\r\nimport os\r\nos.environ['CUDA_VISIBLE_DEVICES'] = '0,1,2,3'\r\nfrom transformers import AutoTokenizer, AutoModel, TextIteratorStreamer\r\nfrom accelerate import PartialState\r\nfrom threading import Thread, currentThread\r\n\r\nmodel_path = \"THUDM/chatglm2-6b\"\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)\r\nmodel = AutoModel.from_pretrained(model_path, device_map={'': PartialState().process_index}, trust_remote_code=True, load_in_4bit=True)\r\n\r\ndef infer(prompt):\r\n inputs = tokenizer(prompt, return_tensors='pt')\r\n inputs = inputs.to(model.device)\r\n print('before generate')\r\n t = currentThread()\r\n streamer = TextIteratorStreamer(tokenizer)\r\n generation_kwargs = dict(inputs, streamer=streamer, max_length=2048)\r\n thread = Thread(target=model.generate, kwargs=generation_kwargs)\r\n thread.start()\r\n print(\"------******-------\")\r\n for new_text in streamer:\r\n print(\"thread id:\", t.ident ,\"new text:\",new_text)\r\n print(\"------******-------\")\r\n\r\nif __name__ == '__main__':\r\n prompt1 = 'ε†™δΈ€η―‡ε…³δΊŽι»„ιΉ€ζ₯Όηš„800ε­—δ½œζ–‡'\r\n prompt2 = 'Describe each state in the United States in detail'\r\n t1 = Thread(target=infer, args=(prompt1,))\r\n t2 = Thread(target=infer, args=(prompt2,))\r\n t1.start()\r\n # time.sleep(5)\r\n t2.start()\r\n t1.join()\r\n t2.join()\r\n```", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,700
1,707
null
NONE
null
> till persists I have tried almost all version combinations and this problem still persists. Only with transformers==4.31.0 was the [multi-thread issue](https://github.com/huggingface/transformers/issues/25197) resolved, but the GPU memory was not reduced [#25228](https://github.com/huggingface/transformers/issues/25228) _Originally posted by @MickeyJ1002 in https://github.com/huggingface/transformers/issues/25197#issuecomment-1813702796_
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27525/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27525/timeline
reopened
null
null
https://api.github.com/repos/huggingface/transformers/issues/27524
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27524/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27524/comments
https://api.github.com/repos/huggingface/transformers/issues/27524/events
https://github.com/huggingface/transformers/pull/27524
1,995,863,194
PR_kwDOCUB6oc5fk7dT
27,524
Successfully Resolved The ZeroDivisionError Exception.
{ "login": "hi-sushanta", "id": 93595990, "node_id": "U_kgDOBZQpVg", "avatar_url": "https://avatars.githubusercontent.com/u/93595990?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hi-sushanta", "html_url": "https://github.com/hi-sushanta", "followers_url": "https://api.github.com/users/hi-sushanta/followers", "following_url": "https://api.github.com/users/hi-sushanta/following{/other_user}", "gists_url": "https://api.github.com/users/hi-sushanta/gists{/gist_id}", "starred_url": "https://api.github.com/users/hi-sushanta/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hi-sushanta/subscriptions", "organizations_url": "https://api.github.com/users/hi-sushanta/orgs", "repos_url": "https://api.github.com/users/hi-sushanta/repos", "events_url": "https://api.github.com/users/hi-sushanta/events{/privacy}", "received_events_url": "https://api.github.com/users/hi-sushanta/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I'm always looking for ways to improve my code, so please don't hesitate to share your thoughts.", "@hi-sushanta It all looks good to me! I can see there's an error on the documentation build unrelated to this PR. Last week there was a fix pushed to `main`. Could you rebase to include that and trigger another CI run? Once all green we should be good to merge! ", "@hi-sushanta I think the quality failures are coming from some recent changes to our formatting in the library: we've recently started using `ruff` instead of black. To align with the updates you'll need to: \r\n* Uninstall black: `pip uninstall black` \r\n* Install any needed updates: `pip install -e .[quality]`\r\n* Re-run formatting: `make fixup`", "now all checked pass.\r\n" ]
1,700
1,700
1,700
CONTRIBUTOR
null
I noticed an issue in the issue section and addressed it by adding a small code snippet. Fixes #27513 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27524/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27524/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27524", "html_url": "https://github.com/huggingface/transformers/pull/27524", "diff_url": "https://github.com/huggingface/transformers/pull/27524.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27524.patch", "merged_at": 1700844908000 }
https://api.github.com/repos/huggingface/transformers/issues/27523
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27523/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27523/comments
https://api.github.com/repos/huggingface/transformers/issues/27523/events
https://github.com/huggingface/transformers/pull/27523
1,995,771,935
PR_kwDOCUB6oc5fkn02
27,523
Revert "add attention_mask and position_ids in assisted model"
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @patrickvonplaten . Sorry for making this mistake, would you please wait for the new [PR](https://github.com/huggingface/transformers/pull/27503) merge, it should have fixed the problem.", "@patrickvonplaten -- @jiqing-feng's PR should fix it :) The PR being reverted here did break speculative decoding for encoder-decoder architectures. CI did not caught it since the non-slow test is based on sample and it had a lucky run.", "Merging so that distil whisper works again" ]
1,700
1,700
1,700
MEMBER
null
Reverts huggingface/transformers#26892 as it breaks speculative decoding of Whisper
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27523/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27523/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27523", "html_url": "https://github.com/huggingface/transformers/pull/27523", "diff_url": "https://github.com/huggingface/transformers/pull/27523.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27523.patch", "merged_at": 1700142640000 }
https://api.github.com/repos/huggingface/transformers/issues/27522
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27522/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27522/comments
https://api.github.com/repos/huggingface/transformers/issues/27522/events
https://github.com/huggingface/transformers/pull/27522
1,995,511,033
PR_kwDOCUB6oc5fjusJ
27,522
fix no sequence length models error
{ "login": "AdamLouly", "id": 27873459, "node_id": "MDQ6VXNlcjI3ODczNDU5", "avatar_url": "https://avatars.githubusercontent.com/u/27873459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AdamLouly", "html_url": "https://github.com/AdamLouly", "followers_url": "https://api.github.com/users/AdamLouly/followers", "following_url": "https://api.github.com/users/AdamLouly/following{/other_user}", "gists_url": "https://api.github.com/users/AdamLouly/gists{/gist_id}", "starred_url": "https://api.github.com/users/AdamLouly/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AdamLouly/subscriptions", "organizations_url": "https://api.github.com/users/AdamLouly/orgs", "repos_url": "https://api.github.com/users/AdamLouly/repos", "events_url": "https://api.github.com/users/AdamLouly/events{/privacy}", "received_events_url": "https://api.github.com/users/AdamLouly/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@amyeroberts can you check this when you have time?\r\nThank you :) ", "Hi @AdamLouly - thanks for opening this PR and apologies for the delay! \r\n\r\nI'm not sure that this is where we want to apply the fix: not having a fixed sequence length is a feature of some models. I think it would make more sense to change the setting of `block_size` so that it isn't -1 if `max_pos_embeddings` is -1.", "@amyeroberts I changed it, let me know if this works." ]
1,700
1,702
1,702
CONTRIBUTOR
null
# What does this PR do? Models with no sequence length limit will fail to create a dataset because the block_size will be -1, and it will raise: `ValueError: num_samples should be a positive integer value, but got num_samples=0`. This PR handles the case where max_position_embeddings is -1 by setting block_size to a default value. This will fix this issue: https://github.com/huggingface/transformers/issues/27521
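As a rough illustration of the behaviour this PR describes (not the merged diff; the variable names below are stand-ins for the example script's config and CLI arguments):

```python
# Stand-in values; in the real example script these come from the model config and data args.
max_pos_embeddings = -1   # e.g. XLNet reports no fixed sequence-length limit
requested_block_size = None

if requested_block_size is None:
    if max_pos_embeddings > 0:
        block_size = min(1024, max_pos_embeddings)
    else:
        block_size = 1024  # fall back to a sane default instead of -1
else:
    block_size = requested_block_size

print(block_size)  # 1024 rather than -1, so grouping texts no longer yields 0 samples
```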
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27522/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27522/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27522", "html_url": "https://github.com/huggingface/transformers/pull/27522", "diff_url": "https://github.com/huggingface/transformers/pull/27522.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27522.patch", "merged_at": 1702317687000 }
https://api.github.com/repos/huggingface/transformers/issues/27521
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27521/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27521/comments
https://api.github.com/repos/huggingface/transformers/issues/27521/events
https://github.com/huggingface/transformers/issues/27521
1,995,509,268
I_kwDOCUB6oc528Q4U
27,521
Data Issues related to models with no sequence length limit (e.g., XLNet)
{ "login": "AdamLouly", "id": 27873459, "node_id": "MDQ6VXNlcjI3ODczNDU5", "avatar_url": "https://avatars.githubusercontent.com/u/27873459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AdamLouly", "html_url": "https://github.com/AdamLouly", "followers_url": "https://api.github.com/users/AdamLouly/followers", "following_url": "https://api.github.com/users/AdamLouly/following{/other_user}", "gists_url": "https://api.github.com/users/AdamLouly/gists{/gist_id}", "starred_url": "https://api.github.com/users/AdamLouly/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AdamLouly/subscriptions", "organizations_url": "https://api.github.com/users/AdamLouly/orgs", "repos_url": "https://api.github.com/users/AdamLouly/repos", "events_url": "https://api.github.com/users/AdamLouly/events{/privacy}", "received_events_url": "https://api.github.com/users/AdamLouly/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,700
1,703
1,703
CONTRIBUTOR
null
### System Info nightly transformers ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Models like XLNet, which have no sequence limit, set max_position_embeddings to -1. eg: **https://github.com/huggingface/transformers/blob/acc394c4f5e1283c19783581790b3dc3105a3697/src/transformers/models/xlnet/configuration_xlnet.py#L235** This causes block_size to be -1 during text grouping, resulting in an empty dataset and a `ValueError: num_samples should be a positive integer value, but got num_samples=0` error. after the latest change to pull the block size from max_position_embeddings was introduced, this case was not handled. I'm creating a PR to fix that issue. ### Expected behavior `ValueError: num_samples should be a positive integer value, but got num_samples=0` error.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27521/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27521/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27520
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27520/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27520/comments
https://api.github.com/repos/huggingface/transformers/issues/27520/events
https://github.com/huggingface/transformers/pull/27520
1,995,251,529
PR_kwDOCUB6oc5fi18A
27,520
docs: add docs for map, and add num procs to load_dataset
{ "login": "pphuc25", "id": 81808312, "node_id": "MDQ6VXNlcjgxODA4MzEy", "avatar_url": "https://avatars.githubusercontent.com/u/81808312?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pphuc25", "html_url": "https://github.com/pphuc25", "followers_url": "https://api.github.com/users/pphuc25/followers", "following_url": "https://api.github.com/users/pphuc25/following{/other_user}", "gists_url": "https://api.github.com/users/pphuc25/gists{/gist_id}", "starred_url": "https://api.github.com/users/pphuc25/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pphuc25/subscriptions", "organizations_url": "https://api.github.com/users/pphuc25/orgs", "repos_url": "https://api.github.com/users/pphuc25/repos", "events_url": "https://api.github.com/users/pphuc25/events{/privacy}", "received_events_url": "https://api.github.com/users/pphuc25/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27520). All of your documentation changes will be reflected on that endpoint." ]
1,700
1,700
1,700
CONTRIBUTOR
null
Hi, the `.map` call is applied to both the train and eval datasets, so I added descriptions to the train and eval `.map` calls to make them clear and to make the logs in the terminal easier to read. I also added `num_proc` to `load_dataset` to make it faster. I would like to cc @stevhliu to review my PR.
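As a small, hedged illustration of the two options this PR touches (the dataset and model names below are placeholders, not the ones used in the example script):

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# num_proc parallelises download/preparation of the dataset
raw_datasets = load_dataset("imdb", num_proc=4)
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def preprocess(batch):
    return tokenizer(batch["text"], truncation=True)

# separate desc strings keep the train / eval progress bars distinguishable in the logs
train_dataset = raw_datasets["train"].map(preprocess, batched=True, num_proc=4, desc="tokenize train")
eval_dataset = raw_datasets["test"].map(preprocess, batched=True, num_proc=4, desc="tokenize eval")
```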
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27520/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27520/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27520", "html_url": "https://github.com/huggingface/transformers/pull/27520", "diff_url": "https://github.com/huggingface/transformers/pull/27520.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27520.patch", "merged_at": 1700140579000 }
https://api.github.com/repos/huggingface/transformers/issues/27519
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27519/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27519/comments
https://api.github.com/repos/huggingface/transformers/issues/27519/events
https://github.com/huggingface/transformers/pull/27519
1,994,939,611
PR_kwDOCUB6oc5fhxe6
27,519
Incorrect setting for num_beams in translation and summarization examples
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,700
1,700
1,700
MEMBER
null
Our translation and summarization examples use `None` as a default value for `num_beams`, but this is no longer valid now that we use `GenerationConfig`. This PR fixes the examples, and also adds a guard in `GenerationConfig` so that `None` values there are replaced with `1` (and a warning is thrown). Also, while I'm fixing example issues, some of our TF examples use `main_process_first` - this is a Torch function, and is not necessary because TF handles distributed training very differently. Since it looks like it could cause problems depending on whether Torch is also present, I'm removing it from all of the TF examples. Fixes #27505
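A minimal sketch of the kind of guard described (the helper name is hypothetical; this is not the code added to `GenerationConfig`):

```python
import warnings

def sanitize_num_beams(num_beams):
    # Hypothetical helper mirroring the described behaviour: replace None with 1 and warn.
    if num_beams is None:
        warnings.warn("`num_beams` was None; defaulting to 1.", UserWarning)
        num_beams = 1
    return num_beams

print(sanitize_num_beams(None))  # 1, with a warning, instead of a downstream failure
```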
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27519/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27519/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27519", "html_url": "https://github.com/huggingface/transformers/pull/27519", "diff_url": "https://github.com/huggingface/transformers/pull/27519.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27519.patch", "merged_at": 1700072334000 }
https://api.github.com/repos/huggingface/transformers/issues/27518
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27518/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27518/comments
https://api.github.com/repos/huggingface/transformers/issues/27518/events
https://github.com/huggingface/transformers/pull/27518
1,994,927,630
PR_kwDOCUB6oc5fhu1g
27,518
translate model.md to chinese
{ "login": "statelesshz", "id": 28150734, "node_id": "MDQ6VXNlcjI4MTUwNzM0", "avatar_url": "https://avatars.githubusercontent.com/u/28150734?v=4", "gravatar_id": "", "url": "https://api.github.com/users/statelesshz", "html_url": "https://github.com/statelesshz", "followers_url": "https://api.github.com/users/statelesshz/followers", "following_url": "https://api.github.com/users/statelesshz/following{/other_user}", "gists_url": "https://api.github.com/users/statelesshz/gists{/gist_id}", "starred_url": "https://api.github.com/users/statelesshz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/statelesshz/subscriptions", "organizations_url": "https://api.github.com/users/statelesshz/orgs", "repos_url": "https://api.github.com/users/statelesshz/repos", "events_url": "https://api.github.com/users/statelesshz/events{/privacy}", "received_events_url": "https://api.github.com/users/statelesshz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,700
1,700
1,700
CONTRIBUTOR
null
# What does this PR do? Part of https://github.com/huggingface/transformers/issues/26803 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 --> cc @stevhliu
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27518/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27518/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27518", "html_url": "https://github.com/huggingface/transformers/pull/27518", "diff_url": "https://github.com/huggingface/transformers/pull/27518.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27518.patch", "merged_at": 1700096344000 }
https://api.github.com/repos/huggingface/transformers/issues/27517
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27517/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27517/comments
https://api.github.com/repos/huggingface/transformers/issues/27517/events
https://github.com/huggingface/transformers/issues/27517
1,994,919,889
I_kwDOCUB6oc526A_R
27,517
Loss calculation bug in MBartForCausalLM
{ "login": "ZL92", "id": 40026571, "node_id": "MDQ6VXNlcjQwMDI2NTcx", "avatar_url": "https://avatars.githubusercontent.com/u/40026571?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ZL92", "html_url": "https://github.com/ZL92", "followers_url": "https://api.github.com/users/ZL92/followers", "following_url": "https://api.github.com/users/ZL92/following{/other_user}", "gists_url": "https://api.github.com/users/ZL92/gists{/gist_id}", "starred_url": "https://api.github.com/users/ZL92/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ZL92/subscriptions", "organizations_url": "https://api.github.com/users/ZL92/orgs", "repos_url": "https://api.github.com/users/ZL92/repos", "events_url": "https://api.github.com/users/ZL92/events{/privacy}", "received_events_url": "https://api.github.com/users/ZL92/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @ZL92 \r\nThanks for bringing up this discussion\r\nPer my understanding since these models (bart, mbart), are encoder-decoder base models, the logic is slightly different, instead of shifting the labels during the calculation of the loss, they are shifted before being fed to the decoder e.g. here for Bart: https://github.com/huggingface/transformers/blob/main/src/transformers/models/bart/modeling_bart.py#L1433\r\n\r\n", "@younesbelkada Thanks for your reply! Sorry for the unclear description, I mean the Causal Language Model classes for Mbart and Bart, and I added the link to the source code into the description. \r\n\r\nIf I understand correctly, in building causal language models, the [DataCollatorForLanguageModeling](https://github.com/huggingface/transformers/blob/514de24abfd4416aeba6a6455ad5920f57f3567d/src/transformers/data/data_collator.py#L744) copies the input_ids as labels, and do shifting in loss calculation. Llama and GPT2 do so, but Mbart and Bart do not.\r\n\r\n", "Hi @ZL92 \r\nThanks ! \r\nOK I had a deeper look, so it appears that:\r\n\r\n1- `MbartForCausalLM` has a `MbartDecoderWrapper` attribute here: https://github.com/huggingface/transformers/blob/f1185a4a73a03d238afce1b40456588d22520dd2/src/transformers/models/mbart/modeling_mbart.py#L1914\r\n2- That decoder wrapper is a wrapper around `MbartDecoder` here: https://github.com/huggingface/transformers/blob/f1185a4a73a03d238afce1b40456588d22520dd2/src/transformers/models/mbart/modeling_mbart.py#L1899\r\n3- That decoder do not call `shift_labels` indeed\r\n\r\nTherefore the solution for that corner case (for now) is to subclass `DataCollatorForLanguageModeling` and do it inside the data collator call while we fix it, not sure however what should be the fix as always shifting input ids to the right will result in a non-backward compatible behaviour ", "Hi @younesbelkada,\r\n\r\nThanks for your solution! \r\n\r\nI tried to shift the labels as shown in the following to have not that much modification of the Causal LM training pipeline. The training looks good now. \r\n```\r\nloss = None\r\nif labels is not None:\r\n labels = labels.to(logits.device)\r\n\r\n # Shift so that tokens < n predict n\r\n shift_logits = logits[..., :-1, :].contiguous()\r\n shift_labels = labels[..., 1:].contiguous()\r\n\r\n loss_fct = CrossEntropyLoss()\r\n loss = loss_fct(shift_logits.view(-1, self.config.vocab_size), shift_labels.view(-1))\r\n```\r\n\r\nLook forward to the fix! ", "Thanks ! cc @ArthurZucker I wonder how we can fix that in a BC manner", "If it's fix it's fine to break πŸ˜‰ ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,700
1,703
1,703
NONE
null
### System Info transformers==4.35.1 ### Who can help? @ArthurZucker @younesbelkada ### Information - [x] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction The story is that I followed the [tutorial](https://huggingface.co/docs/transformers/tasks/language_modeling) for building a causal language model and replaced the distleGPT2 with MbartForCausalLM. The model seems to copy the input during training. Then I found one bug in the loss calculation of Mbart. Specifically, the label is the copy of input_ids with [DataCollatorForLanguageModeling](https://github.com/huggingface/transformers/issues/27517#issue-1994919889) (with mlm=False). The LM models like GPT2 and Llama shift the labels while MBart doesn't. The loss calculation of MbartForCausalLM [code](https://github.com/huggingface/transformers/blob/f1185a4a73a03d238afce1b40456588d22520dd2/src/transformers/models/mbart/modeling_mbart.py#L2066) ``` loss = None if labels is not None: labels = labels.to(logits.device) loss_fct = CrossEntropyLoss() loss = loss_fct(logits.view(-1, self.config.vocab_size), labels.view(-1)) ``` While that in LlamaForCausalLM [code](https://github.com/huggingface/transformers/blob/f1185a4a73a03d238afce1b40456588d22520dd2/src/transformers/models/llama/modeling_llama.py#L1056) ``` loss = None if labels is not None: # Shift so that tokens < n predict n shift_logits = logits[..., :-1, :].contiguous() shift_labels = labels[..., 1:].contiguous() # Flatten the tokens loss_fct = CrossEntropyLoss() shift_logits = shift_logits.view(-1, self.config.vocab_size) shift_labels = shift_labels.view(-1) # Enable model parallelism shift_labels = shift_labels.to(shift_logits.device) loss = loss_fct(shift_logits, shift_labels) ``` I find the same problem also in BartForCausalLM. ### Expected behavior Add the shifting in MbartForCausalLM. For example: ``` loss = None if labels is not None: labels = labels.to(logits.device) # Shift so that tokens < n predict n shift_logits = logits[..., :-1, :].contiguous() shift_labels = labels[..., 1:].contiguous() loss_fct = CrossEntropyLoss() loss = loss_fct(shift_logits.view(-1, self.config.vocab_size), shift_labels.view(-1)) ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27517/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27517/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27516
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27516/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27516/comments
https://api.github.com/repos/huggingface/transformers/issues/27516/events
https://github.com/huggingface/transformers/pull/27516
1,994,901,706
PR_kwDOCUB6oc5fhpFN
27,516
FlashAttention: remove Nvidia-only GPU wording in favor of more generic documentation
{ "login": "mfuntowicz", "id": 2241520, "node_id": "MDQ6VXNlcjIyNDE1MjA=", "avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mfuntowicz", "html_url": "https://github.com/mfuntowicz", "followers_url": "https://api.github.com/users/mfuntowicz/followers", "following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}", "gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}", "starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions", "organizations_url": "https://api.github.com/users/mfuntowicz/orgs", "repos_url": "https://api.github.com/users/mfuntowicz/repos", "events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}", "received_events_url": "https://api.github.com/users/mfuntowicz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,700
1,703
1,703
MEMBER
null
This PR adds AMD as a supported target GPU in the documentation related to Flash Attention.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27516/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27516/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27516", "html_url": "https://github.com/huggingface/transformers/pull/27516", "diff_url": "https://github.com/huggingface/transformers/pull/27516.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27516.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/27515
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27515/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27515/comments
https://api.github.com/repos/huggingface/transformers/issues/27515/events
https://github.com/huggingface/transformers/pull/27515
1,994,836,079
PR_kwDOCUB6oc5fhaxI
27,515
Fix wav2vec2 params
{ "login": "muellerzr", "id": 7831895, "node_id": "MDQ6VXNlcjc4MzE4OTU=", "avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muellerzr", "html_url": "https://github.com/muellerzr", "followers_url": "https://api.github.com/users/muellerzr/followers", "following_url": "https://api.github.com/users/muellerzr/following{/other_user}", "gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}", "starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions", "organizations_url": "https://api.github.com/users/muellerzr/orgs", "repos_url": "https://api.github.com/users/muellerzr/repos", "events_url": "https://api.github.com/users/muellerzr/events{/privacy}", "received_events_url": "https://api.github.com/users/muellerzr/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,700
1,700
1,700
CONTRIBUTOR
null
# What does this PR do? Caught on the Accelerate CI, the `wav2vec2` script doesn't take in `fp16` so removes it as a test arg. Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @amyeroberts
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27515/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27515/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27515", "html_url": "https://github.com/huggingface/transformers/pull/27515", "diff_url": "https://github.com/huggingface/transformers/pull/27515.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27515.patch", "merged_at": 1700058245000 }
https://api.github.com/repos/huggingface/transformers/issues/27514
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27514/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27514/comments
https://api.github.com/repos/huggingface/transformers/issues/27514/events
https://github.com/huggingface/transformers/issues/27514
1,994,721,286
I_kwDOCUB6oc525QgG
27,514
Add new Model - SegGPT: Segmenting Everything In Context
{ "login": "raghavanone", "id": 115454562, "node_id": "U_kgDOBuGyYg", "avatar_url": "https://avatars.githubusercontent.com/u/115454562?v=4", "gravatar_id": "", "url": "https://api.github.com/users/raghavanone", "html_url": "https://github.com/raghavanone", "followers_url": "https://api.github.com/users/raghavanone/followers", "following_url": "https://api.github.com/users/raghavanone/following{/other_user}", "gists_url": "https://api.github.com/users/raghavanone/gists{/gist_id}", "starred_url": "https://api.github.com/users/raghavanone/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/raghavanone/subscriptions", "organizations_url": "https://api.github.com/users/raghavanone/orgs", "repos_url": "https://api.github.com/users/raghavanone/repos", "events_url": "https://api.github.com/users/raghavanone/events{/privacy}", "received_events_url": "https://api.github.com/users/raghavanone/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
open
false
null
[]
[ "Hey, would you mind adding a link to the paper, the code and the released checkpoints? πŸ˜‰ ", "@ArthurZucker \r\n\r\n- [Paper](https://arxiv.org/pdf/2304.03284.pdf)\r\n- [Code](https://github.com/baaivision/Painter/tree/main/SegGPT/SegGPT_inference)\r\n- [Checkpoint](https://huggingface.co/BAAI/SegGPT/blob/main/seggpt_vit_large.pth)\r\n" ]
1,700
1,701
null
CONTRIBUTOR
null
### Model description Add new Model - SegGPT: Segmenting Everything In Context The model code and weights are available. - Paper : https://arxiv.org/pdf/2304.03284.pdf - Code : https://github.com/baaivision/Painter/tree/main/SegGPT/SegGPT_inference - Weights : https://huggingface.co/BAAI/SegGPT/blob/main/seggpt_vit_large.pth I will be implementing this model. ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation _No response_
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27514/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27514/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/27513
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27513/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27513/comments
https://api.github.com/repos/huggingface/transformers/issues/27513/events
https://github.com/huggingface/transformers/issues/27513
1,994,385,922
I_kwDOCUB6oc523-oC
27,513
ZeroDivisionError in NotebookProgressBar
{ "login": "pete88b", "id": 11458288, "node_id": "MDQ6VXNlcjExNDU4Mjg4", "avatar_url": "https://avatars.githubusercontent.com/u/11458288?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pete88b", "html_url": "https://github.com/pete88b", "followers_url": "https://api.github.com/users/pete88b/followers", "following_url": "https://api.github.com/users/pete88b/following{/other_user}", "gists_url": "https://api.github.com/users/pete88b/gists{/gist_id}", "starred_url": "https://api.github.com/users/pete88b/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pete88b/subscriptions", "organizations_url": "https://api.github.com/users/pete88b/orgs", "repos_url": "https://api.github.com/users/pete88b/repos", "events_url": "https://api.github.com/users/pete88b/events{/privacy}", "received_events_url": "https://api.github.com/users/pete88b/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @pete88b please feel free to open a PR and ping me for review! ", "Hi @hi-sushanta #27524 fixes `update` but misses the fix to `self.label += f\", {1/self.average_time_per_item:.2f} it/s\"` in `update_bar` - there's more detail in the linked notebook in case it helps.\r\nif it's not too late, could you update your PR to include both (o:", "Thank you, PeterπŸ€—.\r\nI updated my code similar to you did." ]
1,700
1,700
1,700
NONE
null
### System Info - `transformers` version: 4.35.1 - Platform: Windows-10-10.0.19041-SP0 - Python version: 3.9.0 - Huggingface_hub version: 0.17.3 - Safetensors version: 0.3.1 - Accelerate version: 0.20.3 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes (GeForce 3080 laptop) - Using distributed or parallel set-up in script?: no ### Who can help? Hi @muellerzr - hope this is an easy one for you (o: I'm sometimes seeing a divide by zero error when NotebookProgressBar updates. I've suggested changes to `transformers/utils/notebook.py` in [my notebook](https://github.com/pete88b/spelling-mistake-maker/blob/main/tmp/NotebookProgressBar_divide_by_zero_issue.ipynb). I'm guessing that for a change this simple, it'll be easier for you to make the change - but please let me know if you prefer I make a PR for this change ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Please see [my notebook](https://github.com/pete88b/spelling-mistake-maker/blob/main/tmp/NotebookProgressBar_divide_by_zero_issue.ipynb) ### Expected behavior We should not get a divide by zero error when `average_time_per_item` is zero
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27513/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27513/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27512
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27512/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27512/comments
https://api.github.com/repos/huggingface/transformers/issues/27512/events
https://github.com/huggingface/transformers/pull/27512
1,994,371,746
PR_kwDOCUB6oc5ff1kT
27,512
Make some jobs run on the GitHub Actions runners
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,700
1,700
1,700
COLLABORATOR
null
# What does this PR do? Our self-hosted runners also have `ubuntu-latest` labels, so jobs are sometimes dispatched to those runners, which have a missing `pip` issue. This PR uses `ubuntu-22.04` to avoid this. Later, we should probably change the labels of our hosted runners to avoid such collisions.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27512/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27512/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27512", "html_url": "https://github.com/huggingface/transformers/pull/27512", "diff_url": "https://github.com/huggingface/transformers/pull/27512.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27512.patch", "merged_at": 1700041397000 }
https://api.github.com/repos/huggingface/transformers/issues/27511
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27511/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27511/comments
https://api.github.com/repos/huggingface/transformers/issues/27511/events
https://github.com/huggingface/transformers/pull/27511
1,994,315,216
PR_kwDOCUB6oc5ffpTX
27,511
[`CircleCI`] skip test_assisted_decoding_sample for everyone
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@ArthurZucker [this PR](https://github.com/huggingface/transformers/pull/27503) should fix it!\r\n\r\nI decided to rely on the tests instead of asking the contributor to double-check, but forgot that the tests were stochastic -- they rely on `sample`. The PR that caused this crash had a lucky CI run, so the problem got merged while undetected.\r\n\r\nIn other words, I should have been more careful :)" ]
1,700
1,700
1,700
COLLABORATOR
null
# What does this PR do? cc @gante I'm having a look at why this is failing but skipping it for now (it passes when run standalone).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27511/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27511/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27511", "html_url": "https://github.com/huggingface/transformers/pull/27511", "diff_url": "https://github.com/huggingface/transformers/pull/27511.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27511.patch", "merged_at": 1700039872000 }
https://api.github.com/repos/huggingface/transformers/issues/27510
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27510/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27510/comments
https://api.github.com/repos/huggingface/transformers/issues/27510/events
https://github.com/huggingface/transformers/pull/27510
1,994,220,737
PR_kwDOCUB6oc5ffU31
27,510
Update modeling_llama.py
{ "login": "askxiaozhang", "id": 112556925, "node_id": "U_kgDOBrV7fQ", "avatar_url": "https://avatars.githubusercontent.com/u/112556925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/askxiaozhang", "html_url": "https://github.com/askxiaozhang", "followers_url": "https://api.github.com/users/askxiaozhang/followers", "following_url": "https://api.github.com/users/askxiaozhang/following{/other_user}", "gists_url": "https://api.github.com/users/askxiaozhang/gists{/gist_id}", "starred_url": "https://api.github.com/users/askxiaozhang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/askxiaozhang/subscriptions", "organizations_url": "https://api.github.com/users/askxiaozhang/orgs", "repos_url": "https://api.github.com/users/askxiaozhang/repos", "events_url": "https://api.github.com/users/askxiaozhang/events{/privacy}", "received_events_url": "https://api.github.com/users/askxiaozhang/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @askxiaozhang, \r\n\r\nSo that we can understand whether this is the best fix for the bug - could you share your running environment when encountering this bug (run `transformers-cli env` in the terminal and copy-paste the output) as well as a minimal code snippet to reproduce the error? \r\n\r\nNote: any import of `fx` needs to be protected so that things don't crash if it's not installed e.g. [here ](https://github.com/huggingface/transformers/blob/3d1a7bf4761e332e78e07d147bacb0d26a522187/src/transformers/models/llama/modeling_llama.py#L54C1-L54C1)", "> Hi @askxiaozhang,\r\n> \r\n> So that we can understand whether this is the best fix for the bug - could you share your running environment when encountering this bug (run `transformers-cli env` in the terminal and copy-paste the output) as well as a minimal code snippet to reproduce the error?\r\n> \r\n> Note: any import of `fx` needs to be protected so that things don't crash if it's not installed e.g. [here ](https://github.com/huggingface/transformers/blob/3d1a7bf4761e332e78e07d147bacb0d26a522187/src/transformers/models/llama/modeling_llama.py#L54C1-L54C1)\r\nOK,thanks for your advice. There are some infomation:\r\n- `transformers` version: 4.36.0.dev0\r\n- Platform: Linux-5.4.119-19.0009.28-x86_64-with-glibc2.17\r\n- Python version: 3.10.4\r\n- Huggingface_hub version: 0.17.3\r\n- Safetensors version: 0.4.0\r\n- Accelerate version: 0.24.1\r\n- Accelerate config: not found\r\n- PyTorch version (GPU?): 1.12.1 (True)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?: <fill in>\r\n- Using distributed or parallel set-up in script?: <fill in>\r\n\r\ncode snippet:\r\n```\r\nfrom transformers import AutoTokenizer, AutoModelForCausalLM\r\ntokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)\r\nmodel = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True).cuda()\r\nmessages=[\r\n { 'role': 'user', 'content': \"write a quick sort algorithm in python\"}\r\n]\r\ninputs = tokenizer.apply_chat_template(messages, return_tensors=\"pt\").to(model.device)\r\noutputs = model.generate(inputs, max_new_tokens=2048, do_sample=False, top_k=50, top_p=0.95, num_return_sequences=1, eos_token_id=32021)\r\nprint(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True))\r\n```\r\n\r\nAfter I runned code:\r\n```\r\nTraceback (most recent call last):\r\n File \"/data/anaconda3/envs/codellama/lib/python3.10/site-packages/transformers-4.36.0.dev0-py3.10.egg/transformers/utils/import_utils.py\", line 1353, in _get_module\r\n return importlib.import_module(\".\" + module_name, self.__name__)\r\n File \"/data/anaconda3/envs/codellama/lib/python3.10/importlib/__init__.py\", line 126, in import_module\r\n return _bootstrap._gcd_import(name[level:], package, level)\r\n File \"<frozen importlib._bootstrap>\", line 1050, in _gcd_import\r\n File \"<frozen importlib._bootstrap>\", line 1027, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 1006, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 688, in _load_unlocked\r\n File \"<frozen importlib._bootstrap_external>\", line 883, in exec_module\r\n File \"<frozen importlib._bootstrap>\", line 241, in _call_with_frames_removed\r\n File \"/data/anaconda3/envs/codellama/lib/python3.10/site-packages/transformers-4.36.0.dev0-py3.10.egg/transformers/models/llama/modeling_llama.py\", 
line 55, in <module>\r\n _prepare_4d_causal_attention_mask = torch.fx.wrap(_prepare_4d_causal_attention_mask)\r\nAttributeError: module 'torch' has no attribute 'fx'\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"/data/jiaqi/DeepSeek-Coder/main.py\", line 49, in <module>\r\n model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True).cuda()\r\n File \"/data/anaconda3/envs/codellama/lib/python3.10/site-packages/transformers-4.36.0.dev0-py3.10.egg/transformers/models/auto/auto_factory.py\", line 565, in from_pretrained\r\n model_class = _get_model_class(config, cls._model_mapping)\r\n File \"/data/anaconda3/envs/codellama/lib/python3.10/site-packages/transformers-4.36.0.dev0-py3.10.egg/transformers/models/auto/auto_factory.py\", line 387, in _get_model_class\r\n supported_models = model_mapping[type(config)]\r\n File \"/data/anaconda3/envs/codellama/lib/python3.10/site-packages/transformers-4.36.0.dev0-py3.10.egg/transformers/models/auto/auto_factory.py\", line 740, in __getitem__\r\n return self._load_attr_from_module(model_type, model_name)\r\n File \"/data/anaconda3/envs/codellama/lib/python3.10/site-packages/transformers-4.36.0.dev0-py3.10.egg/transformers/models/auto/auto_factory.py\", line 754, in _load_attr_from_module\r\n return getattribute_from_module(self._modules[module_name], attr)\r\n File \"/data/anaconda3/envs/codellama/lib/python3.10/site-packages/transformers-4.36.0.dev0-py3.10.egg/transformers/models/auto/auto_factory.py\", line 698, in getattribute_from_module\r\n if hasattr(module, attr):\r\n File \"/data/anaconda3/envs/codellama/lib/python3.10/site-packages/transformers-4.36.0.dev0-py3.10.egg/transformers/utils/import_utils.py\", line 1343, in __getattr__\r\n module = self._get_module(self._class_to_module[name])\r\n File \"/data/anaconda3/envs/codellama/lib/python3.10/site-packages/transformers-4.36.0.dev0-py3.10.egg/transformers/utils/import_utils.py\", line 1355, in _get_module\r\n raise RuntimeError(\r\nRuntimeError: Failed to import transformers.models.llama.modeling_llama because of the following error (look up to see its traceback):\r\nmodule 'torch' has no attribute 'fx'\r\n```\r\n ", "Hi @askxiaozhang. Thanks for providing further details! This issue should be resolved with #27570", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,700
1,703
1,703
NONE
null
When I runned tokenizer.apply_chat_template in runing DeepSeek-Coder,happened this bug.So I Fixed about: Traceback (most recent call last): File "/data/jiaqi/DeepSeek-Coder/main.py", line 79, in <module> model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True).cuda() File "/data/anaconda3/envs/codellama/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 565, in from_pretrained model_class = _get_model_class(config, cls._model_mapping) File "/data/anaconda3/envs/codellama/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 387, in _get_model_class supported_models = model_mapping[type(config)] File "/data/anaconda3/envs/codellama/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 740, in __getitem__ return self._load_attr_from_module(model_type, model_name) File "/data/anaconda3/envs/codellama/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 754, in _load_attr_from_module return getattribute_from_module(self._modules[module_name], attr) File "/data/anaconda3/envs/codellama/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 698, in getattribute_from_module if hasattr(module, attr): File "/data/anaconda3/envs/codellama/lib/python3.10/site-packages/transformers/utils/import_utils.py", line 1335, in __getattr__ module = self._get_module(self._class_to_module[name]) File "/data/anaconda3/envs/codellama/lib/python3.10/site-packages/transformers/utils/import_utils.py", line 1347, in _get_module raise RuntimeError( RuntimeError: Failed to import transformers.models.llama.modeling_llama because of the following error (look up to see its traceback): module 'torch' has no attribute 'fx' # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. 
Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27510/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27510/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27510", "html_url": "https://github.com/huggingface/transformers/pull/27510", "diff_url": "https://github.com/huggingface/transformers/pull/27510.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27510.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/27509
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27509/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27509/comments
https://api.github.com/repos/huggingface/transformers/issues/27509/events
https://github.com/huggingface/transformers/pull/27509
1,994,195,689
PR_kwDOCUB6oc5ffPee
27,509
[`pytest`] Avoid flash attn test marker warning
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,700
1,700
1,700
COLLABORATOR
null
# What does this PR do? Adds the `flash_attn_test` marker to the `pyproject.toml` to remove the following warning: ```python tests/test_modeling_common.py:3131 /home/arthur/transformers/tests/test_modeling_common.py:3131: PytestUnknownMarkWarning: Unknown pytest.mark.flash_attn_test - is this a typo? You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/how-to/mark.html @mark.flash_attn_test tests/test_modeling_common.py:3174 /home/arthur/transformers/tests/test_modeling_common.py:3174: PytestUnknownMarkWarning: Unknown pytest.mark.flash_attn_test - is this a typo? You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/how-to/mark.html @mark.flash_attn_test ``` Allows to de-select them with: ```python pytest ... -m "not flash_attn_test" ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27509/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27509/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27509", "html_url": "https://github.com/huggingface/transformers/pull/27509", "diff_url": "https://github.com/huggingface/transformers/pull/27509.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27509.patch", "merged_at": 1700129587000 }
https://api.github.com/repos/huggingface/transformers/issues/27508
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27508/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27508/comments
https://api.github.com/repos/huggingface/transformers/issues/27508/events
https://github.com/huggingface/transformers/pull/27508
1,994,187,558
PR_kwDOCUB6oc5ffNsI
27,508
[`CI-test_torch`] skip test_tf_from_pt_safetensors and `test_assisted_decoding_sample`
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27508). All of your documentation changes will be reflected on that endpoint." ]
1,700
1,700
1,700
COLLABORATOR
null
# What does this PR do? skip test_tf_from_pt_safetensors and test_assisted_decoding_sample
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27508/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27508/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27508", "html_url": "https://github.com/huggingface/transformers/pull/27508", "diff_url": "https://github.com/huggingface/transformers/pull/27508.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27508.patch", "merged_at": 1700033969000 }
https://api.github.com/repos/huggingface/transformers/issues/27507
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27507/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27507/comments
https://api.github.com/repos/huggingface/transformers/issues/27507/events
https://github.com/huggingface/transformers/issues/27507
1,994,185,630
I_kwDOCUB6oc523Nue
27,507
The results of tensorrt and model.generate
{ "login": "lin-lcx", "id": 82073001, "node_id": "MDQ6VXNlcjgyMDczMDAx", "avatar_url": "https://avatars.githubusercontent.com/u/82073001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lin-lcx", "html_url": "https://github.com/lin-lcx", "followers_url": "https://api.github.com/users/lin-lcx/followers", "following_url": "https://api.github.com/users/lin-lcx/following{/other_user}", "gists_url": "https://api.github.com/users/lin-lcx/gists{/gist_id}", "starred_url": "https://api.github.com/users/lin-lcx/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lin-lcx/subscriptions", "organizations_url": "https://api.github.com/users/lin-lcx/orgs", "repos_url": "https://api.github.com/users/lin-lcx/repos", "events_url": "https://api.github.com/users/lin-lcx/events{/privacy}", "received_events_url": "https://api.github.com/users/lin-lcx/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,700
1,700
1,700
NONE
null
The Torch model inference uses ``` model = VisionEncoderDecoderModel.from_pretrained( nougat-base) outputs = model.generate( pixel_values.to(device), decoder_input_ids=decoder_input_ids.to(device), # max_length=model.decoder.config.max_length, early_stopping=True, pad_token_id=tokenizer.pad_token_id, eos_token_id=tokenizer.eos_token_id, use_cache=True, num_beams=1, bad_words_ids=[[tokenizer.unk_token_id]], return_dict_in_generate=True, ) ``` Now I use TensorRT to accelerate it, and what I get is the raw output of the network (for example, I have 50,000 tokens, and the TensorRT output has size (50000,)). How should I feed this into `model.generate`? Or is there another way to get the same result as `model.generate`? Put simply: how do I turn the raw output of the model into the result of `model.generate`?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27507/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27507/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27506
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27506/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27506/comments
https://api.github.com/repos/huggingface/transformers/issues/27506/events
https://github.com/huggingface/transformers/pull/27506
1,994,138,265
PR_kwDOCUB6oc5ffC0e
27,506
Update spelling mistake
{ "login": "LimJing7", "id": 3232421, "node_id": "MDQ6VXNlcjMyMzI0MjE=", "avatar_url": "https://avatars.githubusercontent.com/u/3232421?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LimJing7", "html_url": "https://github.com/LimJing7", "followers_url": "https://api.github.com/users/LimJing7/followers", "following_url": "https://api.github.com/users/LimJing7/following{/other_user}", "gists_url": "https://api.github.com/users/LimJing7/gists{/gist_id}", "starred_url": "https://api.github.com/users/LimJing7/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LimJing7/subscriptions", "organizations_url": "https://api.github.com/users/LimJing7/orgs", "repos_url": "https://api.github.com/users/LimJing7/repos", "events_url": "https://api.github.com/users/LimJing7/events{/privacy}", "received_events_url": "https://api.github.com/users/LimJing7/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,700
1,700
1,700
CONTRIBUTOR
null
# What does this PR do? thoroughly was misspelled thouroughly ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @ArthurZucker @younesbelkada <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27506/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27506/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27506", "html_url": "https://github.com/huggingface/transformers/pull/27506", "diff_url": "https://github.com/huggingface/transformers/pull/27506.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27506.patch", "merged_at": 1700038246000 }
https://api.github.com/repos/huggingface/transformers/issues/27505
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27505/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27505/comments
https://api.github.com/repos/huggingface/transformers/issues/27505/events
https://github.com/huggingface/transformers/issues/27505
1,994,126,103
I_kwDOCUB6oc522_MX
27,505
TypeError: '>' not supported between instances of 'NoneType' and 'int'
{ "login": "ChristophKnapp", "id": 3720355, "node_id": "MDQ6VXNlcjM3MjAzNTU=", "avatar_url": "https://avatars.githubusercontent.com/u/3720355?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ChristophKnapp", "html_url": "https://github.com/ChristophKnapp", "followers_url": "https://api.github.com/users/ChristophKnapp/followers", "following_url": "https://api.github.com/users/ChristophKnapp/following{/other_user}", "gists_url": "https://api.github.com/users/ChristophKnapp/gists{/gist_id}", "starred_url": "https://api.github.com/users/ChristophKnapp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ChristophKnapp/subscriptions", "organizations_url": "https://api.github.com/users/ChristophKnapp/orgs", "repos_url": "https://api.github.com/users/ChristophKnapp/repos", "events_url": "https://api.github.com/users/ChristophKnapp/events{/privacy}", "received_events_url": "https://api.github.com/users/ChristophKnapp/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "--max_train_samples\r\n500\r\n--max_eval_samples\r\n500\r\n--max_predict_samples\r\n500\r\n\r\nreduces waiting time for this error to appear to a minute. This works for me, further reduction does not seem to help much. ", "cc @Rocketknight1 as it seems to be failing on a TF script ", "Hi @ChristophKnapp, can you confirm that you're running the translation example from [here](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/translation) and paste me the exact command you used to run it so I can reproduce?", "@Rocketknight1 Yes that's the script I'm using. The terminal options are:\r\n\r\n--output_dir\r\n/workspace/results\r\n--model_name_or_path\r\nt5-small\r\n--do_train\r\n--do_eval\r\n--source_lang\r\nen\r\n--target_lang\r\nro\r\n--source_prefix\r\ntranslate_English_to_Romanian:_\r\n--dataset_name\r\nwmt16\r\n--dataset_config_name\r\nro-en\r\n--per_device_train_batch_size=16\r\n--per_device_eval_batch_size=16\r\n--overwrite_output_dir\r\n--max_train_samples\r\n500\r\n--max_eval_samples\r\n500\r\n--max_predict_samples\r\n500\r\n\r\nexcept of the last three input values, that should be exactly as recomended on the example md file. ", "Confirmed the issue here - the problem is this code:\r\n```python\r\nis_beam_gen_mode = (\r\n not is_contrastive_search_gen_mode\r\n and (generation_config.num_beams > 1)\r\n and generation_config.do_sample is False\r\n )\r\n```\r\n\r\nThe problem is that in this case, `generation_config` does not have a `num_beams` attribute and so we get a value of `None`. I also see a couple of other issues in this example script that should be fixed.\r\n\r\n@gante are you okay if I open a PR to replace this with something like `getattr(generation_config, \"num_beams\", 1)`?", "@ChristophKnapp thank you for the bug report! We've opened a PR to fix it at #27519. The updated example script is [here](https://github.com/huggingface/transformers/blob/309b0333909c38488c4362b2e9482fce320c1aea/examples/tensorflow/translation/run_translation.py) - please try it and let me know if the issue is resolved!", "> @ChristophKnapp thank you for the bug report! We've opened a PR to fix it at #27519. The updated example script is [here](https://github.com/huggingface/transformers/blob/309b0333909c38488c4362b2e9482fce320c1aea/examples/tensorflow/translation/run_translation.py) - please try it and let me know if the issue is resolved!\r\n\r\nThanks a lot for all your help. The script finishes with no errors now.", "No probs! We try to run tests, but sometimes bug reports like these are the only way we find out about issues like this. It was probably affecting lots of people, not just you. Thanks for letting us know!", "#727" ]
1,700
1,704
1,700
NONE
null
### System Info 2023-11-15 07:10:34.350171: I tensorflow/core/util/port.cc:111] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`. 2023-11-15 07:10:34.489992: E tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:9342] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered 2023-11-15 07:10:34.490040: E tensorflow/compiler/xla/stream_executor/cuda/cuda_fft.cc:609] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered 2023-11-15 07:10:34.490749: E tensorflow/compiler/xla/stream_executor/cuda/cuda_blas.cc:1518] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered 2023-11-15 07:10:34.544177: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. To enable the following instructions: AVX2 AVX_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags. 2023-11-15 07:10:35.040462: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT WARNING:tensorflow:From /usr/local/lib/python3.10/dist-packages/transformers/commands/env.py:100: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version. Instructions for updating: Use `tf.config.list_physical_devices('GPU')` instead. 2023-11-15 07:10:36.233180: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355 2023-11-15 07:10:36.235004: W tensorflow/core/common_runtime/gpu/gpu_device.cc:2211] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform. Skipping registering GPU devices... Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points. - `transformers` version: 4.34.0 - Platform: Linux-6.2.0-36-generic-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.17.3 - Safetensors version: 0.4.0 - Accelerate version: 0.23.0 - Accelerate config: not found - PyTorch version (GPU?): 2.1.0+cu121 (True) - Tensorflow version (GPU?): 2.14.0 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. Setup a python environment in pycharm. 2. Add transformer example script for translation from englisch to romanian, 3. Install python libraries from within pycharm. 4. 
Install transformers development version as requested by script. 5. Run script, after first epoch error is thrown. ### Expected behavior I'm running into this problem when I run the English to Romania translation example. I'm not aware that I modified anything in the script. It fits the model up to the first epoch then it throws this error. There are already two issue reports with this problem nobody felt responsible to take on. I pasted this as a comment in one of them. Given that I was not sure whether the old issue is reopened, I decided to create a new one. I will debug this on my own but any help is appreciated. Regards 2023-11-13 15:47:58.542480: I tensorflow/core/util/port.cc:111] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable TF_ENABLE_ONEDNN_OPTS=0. 2023-11-13 15:47:58.564058: E tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:9342] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered 2023-11-13 15:47:58.564080: E tensorflow/compiler/xla/stream_executor/cuda/cuda_fft.cc:609] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered 2023-11-13 15:47:58.564097: E tensorflow/compiler/xla/stream_executor/cuda/cuda_blas.cc:1518] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered 2023-11-13 15:47:58.568038: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. To enable the following instructions: AVX2 AVX_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags. 
11/13/2023 15:47:59 - INFO - main - Training/evaluation parameters TFTrainingArguments( _n_gpu=-1, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, bf16=False, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_pin_memory=True, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=1800, debug=[], deepspeed=None, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=True, eval_accumulation_steps=None, eval_delay=0, eval_steps=None, evaluation_strategy=no, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gcp_project=None, gradient_accumulation_steps=1, gradient_checkpointing=False, gradient_checkpointing_kwargs=None, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=<HUB_TOKEN>, ignore_data_skip=False, include_inputs_for_metrics=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, load_best_model_at_end=False, local_rank=-1, log_level=passive, log_level_replica=warning, log_on_each_node=True, logging_dir=/workspace/transformer/results/runs/Nov13_15-47-59_workstation-bluechip-BUSINESSline-individu, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=500, logging_strategy=steps, lr_scheduler_kwargs={}, lr_scheduler_type=linear, max_grad_norm=1.0, max_steps=-1, metric_for_best_model=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_train_epochs=3.0, optim=adamw_torch, optim_args=None, output_dir=/workspace/transformer/results, overwrite_output_dir=True, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=16, poly_power=1.0, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=<PUSH_TO_HUB_TOKEN>, ray_scope=last, remove_unused_columns=True, report_to=['tensorboard'], resume_from_checkpoint=None, run_name=/workspace/transformer/results, save_on_each_node=False, save_safetensors=True, save_steps=500, save_strategy=steps, save_total_limit=None, seed=42, skip_memory_metrics=True, split_batches=False, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torchdynamo=None, tpu_metrics_debug=False, tpu_name=None, tpu_num_cores=None, tpu_zone=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=False, warmup_ratio=0.0, warmup_steps=0, weight_decay=0.0, xla=False, ) Loading Dataset Infos from /.cache/huggingface/modules/datasets_modules/datasets/wmt16/746749a11d25c02058042da7502d973ff410e73457f3d305fc1177dc0e8c4227 Overwrite dataset info from restored data version if exists. Loading Dataset info from /.cache/huggingface/datasets/wmt16/ro-en/1.0.0/746749a11d25c02058042da7502d973ff410e73457f3d305fc1177dc0e8c4227 11/13/2023 15:48:01 - INFO - datasets.info - Loading Dataset Infos from /.cache/huggingface/modules/datasets_modules/datasets/wmt16/746749a11d25c02058042da7502d973ff410e73457f3d305fc1177dc0e8c4227 11/13/2023 15:48:01 - INFO - datasets.builder - Overwrite dataset info from restored data version if exists. 
11/13/2023 15:48:01 - INFO - datasets.info - Loading Dataset info from /.cache/huggingface/datasets/wmt16/ro-en/1.0.0/746749a11d25c02058042da7502d973ff410e73457f3d305fc1177dc0e8c4227 11/13/2023 15:48:01 - INFO - datasets.builder - Found cached dataset wmt16 (/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/746749a11d25c02058042da7502d973ff410e73457f3d305fc1177dc0e8c4227) 11/13/2023 15:48:01 - INFO - datasets.info - Loading Dataset info from /.cache/huggingface/datasets/wmt16/ro-en/1.0.0/746749a11d25c02058042da7502d973ff410e73457f3d305fc1177dc0e8c4227 Found cached dataset wmt16 (/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/746749a11d25c02058042da7502d973ff410e73457f3d305fc1177dc0e8c4227) Loading Dataset info from /.cache/huggingface/datasets/wmt16/ro-en/1.0.0/746749a11d25c02058042da7502d973ff410e73457f3d305fc1177dc0e8c4227 loading configuration file config.json from cache at /.cache/huggingface/hub/models--t5-small/snapshots/df1b051c49625cf57a3d0d8d3863ed4d13564fe4/config.json Model config T5Config { "_name_or_path": "t5-small", "architectures": [ "T5ForConditionalGeneration" ], "classifier_dropout": 0.0, "d_ff": 2048, "d_kv": 64, "d_model": 512, "decoder_start_token_id": 0, "dense_act_fn": "relu", "dropout_rate": 0.1, "eos_token_id": 1, "feed_forward_proj": "relu", "initializer_factor": 1.0, "is_encoder_decoder": true, "is_gated_act": false, "layer_norm_epsilon": 1e-06, "model_type": "t5", "n_positions": 512, "num_decoder_layers": 6, "num_heads": 8, "num_layers": 6, "output_past": true, "pad_token_id": 0, "relative_attention_max_distance": 128, "relative_attention_num_buckets": 32, "task_specific_params": { "summarization": { "early_stopping": true, "length_penalty": 2.0, "max_length": 200, "min_length": 30, "no_repeat_ngram_size": 3, "num_beams": 4, "prefix": "summarize: " }, "translation_en_to_de": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to German: " }, "translation_en_to_fr": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to French: " }, "translation_en_to_ro": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to Romanian: " } }, "transformers_version": "4.36.0.dev0", "use_cache": true, "vocab_size": 32128 } loading file spiece.model from cache at /.cache/huggingface/hub/models--t5-small/snapshots/df1b051c49625cf57a3d0d8d3863ed4d13564fe4/spiece.model loading file tokenizer.json from cache at /.cache/huggingface/hub/models--t5-small/snapshots/df1b051c49625cf57a3d0d8d3863ed4d13564fe4/tokenizer.json loading file added_tokens.json from cache at None loading file special_tokens_map.json from cache at None loading file tokenizer_config.json from cache at /.cache/huggingface/hub/models--t5-small/snapshots/df1b051c49625cf57a3d0d8d3863ed4d13564fe4/tokenizer_config.json Loading cached processed dataset at /.cache/huggingface/datasets/wmt16/ro-en/1.0.0/746749a11d25c02058042da7502d973ff410e73457f3d305fc1177dc0e8c4227/cache-164eb734af318539.arrow Loading cached processed dataset at /.cache/huggingface/datasets/wmt16/ro-en/1.0.0/746749a11d25c02058042da7502d973ff410e73457f3d305fc1177dc0e8c4227/cache-442e2020e92ebe8e.arrow Tensorflow: setting up strategy 11/13/2023 15:48:01 - INFO - datasets.arrow_dataset - Loading cached processed dataset at /.cache/huggingface/datasets/wmt16/ro-en/1.0.0/746749a11d25c02058042da7502d973ff410e73457f3d305fc1177dc0e8c4227/cache-164eb734af318539.arrow 11/13/2023 15:48:01 - INFO - datasets.arrow_dataset - Loading cached processed dataset at 
/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/746749a11d25c02058042da7502d973ff410e73457f3d305fc1177dc0e8c4227/cache-442e2020e92ebe8e.arrow 2023-11-13 15:48:01.416190: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1886] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 8825 MB memory: -> device: 0, name: NVIDIA GeForce RTX 3060, pci bus id: 0000:01:00.0, compute capability: 8.6 loading weights file model.safetensors from cache at /.cache/huggingface/hub/models--t5-small/snapshots/df1b051c49625cf57a3d0d8d3863ed4d13564fe4/model.safetensors Generate config GenerationConfig { "decoder_start_token_id": 0, "eos_token_id": 1, "pad_token_id": 0 } 2023-11-13 15:48:01.656874: I tensorflow/tsl/platform/default/subprocess.cc:304] Start cannot spawn child process: No such file or directory Loaded 60,506,624 parameters in the TF 2.0 model. All PyTorch model weights were used when initializing TFT5ForConditionalGeneration. All the weights of TFT5ForConditionalGeneration were initialized from the PyTorch model. If your task is similar to the task the model of the checkpoint was trained on, you can already use TFT5ForConditionalGeneration for predictions without further training. You're using a T5TokenizerFast tokenizer. Please note that with a fast tokenizer, using the __call__ method is faster than using a method to encode the text followed by a call to the pad method to get a padded encoding. No loss specified in compile() - the model's internal loss computation will be used as the loss. Don't panic - this is a common way to train TensorFlow models in Transformers! To disable this behaviour please pass a loss argument, or explicitly pass loss=None if you do not want your model to compute a loss. You can also specify loss='auto' to get the internal loss without printing this info string. 11/13/2023 15:48:04 - INFO - main - ***** Running training ***** 11/13/2023 15:48:04 - INFO - main - Num examples = 610320 11/13/2023 15:48:04 - INFO - main - Num Epochs = 3.0 11/13/2023 15:48:04 - INFO - main - Instantaneous batch size per device = 16 11/13/2023 15:48:04 - INFO - main - Total train batch size = 16 11/13/2023 15:48:04 - INFO - main - Total optimization steps = 114435 Epoch 1/3 2023-11-13 15:48:13.749879: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7f01b9364620 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices: 2023-11-13 15:48:13.749896: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): NVIDIA GeForce RTX 3060, Compute Capability 8.6 2023-11-13 15:48:13.752234: I tensorflow/compiler/mlir/tensorflow/utils/dump_mlir_util.cc:269] disabling MLIR crash reproducer, set env var MLIR_CRASH_REPRODUCER_DIRECTORY to enable. 2023-11-13 15:48:13.759242: I tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:442] Loaded cuDNN version 8700 2023-11-13 15:48:13.802724: I ./tensorflow/compiler/jit/device_compiler.h:186] Compiled cluster using XLA! This line is logged at most once for the lifetime of the process. 
38145/38145 [==============================] - ETA: 0s - loss: 0.6117Generate config GenerationConfig { "decoder_start_token_id": 0, "eos_token_id": 1, "pad_token_id": 0 } Traceback (most recent call last): File "/workspace/transformer/run_translation.py", line 733, in main() File "/workspace/transformer/run_translation.py", line 693, in main history = model.fit(tf_train_dataset, epochs=int(training_args.num_train_epochs), callbacks=callbacks) File "/workspace/transformer/lib/python3.10/site-packages/keras/src/utils/traceback_utils.py", line 70, in error_handler raise e.with_traceback(filtered_tb) from None File "/workspace/transformer/lib/python3.10/site-packages/transformers/keras_callbacks.py", line 223, in on_epoch_end predictions = self.generation_function(generation_inputs, attention_mask=attention_mask) File "/tmp/autograph_generated_fileg5wrw6ci.py", line 13, in tf__generation_function retval = ag_.converted_call(ag__.ld(self).model.generate, (ag__.ld(inputs),), dict(attention_mask=ag__.ld(attention_mask), **ag__.ld(self).generate_kwargs), fscope) File "/tmp/autograph_generated_fileqqh0lf7s.py", line 437, in tf__generate is_beam_gen_mode = ag.and_(lambda : ag__.not_(ag__.ld(is_contrastive_search_gen_mode)), lambda : ag__.and_(lambda : ag__.ld(generation_config).num_beams > 1, lambda : ag__.ld(generation_config).do_sample is False)) File "/tmp/autograph_generated_fileqqh0lf7s.py", line 437, in is_beam_gen_mode = ag.and_(lambda : ag__.not_(ag__.ld(is_contrastive_search_gen_mode)), lambda : ag__.and_(lambda : ag__.ld(generation_config).num_beams > 1, lambda : ag__.ld(generation_config).do_sample is False)) File "/tmp/autograph_generated_fileqqh0lf7s.py", line 437, in is_beam_gen_mode = ag.and_(lambda : ag__.not_(ag__.ld(is_contrastive_search_gen_mode)), lambda : ag__.and_(lambda : ag__.ld(generation_config).num_beams > 1, lambda : ag__.ld(generation_config).do_sample is False)) TypeError: in user code: File "/workspace/transformer/lib/python3.10/site-packages/transformers/keras_callbacks.py", line 202, in generation_function * return self.model.generate(inputs, attention_mask=attention_mask, **self.generate_kwargs) File "/workspace/transformer/lib/python3.10/site-packages/transformers/generation/tf_utils.py", line 884, in generate * is_beam_gen_mode = ( TypeError: '>' not supported between instances of 'NoneType' and 'int' Process finished with exit code 1
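A minimal sketch of a possible workaround for the traceback above, assuming `KerasMetricCallback` forwards a `generate_kwargs` dict into `model.generate()` (the traceback shows `self.generate_kwargs` being splatted into the generate call): passing `num_beams` explicitly sidesteps the `None > 1` comparison. The `compute_metrics` function is a trivial placeholder and `tf_eval_dataset` is assumed to be the `tf.data.Dataset` already built by the training script.

```python
from transformers.keras_callbacks import KerasMetricCallback

def compute_metrics(eval_predictions):
    # placeholder metric function; a real script would decode and score predictions
    predictions, labels = eval_predictions
    return {"num_predictions": len(predictions)}

# tf_eval_dataset: assumed to come from the run_translation.py data pipeline
metric_callback = KerasMetricCallback(
    metric_fn=compute_metrics,
    eval_dataset=tf_eval_dataset,
    predict_with_generate=True,
    # setting num_beams explicitly avoids the `'>' not supported between NoneType and int` check
    generate_kwargs={"num_beams": 1, "max_length": 128},
)
```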
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27505/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27505/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27504
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27504/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27504/comments
https://api.github.com/repos/huggingface/transformers/issues/27504/events
https://github.com/huggingface/transformers/issues/27504
1,994,018,745
I_kwDOCUB6oc522k-5
27,504
Disk space needed for the RAG model
{ "login": "wangjinghao123", "id": 69663558, "node_id": "MDQ6VXNlcjY5NjYzNTU4", "avatar_url": "https://avatars.githubusercontent.com/u/69663558?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wangjinghao123", "html_url": "https://github.com/wangjinghao123", "followers_url": "https://api.github.com/users/wangjinghao123/followers", "following_url": "https://api.github.com/users/wangjinghao123/following{/other_user}", "gists_url": "https://api.github.com/users/wangjinghao123/gists{/gist_id}", "starred_url": "https://api.github.com/users/wangjinghao123/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wangjinghao123/subscriptions", "organizations_url": "https://api.github.com/users/wangjinghao123/orgs", "repos_url": "https://api.github.com/users/wangjinghao123/repos", "events_url": "https://api.github.com/users/wangjinghao123/events{/privacy}", "received_events_url": "https://api.github.com/users/wangjinghao123/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @wangjinghao123, thanks for raising an issue! \r\n\r\nThis is a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,700
1,703
1,703
NONE
null
### System Info Hi, I'm trying to finetune RAG model. I have downloaded the wiki_dpr dataset in advance, but during the finetuning the following occured. Loading index from wiki_dpr with index name exact WARNING:datasets.builder:Using custom data configuration psgs_w100.nq.exact-f44336eb78506dd9 Traceback (most recent call last): File "transformers/examples/research_projects/rag/finetune_rag.py", line 649, in <module> main(args) File "transformers/examples/research_projects/rag/finetune_rag.py", line 586, in main model: GenerativeQAModule = GenerativeQAModule(args) File "transformers/examples/research_projects/rag/finetune_rag.py", line 195, in __init__ self.model.retriever.init_retrieval(self.distributed_port) File "/home/lr/jhwang/wiki_auto_split/transformers/examples/research_projects/rag/distributed_pytorch_retriever.py", line 71, in init_retrieval self.index.init_index() File "/home/lr/jhwang/.local/lib/python3.6/site-packages/transformers/models/rag/retrieval_rag.py", line 284, in init_index dummy=self.use_dummy_dataset, File "/home/lr/jhwang/.local/lib/python3.6/site-packages/datasets/load.py", line 1751, in load_dataset use_auth_token=use_auth_token, File "/home/lr/jhwang/.local/lib/python3.6/site-packages/datasets/builder.py", line 651, in download_and_prepare f"Not enough disk space. Needed: {size_str(self.info.size_in_bytes or 0)} (download: {size_str(self.info.download_size or 0)}, generated: {size_str(self.info.dataset_size or 0)}, post-processed: {size_str(self.info.post_processing_size or 0)})" OSError: Not enough disk space. Needed: 174.51 GiB (download: 66.09 GiB, generated: 73.03 GiB, post-processed: 35.39 GiB) Is this normal? How many GBs does the RAG model need in total? Also, is there a way to change the path so that I can download the 174.51GB data to another place? Thanks a lot! ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction CUDA_VISIBLE_DEVICES="0" python3 transformers/examples/research_projects/rag/finetune_rag.py \ --data_dir wiki_auto \ --output_dir wiki_auto_rag \ --model_name_or_path facebook/rag-sequence-base \ --model_type rag_sequence \ --fp16 \ --gpus 1\ ### Expected behavior finetune process
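On the question of downloading the 174.51 GiB to another place: a minimal sketch, assuming the default HF datasets cache is what is filling up. The cache location can be redirected with the `HF_DATASETS_CACHE` environment variable (set before `datasets` is imported) or by pre-downloading `wiki_dpr` with an explicit `cache_dir`; the paths below are placeholders.

```python
import os

# must be set before `datasets` is imported so the new location is picked up
os.environ["HF_DATASETS_CACHE"] = "/mnt/big_disk/hf_datasets_cache"  # placeholder path

from datasets import load_dataset

# pre-downloading wiki_dpr to an explicit cache_dir also works
wiki = load_dataset(
    "wiki_dpr",
    "psgs_w100.nq.exact",
    cache_dir="/mnt/big_disk/hf_datasets_cache",  # placeholder path
)
```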
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27504/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27504/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27503
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27503/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27503/comments
https://api.github.com/repos/huggingface/transformers/issues/27503/events
https://github.com/huggingface/transformers/pull/27503
1,993,876,092
PR_kwDOCUB6oc5feKZF
27,503
fix assisted decoding assistant model inputs
{ "login": "jiqing-feng", "id": 107918818, "node_id": "U_kgDOBm614g", "avatar_url": "https://avatars.githubusercontent.com/u/107918818?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jiqing-feng", "html_url": "https://github.com/jiqing-feng", "followers_url": "https://api.github.com/users/jiqing-feng/followers", "following_url": "https://api.github.com/users/jiqing-feng/following{/other_user}", "gists_url": "https://api.github.com/users/jiqing-feng/gists{/gist_id}", "starred_url": "https://api.github.com/users/jiqing-feng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jiqing-feng/subscriptions", "organizations_url": "https://api.github.com/users/jiqing-feng/orgs", "repos_url": "https://api.github.com/users/jiqing-feng/repos", "events_url": "https://api.github.com/users/jiqing-feng/events{/privacy}", "received_events_url": "https://api.github.com/users/jiqing-feng/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@jiqing-feng If possible, I would also like to revert [these temporary changes](https://github.com/huggingface/transformers/pull/27511) in this PR :)", "πŸ€— thanks for the fix we had to skip it in #27508 as well! (Only the relevant test)", "Hi @gante @ArthurZucker . I think I have fixed all the comments and also added the tests you mentioned. Would you please help me review it? Thx!\r\n\r\nBTW, the failed test if not related to my changes.", "@amyeroberts I'll have to think harder about assisted generation test robustness, as there are two conflicting effects in place:\r\n1. In theory, assisted generation should yield the exact same outputs\r\n2. In practice, due to the matrix multiplication being shape-dependent (see [here](https://github.com/huggingface/transformers/issues/25420#issuecomment-1775317535)), there will be tiny fluctuations. With random models, this means that the odds of a simple assisted vs non-assisted output check failing are high.\r\n\r\nOn top of that, pinning a seed to a previous failure does not prevent bad failure checks in future models or flags. \r\n\r\nMy suggestion would be: I'll work on test robustness today, and we merge this fix as is. WDYT?", "> In practice, due to the matrix multiplication being shape-dependent (see https://github.com/huggingface/transformers/issues/25420#issuecomment-1775317535), there will be tiny fluctuations. With random models, this means that the odds of a simple assisted vs non-assisted output check failing are high.\r\n\r\nFor my own understanding, why wouldn't a seed resolve the issues in randomness here? I'm guessing the tests are using `hf-internal-testing/tiny-random-model-name` which can change? \r\n\r\n> On top of that, pinning a seed to a previous failure does not prevent bad failure checks in future models or flags.\r\n\r\nAgreed - but it should make sure that this one passes! For any future models or flags we should add new tests. \r\n\r\nIn terms of tests to add - this relates back to my previous [request here](https://github.com/huggingface/transformers/pull/26892#issuecomment-1804158551). It seems that the PR broke for a specific model type (encoder-decoder). Are there tests, which do not rely on randomness, which we can add that make sure just the API works? ", "Hey @jiqing-feng,\r\n\r\nThere is sadly still a bug with speculative decoding. 
The following doesn't work:\r\n\r\n```py\r\nfrom transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline\r\nfrom transformers import AutoModelForCausalLM\r\nfrom datasets import load_dataset\r\nimport time\r\nimport torch\r\n\r\ndevice = \"cuda:0\" if torch.cuda.is_available() else \"cpu\"\r\ntorch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32\r\n\r\nmodel_id = \"openai/whisper-large-v2\"\r\n\r\nmodel = AutoModelForSpeechSeq2Seq.from_pretrained(\r\n model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True\r\n)\r\nmodel.to(device)\r\n\r\nprocessor = AutoProcessor.from_pretrained(model_id)\r\n\r\nassistant_model_id = \"distil-whisper/distil-large-v2\"\r\nassistant_model = AutoModelForCausalLM.from_pretrained(\r\n assistant_model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True\r\n)\r\nassistant_model.to(device)\r\n\r\ndataset = load_dataset(\"hf-internal-testing/librispeech_asr_dummy\", \"clean\", split=\"validation\")\r\nsample = dataset[0][\"audio\"]\r\n\r\ninput_features = processor(sample[\"array\"], return_tensors=\"pt\").input_features.to(\"cuda\").to(torch.float16)\r\n\r\n# warm-up\r\n_ = model.generate(input_features, assistant_model=assistant_model)\r\n\r\nstart_time = time.time()\r\nout = model.generate(input_features, assistant_model=assistant_model)\r\n# out = model.generate(input_features)\r\nprint(time.time() - start_time)\r\n```", "@amyeroberts There is something odd here. We have a mixin test that should be catching API issues. I'm looking into it to attempt to figure out what's wrong.", "The following code snippet also needs to work:\r\n\r\n```diff\r\n- assistant_model_id = \"distil-whisper/distil-large-v2\"\r\n- assistant_model = AutoModelForCausalLM.from_pretrained(\r\n- assistant_model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True\r\n-)\r\n+ assistant_model_id = \"openai/whisper-tiny\"\r\n+ assistant_model = AutoModelForSpeechSeq2Seq.from_pretrained(\r\n+ assistant_model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True\r\n+)\r\n```\r\n\r\nBut I think it does already", "Hi @patrickvonplaten \r\n\r\nI run the following script on my CPU device, and it works well.\r\n```python\r\nfrom transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline\r\nfrom transformers import AutoModelForCausalLM\r\nfrom datasets import load_dataset\r\nimport time\r\nimport torch\r\n\r\ndevice = \"cuda:0\" if torch.cuda.is_available() else \"cpu\"\r\ntorch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32\r\n\r\nmodel_id = \"openai/whisper-large-v2\"\r\n\r\nmodel = AutoModelForSpeechSeq2Seq.from_pretrained(\r\n model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True\r\n)\r\nmodel.to(device)\r\n\r\nprocessor = AutoProcessor.from_pretrained(model_id)\r\n\r\nassistant_model_id = \"openai/whisper-tiny\"\r\nassistant_model = AutoModelForSpeechSeq2Seq.from_pretrained(\r\n assistant_model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True\r\n)\r\nassistant_model.to(device)\r\n\r\ndataset = load_dataset(\"hf-internal-testing/librispeech_asr_dummy\", \"clean\", split=\"validation\")\r\nsample = dataset[0][\"audio\"]\r\n\r\ninput_features = processor(sample[\"array\"], return_tensors=\"pt\").input_features.to(device).to(torch_dtype)\r\n\r\n# warm-up\r\n_ = model.generate(input_features, assistant_model=assistant_model)\r\n\r\nstart_time = time.time()\r\nout = model.generate(input_features, 
assistant_model=assistant_model)\r\n# out = model.generate(input_features)\r\nprint(time.time() - start_time)\r\n```", "Hey @jiqing-feng,\r\n\r\nThanks so much for quickly jumping on fixing the problem here :pray: \r\n\r\nIt sadly still doesn't fix Whisper distillation as per code snippet above. To make sure distil whisper works again on \"main\", we have now reverted the PR here: https://github.com/huggingface/transformers/pull/27523 and also added two slow tests that should be run now everytime we do changes to assisted decoding:\r\n\r\n```py\r\nRUN_SLOW=1 pytest tests/models/whisper/test_modeling_whisper.py -k \"distil\" -sv\r\n```\r\n\r\nIt would be amazing if you could maybe try to open a new PR that is rebased to current \"main\" with all your nice changes and in which all fast tests as well as the slow tests pass:\r\n\r\n```py\r\nRUN_SLOW=1 pytest tests/models/whisper/test_modeling_whisper.py -k \"distil\" -sv\r\n```\r\n\r\nVery sorry about the duplicated work here", "Hi @patrickvonplaten . There is no need to open a new PR. I have fixed the conflicts. \r\n\r\nThere might be a mistake ([here](https://github.com/huggingface/transformers/blob/main/tests/models/whisper/test_modeling_whisper.py#L1752-L1755)) that I see that you use `distil-whisper/distil-large-v2` as assistant model, and you use `WhisperForCausalLM` to load a `WhisperForConditionalGeneration` model. The [distil-whisper/distil-large-v2](https://huggingface.co/distil-whisper/distil-large-v2/blob/main/config.json) is an encoder-decoder model, so I use `WhisperForConditionalGeneration` to load it (it is also the original architectures in the model card). After this change, I can successfully run **RUN_SLOW=1 pytest tests/models/whisper/test_modeling_whisper.py -k \"distil\" -sv** on my current changes.\r\n\r\n", "Hi @jiqing-feng πŸ‘‹ \r\n\r\nI have strengthened the test suite for assisted generation and did a small post mortem on why we didn't caught the issue in our tests in [this PR](https://github.com/huggingface/transformers/pull/27540).\r\n\r\nLet's merge that PR first and then rebase here, to ensure we don't break CI again πŸ€— \r\n\r\nAgain, apologies on our end for not having a robust enough test coverage!", "@jiqing-feng the improved assisted generation tests were merged πŸ€— ", "Hi @gante . I also updated my code base. Would you please help to merge this PR? Thx.", "Hi @jiqing-feng πŸ‘‹ \r\n\r\nI got it working on my end, without the change you added on the Whisper test (which we must revert). It is a non-trivial set of changes, so I'm going to detail the entire diff :)\r\n\r\n1. Remove the `self._extend_attention_mask` and `self._extend_token_type_ids` functions from the `GenerationMixin`\r\n2. 
Replace them by the following stand-alone functions, which can be added at the bottom of the file\r\n```py\r\ndef _prepare_attention_mask(model_kwargs: Dict[str, Any], new_length: int, is_encoder_decoder: bool) -> Dict[str, Any]:\r\n \"\"\"Expands or crops the model's mask for decoding purposes, to the defined length\"\"\"\r\n\r\n mask_key = \"decoder_attention_mask\" if is_encoder_decoder else \"attention_mask\"\r\n if mask_key not in model_kwargs:\r\n return model_kwargs\r\n\r\n mask = model_kwargs[mask_key]\r\n mask_length_diff = new_length - mask.shape[1]\r\n\r\n if mask_length_diff < 0:\r\n model_kwargs[mask_key] = mask[:, :mask_length_diff]\r\n elif mask_length_diff > 0:\r\n model_kwargs[mask_key] = torch.cat([mask, mask.new_ones((mask.shape[0], mask_length_diff))], dim=-1)\r\n return model_kwargs\r\n\r\n\r\ndef _prepare_token_type_ids(model_kwargs: Dict[str, Any], new_length: int) -> Dict[str, Any]:\r\n \"\"\"Expands or crops the model's token_type_ids for decoding purposes, to the defined length\"\"\"\r\n if \"token_type_ids\" not in model_kwargs or model_kwargs[\"token_type_ids\"] is None:\r\n return model_kwargs\r\n\r\n token_type_ids = model_kwargs[\"token_type_ids\"]\r\n final_token_type = token_type_ids[:, -1].unsqueeze(-1)\r\n type_length_diff = new_length - token_type_ids.shape[1]\r\n\r\n if type_length_diff < 0:\r\n token_type_ids = token_type_ids[:, :type_length_diff]\r\n elif type_length_diff > 0:\r\n token_type_copies = final_token_type.repeat(1, type_length_diff)\r\n model_kwargs[\"token_type_ids\"] = torch.cat([model_kwargs[\"token_type_ids\"], token_type_copies], dim=-1)\r\n return model_kwargs\r\n```\r\n\r\n3. Replace the code after `# Update assistant_kwargs for the assistant's next round of generations` by\r\n```py\r\n assistant_kwargs = _prepare_attention_mask(\r\n assistant_kwargs, new_cur_len, assistant_model.config.is_encoder_decoder\r\n )\r\n assistant_kwargs = _prepare_token_type_ids(assistant_kwargs, new_cur_len)\r\n```\r\n\r\n4. Replace the code after `# 2.1. Prepare the model inputs` by\r\n```py\r\n candidate_kwargs = copy.copy(model_kwargs)\r\n candidate_kwargs = _prepare_attention_mask(\r\n candidate_kwargs, candidate_input_ids.shape[1], self.config.is_encoder_decoder\r\n )\r\n candidate_kwargs = _prepare_token_type_ids(candidate_kwargs, candidate_input_ids.shape[1])\r\n\r\n model_inputs = self.prepare_inputs_for_generation(candidate_input_ids, **candidate_kwargs)\r\n```\r\n\r\n5. 
Replace the code after `# prepare assistant model's keys of inputs` by\r\n```py\r\n assistant_kwargs = copy.copy(model_kwargs)\r\n if assistant_model.config.is_encoder_decoder:\r\n # both are encoder-decoder\r\n input_ids_key = \"decoder_input_ids\"\r\n attention_key = \"decoder_attention_mask\"\r\n assistant_kwargs[\"encoder_outputs\"] = assistant_kwargs.pop(\"assistant_encoder_outputs\")\r\n elif \"assistant_encoder_outputs\" in assistant_kwargs:\r\n # special case for encoder-decoder with decoder-only assistant (like DistilWhisper)\r\n input_ids_key = \"input_ids\"\r\n attention_key = \"attention_mask\"\r\n assistant_kwargs[\"attention_mask\"] = assistant_kwargs.get(\r\n \"decoder_attention_mask\",\r\n torch.ones((input_ids.shape[0], 1), device=input_ids.device, dtype=torch.long),\r\n )\r\n assistant_kwargs[\"encoder_outputs\"] = assistant_kwargs.pop(\"assistant_encoder_outputs\")\r\n else:\r\n # both are decoder-only\r\n input_ids_key = \"input_ids\"\r\n attention_key = \"attention_mask\"\r\n```\r\n\r\nAll these changes will make `assisted_generation` compatible with all use cases, even the more complex DistilWhisper πŸ€— ", "Hi @gante . Thanks for your review, I have updated all the changes you proposed. Would you please help me to check and merge it? Thx!", "> Perfect, thank you for working on the changes πŸ’ͺ\r\n> \r\n> @jiqing-feng if possible, it would be nice to delete the now unused `_extend_attention_mask` and `_extend_token_type_ids` functions :)\r\n> \r\n> @amyeroberts I've confirmed on my end that all relevant tests are passing:\r\n> \r\n> 1. `RUN_SLOW=1 py.test tests/models/whisper/ -k speculative`\r\n> 2. `py.test tests/ -k test_assisted_decoding_matches_greedy_search`\r\n> 3. `py.test tests/ -k test_assisted_decoding_sample`\r\n\r\nDo you mean delete these 2 functions and replace all `_extend_xxx` functions with our new `_prepare_xxx` functions?", "@jiqing-feng yes `_extend_attention_mask` and `_extend_token_type_ids` -- are not used anywhere in the code", "> Do these also cover the tests that were breaking previously for whisper? Happy to merge once we know it's whisper compatible πŸ€—\r\n\r\nYes, it is `RUN_SLOW=1 py.test tests/models/whisper/ -k speculative` in the list of tests above :) Merging!", "@jiqing-feng thank you for bearing with us πŸ€— ", "@gante D'oh sorry - PR blindness 🀦 Thanks for merging and thanks again @jiqing-feng for all the work iterating on this PR! " ]
1,700
1,702
1,701
CONTRIBUTOR
null
In the last [PR](https://github.com/huggingface/transformers/pull/26892), we didn't consider the `decoder_attention_mask` while updating `model_kwargs`, see [here](https://github.com/huggingface/transformers/blob/main/src/transformers/generation/utils.py#L4742-L4747). This PR fixes that. Furthermore, it also uses a cleaner way to process the assistant model's inputs. Hi @gante , would you please help me review this PR? Thanks!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27503/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27503/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27503", "html_url": "https://github.com/huggingface/transformers/pull/27503", "diff_url": "https://github.com/huggingface/transformers/pull/27503.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27503.patch", "merged_at": 1701095035000 }
https://api.github.com/repos/huggingface/transformers/issues/27502
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27502/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27502/comments
https://api.github.com/repos/huggingface/transformers/issues/27502/events
https://github.com/huggingface/transformers/issues/27502
1,993,848,739
I_kwDOCUB6oc5217ej
27,502
Inconsistent SequenceClassification Behavior for Padding Tokens.
{ "login": "Ali1858", "id": 13449847, "node_id": "MDQ6VXNlcjEzNDQ5ODQ3", "avatar_url": "https://avatars.githubusercontent.com/u/13449847?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Ali1858", "html_url": "https://github.com/Ali1858", "followers_url": "https://api.github.com/users/Ali1858/followers", "following_url": "https://api.github.com/users/Ali1858/following{/other_user}", "gists_url": "https://api.github.com/users/Ali1858/gists{/gist_id}", "starred_url": "https://api.github.com/users/Ali1858/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ali1858/subscriptions", "organizations_url": "https://api.github.com/users/Ali1858/orgs", "repos_url": "https://api.github.com/users/Ali1858/repos", "events_url": "https://api.github.com/users/Ali1858/events{/privacy}", "received_events_url": "https://api.github.com/users/Ali1858/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I tested the model with bfloat16, 4bit (nf4) and original precision (float16?), for all the datatypes there is inconsistency in predicted logits. Is there a way I can avoid this?\r\n\r\nI have seen this issue with `causal_lm` also but with sequence classification it is even more noticeable and impacts the accuracy very much. Moreover, it creates instability while doing RLHF, because the reward signal is not consistent. ", "Hi @Ali1858 πŸ‘‹ \r\n\r\nLong story short, there is no way to avoid this effect, it is a matter of numerical precision and shape-dependent matmul order of operations. You can read more about it in [this comment](https://github.com/huggingface/transformers/issues/25420#issuecomment-1775317535) or in [this twitter thread](https://twitter.com/joao_gante/status/1716831983375143382) :)\r\n\r\nThe comment is mostly about KV caches, but it applies whenever we modify the shape of the input (e.g. add more padding)", "Hi @gante \r\nThanks for your response and explanation. I see this issue as a common problem. One thing I would like to point out is that I have trained sequence classification with padding_side=\"right\" (default value) not \"left\". Even in inference, I am using padding_side=\"right\" (default value). I have also tested the model with bfloat16, 4bit (nf4) and original precision (float16?), for all the datatypes there is inconsistency in predicted logits.\r\n\r\nIs there something I do and retrain the model to minimize this inconsistency?", "Hi @gante\r\n\r\nI tried inferencing with padding_side=\"left\" and predictions are less inconsistent compared to padding_side=\"right\". They are still not the same but values are not way off. Should I retrain the model with padding_side=\"left\"?\r\n\r\n", "@Ali1858 All tasks should train with right-padding, as you can see in our references and examples. Sequence Classification should do its inference with right-padding, yes -- only generative models should use left-padding at inference time (see [here](https://huggingface.co/docs/transformers/v4.35.2/en/llm_tutorial#wrong-padding-side) why).\r\n\r\nChanging the model variable type will have a major effect on the model predictions, so it's natural that the results are not consistent across types.\r\n\r\nI'm afraid the issues you're describing are not bugs, but rather modeling challenges. Following our [issues guidelines](https://github.com/huggingface/transformers/blob/main/ISSUES.md), we reserve GitHub issues for bugs in the repository and/or feature requests. For any other matters, we'd like to invite you to use our [forum](https://discuss.huggingface.co/) or our [discord](https://discord.com/invite/hugging-face-879548962464493619) πŸ€— " ]
1,700
1,700
1,700
NONE
null
### System Info 2023-11-15 00:56:45,576] [INFO] [real_accelerator.py:158:get_accelerator] Setting ds_accelerator to cuda (auto detect) Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points. - `transformers` version: 4.31.0 - Platform: Linux-5.15.0-1046-kvm-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.2 - Accelerate version: 0.21.0 - Accelerate config: - compute_environment: LOCAL_MACHINE - distributed_type: DEEPSPEED - mixed_precision: bf16 - use_cpu: False - num_processes: 2 - machine_rank: 0 - num_machines: 1 - rdzv_backend: static - same_network: True - main_training_function: main - deepspeed_config: {'gradient_accumulation_steps': 16, 'gradient_clipping': 1.0, 'offload_optimizer_device': 'none', 'offload_param_device': 'none', 'zero3_init_flag': False, 'zero_stage': 2} - downcast_bf16: no - tpu_use_cluster: False - tpu_use_sudo: False - tpu_env: [] - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @ArthurZucker @gante ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I am encountering an issue with the llama-2 model. I trained a 4bit Lora sequence classification model using Llama-2 with padding_side=right (default value) and during the inference, I noticed that the model produces inconsistent logits for the same input text when the number of padding tokens varies. I suspect the attention mask is not working. Here's the specific scenario: I have a model that takes input text sequences with corresponding attention masks. The attention mask is correctly set to 1 for content tokens and 0 for padding tokens to ensure that the model ignores padding tokens when calculating logits. However, when I provide the same input text with different numbers of padding tokens, the model gives different logits, which is unexpected. Example: Input 1 (Fewer Padding Tokens): ```mathematica Input Text: "Hi how are you?" Attention Mask: [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] Input 2 (More Padding Tokens): ``` ```mathematica Input Text: "Hi how are you?", "Additional text", "More text", "Even more text" Attention Mask: [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0] ``` Logits for Input 1: [-2.3750] Logits for Input 2: [-1.7344] In the example above, Input 1 and Input 2 have the same content text with different numbers of padding tokens. However, the model produces different logits for these inputs, which should not be the case. I have verified that the attention mask is correctly set to 1 for content tokens and 0 for padding tokens in both cases, so the model should ignore the padding tokens when calculating logits. I would appreciate any guidance or assistance in understanding and resolving this problem. 
This is how I am load the model ```python model_args = { "torch_dtype": torch.bfloat16, "quantization_config": BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype=torch.bfloat16, bnb_4bit_use_double_quant=True, ), "cache_dir": 'cache', "device_map": "auto"#{"":0}, } # Since reward models are trained using the same base model, we should use same model base_reward_model = transformers.AutoModelForSequenceClassification.from_pretrained( model_name, num_labels=1,**model_args ) tokenizer = transformers.AutoTokenizer.from_pretrained(model_name, cache_dir='cache') base_reward_model = PeftModel.from_pretrained( base_reward_model, ranking_adapter_name2, adapter_name="rank_ep1", is_trainable=False ) ``` This is how I am getting prediction during training and during inference ```python ## During training logits = model( input_ids=batch["input_ids"], attention_mask=batch["attention_mask"], use_cache=False, ).logits loss = self.loss_fct(logits, cu_lens) ## During inference base_reward_model.eval() base_reward_model.set_adapter(adapter_name) logits = base_reward_model(input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"], use_cache=False, ).logits ``` ### Expected behavior When using attention masks and padding tokens, I expect the model to produce consistent logits for the same input text, regardless of the number of padding tokens. The attention mask is correctly set to 1 for content tokens and 0 for padding tokens to ensure that the model ignores padding tokens when calculating logits. Therefore, the model should not be affected by the presence or absence of padding tokens.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27502/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27502/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27501
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27501/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27501/comments
https://api.github.com/repos/huggingface/transformers/issues/27501/events
https://github.com/huggingface/transformers/pull/27501
1,993,759,582
PR_kwDOCUB6oc5fdxXk
27,501
Bump aiohttp from 3.8.5 to 3.8.6 in /examples/research_projects/decision_transformer
{ "login": "dependabot[bot]", "id": 49699333, "node_id": "MDM6Qm90NDk2OTkzMzM=", "avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dependabot%5Bbot%5D", "html_url": "https://github.com/apps/dependabot", "followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers", "following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}", "gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}", "starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions", "organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs", "repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos", "events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}", "received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events", "type": "Bot", "site_admin": false }
[ { "id": 1905493434, "node_id": "MDU6TGFiZWwxOTA1NDkzNDM0", "url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies", "name": "dependencies", "color": "0366d6", "default": false, "description": "Pull requests that update a dependency file" } ]
closed
false
null
[]
[ "OK, I won't notify you again about this release, but will get in touch when a new version is available. If you'd rather skip all updates until the next major or minor version, let me know by commenting `@dependabot ignore this major version` or `@dependabot ignore this minor version`.\n\nIf you change your mind, just re-open this PR and I'll resolve any conflicts on it.", "@dependabot ignore this major version", "OK, I won't notify you about version 3.x.x again, unless you re-open this PR." ]
1,700
1,700
1,700
CONTRIBUTOR
null
Bumps [aiohttp](https://github.com/aio-libs/aiohttp) from 3.8.5 to 3.8.6. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/aio-libs/aiohttp/releases">aiohttp's releases</a>.</em></p> <blockquote> <h2>3.8.6</h2> <h2>Security bugfixes</h2> <ul> <li> <p>Upgraded the vendored copy of llhttp_ to v9.1.3 -- by :user:<code>Dreamsorcerer</code></p> <p>Thanks to :user:<code>kenballus</code> for reporting this, see <a href="https://github.com/aio-libs/aiohttp/security/advisories/GHSA-pjjw-qhg8-p2p9">https://github.com/aio-libs/aiohttp/security/advisories/GHSA-pjjw-qhg8-p2p9</a>.</p> <p>.. _llhttp: <a href="https://llhttp.org">https://llhttp.org</a></p> <p>(<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7647">#7647</a>)</p> </li> <li> <p>Updated Python parser to comply with RFCs 9110/9112 -- by :user:<code>Dreamorcerer</code></p> <p>Thanks to :user:<code>kenballus</code> for reporting this, see <a href="https://github.com/aio-libs/aiohttp/security/advisories/GHSA-gfw2-4jvh-wgfg">https://github.com/aio-libs/aiohttp/security/advisories/GHSA-gfw2-4jvh-wgfg</a>.</p> <p>(<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7663">#7663</a>)</p> </li> </ul> <h2>Deprecation</h2> <ul> <li> <p>Added <code>fallback_charset_resolver</code> parameter in <code>ClientSession</code> to allow a user-supplied character set detection function.</p> <p>Character set detection will no longer be included in 3.9 as a default. If this feature is needed, please use <code>fallback_charset_resolver &lt;https://docs.aiohttp.org/en/stable/client_advanced.html#character-set-detection&gt;</code>_.</p> <p>(<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7561">#7561</a>)</p> </li> </ul> <h2>Features</h2> <ul> <li> <p>Enabled lenient response parsing for more flexible parsing in the client (this should resolve some regressions when dealing with badly formatted HTTP responses). -- by :user:<code>Dreamsorcerer</code></p> <p>(<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7490">#7490</a>)</p> </li> </ul> <h2>Bugfixes</h2> <ul> <li> <p>Fixed <code>PermissionError</code> when <code>.netrc</code> is unreadable due to permissions.</p> <p>(<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7237">#7237</a>)</p> </li> </ul> <!-- raw HTML omitted --> </blockquote> <p>... (truncated)</p> </details> <details> <summary>Changelog</summary> <p><em>Sourced from <a href="https://github.com/aio-libs/aiohttp/blob/master/CHANGES.rst">aiohttp's changelog</a>.</em></p> <blockquote> <h1>3.8.6 (2023-10-07)</h1> <h2>Security bugfixes</h2> <ul> <li> <p>Upgraded the vendored copy of llhttp_ to v9.1.3 -- by :user:<code>Dreamsorcerer</code></p> <p>Thanks to :user:<code>kenballus</code> for reporting this, see <a href="https://github.com/aio-libs/aiohttp/security/advisories/GHSA-pjjw-qhg8-p2p9">https://github.com/aio-libs/aiohttp/security/advisories/GHSA-pjjw-qhg8-p2p9</a>.</p> <p>.. 
_llhttp: <a href="https://llhttp.org">https://llhttp.org</a></p> <p><code>[#7647](https://github.com/aio-libs/aiohttp/issues/7647) &lt;https://github.com/aio-libs/aiohttp/issues/7647&gt;</code>_</p> </li> <li> <p>Updated Python parser to comply with RFCs 9110/9112 -- by :user:<code>Dreamorcerer</code></p> <p>Thanks to :user:<code>kenballus</code> for reporting this, see <a href="https://github.com/aio-libs/aiohttp/security/advisories/GHSA-gfw2-4jvh-wgfg">https://github.com/aio-libs/aiohttp/security/advisories/GHSA-gfw2-4jvh-wgfg</a>.</p> <p><code>[#7663](https://github.com/aio-libs/aiohttp/issues/7663) &lt;https://github.com/aio-libs/aiohttp/issues/7663&gt;</code>_</p> </li> </ul> <h2>Deprecation</h2> <ul> <li> <p>Added <code>fallback_charset_resolver</code> parameter in <code>ClientSession</code> to allow a user-supplied character set detection function.</p> <p>Character set detection will no longer be included in 3.9 as a default. If this feature is needed, please use <code>fallback_charset_resolver &lt;https://docs.aiohttp.org/en/stable/client_advanced.html#character-set-detection&gt;</code>_.</p> <p><code>[#7561](https://github.com/aio-libs/aiohttp/issues/7561) &lt;https://github.com/aio-libs/aiohttp/issues/7561&gt;</code>_</p> </li> </ul> <h2>Features</h2> <ul> <li> <p>Enabled lenient response parsing for more flexible parsing in the client (this should resolve some regressions when dealing with badly formatted HTTP responses). -- by :user:<code>Dreamsorcerer</code></p> <p><code>[#7490](https://github.com/aio-libs/aiohttp/issues/7490) &lt;https://github.com/aio-libs/aiohttp/issues/7490&gt;</code>_</p> </li> </ul> <h2>Bugfixes</h2> <ul> <li>Fixed <code>PermissionError</code> when <code>.netrc</code> is unreadable due to permissions.</li> </ul> <!-- raw HTML omitted --> </blockquote> <p>... 
(truncated)</p> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/aio-libs/aiohttp/commit/996de2629ef6b4c2934a7c04dfd49d0950d4c43b"><code>996de26</code></a> Release v3.8.6 (<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7668">#7668</a>)</li> <li><a href="https://github.com/aio-libs/aiohttp/commit/8c128d4f042ca36ebdc55ecdd76099b7722331ba"><code>8c128d4</code></a> [PR <a href="https://redirect.github.com/aio-libs/aiohttp/issues/7651">#7651</a>/45f98b7d backport][3.8] Fix BadStatusLine message (<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7666">#7666</a>)</li> <li><a href="https://github.com/aio-libs/aiohttp/commit/89b7df157886ff390cdcdc44ecf3c277045838b1"><code>89b7df1</code></a> Allow lax response parsing on Py parser (<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7663">#7663</a>) (<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7664">#7664</a>)</li> <li><a href="https://github.com/aio-libs/aiohttp/commit/d5c12ba890557a575c313bb3017910d7616fce3d"><code>d5c12ba</code></a> [PR <a href="https://redirect.github.com/aio-libs/aiohttp/issues/7661">#7661</a>/85713a48 backport][3.8] Update Python parser for RFCs 9110/9112 (<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7">#7</a>...</li> <li><a href="https://github.com/aio-libs/aiohttp/commit/8a3977acac632d1f02aa7e047da51e27a717d724"><code>8a3977a</code></a> [PR <a href="https://redirect.github.com/aio-libs/aiohttp/issues/7272">#7272</a>/b2a7983a backport][3.8] Fix Read The Docs config (<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7650">#7650</a>)</li> <li><a href="https://github.com/aio-libs/aiohttp/commit/bcc416e533796d04fb8124ef1e7686b1f338767a"><code>bcc416e</code></a> [PR <a href="https://redirect.github.com/aio-libs/aiohttp/issues/7647">#7647</a>/1303350e backport][3.8] Upgrade to llhttp 9.1.3 (<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7648">#7648</a>)</li> <li><a href="https://github.com/aio-libs/aiohttp/commit/b30c0cd2c96e57cc273ffe29c0313487b364f15a"><code>b30c0cd</code></a> Remove chardet/charset-normalizer. 
(<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7589">#7589</a>)</li> <li><a href="https://github.com/aio-libs/aiohttp/commit/5946c7436044bae14617ef06ee7c530ed72622da"><code>5946c74</code></a> CookieJar - return 'best-match' and not LIFO (<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7577">#7577</a>) (<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7588">#7588</a>)</li> <li><a href="https://github.com/aio-libs/aiohttp/commit/8c4ec62f5ba514479ef1c2e74741bc7fa33be3f4"><code>8c4ec62</code></a> [PR <a href="https://redirect.github.com/aio-libs/aiohttp/issues/7518">#7518</a>/8bd42e74 backport][3.8] Fix GunicornWebWorker max_requests_jitter n...</li> <li><a href="https://github.com/aio-libs/aiohttp/commit/a0d234df392bd5cd67d378d31c9531c5ac87c07f"><code>a0d234d</code></a> Use lenient headers for response parser (<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7490">#7490</a>) (<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7492">#7492</a>)</li> <li>Additional commits viewable in <a href="https://github.com/aio-libs/aiohttp/compare/v3.8.5...v3.8.6">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=aiohttp&package-manager=pip&previous-version=3.8.5&new-version=3.8.6)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts). </details>
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27501/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27501/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27501", "html_url": "https://github.com/huggingface/transformers/pull/27501", "diff_url": "https://github.com/huggingface/transformers/pull/27501.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27501.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/27500
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27500/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27500/comments
https://api.github.com/repos/huggingface/transformers/issues/27500/events
https://github.com/huggingface/transformers/pull/27500
1,993,505,399
PR_kwDOCUB6oc5fc5RY
27,500
Raise error when quantizing a quantized model
{ "login": "SunMarc", "id": 57196510, "node_id": "MDQ6VXNlcjU3MTk2NTEw", "avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SunMarc", "html_url": "https://github.com/SunMarc", "followers_url": "https://api.github.com/users/SunMarc/followers", "following_url": "https://api.github.com/users/SunMarc/following{/other_user}", "gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}", "starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions", "organizations_url": "https://api.github.com/users/SunMarc/orgs", "repos_url": "https://api.github.com/users/SunMarc/repos", "events_url": "https://api.github.com/users/SunMarc/events{/privacy}", "received_events_url": "https://api.github.com/users/SunMarc/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,699
1,700
1,700
MEMBER
null
# What does this PR do? This PR fixes the quantization logic so that we raise an error when a user tries to quantize an already-quantized model with a different scheme. Fixes #26695
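A minimal sketch of the kind of guard this PR describes (a hypothetical helper, not the actual diff): compare the quantization method already stored on the loaded config against the newly requested one and raise if they differ.

```python
def check_quantization_request(model_config, new_quantization_config):
    """Hypothetical guard: refuse to re-quantize an already quantized checkpoint
    with a different scheme."""
    existing = getattr(model_config, "quantization_config", None)
    if existing is None or new_quantization_config is None:
        return  # nothing to compare

    if isinstance(existing, dict):
        existing_method = existing.get("quant_method")
    else:
        existing_method = getattr(existing, "quant_method", None)
    new_method = getattr(new_quantization_config, "quant_method", None)

    if existing_method is not None and new_method is not None and existing_method != new_method:
        raise ValueError(
            f"The model is already quantized with {existing_method}; "
            f"re-quantizing it with {new_method} is not supported."
        )
```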
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27500/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27500/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27500", "html_url": "https://github.com/huggingface/transformers/pull/27500", "diff_url": "https://github.com/huggingface/transformers/pull/27500.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27500.patch", "merged_at": 1700148941000 }
https://api.github.com/repos/huggingface/transformers/issues/27499
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27499/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27499/comments
https://api.github.com/repos/huggingface/transformers/issues/27499/events
https://github.com/huggingface/transformers/pull/27499
1,993,456,631
PR_kwDOCUB6oc5fcugA
27,499
Fixing the failure of models without a max_position_embeddings attribute.
{ "login": "AdamLouly", "id": 27873459, "node_id": "MDQ6VXNlcjI3ODczNDU5", "avatar_url": "https://avatars.githubusercontent.com/u/27873459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AdamLouly", "html_url": "https://github.com/AdamLouly", "followers_url": "https://api.github.com/users/AdamLouly/followers", "following_url": "https://api.github.com/users/AdamLouly/following{/other_user}", "gists_url": "https://api.github.com/users/AdamLouly/gists{/gist_id}", "starred_url": "https://api.github.com/users/AdamLouly/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AdamLouly/subscriptions", "organizations_url": "https://api.github.com/users/AdamLouly/orgs", "repos_url": "https://api.github.com/users/AdamLouly/repos", "events_url": "https://api.github.com/users/AdamLouly/events{/privacy}", "received_events_url": "https://api.github.com/users/AdamLouly/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@amyeroberts when can we merge this? I have found another issue and we need this to be merged first before fixing the other one.", "@AdamLouly - we can merge now :) " ]
1,699
1,700
1,700
CONTRIBUTOR
null
**Changes Made** Added handling for configurations that may not have the max_position_embeddings attribute. Introduced a default value of 1024 for max_position_embeddings when it's missing in the configuration. **Motivation and Context** Fixing this issue: https://github.com/huggingface/transformers/issues/27498 This PR addresses an issue where some configurations lack the max_position_embeddings attribute, causing failures in certain scenarios.
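A minimal sketch of the guard described above (illustrative only, not the exact diff in the PR): fall back to 1024 when the config does not expose `max_position_embeddings`, as is the case for `BloomConfig`.

```python
DEFAULT_BLOCK_SIZE = 1024

def resolve_block_size(config, requested_block_size=None):
    # Some configs (e.g. BloomConfig) have no max_position_embeddings attribute.
    max_pos_embeddings = getattr(config, "max_position_embeddings", None)
    if max_pos_embeddings is None:
        max_pos_embeddings = DEFAULT_BLOCK_SIZE

    if requested_block_size is None:
        return min(DEFAULT_BLOCK_SIZE, max_pos_embeddings)
    return min(requested_block_size, max_pos_embeddings)
```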
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27499/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27499/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27499", "html_url": "https://github.com/huggingface/transformers/pull/27499", "diff_url": "https://github.com/huggingface/transformers/pull/27499.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27499.patch", "merged_at": 1700072202000 }
https://api.github.com/repos/huggingface/transformers/issues/27498
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27498/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27498/comments
https://api.github.com/repos/huggingface/transformers/issues/27498/events
https://github.com/huggingface/transformers/issues/27498
1,993,455,247
I_kwDOCUB6oc520baP
27,498
Models with no max_position_embeddings will fail in run_clm.py
{ "login": "AdamLouly", "id": 27873459, "node_id": "MDQ6VXNlcjI3ODczNDU5", "avatar_url": "https://avatars.githubusercontent.com/u/27873459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AdamLouly", "html_url": "https://github.com/AdamLouly", "followers_url": "https://api.github.com/users/AdamLouly/followers", "following_url": "https://api.github.com/users/AdamLouly/following{/other_user}", "gists_url": "https://api.github.com/users/AdamLouly/gists{/gist_id}", "starred_url": "https://api.github.com/users/AdamLouly/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AdamLouly/subscriptions", "organizations_url": "https://api.github.com/users/AdamLouly/orgs", "repos_url": "https://api.github.com/users/AdamLouly/repos", "events_url": "https://api.github.com/users/AdamLouly/events{/privacy}", "received_events_url": "https://api.github.com/users/AdamLouly/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @AdamLouly - thanks for reporting an opening a PR to fix!" ]
1,699
1,700
1,700
CONTRIBUTOR
null
### System Info In run_clm.py we set block_size to config.max_position_embeddings, assuming that all configs will have max_position_embeddings. Some models do not have max_position_embeddings as part of their definition, so they will fail when trying to train with run_clm.py; Bloom is one example, where you get this error if you try to run it: AttributeError: 'BloomConfig' object has no attribute 'max_position_embeddings'. We can solve this by checking whether the value exists and otherwise falling back to a default value. I will create a PR to address this. ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Run Bloom on the nightly version of transformers. ### Expected behavior AttributeError: 'BloomConfig' object has no attribute 'max_position_embeddings'
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27498/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27498/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27497
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27497/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27497/comments
https://api.github.com/repos/huggingface/transformers/issues/27497/events
https://github.com/huggingface/transformers/issues/27497
1,993,382,573
I_kwDOCUB6oc520Jqt
27,497
Trainer memory usage increasing a lot during evaluation steps when using Flan-T5 models with 8-bit/4-bit quantisation
{ "login": "guilherme-pombo", "id": 22048961, "node_id": "MDQ6VXNlcjIyMDQ4OTYx", "avatar_url": "https://avatars.githubusercontent.com/u/22048961?v=4", "gravatar_id": "", "url": "https://api.github.com/users/guilherme-pombo", "html_url": "https://github.com/guilherme-pombo", "followers_url": "https://api.github.com/users/guilherme-pombo/followers", "following_url": "https://api.github.com/users/guilherme-pombo/following{/other_user}", "gists_url": "https://api.github.com/users/guilherme-pombo/gists{/gist_id}", "starred_url": "https://api.github.com/users/guilherme-pombo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/guilherme-pombo/subscriptions", "organizations_url": "https://api.github.com/users/guilherme-pombo/orgs", "repos_url": "https://api.github.com/users/guilherme-pombo/repos", "events_url": "https://api.github.com/users/guilherme-pombo/events{/privacy}", "received_events_url": "https://api.github.com/users/guilherme-pombo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @BenjaminBossan ", "Turns out this was happening because I was not defining the:\r\n\r\n_per_device_eval_batch_size_\r\n\r\nparameter when initialising the Trainer and it was being set to the default value on Trainer, which is 8. When I define it separately there is no issue now. Apologies for the confusion. Closing the issue" ]
1,699
1,700
1,700
NONE
null
### System Info Hello :) here are the details of my system: Python: 3.8.10 Transformers version: 4.34.1 Bitsandbytes version: 0.41.2 Peft version: 0.6.2 CUDA Version: 12.0 When using T5-XXL with 8-bit or 4-bit quantisation and Hugging Face trainer the model memory consumption will be reasonable. However, as soon as the model does the evaluation steps, it runs out of memory since it's usage balloons up past the 48GB RAM that my NVIDIA RTX 6000 has. This seems really odd since I expect the evaluation step to require less memory than the training step as it doesn't need to keep track of gradients. This is not a problem when training in either FP32/FP16/BF16. Only happens when using 8 or 4-bit quantisation. Any help would be greatly appreciated! @SunMarc @pacman10 ### Who can help? @SunMarc @pacman10 ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction This happens with any dataset, the code used for model loading and Trainer setup is as follows: ``` from transformers import AutoModelForSeq2SeqLM, BitsAndBytesConfig from peft import prepare_model_for_kbit_training, TaskType, get_peft_config, get_peft_model import transformers from transformers import Seq2SeqTrainingArguments ''' This happens with any dataset ''' model = AutoModelForSeq2SeqLM.from_pretrained('google/flan-t5-xxl', device_map={"":0}, torch_dtype=torch.bfloat16, load_in_8bit=True, ) model = prepare_model_for_kbit_training(model) target_modules = ['.q', '.k', '.v', '.o', 'wi_0', 'wi_1', 'wo'] peft_config = LoraConfig( task_type=TaskType.SEQ_2_SEQ_LM, inference_mode=False, target_modules=target_modules, r=32, lora_alpha=32, lora_dropout=0.15 ) model = get_peft_model(model, peft_config) trainer = transformers.Trainer( model=model, train_dataset=train_dataset, eval_dataset=val_dataset, args=transformers.Seq2SeqTrainingArguments( per_device_train_batch_size=1, warmup_ratio=0.01, lr_scheduler_type='cosine', gradient_accumulation_steps=6, num_train_epochs=60, learning_rate=3e-4, eval_accumulation_steps=1, weight_decay=1e-2, bf16=True, logging_steps=10, optim="paged_adamw_8bit", evaluation_strategy="steps", save_strategy="steps", eval_steps=5, save_steps=5, output_dir='my_model', save_total_limit=1, load_best_model_at_end=True, ddp_find_unused_parameters=None, group_by_length=False, ), ) model.config.use_cache = False trainer.train() ``` ### Expected behavior When training T5-XXL in either FP32/FP16/BF16 there is no increase in memory usage during the evaluation step. However, when using 8 or 4-bit quantisation the memory usage balloons up to more than double. Since evaluation doesn't require keeping track of gradients, the expected behaviour should be that memory usage doesn't change.
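For reference, the resolution noted in the comments above comes down to setting the evaluation batch size explicitly instead of inheriting the Trainer default of 8. A minimal sketch (the `output_dir` and values are placeholders):

```python
from transformers import Seq2SeqTrainingArguments

# per_device_eval_batch_size defaults to 8; setting it explicitly keeps the
# evaluation memory footprint in line with the training batch size
training_args = Seq2SeqTrainingArguments(
    output_dir="my_model",            # placeholder
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    eval_accumulation_steps=1,
)
```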
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27497/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27497/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27496
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27496/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27496/comments
https://api.github.com/repos/huggingface/transformers/issues/27496/events
https://github.com/huggingface/transformers/issues/27496
1,993,285,966
I_kwDOCUB6oc52zyFO
27,496
Plain-DETR
{ "login": "rafaelpadilla", "id": 31217453, "node_id": "MDQ6VXNlcjMxMjE3NDUz", "avatar_url": "https://avatars.githubusercontent.com/u/31217453?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rafaelpadilla", "html_url": "https://github.com/rafaelpadilla", "followers_url": "https://api.github.com/users/rafaelpadilla/followers", "following_url": "https://api.github.com/users/rafaelpadilla/following{/other_user}", "gists_url": "https://api.github.com/users/rafaelpadilla/gists{/gist_id}", "starred_url": "https://api.github.com/users/rafaelpadilla/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rafaelpadilla/subscriptions", "organizations_url": "https://api.github.com/users/rafaelpadilla/orgs", "repos_url": "https://api.github.com/users/rafaelpadilla/repos", "events_url": "https://api.github.com/users/rafaelpadilla/events{/privacy}", "received_events_url": "https://api.github.com/users/rafaelpadilla/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" }, { "id": 5769473378, "node_id": "LA_kwDOCUB6oc8AAAABV-MtYg", "url": "https://api.github.com/repos/huggingface/transformers/labels/Vision", "name": "Vision", "color": "C079EF", "default": false, "description": "" } ]
open
false
null
[]
[ "@amyeroberts @rafaelpadilla this open for contribution?", "@kamathis4 Sure! If you'd like to contribute this model feel free to open a PR and ping us when ready for review :)" ]
1,699
1,701
null
CONTRIBUTOR
null
### Model description Plain-DETR is an object detector that maintains a "plain" nature: it uses a single-scale feature map and global cross-attention calculations without specific locality constraints, in contrast to previous leading DETR-based detectors that reintroduce architectural inductive biases of multi-scale and locality into the decoder. By leveraging the Objects365 dataset for pre-training, it achieved 63.9 mAP using a Swin-L backbone, which is highly competitive with state-of-the-art detectors that all heavily rely on multi-scale feature maps and region-based feature extraction. Published in the proceedings of ICCV 2023. ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation Official implementation: [code](https://github.com/impiga/Plain-DETR) Paper: [DETR Does Not Need Multi-Scale or Locality Design](https://openaccess.thecvf.com/content/ICCV2023/html/Lin_DETR_Does_Not_Need_Multi-Scale_or_Locality_Design_ICCV_2023_paper.html)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27496/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27496/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/27495
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27495/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27495/comments
https://api.github.com/repos/huggingface/transformers/issues/27495/events
https://github.com/huggingface/transformers/pull/27495
1,993,187,539
PR_kwDOCUB6oc5fbzaC
27,495
translate deepspeed.md to chinese
{ "login": "jiaqiw09", "id": 60021713, "node_id": "MDQ6VXNlcjYwMDIxNzEz", "avatar_url": "https://avatars.githubusercontent.com/u/60021713?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jiaqiw09", "html_url": "https://github.com/jiaqiw09", "followers_url": "https://api.github.com/users/jiaqiw09/followers", "following_url": "https://api.github.com/users/jiaqiw09/following{/other_user}", "gists_url": "https://api.github.com/users/jiaqiw09/gists{/gist_id}", "starred_url": "https://api.github.com/users/jiaqiw09/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jiaqiw09/subscriptions", "organizations_url": "https://api.github.com/users/jiaqiw09/orgs", "repos_url": "https://api.github.com/users/jiaqiw09/repos", "events_url": "https://api.github.com/users/jiaqiw09/events{/privacy}", "received_events_url": "https://api.github.com/users/jiaqiw09/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@stevhliu\r\n\r\nHi, \r\n\r\nI just translate deepspeed doc. As this part is such vital and hard to be fully correctly translated, would you mind having a double check? I think many people will use this function, so it' s best to provide a great translation work without ambiguity.\r\n\r\nBest", "> LGTM! My Chinese isn't that great (its pretty basic πŸ˜… ) so maybe @statelesshz, would you be interested in helping double check the translation?\r\n\r\nMy Pleasure!", "@statelesshz thanks for your reviw.\r\n\r\nand @stevhliu would you mind having a check and finish the merge. I think I will update this file oneday I learn more about deepspeed.\r\n\r\nBest", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27495). All of your documentation changes will be reflected on that endpoint.", "Great job and thanks again for tackling such a big and dense doc! πŸ‘ " ]
1,699
1,700
1,700
CONTRIBUTOR
null
# What does this PR do? Part of #26803 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? _not necessary_ ## Who can review? @stevhliu
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27495/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27495/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27495", "html_url": "https://github.com/huggingface/transformers/pull/27495", "diff_url": "https://github.com/huggingface/transformers/pull/27495.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27495.patch", "merged_at": 1700257772000 }
https://api.github.com/repos/huggingface/transformers/issues/27494
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27494/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27494/comments
https://api.github.com/repos/huggingface/transformers/issues/27494/events
https://github.com/huggingface/transformers/pull/27494
1,993,108,241
PR_kwDOCUB6oc5fbiWh
27,494
[`tokenizers`] update `tokenizers` version pin
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Nice at onn, going collection" ]
1,699
1,701
1,700
COLLABORATOR
null
# What does this PR do? Preparing for the patch and the release of tokenizers
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27494/reactions", "total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27494/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27494", "html_url": "https://github.com/huggingface/transformers/pull/27494", "diff_url": "https://github.com/huggingface/transformers/pull/27494.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27494.patch", "merged_at": 1700041562000 }
https://api.github.com/repos/huggingface/transformers/issues/27493
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27493/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27493/comments
https://api.github.com/repos/huggingface/transformers/issues/27493/events
https://github.com/huggingface/transformers/issues/27493
1,993,090,217
I_kwDOCUB6oc52zCSp
27,493
custom 4d attention_mask as transformers .forward() argument
{ "login": "poedator", "id": 24738311, "node_id": "MDQ6VXNlcjI0NzM4MzEx", "avatar_url": "https://avatars.githubusercontent.com/u/24738311?v=4", "gravatar_id": "", "url": "https://api.github.com/users/poedator", "html_url": "https://github.com/poedator", "followers_url": "https://api.github.com/users/poedator/followers", "following_url": "https://api.github.com/users/poedator/following{/other_user}", "gists_url": "https://api.github.com/users/poedator/gists{/gist_id}", "starred_url": "https://api.github.com/users/poedator/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/poedator/subscriptions", "organizations_url": "https://api.github.com/users/poedator/orgs", "repos_url": "https://api.github.com/users/poedator/repos", "events_url": "https://api.github.com/users/poedator/events{/privacy}", "received_events_url": "https://api.github.com/users/poedator/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "UPD: this feature gets discussed and implemented in https://github.com/huggingface/transformers/pull/27539" ]
1,699
1,702
1,702
CONTRIBUTOR
null
### Feature request Somewhere inside transformers models, 2d masks are converted into 4d. I want to be able to pass my own custom 4d mask to .forward(). Presently it causes an error. CODE EXAMPLE: ``` import torch import transformers device = "cuda" model_name = "openlm-research/open_llama_3b" model = transformers.AutoModelForCausalLM.from_pretrained(model_name, device_map=device) # preparing KV cache size0 = 5 max_token = 10000 x0 = torch.randint(max_token, (1, size0), device=device) y0 = model.forward(x0, ) # forward with mask size1 = 3 x1 = torch.randint(max_token, (1, size1), device=device) mask_shape = (1, 1, size0, size1) # bsz, head_dim=1, query_length, key_value_length custom_mask = torch.randint(2, mask_shape, device=device) model.forward(input_ids=x1, attention_mask=custom_mask, past_key_values=y0['past_key_values']) # expected forward with this custom_mask ``` Error msg: ``` ... File .../transformers/src/transformers/modeling_attn_mask_utils.py:154, in AttentionMaskConverter._expand_mask(mask, dtype, tgt_len) 149 @staticmethod 150 def _expand_mask(mask: torch.Tensor, dtype: torch.dtype, tgt_len: Optional[int] = None): 151 """ 152 Expands attention_mask from `[bsz, seq_len]` to `[bsz, 1, tgt_seq_len, src_seq_len]`. 153 """ --> 154 bsz, src_len = mask.size() 155 tgt_len = tgt_len if tgt_len is not None else src_len 157 expanded_mask = mask[:, None, None, :].expand(bsz, 1, tgt_len, src_len).to(dtype) ValueError: too many values to unpack (expected 2) ``` ### Motivation I need a custom 4d mask for experiments with causal inference. ### Your contribution I am ready to get involved, with HF guidance. Tagging @patrickvonplaten who recently authored #27086
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27493/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27493/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27492
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27492/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27492/comments
https://api.github.com/repos/huggingface/transformers/issues/27492/events
https://github.com/huggingface/transformers/pull/27492
1,992,864,623
PR_kwDOCUB6oc5fatD4
27,492
[Whisper] Add sequential longform decoding
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "QQ: @patrickvonplaten - wouldn't concatenating and passing the whole audio as input result in exploding GPU VRAM usage?", "Hey @patrickvonplaten would you mind adding the performance for bigger models? The worst the model is at predicting timestamps, the worse the performances of the chuncked algorithm. I remember observing very little loss for large models! (Just as a FMI!)", "> QQ: @patrickvonplaten - wouldn't concatenating and passing the whole audio as input result in exploding GPU VRAM usage?\r\n\r\nThe audio is chunked on the fly (there is a while loop now)", "> Hey @patrickvonplaten would you mind adding the performance for bigger models? The worst the model is at predicting timestamps, the worse the performances of the chuncked algorithm. I remember observing very little loss for large models!\r\n\r\nSure I can run it for larger models as well. I'm not 100% sure though why this matters - if we see such strong gains for smaller models we should add it nevertheless.", "> There could be something wrong with the way I'm initialising the pipeline. But on my single file benchmark - it just truncates the output to 30 sec\r\n> \r\n> Repro:https://github.com/Vaibhavs10/scratchpad/blob/main/conditional_long_form_generation_whisper.ipynb Note: I'm not defining chunk size as it isn't defined in the example snippet up top.\r\n> \r\n> It works as intended with model + generate tho! πŸš€\r\n> \r\n> More of an overall usage remark from a developer's PoV: How do we clarify whether the transcription strategy used is chunked or conditional? Can we allow developers to choose? Supporting this via pipeline is important IMO.\r\n> \r\n> Edit: To clarify, one of the biggest usecase for people to use pipeline is to throw an audio file in whichever format and then get the transcriptions for it.\r\n\r\nNice catch, there was a typo. Added a test for the pipeline now as well.", "I've adapted the tests to match the new time stamp logit processor. I've double-checked that the new time stamp logit processor gives the same WER results on long-form and applied suggestions.\r\n\r\nFailing test is a time-out which is not related to this PR - merging! ", "Thanks for implementing. It seems that longform decoding doesn't work with **return_token_timestamps=True** for model.generate() (nor **return_timestamps=\"word\"** for pipeline() ) in V4.37.2\r\n\r\nFailling at line 822 of whisper/generation_whisper.py in the private method **_postprocess_outputs** with error \"AttribeError: 'tutple' object has no attribute 'cpu' \"", "Hi @antoinethl, could you open a new issue, detailing the error encountered (including full traceback) and a minimal reproducer?", "> Hi @antoinethl, could you open a new issue, detailing the error encountered (including full traceback) and a minimal reproducer?\r\n\r\nHi, just opened 2 issues with traceback and short example." ]
1,699
1,707
1,700
MEMBER
null
# What does this PR do? This PR adds the long-form transcription as originally proposed in the Whisper codebase: https://github.com/openai/whisper and in the paper: https://cdn.openai.com/papers/whisper.pdf To better understand long-form transcription, please have a look at Section 4.5: Strategies for Reliable Long-form Transcription of the paper. Before this PR transformers only had "chunked" long-form transcription which trades speed against accuracy (see Table below). In this PR we add the best-performing long-form transcription to Transformers. ## Usage: One can use long-form transcription now easily with the pipeline object simply passing long-form audio. Previously, long-form audio was truncated to just 30 seconds. This PR makes sure that long audio is **not** cut when passed to the pipeline: ```py from datasets import load_dataset from datasets import Audio import numpy as np from transformers import WhisperForConditionalGeneration, AutoProcessor, pipeline import torch processor = AutoProcessor.from_pretrained("openai/whisper-tiny.en") model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny.en", torch_dtype=torch.float16) model = model.to("cuda") pipe = pipeline( "automatic-speech-recognition", model=model, tokenizer=processor.tokenizer, feature_extractor=processor.feature_extractor, max_new_tokens=128, torch_dtype=torch.float16, device="cuda:0", ) ds = load_dataset("distil-whisper/meanwhile", "default")["test"] ds = ds.cast_column("audio", Audio(sampling_rate=16000)) audio = ds[:8]["audio"] result = pipe(audio) ``` The pipeline is great for "easy-to-set-up" code but lacks customization and readability. For example the pipeline currently does not allow running the model with batch sizes > 1 and instead runs each audio 1-by-1, thus being very suboptimal regarding speed. To use long-form transcription for batch size > 1, you can use the following snippet: ```py from datasets import load_dataset from datasets import Audio import numpy as np from transformers import WhisperForConditionalGeneration, AutoProcessor, pipeline import torch processor = AutoProcessor.from_pretrained("openai/whisper-tiny.en") model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny.en", torch_dtype=torch.float16) model = model.to("cuda") ds = load_dataset("distil-whisper/meanwhile", "default")["test"] ds = ds.cast_column("audio", Audio(sampling_rate=16000)) audio = [x["array"] for x in ds[:8]["audio"]] inputs = processor(audio, return_tensors="pt", truncation=False, padding="longest", return_attention_mask=True, sampling_rate=16_000) inputs = inputs.to("cuda", torch.float16) result = model.generate(**inputs, return_timestamps=True) decoded = processor.batch_decode(result, skip_special_tokens=True) print(decoded) ``` Docs have been ![Screenshot from 2023-11-20 16-09-23](https://github.com/huggingface/transformers/assets/23423619/2a0bf6e4-94f2-4561-9a6e-3b8e70c01c19) added to give examples for both short- and longform transcription: But I don't think that this is enough for people to notice this method. We should in my opinion create much better guides for Whisper (will be done in a follow-up PR). ## Credits **IMPORTANT**: Most of the code added from this PR was copied & tweaked from the original whisper code: https://github.com/openai/whisper/blob/main/whisper/transcribe.py . Therefore 90% of the credit of this PR goes to @jongwook as the original author of the `transcribe.py` code. 
**Why copy the code?!**: We originally weren't planning on integrating the full long-form transcription algorithm into `transformers` but a couple of reasons have now forced us to add it: - Original codebase is very slow. The original whisper codebase is very slow and doesn't take advantage of recent advances such as Flash Attention, Torch compile etc... - Batched long-form inference is not supported, see here: https://github.com/openai/whisper/discussions/662 - We want to integrate whisper with normal decoding mechanisms such as speculative decoding ## Next steps: When looking at all long-form generation strategies: ![Screenshot from 2023-11-20 16-14-00](https://github.com/huggingface/transformers/assets/23423619/a41f14b8-4f7c-4500-82ea-7a7d47e26522) Transformers now has support for the following: - beam search - initial timestamp constraint In a follow-up PR we will add: "temperature fallback", "voice activity detection", and "previous text conditioning". ## Results: **Note** that for "chunked transformers" the numbers are <s>crossed-through</s> because the original results from the whisper paper seem to have been slightly incorrect. Re-running the eval gives better results. ### Here are the results for `openai/whisper-tiny.en` | x | chunked transformers | this pr | openai/whisper repo | | --- | --- | --- | --- | | Earnings 21 | <s>17.5</s> 17.28 |16.13 | 15.28 | | Earnings 22 | <s>24.1</s> 23.69 | 22.24 | 20.95 | | Meanwhile | <s>16.4</s> 14.55 | 13.27 | 12.90 | | Rev | <s>17.4</s> 17.46 | 16.10 | 14.77 | ### Here are the results for `openai/whisper-small.en` | x | chunked transformers | this pr | openai/whisper repo | | --- | --- | --- | --- | | Earnings 21 | <s>15.1</s> 12.61 |11.51 | 11.0 | | Earnings 22 | <s>20.6</s> 16.31 | 15.08 | 14.98 | | Meanwhile | <s>8.7</s> 7.06 | 6.52 | 6.49 | | Rev | <s>14.5</s> 14.02 | 12.28 | 11.93 | ### Here are the results for `openai/whisper-large-v2` | x | chunked transformers | this pr | openai/whisper repo | | --- | --- | --- | --- | | Earnings 21 | <s>11.8</s> 11.8 |10.66 | 9.7 | | Earnings 22 | <s>15.1</s> 15.0 | 13.93 | 12.6 | | Meanwhile | <s>6.3</s> 5.2 | 5.14 | 5.1 | | Rev | <s>13.6</s> 13.5 | 11.47 | 11.3 | **Update**: It seems like the numbers we measured in the distil-whisper paper for chunked long-form are a bit off. Re-running them gives the following: ### Here are the results for `openai/whisper-tiny.en` | x | chunked transformers | chunked transformers (this PR) | | --- | --- | --- | | Earnings 21 | 17.28 | 17.29 (+0.01)| | Earnings 22 |23.69 |23.7 (+0.02) | | Meanwhile | 14.55 |14.58 (+0.03) | | Rev | 17.46 | 17.44 (-0.02) | => New algo is on avg. 0.01 WER abs points worse which means it's identical ### Here are the results for `openai/whisper-small.en` | x | chunked transformers | chunked transformers (this PR) | | --- | --- | ---- | | Earnings 21 | 12.61 |12.63 (+0.02) | | Earnings 22 | 16.31 | 16.31 (+0.0) | | Meanwhile | 7.06 | 7.04 (-0.02) | | Rev | 14.02 | 14.04 (+0.02)| => New algo is on avg. 0.005 WER abs points worse which means it's identical ### Here are the results for `openai/whisper-large-v2` | x | chunked transformers | chunked transformers (this PR) | | --- | --- | --- | | Earnings 21 | 11.75 | 11.72 (-0.03) | | Earnings 22 |15.0 | 14.97 (-0.03) | | Meanwhile | 5.19 | 5.16 (-0.03) | | Rev | 13.53 | 13.53 (+0.0) | => New algo is on avg. 0.0225 WER abs points better which means it's identical or (tiny tiny bit better) ### Batch size > 1 The code now fully functions for batch size > 1 (made sure that results on the four datasets are within +/- 0.1 % WER). 
When using batch size = 8, there is a 4x speed-up for large-v2, 2x speed-up for small (and 1.5x speed-up for tiny). The bigger the model, the larger the speed-up! **One should definitely use larger batch sizes when doing long-form timestamp prediction!**
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27492/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 2, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27492/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27492", "html_url": "https://github.com/huggingface/transformers/pull/27492", "diff_url": "https://github.com/huggingface/transformers/pull/27492.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27492.patch", "merged_at": 1700656055000 }
https://api.github.com/repos/huggingface/transformers/issues/27491
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27491/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27491/comments
https://api.github.com/repos/huggingface/transformers/issues/27491/events
https://github.com/huggingface/transformers/pull/27491
1,992,830,981
PR_kwDOCUB6oc5falvG
27,491
Add patchtst
{ "login": "kashif", "id": 8100, "node_id": "MDQ6VXNlcjgxMDA=", "avatar_url": "https://avatars.githubusercontent.com/u/8100?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kashif", "html_url": "https://github.com/kashif", "followers_url": "https://api.github.com/users/kashif/followers", "following_url": "https://api.github.com/users/kashif/following{/other_user}", "gists_url": "https://api.github.com/users/kashif/gists{/gist_id}", "starred_url": "https://api.github.com/users/kashif/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kashif/subscriptions", "organizations_url": "https://api.github.com/users/kashif/orgs", "repos_url": "https://api.github.com/users/kashif/repos", "events_url": "https://api.github.com/users/kashif/events{/privacy}", "received_events_url": "https://api.github.com/users/kashif/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,699
1,699
1,699
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27491/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27491/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27491", "html_url": "https://github.com/huggingface/transformers/pull/27491", "diff_url": "https://github.com/huggingface/transformers/pull/27491.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27491.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/27490
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27490/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27490/comments
https://api.github.com/repos/huggingface/transformers/issues/27490/events
https://github.com/huggingface/transformers/pull/27490
1,992,771,281
PR_kwDOCUB6oc5faYsj
27,490
Fix TF loading PT safetensors when weights are tied
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27490). All of your documentation changes will be reflected on that endpoint.", "Update: I tried to get the test to trigger other models by adding `config.tie_word_embeddings=True`, but everything still seems to be passing, so I guess this PR is ready for review! cc @ArthurZucker or @amyeroberts ", "Added your suggestion @amyeroberts! The method always returns a tuple now.", "Quick ping again for final approval @amyeroberts !", "Before merge, let's check this PR agains the Hub Repo: `\"facebook/bart-large-cnn\"`, say by running \r\n\r\n`tests/models/bart/test_modeling_tf_bart.py::TFBartModelIntegrationTest::test_cnn_summarization_same_as_fairseq_hard`", "@ydshieh - good spot! I missed adding this method for BART. I tested and the slow tests that are failing in the CI are passing now that I've added it.", "Thanks a lot, @Rocketknight1 . The `TFBartModelIntegrationTest` all pass now with this PR.\r\n\r\nHowever, for `TFRag` still have 5 failing tests (as on the slack report)\r\n```\r\nFAILED tests/models/rag/test_modeling_tf_rag.py::TFRagModelIntegrationTests::test_rag_sequence_inference - ValueError: Weight name final_logits_bias:0 does not start with name_scope tf_rag_sequence_for_generation_1/rag. This is an internal error in Transformers, so (unless you were doing something really evil) please open an...\r\n\r\nFAILED tests/models/rag/test_modeling_tf_rag.py::TFRagModelIntegrationTests::test_rag_token_inference - ValueError: Weight name final_logits_bias:0 does not start with name_scope tf_rag_token_for_generation_1/rag. This is an internal error in Transformers, so (unless you were doing something really evil) please open an issue...\r\n\r\nFAILED tests/models/rag/test_modeling_tf_rag.py::TFRagModelIntegrationTests::test_rag_token_inference_save_pretrained - ValueError: Weight name final_logits_bias:0 does not start with name_scope tf_rag_token_for_generation_1/rag. This is an internal error in Transformers, so (unless you were doing something really evil) plea...\r\n\r\nFAILED tests/models/rag/test_modeling_tf_rag.py::TFRagModelSaveLoadTests::test_rag_sequence_from_pretrained - ValueError: Weight name final_logits_bias:0 does not start with name_scope tf_rag_sequence_for_generation_1/rag. This is an internal error in Transformers, so (unless you were doing something really evil) please open...\r\n\r\nFAILED tests/models/rag/test_modeling_tf_rag.py::TFRagModelSaveLoadTests::test_rag_token_from_pretrained - ValueError: Weight name final_logits_bias:0 does not start with name_scope tf_rag_token_for_generation_1/rag. This is an internal error in Transformers, so (unless you were doing something really evil) please open an is...\r\n\r\n\r\n```", "@ydshieh noted! I think I'll need a separate PR for those, though, since it's a composite model. This PR should fix most models, though - can we merge it urgently before the release while I work on something to fix models like RAG?", "@LysandreJik yes, will do! I think this will synergize with the new weight building #27794 as well, and we should be able to get TF-safetensors in good shape soon." ]
1,699
1,701
1,701
MEMBER
null
This PR resolves issues when loading PT safetensors files in TF. The cause of the problem was that safetensors saving discards "aliased" tensors, weight tensors that share the same underlying weight array. This commonly occurs when models use tied weights. Many of our TF models don't support weight tying, however, and as a result the decoder output weights fail to load correctly. The solution is to use a trick we already use for encoder-decoder models, a model-specific `tf_to_pt_weight_rename` method. This PR refactors the way that method is called to make it more accessible (no more need to override `from_pretrained`), and adds `tf_to_pt_weight_rename` methods to the affected models. However, I suspect there are more affected models which aren't showing up in this test, because the only models failing in this test are the models that **always use weight tying without needing a config flag**. If a model **optionally** ties weights based on a flag, that flag will not be set in this test. I suspect the same fix will be needed for several more models as a result, even though this test doesn't flag them.
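To make the mechanism concrete, below is a hypothetical sketch of such a hook. The method name `tf_to_pt_weight_rename` comes from this PR, but the class, the weight names and the exact signature are illustrative assumptions rather than the merged code:

```python
# Hypothetical per-model rename hook: safetensors stores only one copy of tied
# weights, so the TF loading code asks the model where a missing PT name
# should be read from instead.
class TFToyModelWithTiedHead:
    def tf_to_pt_weight_rename(self, tf_weight):
        # Redirect the decoder head to the shared input embeddings it is tied to.
        if tf_weight == "lm_head.weight":
            return ("model.shared.weight",)
        # Everything else keeps its own name, returned as a tuple of candidates.
        return (tf_weight,)
```

Returning a tuple of candidate names rather than a bare string mirrors the reviewer suggestion in the comments above that the method should always return a tuple.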
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27490/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27490/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27490", "html_url": "https://github.com/huggingface/transformers/pull/27490", "diff_url": "https://github.com/huggingface/transformers/pull/27490.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27490.patch", "merged_at": 1701959333000 }
https://api.github.com/repos/huggingface/transformers/issues/27489
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27489/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27489/comments
https://api.github.com/repos/huggingface/transformers/issues/27489/events
https://github.com/huggingface/transformers/pull/27489
1,992,768,237
PR_kwDOCUB6oc5faYCJ
27,489
Update Korean tutorial for using LLMs, and refactor the nested conditional statements in hr_argparser.py
{ "login": "YeonwooSung", "id": 30489717, "node_id": "MDQ6VXNlcjMwNDg5NzE3", "avatar_url": "https://avatars.githubusercontent.com/u/30489717?v=4", "gravatar_id": "", "url": "https://api.github.com/users/YeonwooSung", "html_url": "https://github.com/YeonwooSung", "followers_url": "https://api.github.com/users/YeonwooSung/followers", "following_url": "https://api.github.com/users/YeonwooSung/following{/other_user}", "gists_url": "https://api.github.com/users/YeonwooSung/gists{/gist_id}", "starred_url": "https://api.github.com/users/YeonwooSung/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/YeonwooSung/subscriptions", "organizations_url": "https://api.github.com/users/YeonwooSung/orgs", "repos_url": "https://api.github.com/users/YeonwooSung/repos", "events_url": "https://api.github.com/users/YeonwooSung/events{/privacy}", "received_events_url": "https://api.github.com/users/YeonwooSung/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @ArthurZucker ", "Ok for me to update and showcase Mistral7B, but I'll let @ArthurZucker or @amyeroberts comment on the code refactor :)", "@YeonwooSung Thanks for opening this PR! \r\n\r\nCould you separate out any code changes in hf parser and image utils into a separate PR? ", "> @YeonwooSung Thanks for opening this PR!\r\n> \r\n> Could you separate out any code changes in hf parser and image utils into a separate PR?\r\n\r\nSure, I will!", "@amyeroberts I've just rebased my main branch, and will open a separate pull request for the code changes in hf parser and image utils.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27489). All of your documentation changes will be reflected on that endpoint.", "BTW we could also use a Korean model to generate Korean text from [here](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard)", "> BTW we could also use a Korean model to generate Korean text from [here](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard)\r\n\r\nI'll try this next time. Thanks :)" ]
1,699
1,700
1,700
CONTRIBUTOR
null
# What does this PR do? - Update the Korean docs for LLM tutorials for using Mistral7B, not Llama-v1 - Refactor several nested if-else statements for readability ## Before submitting - [v] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [v] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [v] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @stevhliu
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27489/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27489/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27489", "html_url": "https://github.com/huggingface/transformers/pull/27489", "diff_url": "https://github.com/huggingface/transformers/pull/27489.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27489.patch", "merged_at": 1700500463000 }
https://api.github.com/repos/huggingface/transformers/issues/27488
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27488/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27488/comments
https://api.github.com/repos/huggingface/transformers/issues/27488/events
https://github.com/huggingface/transformers/pull/27488
1,992,728,792
PR_kwDOCUB6oc5faPci
27,488
Generate: `GenerationConfig.from_pretrained` can return unused kwargs
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,699
1,699
1,699
MEMBER
null
# What does this PR do? I wrote the doctest for the feature described above, but forgot to add the feature itself in the original `GenerationConfig` commit πŸ™ƒ Fixes the failing `pytest --doctest-modules src/transformers/generation/configuration_utils.py::transformers.generation.configuration_utils.GenerationConfig.from_pretrained`
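For context, the behaviour being added is roughly the following (the checkpoint name and kwargs are just an example mirroring the doctest, not the only supported usage):

```python
# GenerationConfig.from_pretrained with return_unused_kwargs=True hands back
# both the parsed config and any kwargs it did not recognise.
from transformers import GenerationConfig

generation_config, unused_kwargs = GenerationConfig.from_pretrained(
    "gpt2", top_k=1, foo=False, return_unused_kwargs=True
)
print(generation_config.top_k)  # 1 -> recognised kwargs are applied to the config
print(unused_kwargs)            # {'foo': False} -> unrecognised kwargs are returned
```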
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27488/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27488/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27488", "html_url": "https://github.com/huggingface/transformers/pull/27488", "diff_url": "https://github.com/huggingface/transformers/pull/27488.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27488.patch", "merged_at": 1699987257000 }
https://api.github.com/repos/huggingface/transformers/issues/27487
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27487/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27487/comments
https://api.github.com/repos/huggingface/transformers/issues/27487/events
https://github.com/huggingface/transformers/issues/27487
1,992,697,178
I_kwDOCUB6oc52xiVa
27,487
ERROR in run_hp_search_optuna when trying to use multi-GPU
{ "login": "sstoia", "id": 129397487, "node_id": "U_kgDOB7Zy7w", "avatar_url": "https://avatars.githubusercontent.com/u/129397487?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sstoia", "html_url": "https://github.com/sstoia", "followers_url": "https://api.github.com/users/sstoia/followers", "following_url": "https://api.github.com/users/sstoia/following{/other_user}", "gists_url": "https://api.github.com/users/sstoia/gists{/gist_id}", "starred_url": "https://api.github.com/users/sstoia/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sstoia/subscriptions", "organizations_url": "https://api.github.com/users/sstoia/orgs", "repos_url": "https://api.github.com/users/sstoia/repos", "events_url": "https://api.github.com/users/sstoia/events{/privacy}", "received_events_url": "https://api.github.com/users/sstoia/received_events", "type": "User", "site_admin": false }
[]
open
false
{ "login": "muellerzr", "id": 7831895, "node_id": "MDQ6VXNlcjc4MzE4OTU=", "avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muellerzr", "html_url": "https://github.com/muellerzr", "followers_url": "https://api.github.com/users/muellerzr/followers", "following_url": "https://api.github.com/users/muellerzr/following{/other_user}", "gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}", "starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions", "organizations_url": "https://api.github.com/users/muellerzr/orgs", "repos_url": "https://api.github.com/users/muellerzr/repos", "events_url": "https://api.github.com/users/muellerzr/events{/privacy}", "received_events_url": "https://api.github.com/users/muellerzr/received_events", "type": "User", "site_admin": false }
[ { "login": "muellerzr", "id": 7831895, "node_id": "MDQ6VXNlcjc4MzE4OTU=", "avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muellerzr", "html_url": "https://github.com/muellerzr", "followers_url": "https://api.github.com/users/muellerzr/followers", "following_url": "https://api.github.com/users/muellerzr/following{/other_user}", "gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}", "starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions", "organizations_url": "https://api.github.com/users/muellerzr/orgs", "repos_url": "https://api.github.com/users/muellerzr/repos", "events_url": "https://api.github.com/users/muellerzr/events{/privacy}", "received_events_url": "https://api.github.com/users/muellerzr/received_events", "type": "User", "site_admin": false } ]
[ "cc @muellerzr @pacman100 ", "Sorry for the delay, will be looking into it over this week!", "I'm running into the same issue. Any updates on this please?" ]
1,699
1,706
null
NONE
null
### System Info - transformers version: 4.28.1 - Platform: Linux-3.10.0-1160.95.1.el7.x86_64-x86_64-with-glibc2.17 - Python version: 3.9.16 - Huggingface_hub version: 0.14.1 - Safetensors version: not installed - PyTorch version (GPU?): 1.13.1 (False) ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction The problem appears when using `run_hp_search_optuna` method from transformers/integrations.py . This method is called when trying to perform an hyperparameter search with the `Trainer.hyperparameter_search` method: ```python best_trial = trainer.hyperparameter_search( direction='maximize', backend='optuna', hp_space=optuna_hp_space, n_trials=10, ) ``` The error obtained is the next one: `Traceback (most recent call last): File "/mnt/beegfs/sstoia/proyectos/LLM_finetuning_stratified_multiclass_optuna.py", line 266, in <module> best_trial = trainer.hyperparameter_search( File "/mnt/beegfs/sstoia/.conda/envs/env/lib/python3.9/site-packages/transformers/trainer.py", line 2592, in hyperparameter_search best_run = backend_dict[backend](self, n_trials, direction, **kwargs) File "/mnt/beegfs/sstoia/.conda/envs/env/lib/python3.9/site-packages/transformers/integrations.py", line 218, in run_hp_search_optuna args = pickle.loads(bytes(args_main_rank)) _pickle.UnpicklingError: pickle data was truncated` ### Expected behavior It should work, as the same function without multi-GPU works fine. I guess the problem comes from a parallelization error, as both GPUs may write on the same file.
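For reference, `optuna_hp_space` in the snippet above is a user-supplied function; a minimal sketch of what such a search space can look like is shown below (the hyperparameters and ranges are illustrative, not the reporter's actual space):

```python
# Illustrative Optuna search space for Trainer.hyperparameter_search; the keys
# are expected to map to TrainingArguments fields.
def optuna_hp_space(trial):
    return {
        "learning_rate": trial.suggest_float("learning_rate", 1e-6, 1e-4, log=True),
        "per_device_train_batch_size": trial.suggest_categorical(
            "per_device_train_batch_size", [4, 8, 16]
        ),
    }
```

The shape of this function should not matter for the error above: per the traceback, the failure happens in `run_hp_search_optuna` when the pickled arguments are exchanged between ranks.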
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27487/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27487/timeline
reopened
null
null
https://api.github.com/repos/huggingface/transformers/issues/27486
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27486/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27486/comments
https://api.github.com/repos/huggingface/transformers/issues/27486/events
https://github.com/huggingface/transformers/pull/27486
1,992,593,696
PR_kwDOCUB6oc5fZyQk
27,486
Revert "[time series] Add PatchTST (#25927)"
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,699
1,699
1,699
COLLABORATOR
null
The model was merged before final review and approval. This reverts commit 2ac5b9325ed3b54950c6c61fd5838ac6e55a9fe1. cc @kashif @LysandreJik
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27486/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27486/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27486", "html_url": "https://github.com/huggingface/transformers/pull/27486", "diff_url": "https://github.com/huggingface/transformers/pull/27486.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27486.patch", "merged_at": 1699964641000 }
https://api.github.com/repos/huggingface/transformers/issues/27485
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27485/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27485/comments
https://api.github.com/repos/huggingface/transformers/issues/27485/events
https://github.com/huggingface/transformers/pull/27485
1,992,590,534
PR_kwDOCUB6oc5fZxkb
27,485
Generate: fix `ExponentialDecayLengthPenalty` doctest
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "The failing test\r\n\r\n```\r\nsrc/transformers/generation/logits_process.py::transformers.generation.logits_process.WhisperTimeStampLogitsProcessor\r\n```\r\n\r\nis known on doctest CI, so good for me to merge. I already ping @sanchit-gandhi on slack.", "@amyeroberts would you be able to merge this PR, as the failure is unrelated? (different doctest on the same file) :)", "I can do it, but maybe wait @amyeroberts this time (so she knows I can πŸ˜„ without surprise)", "I can merge! \r\n\r\ncc @sanchit-gandhi as the failing doctest is a whisper one. " ]
1,699
1,699
1,699
MEMBER
null
# What does this PR do? The doctest was failing... but the root cause for the failure in `transformers.generation.logits_process.ExponentialDecayLengthPenalty` is the transition from `torch==2.0` to `torch==2.1` (i.e. installing `torch==2.0` fixes it). I couldn’t find any related reference in the release notes, but I know they touched `torch.multinomial` after the 2.0 release -- it was sampling things with probability=0.0. This change may impact this doctest, as it is sample-based and it may induce probabilities=0.0. As such, the fix consists of updating the test's outputs. I took the opportunity to improve the example as well :)
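As a side note, the suspicion about `torch.multinomial` can be probed directly; this is only an illustrative check under that assumption, not part of the fix:

```python
# Probe: sample many times from a distribution with an explicit zero-probability
# entry and count how often that entry is drawn. If the count is non-zero, the
# older sampling behaviour described above is in play, which would shift the
# outputs of sample-based doctests.
import torch

torch.manual_seed(0)
probs = torch.tensor([0.0, 1.0])
draws = torch.multinomial(probs, num_samples=10_000, replacement=True)
print((draws == 0).sum().item())
```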
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27485/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27485/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27485", "html_url": "https://github.com/huggingface/transformers/pull/27485", "diff_url": "https://github.com/huggingface/transformers/pull/27485.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27485.patch", "merged_at": 1699986110000 }
https://api.github.com/repos/huggingface/transformers/issues/27484
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27484/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27484/comments
https://api.github.com/repos/huggingface/transformers/issues/27484/events
https://github.com/huggingface/transformers/pull/27484
1,992,538,416
PR_kwDOCUB6oc5fZmNV
27,484
[WIP] Hard error when ignoring tensors.
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27484). All of your documentation changes will be reflected on that endpoint.", "Removing the `.clone()` [here](https://github.com/huggingface/optimum/blob/15a162824d0c5d8aa7a3d14ab6e9bb07e5732fb6/optimum/bettertransformer/models/base.py#L154), I can confirm that\r\n\r\n```python\r\nfrom transformers import AutoModel\r\nfrom optimum.bettertransformer import BetterTransformer\r\nimport torch\r\n\r\nmodel = AutoModel.from_pretrained(\"bert-base-uncased\")\r\n\r\nmodel_bt = BetterTransformer.transform(model)\r\n\r\nmodel_reverse = BetterTransformer.reverse(model_bt)\r\n\r\nmodel_reverse.save_pretrained(\"bt-reverse-saved\")\r\n\r\nmodel = AutoModel.from_pretrained(\"bert-base-uncased\")\r\nmodel_reverse = AutoModel.from_pretrained(\"bt-reverse-saved\")\r\n\r\nassert torch.equal(model.encoder.layer[0].attention.self.query.weight, model_reverse.encoder.layer[0].attention.self.query.weight)\r\nassert torch.equal(model.encoder.layer[0].attention.self.key.weight, model_reverse.encoder.layer[0].attention.self.key.weight)\r\nassert torch.equal(model.encoder.layer[0].attention.self.value.weight, model_reverse.encoder.layer[0].attention.self.value.weight)\r\n```\r\n\r\nworks fine with this fix (while was failing on main).", "Forgot to merge after resolving conflicts with main. Merging now" ]
1,699
1,707
1,707
CONTRIBUTOR
null
# What does this PR do? Better selection/error handling when saving a checkpoint. - Find all names we should normally drop (those are in the transformers config) - Find all disjoint tensors (for those we can safely trigger a copy to get rid of the sharing before saving) - Clone those disjoint tensors, getting rid of the issue - Find all identical names (those should be declared in the config, but we try to find them all anyway) - For all identical names: - If they are in the config, just ignore them: everything is fine - If they are not, warn about them. - For all remaining tensors which are shared yet neither identical nor disjoint, raise a hard error. * Adding a failing test on `main` that passes here. * We don't need to keep the subfolder logic in this test. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
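To make the disjoint/identical distinction above more concrete, here is a minimal, hypothetical sketch of how tensors sharing storage in a state dict could be grouped before saving (illustrative only, not the actual `transformers` implementation; it assumes `torch>=2.0` for `untyped_storage()`):

```python
from collections import defaultdict

import torch


def find_shared_tensor_names(state_dict):
    # Group parameter names by the data pointer of their underlying storage:
    # names in the same group alias the same memory.
    groups = defaultdict(list)
    for name, tensor in state_dict.items():
        groups[tensor.untyped_storage().data_ptr()].append(name)
    return [sorted(names) for names in groups.values() if len(names) > 1]


shared = torch.nn.Linear(4, 4)
state_dict = {
    "encoder.weight": shared.weight,   # tied ...
    "decoder.weight": shared.weight,   # ... to the same tensor
    "other.weight": torch.nn.Linear(4, 4).weight,
}
print(find_shared_tensor_names(state_dict))
# [['decoder.weight', 'encoder.weight']]
```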
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27484/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27484/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27484", "html_url": "https://github.com/huggingface/transformers/pull/27484", "diff_url": "https://github.com/huggingface/transformers/pull/27484.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27484.patch", "merged_at": 1707121044000 }
https://api.github.com/repos/huggingface/transformers/issues/27483
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27483/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27483/comments
https://api.github.com/repos/huggingface/transformers/issues/27483/events
https://github.com/huggingface/transformers/pull/27483
1,992,456,564
PR_kwDOCUB6oc5fZUIg
27,483
Set `usedforsecurity=False` in hashlib methods (FIPS compliance)
{ "login": "Wauplin", "id": 11801849, "node_id": "MDQ6VXNlcjExODAxODQ5", "avatar_url": "https://avatars.githubusercontent.com/u/11801849?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Wauplin", "html_url": "https://github.com/Wauplin", "followers_url": "https://api.github.com/users/Wauplin/followers", "following_url": "https://api.github.com/users/Wauplin/following{/other_user}", "gists_url": "https://api.github.com/users/Wauplin/gists{/gist_id}", "starred_url": "https://api.github.com/users/Wauplin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Wauplin/subscriptions", "organizations_url": "https://api.github.com/users/Wauplin/orgs", "repos_url": "https://api.github.com/users/Wauplin/repos", "events_url": "https://api.github.com/users/Wauplin/events{/privacy}", "received_events_url": "https://api.github.com/users/Wauplin/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "> Thanks! Not 100% sure we need to rely on the utility as it's a pretty small piece of code for something pretty code + not very intuitive to me that we need to import from hf hub. If others are fine good for me as well\r\n\r\nAgree it's not big piece of code. But having it from an explicitly named `insecure_hashlib` module makes it more easily distinguishable IMO => no need to check each hashlib call to verify that the parameter is correctly set. Also you would have to duplicate the logic that checks whether you are running on python 3.9 or not (2 lines, but still not best to copy-paste around). \r\nAlso the dependency on `huggingface_hub` already exists so no big deal IMO (and updating to latest version forces to get the latest fixes we shipped -that's more of a cool side-effect)", "You are right! πŸ€— ", "Thanks for merging https://github.com/huggingface/transformers/pull/27494 @ArthurZucker ! I merged `main` into this PR and once CI is green I think we'll be good to merge :)", "Still some issue with the dependencies... :confused: \r\n[I opened a PR](https://github.com/huggingface/huggingface_hub/pull/1828) in huggingface_hub and will make another patch release later.", "I converted this PR to draft because of the dependency issues we're having. I'll push a new fix in `huggingface_hub` to definitely settle this (see [slack thread](https://huggingface.slack.com/archives/C014N4749J9/p1700069962026839) -internal- for explanations).\r\n\r\n**Please do not merge even if CI is green.**", "@Wauplin I can see the PR was marked as ready for review again. Does this mean it's now mergeable? \r\n\r\nWould be good to get a final πŸ‘ from @ydshieh to make sure all relevant files for our CI have been updated. ", "Look good!", "Confirmed with @Wauplin over slack that the PR is good to go - merging! " ]
1,699
1,700
1,700
CONTRIBUTOR
null
Solves https://github.com/huggingface/transformers/issues/27034 (cc @DueViktor). This PR makes `transformers` FIPS-compliant regarding hashlib usage by setting `usedforsecurity=False` in every hashlib method (hashing is used for file checking, not for cryptographic purposes). It's based on utilities added in https://github.com/huggingface/huggingface_hub/pull/1782 and released in `huggingface_hub==v0.19.0`. **Note:** before merging this we need to release a new tokenizers version that allows the new `huggingface_hub` version (see https://github.com/huggingface/tokenizers/pull/1385). Tests are currently failing because of this, which is expected. cc @ArthurZucker, what's the status of the next release?
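For reference, a small sketch of the pattern this PR applies (the `usedforsecurity` keyword assumes Python >= 3.9; the `insecure_hashlib` helper is the utility from the linked `huggingface_hub` PR, and its import path may differ between versions):

```python
import hashlib

data = b"contents of a cached file"

# Explicitly mark the hash as non-cryptographic so FIPS-enabled hosts allow it.
print(hashlib.sha256(data, usedforsecurity=False).hexdigest())

# The huggingface_hub helper wraps the same call (and the Python-version check)
# behind an explicitly named module.
from huggingface_hub.utils import insecure_hashlib

print(insecure_hashlib.sha256(data).hexdigest())
```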
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27483/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27483/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27483", "html_url": "https://github.com/huggingface/transformers/pull/27483", "diff_url": "https://github.com/huggingface/transformers/pull/27483.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27483.patch", "merged_at": 1700144993000 }
https://api.github.com/repos/huggingface/transformers/issues/27482
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27482/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27482/comments
https://api.github.com/repos/huggingface/transformers/issues/27482/events
https://github.com/huggingface/transformers/issues/27482
1,992,443,239
I_kwDOCUB6oc52wkVn
27,482
ERROR | index 4 is out of range
{ "login": "hdnh2006", "id": 17271049, "node_id": "MDQ6VXNlcjE3MjcxMDQ5", "avatar_url": "https://avatars.githubusercontent.com/u/17271049?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hdnh2006", "html_url": "https://github.com/hdnh2006", "followers_url": "https://api.github.com/users/hdnh2006/followers", "following_url": "https://api.github.com/users/hdnh2006/following{/other_user}", "gists_url": "https://api.github.com/users/hdnh2006/gists{/gist_id}", "starred_url": "https://api.github.com/users/hdnh2006/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hdnh2006/subscriptions", "organizations_url": "https://api.github.com/users/hdnh2006/orgs", "repos_url": "https://api.github.com/users/hdnh2006/repos", "events_url": "https://api.github.com/users/hdnh2006/events{/privacy}", "received_events_url": "https://api.github.com/users/hdnh2006/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @hdnh2006, thanks for raising this issue! \r\n\r\nVersion `v4.26.0.dev0` indicates that the installed transformers was from the `main` branch on git. We haven't release 4.26 yet, so you can't install directly using pip and the library name.\r\n\r\nTo install from source (dev version):\r\n```\r\npip install git+https://github.com/huggingface/transformers\r\n```", "Thanks for the information, I did it and it worked!" ]
1,699
1,700
1,700
NONE
null
### System Info Hello everyone! I am trying to deploy [this](https://huggingface.co/facebook/musicgen-stereo-small) model for testing purposes and I am facing several issues running it both locally and on Hugging Face Inference Endpoints. I have followed [this guide](https://huggingface.co/blog/run-musicgen-as-an-api) exactly, step by step. However, both locally and on Inference Endpoints, I run into the same issue and get this error: ``` IndexError: index 4 is out of range ``` I was only able to run the model on Google Colab, where I noticed the transformers version was `4.36.0.dev0`, so I tried putting something like `transformers>=4.36.0` in my `requirements.txt` file, but it seems this version is not available in the Inference Endpoints Docker container. So, in my particular case, do you have any idea how to solve this issue? Thanks in advance ### Who can help? I don't know who I must tag, sorry if I'm wrong: @Narsil @ArthurZucker ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ``` # Use a pipeline as a high-level helper from transformers import pipeline pipe = pipeline("text-to-audio", model="facebook/musicgen-stereo-small") ``` ### Expected behavior The tokens or audio produced by the model.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27482/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27482/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27481
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27481/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27481/comments
https://api.github.com/repos/huggingface/transformers/issues/27481/events
https://github.com/huggingface/transformers/pull/27481
1,992,307,603
PR_kwDOCUB6oc5fYznq
27,481
[`CI-test_torch`] skip `test_tf_from_pt_safetensors` for 4 models
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27481). All of your documentation changes will be reflected on that endpoint.", "Hello, @ArthurZucker \r\nThanks for fixing the broken tests. I have made a PR and the tests failed because of the broken tests, how could I rerun the tests ? \r\n\r\nhttps://github.com/huggingface/transformers/pull/26304", "I'll re-run them for you! ", "@ArthurZucker Hi, I have rebased my PR #27351 with your latest commits, but the same CI failure still persists with `speech_to_text` (only `speech_to_text_2` is skipped in the current PR). We need to skip `test_tf_from_pt_safetensors` tests for `speech_to_text` as well. Thanks!", "Thanks! @Rocketknight1 is working on a broader fix, I don't really mind skipping this test for all the other failing PRs", "The proper fix is here #27490" ]
1,699
1,699
1,699
COLLABORATOR
null
# What does this PR do? Skip `test_tf_from_pt_safetensors` tests for MobileBert, TransfoXL, Speech2Text and XGLM
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27481/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27481/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27481", "html_url": "https://github.com/huggingface/transformers/pull/27481", "diff_url": "https://github.com/huggingface/transformers/pull/27481.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27481.patch", "merged_at": 1699954443000 }
https://api.github.com/repos/huggingface/transformers/issues/27480
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27480/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27480/comments
https://api.github.com/repos/huggingface/transformers/issues/27480/events
https://github.com/huggingface/transformers/issues/27480
1,992,182,941
I_kwDOCUB6oc52vkyd
27,480
Unable to load bart-large-cnn model correctly
{ "login": "kao73", "id": 18687424, "node_id": "MDQ6VXNlcjE4Njg3NDI0", "avatar_url": "https://avatars.githubusercontent.com/u/18687424?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kao73", "html_url": "https://github.com/kao73", "followers_url": "https://api.github.com/users/kao73/followers", "following_url": "https://api.github.com/users/kao73/following{/other_user}", "gists_url": "https://api.github.com/users/kao73/gists{/gist_id}", "starred_url": "https://api.github.com/users/kao73/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kao73/subscriptions", "organizations_url": "https://api.github.com/users/kao73/orgs", "repos_url": "https://api.github.com/users/kao73/repos", "events_url": "https://api.github.com/users/kao73/events{/privacy}", "received_events_url": "https://api.github.com/users/kao73/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Probably I found the issue in the source code, but not sure how to fix it correctly. Thus, no PR from me for now.\r\nThe `facebook/bart-large-cnn` has `model_type`=`bart` in the configuration file.\r\n\r\nLooking at the piece of code: https://github.com/huggingface/text-generation-inference/blob/v1.1.0/server/text_generation_server/models/__init__.py#L300-L316\r\n```\r\n if model_type in modeling_auto.MODEL_FOR_CAUSAL_LM_MAPPING_NAMES:\r\n return CausalLM(\r\n model_id,\r\n revision,\r\n quantize=quantize,\r\n dtype=dtype,\r\n trust_remote_code=trust_remote_code,\r\n )\r\n if model_type in modeling_auto.MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING_NAMES:\r\n return Seq2SeqLM(\r\n model_id,\r\n revision,\r\n quantize=quantize,\r\n dtype=dtype,\r\n trust_remote_code=trust_remote_code,\r\n )\r\n```\r\nThe `bart` model type mentioned in the `modeling_auto.MODEL_FOR_CAUSAL_LM_MAPPING_NAMES` and `modeling_auto.MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING_NAMES` lists.\r\nThus, the first condition returns `true`, and the model loaded by `CausalLM` class instead of `Seq2SeqLM`.\r\n\r\nAccording to the model card: https://huggingface.co/facebook/bart-large-cnn\r\nthe model should be deployed by `AutoModelForSeq2SeqLM` (`BartForConditionalGeneration`) class:\r\n```\r\n# Load model directly\r\nfrom transformers import AutoTokenizer, AutoModelForSeq2SeqLM\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"facebook/bart-large-cnn\")\r\nmodel = AutoModelForSeq2SeqLM.from_pretrained(\"facebook/bart-large-cnn\")\r\n```", "Hi @kao73 thanks for the issue,\r\nBart architecture contains both a[ decoder-only](https://github.com/huggingface/transformers/blob/main/src/transformers/models/bart/modeling_bart.py#L1949) (i.e. CausalLM model) and [enc-decoder model](https://github.com/huggingface/transformers/blob/e107ae364e5b9564c8f8a14dcc185efa506c7b6e/src/transformers/models/bart/modeling_bart.py#L1496 ). \r\nFor the checkpoint `facebook/bart-large-cnn` it is indeed an encoder-decoder: https://huggingface.co/facebook/bart-large-cnn/blob/main/config.json#L7\r\nPerhaps the fix should go on TGI side and in the case the model architecture is both present in `MODEL_FOR_CAUSAL_LM_MAPPING_NAMES` and `MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING_NAMES`, the choice of the class to load should be determined by `model.config.is_encoder_decoder`. cc @OlivierDehaene ", "@OlivierDehaene @younesbelkada Was this resolved? ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,699
1,705
1,705
NONE
null
### System Info - `transformers` version: 4.33.3 - Platform: Linux-5.15.136-90.144.amzn2.x86_64-x86_64-with-glibc2.31 - Python version: 3.9.18 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.3 - Accelerate version: 0.20.3 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: NA - Using distributed or parallel set-up in script?: No ### Who can help? @ArthurZucker @younesbelkada @SunMarc ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Steps to reproduce: 1. Download the `facebook/bart-large-cnn` model to the local FS (/tmp/model) 2. Start TGI with parameters: text-generation-launcher --hostname 127.0.0.1 --port 9000 --sharded false --num-shard 1 --dtype bfloat16 --max-input-length 1024 --max-total-tokens 2048 --max-batch-total-tokens 4096 --model-id /tmp/model 3. Make a request to the server with any available client. The `inputs` text can be any string 4. The model produces meaningless text. Example: Input: `The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building, and the tallest structure in Paris.` Output: `had had had had had had had had had had had only had` ### Expected behavior The model should produce a generated summary of the input text
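The comments above trace this to the model-type dispatch in TGI: `bart` appears in both auto mappings, so the causal-LM branch wins. A rough, hypothetical sketch of the suggested fix (not actual `text-generation-inference` code) would let the checkpoint's config break the tie:

```python
def pick_model_family(model_type, config, causal_types, seq2seq_types):
    in_causal = model_type in causal_types
    in_seq2seq = model_type in seq2seq_types
    if in_causal and in_seq2seq:
        # Architectures such as "bart" ship both heads; trust the checkpoint's
        # own config to decide which one to load.
        return "seq2seq" if getattr(config, "is_encoder_decoder", False) else "causal"
    if in_causal:
        return "causal"
    if in_seq2seq:
        return "seq2seq"
    raise ValueError(f"Unsupported model type: {model_type}")
```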
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27480/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27480/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27479
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27479/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27479/comments
https://api.github.com/repos/huggingface/transformers/issues/27479/events
https://github.com/huggingface/transformers/pull/27479
1,991,938,486
PR_kwDOCUB6oc5fXk2x
27,479
Adding flash attention to GPT2
{ "login": "canberk17", "id": 33362633, "node_id": "MDQ6VXNlcjMzMzYyNjMz", "avatar_url": "https://avatars.githubusercontent.com/u/33362633?v=4", "gravatar_id": "", "url": "https://api.github.com/users/canberk17", "html_url": "https://github.com/canberk17", "followers_url": "https://api.github.com/users/canberk17/followers", "following_url": "https://api.github.com/users/canberk17/following{/other_user}", "gists_url": "https://api.github.com/users/canberk17/gists{/gist_id}", "starred_url": "https://api.github.com/users/canberk17/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/canberk17/subscriptions", "organizations_url": "https://api.github.com/users/canberk17/orgs", "repos_url": "https://api.github.com/users/canberk17/repos", "events_url": "https://api.github.com/users/canberk17/events{/privacy}", "received_events_url": "https://api.github.com/users/canberk17/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@younesbelkada, thanks for the tips ! Now most of the tests are passing. However, I'm facing a challenge with to address the issues in the following test:\r\n\r\n`tests_torch failure` referring to `Speech2TextModelTest` which is puzzling since I haven't made changes to this part of the code.\r\n\r\nFor `tests_torch` I don't know how the tests on google colab passed. I used the following statement to run these test both for the files in this branch:\r\n`!RUN_SLOW=1 pytest -sv --disable-warnings -k flash_attn_2 tests/models/gpt2/test_modeling_gpt2.py\r\n`\r\nSimilarly, I used the same command for the recently merged GPT variant:\r\n`!RUN_SLOW=1 pytest -sv --disable-warnings -k flash_attn_2 tests/models/gpt_bigcode/test_modeling_gpt_bigcode.py\r\n`\r\nand I got the results mentioned in my comment. I am wondering if I am executing these tests on Colab properly, as I don't see any error messages when I run these tests. When you get a change could you give me some insights on my approach please ", "cc @younesbelkada ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,699
1,707
1,707
NONE
null
# What does this PR do? Adding Flash Attention 2 to GPT2; here are my tests: ![image](https://github.com/huggingface/transformers/assets/33362633/ebece185-023b-4816-81b5-928a4c10e753) <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> Contributing to: [#26350](https://github.com/huggingface/transformers/issues/26350) ## Who can review? Hey guys @younesbelkada @ArthurZucker, could you please review it when you get a chance? I was trying to debug why I was getting these test failures; some of them point to the Falcon model (even though I have not touched that file). Also, I ran the flash attention test on another model that has been merged, and these are the test results I am getting: ![image](https://github.com/huggingface/transformers/assets/33362633/7e0191c6-8d02-43cb-8ead-c7baeeff3a7e) Am I on the right path here? I couldn't work out why some of these test failures are happening.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27479/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27479/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27479", "html_url": "https://github.com/huggingface/transformers/pull/27479", "diff_url": "https://github.com/huggingface/transformers/pull/27479.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27479.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/27478
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27478/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27478/comments
https://api.github.com/repos/huggingface/transformers/issues/27478/events
https://github.com/huggingface/transformers/pull/27478
1,991,929,744
PR_kwDOCUB6oc5fXjHK
27,478
feat: add flash_attn 2 to bert
{ "login": "chiennv2000", "id": 61793581, "node_id": "MDQ6VXNlcjYxNzkzNTgx", "avatar_url": "https://avatars.githubusercontent.com/u/61793581?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chiennv2000", "html_url": "https://github.com/chiennv2000", "followers_url": "https://api.github.com/users/chiennv2000/followers", "following_url": "https://api.github.com/users/chiennv2000/following{/other_user}", "gists_url": "https://api.github.com/users/chiennv2000/gists{/gist_id}", "starred_url": "https://api.github.com/users/chiennv2000/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chiennv2000/subscriptions", "organizations_url": "https://api.github.com/users/chiennv2000/orgs", "repos_url": "https://api.github.com/users/chiennv2000/repos", "events_url": "https://api.github.com/users/chiennv2000/events{/privacy}", "received_events_url": "https://api.github.com/users/chiennv2000/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Thanks a lot for your review and your suggestions @younesbelkada.\r\nBut I don't really familiar with ```make fix-copies``` command. Can you guide me on how to do that?\r\n", "2) I appreciate your feedback. I'm happy to receive your assistance in implementing these changes. \r\nIf you could help me with other architectures, that would be fantastic. Additionally, I'm open to collaborating on extending this to the Roberta and XLMR model. @younesbelkada ", "Perfect thanks! \r\nAs a first step, can you simply run `make fix-copies` and push the changes here? Then we'll take it over from there !", "Thanks @younesbelkada , I did it", "cc @younesbelkada ", "Didn't had time to properly look into it, will do it asap!", "Any updates on getting this PR merged?", "Hello there! \r\n\r\nI'm working on integrating scaled_dot_product_attention to BERT #28802, and there might be some merge conflicts with this change. Mostly, if my changes go through, then we can get rid of most of the downstream dependencies from fix-copies. \r\n\r\nLet me know if you have any questions. Happy to discuss and/or chat on the best way forward if necessary." ]
1,699
1,707
null
NONE
null
Feat: Add flash attention option for BERT Usage: model = BertModel.from_pretrained('bert-base-uncased', torch_dtype=torch.bfloat16, use_flash_attention_2=True) - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? @ArthurZucker and @younesbelkada
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27478/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27478/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27478", "html_url": "https://github.com/huggingface/transformers/pull/27478", "diff_url": "https://github.com/huggingface/transformers/pull/27478.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27478.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/27477
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27477/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27477/comments
https://api.github.com/repos/huggingface/transformers/issues/27477/events
https://github.com/huggingface/transformers/pull/27477
1,991,318,862
PR_kwDOCUB6oc5fVdaD
27,477
Update processor mapping for hub snippets
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,699
1,699
1,699
COLLABORATOR
null
# What does this PR do? Updates the processor mapping so that `AutoImageProcessor` is selected for vision models rather than the deprecated `AutoFeatureExtractor`. This should resolve auto-generated snippets on the hub that still show using old classes.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27477/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27477/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27477", "html_url": "https://github.com/huggingface/transformers/pull/27477", "diff_url": "https://github.com/huggingface/transformers/pull/27477.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27477.patch", "merged_at": 1699992354000 }
https://api.github.com/repos/huggingface/transformers/issues/27476
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27476/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27476/comments
https://api.github.com/repos/huggingface/transformers/issues/27476/events
https://github.com/huggingface/transformers/pull/27476
1,991,276,505
PR_kwDOCUB6oc5fVT_w
27,476
[Docs] PatchTST doc improvements
{ "login": "kashif", "id": 8100, "node_id": "MDQ6VXNlcjgxMDA=", "avatar_url": "https://avatars.githubusercontent.com/u/8100?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kashif", "html_url": "https://github.com/kashif", "followers_url": "https://api.github.com/users/kashif/followers", "following_url": "https://api.github.com/users/kashif/following{/other_user}", "gists_url": "https://api.github.com/users/kashif/gists{/gist_id}", "starred_url": "https://api.github.com/users/kashif/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kashif/subscriptions", "organizations_url": "https://api.github.com/users/kashif/orgs", "repos_url": "https://api.github.com/users/kashif/repos", "events_url": "https://api.github.com/users/kashif/events{/privacy}", "received_events_url": "https://api.github.com/users/kashif/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27476). All of your documentation changes will be reflected on that endpoint." ]
1,699
1,700
1,700
CONTRIBUTOR
null
# What does this PR do? Updates the docs to follow the structure of #26876
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27476/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27476/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27476", "html_url": "https://github.com/huggingface/transformers/pull/27476", "diff_url": "https://github.com/huggingface/transformers/pull/27476.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27476.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/27475
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27475/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27475/comments
https://api.github.com/repos/huggingface/transformers/issues/27475/events
https://github.com/huggingface/transformers/issues/27475
1,991,102,507
I_kwDOCUB6oc52rdAr
27,475
[nougat] Unable to use nougat models with `image-to-text` pipeline
{ "login": "xenova", "id": 26504141, "node_id": "MDQ6VXNlcjI2NTA0MTQx", "avatar_url": "https://avatars.githubusercontent.com/u/26504141?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xenova", "html_url": "https://github.com/xenova", "followers_url": "https://api.github.com/users/xenova/followers", "following_url": "https://api.github.com/users/xenova/following{/other_user}", "gists_url": "https://api.github.com/users/xenova/gists{/gist_id}", "starred_url": "https://api.github.com/users/xenova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xenova/subscriptions", "organizations_url": "https://api.github.com/users/xenova/orgs", "repos_url": "https://api.github.com/users/xenova/repos", "events_url": "https://api.github.com/users/xenova/events{/privacy}", "received_events_url": "https://api.github.com/users/xenova/received_events", "type": "User", "site_admin": false }
[ { "id": 1990918270, "node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw", "url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue", "name": "Good First Issue", "color": "bbf794", "default": false, "description": "" } ]
closed
false
null
[]
[ "Hi @xenova,\r\n\r\nThis appears to be a config issue from their side.\r\nHere is a quick fix.\r\n\r\n```python\r\nfrom transformers import pipeline\r\nfrom transformers import AutoFeatureExtractor\r\n\r\npipe = pipeline(\r\n task='image-to-text', \r\n model='facebook/nougat-base', \r\n feature_extractor=AutoFeatureExtractor,\r\n)\r\n\r\nresponse = pipe(\r\n 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/nougat_paper.png', \r\n max_new_tokens=20\r\n)\r\nprint(response[0].get('generated_text'))\r\n```\r\n\r\nPlease let me know if it works for you :)", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Hi, I would like to work on this issue. Could you please assign it to me?", "Hi @avisinghal6 - we don't normally assign issues: people saying they're working on it and open a PR in github or on the hub directly, linking in a comment to the related work. \r\n\r\nIn this case - you're more than welcome to tackle this! ", "Thanks, i will work on this issue and update the status in a few days.", "> ```python\r\n> feature_extractor\r\n> ```\r\n\r\nI have a few questions regarding this issue:\r\n1. Is it possible to solve the config issue and if yes, any leads on how?\r\n2. Does it involve changing the files [here](https://github.com/huggingface/transformers/tree/main/src/transformers/models/nougat)? If yes, which files must be edited?\r\n\r\nAny help is highly appreciated!", "Hi,\r\n\r\nI looked a bit into this issue and found the problematic line. Opened a PR above." ]
1,699
1,707
1,707
CONTRIBUTOR
null
### System Info - `transformers` version: 4.36.0.dev0 - Platform: Linux-5.15.120+-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.17.3 - Safetensors version: 0.4.0 - Accelerate version: not installed - Accelerate config: not found - PyTorch version (GPU?): 2.1.0+cu118 (False) - Tensorflow version (GPU?): 2.14.0 (False) - Flax version (CPU?/GPU?/TPU?): 0.7.5 (cpu) - Jax version: 0.4.20 - JaxLib version: 0.4.20 - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? @NielsRogge @Narsil ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Running ```py from transformers import pipeline pipe = pipeline('image-to-text', 'facebook/nougat-base') pipe('https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/nougat_paper.png') ``` results in the following error: ``` ValueError: Unrecognized feature extractor in facebook/nougat-base. Should have a `feature_extractor_type` key in its preprocessor_config.json of config.json, or one of the following `model_type` keys in its config.json: audio-spectrogram-transformer, beit, chinese_clip, clap, clip, clipseg, clvp, conditional_detr, convnext, cvt, data2vec-audio, data2vec-vision, deformable_detr, deit, detr, dinat, donut-swin, dpt, encodec, flava, glpn, groupvit, hubert, imagegpt, layoutlmv2, layoutlmv3, levit, maskformer, mctct, mobilenet_v1, mobilenet_v2, mobilevit, nat, owlvit, perceiver, poolformer, pop2piano, regnet, resnet, seamless_m4t, segformer, sew, sew-d, speech_to_text, speecht5, swiftformer, swin, swinv2, table-transformer, timesformer, tvlt, unispeech, unispeech-sat, van, videomae, vilt, vit, vit_mae, vit_msn, wav2vec2, wav2vec2-conformer, wavlm, whisper, xclip, yolos ``` ### Expected behavior The model should function properly with the `pipeline` API.
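One possible workaround while the `preprocessor_config.json` issue is sorted out (an untested sketch; it assumes the checkpoint loads fine through `AutoProcessor` and that this `pipeline` version accepts an explicit `image_processor`):

```python
from transformers import AutoProcessor, VisionEncoderDecoderModel, pipeline

processor = AutoProcessor.from_pretrained("facebook/nougat-base")
model = VisionEncoderDecoderModel.from_pretrained("facebook/nougat-base")

# Building the pipeline from explicit components skips the failing
# AutoFeatureExtractor resolution.
pipe = pipeline(
    "image-to-text",
    model=model,
    tokenizer=processor.tokenizer,
    image_processor=processor.image_processor,
)
print(pipe(
    "https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/nougat_paper.png",
    max_new_tokens=20,
))
```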
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27475/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27475/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27474
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27474/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27474/comments
https://api.github.com/repos/huggingface/transformers/issues/27474/events
https://github.com/huggingface/transformers/pull/27474
1,991,073,395
PR_kwDOCUB6oc5fUnSq
27,474
[DataCollator] Warn on identical `eos_token_id` and `pad_token_id`
{ "login": "MustSave", "id": 58774251, "node_id": "MDQ6VXNlcjU4Nzc0MjUx", "avatar_url": "https://avatars.githubusercontent.com/u/58774251?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MustSave", "html_url": "https://github.com/MustSave", "followers_url": "https://api.github.com/users/MustSave/followers", "following_url": "https://api.github.com/users/MustSave/following{/other_user}", "gists_url": "https://api.github.com/users/MustSave/gists{/gist_id}", "starred_url": "https://api.github.com/users/MustSave/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MustSave/subscriptions", "organizations_url": "https://api.github.com/users/MustSave/orgs", "repos_url": "https://api.github.com/users/MustSave/repos", "events_url": "https://api.github.com/users/MustSave/events{/privacy}", "received_events_url": "https://api.github.com/users/MustSave/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "> Hi @MustSave - thanks for opening this PR!\r\n> \r\n> This warning should be included in the data collator responsible for collating for multi-turn learning, rather than the parent class.\r\n\r\nThank you for your comment. I will make the modifications to include a warning in the relevant class." ]
1,699
1,699
1,699
NONE
null
# What does this PR do? This PR displays a warning message when the values of `pad_token_id` and `eos_token_id` are identical. This is to prevent unexpected behavior during multi-turn training. After the multi-turn data training with [DataCollatorForCompletionOnlyLM](https://github.com/huggingface/trl/blob/9e9f024399b76842ece3552884bbc4f304fd4153/trl/trainer/utils.py#L56), I encountered an issue where the model continued generating outputs even after the assistant's turn had been completed. This issue was due to the equivalence of the tokenizer's eos token and pad token by default, resulting in the eos token not being properly trained. For instance, in the torch_call() function within [data_collator.py](https://github.com/huggingface/transformers/blob/7ee995fd9c692761c4601ddbffa2ac2ec9f27b0b/src/transformers/data/data_collator.py#L740C10-L740C10), the pad_token_id is converted to ignore_id(-100). ```python if self.mlm: batch["input_ids"], batch["labels"] = self.torch_mask_tokens( batch["input_ids"], special_tokens_mask=special_tokens_mask ) else: labels = batch["input_ids"].clone() if self.tokenizer.pad_token_id is not None: labels[labels == self.tokenizer.pad_token_id] = -100 batch["labels"] = labels ``` If the multi-turn data is formatted as shown below, and `eos_token_id` and `pad_token_id` are identical, the eos_token would not be properly trained. This could lead to a scenario where the model continuously generates both user and assistant turns without recognizing the end of sequence (eos) token. ``` <s>### User: What is up? ### Assistant: Hello! How can I help you today?</s> ### User: Goodbye ### Assistant: Goodbye! If you have any more questions in the future, don't hesitate to ask.</s> ``` ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @muellerz @pacman100
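A minimal sketch of the kind of check being proposed (the function name and its placement are illustrative, not the final implementation):

```python
import warnings


def warn_on_identical_eos_and_pad(tokenizer):
    if (
        tokenizer.pad_token_id is not None
        and tokenizer.pad_token_id == tokenizer.eos_token_id
    ):
        warnings.warn(
            "pad_token_id equals eos_token_id: during collation every EOS label "
            "will be masked to -100, so the model may never learn to emit EOS "
            "and can keep generating past the assistant turn. Consider setting "
            "a dedicated pad token.",
            UserWarning,
        )
```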
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27474/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27474/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27474", "html_url": "https://github.com/huggingface/transformers/pull/27474", "diff_url": "https://github.com/huggingface/transformers/pull/27474.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27474.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/27473
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27473/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27473/comments
https://api.github.com/repos/huggingface/transformers/issues/27473/events
https://github.com/huggingface/transformers/issues/27473
1,990,941,136
I_kwDOCUB6oc52q1nQ
27,473
Add initial_prompt
{ "login": "tinderlord", "id": 75577186, "node_id": "MDQ6VXNlcjc1NTc3MTg2", "avatar_url": "https://avatars.githubusercontent.com/u/75577186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tinderlord", "html_url": "https://github.com/tinderlord", "followers_url": "https://api.github.com/users/tinderlord/followers", "following_url": "https://api.github.com/users/tinderlord/following{/other_user}", "gists_url": "https://api.github.com/users/tinderlord/gists{/gist_id}", "starred_url": "https://api.github.com/users/tinderlord/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tinderlord/subscriptions", "organizations_url": "https://api.github.com/users/tinderlord/orgs", "repos_url": "https://api.github.com/users/tinderlord/repos", "events_url": "https://api.github.com/users/tinderlord/events{/privacy}", "received_events_url": "https://api.github.com/users/tinderlord/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @tinderlord - thanks for opening an issue! \r\n\r\nCould you provide a bit more information about the feature - ideally with an example code snippet of how it would be used and details about the models this would apply to (I'm assuming whisper?)\r\n\r\ncc @sanchit-gandhi ", "Note that this is available in Transformers after the PR #22496:\r\n```python\r\nfrom transformers import WhisperProcessor, WhisperForConditionalGeneration\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"hf-internal-testing/librispeech_asr_dummy\", \"clean\", split=\"validation\")\r\ninput_speech = dataset[3][\"audio\"][\"array\"]\r\n\r\nprocessor = WhisperProcessor.from_pretrained(\"distil-whisper/distil-large-v2\")\r\nmodel = WhisperForConditionalGeneration.from_pretrained(\"distil-whisper/distil-large-v2\")\r\ninput_features = processor(input_speech, return_tensors=\"pt\").input_features\r\n\r\n# --- Without prompt ---\r\noutput_without_prompt = model.generate(input_features)\r\nprint(processor.decode(output_without_prompt[0]))\r\n# <|startoftranscript|><|en|><|transcribe|><|notimestamps|> He has grave doubts whether Sir Frederick Leighton's work is really Greek after all, and can discover in it but little of Rocky Ithaca.<|endoftext|>\r\n\r\n# --- With prompt ---\r\n#Β Let's change the spelling of \"Leighton\" -> \"Layton\" by passing it as a prompt\r\nprompt_ids = processor.get_prompt_ids(\"Layton\")\r\noutput_with_prompt = model.generate(input_features, prompt_ids=prompt_ids)\r\nprint(processor.decode(output_with_prompt[0]))\r\n# <|startofprev|> Layton<|startoftranscript|><|en|><|transcribe|><|notimestamps|> He has grave doubts whether Sir Frederick Layton's work is really Greek after all, and can discover in it but little of Rocky Ithaca.<|endoftext|>\r\n```\r\n\r\nIs this what you're looking for @tinderlord?", "@sanchit-gandhi, can you tell how to use this prompt feature witn whisper asr pipeline?", "It is not supported yet but planned", "@ArthurZucker, thanks for your answer. 
Maybe you can help me with using initial prompt with batching asr without pipeline?", "> Note that this is available in Transformers after the PR #22496:\r\n> \r\n> ```python\r\n> from transformers import WhisperProcessor, WhisperForConditionalGeneration\r\n> from datasets import load_dataset\r\n> \r\n> dataset = load_dataset(\"hf-internal-testing/librispeech_asr_dummy\", \"clean\", split=\"validation\")\r\n> input_speech = dataset[3][\"audio\"][\"array\"]\r\n> \r\n> processor = WhisperProcessor.from_pretrained(\"distil-whisper/distil-large-v2\")\r\n> model = WhisperForConditionalGeneration.from_pretrained(\"distil-whisper/distil-large-v2\")\r\n> input_features = processor(input_speech, return_tensors=\"pt\").input_features\r\n> \r\n> # --- Without prompt ---\r\n> output_without_prompt = model.generate(input_features)\r\n> print(processor.decode(output_without_prompt[0]))\r\n> # <|startoftranscript|><|en|><|transcribe|><|notimestamps|> He has grave doubts whether Sir Frederick Leighton's work is really Greek after all, and can discover in it but little of Rocky Ithaca.<|endoftext|>\r\n> \r\n> # --- With prompt ---\r\n> #Β Let's change the spelling of \"Leighton\" -> \"Layton\" by passing it as a prompt\r\n> prompt_ids = processor.get_prompt_ids(\"Layton\")\r\n> output_with_prompt = model.generate(input_features, prompt_ids=prompt_ids)\r\n> print(processor.decode(output_with_prompt[0]))\r\n> # <|startofprev|> Layton<|startoftranscript|><|en|><|transcribe|><|notimestamps|> He has grave doubts whether Sir Frederick Layton's work is really Greek after all, and can discover in it but little of Rocky Ithaca.<|endoftext|>\r\n> ```\r\n> \r\n> Is this what you're looking for @tinderlord?\r\n\r\nPretty sure @sanchit-gandhi's answer is what you are looking for! πŸ€— ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Yes, many thanks. This feature is what i was looking for. " ]
1,699
1,704
1,704
NONE
null
### Feature request Add an `initial_prompt` parameter that allows a string variable to be passed when encoding. ### Motivation To allow better transcription conditioned on a prompt as well as the audio. ### Your contribution Not at the moment.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27473/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27473/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27472
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27472/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27472/comments
https://api.github.com/repos/huggingface/transformers/issues/27472/events
https://github.com/huggingface/transformers/pull/27472
1,990,875,281
PR_kwDOCUB6oc5fT8Iy
27,472
[FORKED] Adding EncT5 model for non-autoregressive tasks
{ "login": "hackyon", "id": 1557853, "node_id": "MDQ6VXNlcjE1NTc4NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/1557853?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hackyon", "html_url": "https://github.com/hackyon", "followers_url": "https://api.github.com/users/hackyon/followers", "following_url": "https://api.github.com/users/hackyon/following{/other_user}", "gists_url": "https://api.github.com/users/hackyon/gists{/gist_id}", "starred_url": "https://api.github.com/users/hackyon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hackyon/subscriptions", "organizations_url": "https://api.github.com/users/hackyon/orgs", "repos_url": "https://api.github.com/users/hackyon/repos", "events_url": "https://api.github.com/users/hackyon/events{/privacy}", "received_events_url": "https://api.github.com/users/hackyon/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "cc @ArthurZucker ", "Might be a duplicate of #26683", "Thanks Amy and Arthur. \r\n\r\nAs @ArthurZucker pointed out, I drafted this PR to show a potential alternative to #26683 (to help visualize what the proposed alternative would look like). \r\n\r\nI'll keep this PR in draft mode for now, and will close along with #26683 when the time comes. Thanks!" ]
1,699
1,704
1,704
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27472/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27472/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27472", "html_url": "https://github.com/huggingface/transformers/pull/27472", "diff_url": "https://github.com/huggingface/transformers/pull/27472.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27472.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/27471
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27471/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27471/comments
https://api.github.com/repos/huggingface/transformers/issues/27471/events
https://github.com/huggingface/transformers/pull/27471
1,990,810,819
PR_kwDOCUB6oc5fTuE6
27,471
Add madlad-400 MT models
{ "login": "jbochi", "id": 292712, "node_id": "MDQ6VXNlcjI5MjcxMg==", "avatar_url": "https://avatars.githubusercontent.com/u/292712?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jbochi", "html_url": "https://github.com/jbochi", "followers_url": "https://api.github.com/users/jbochi/followers", "following_url": "https://api.github.com/users/jbochi/following{/other_user}", "gists_url": "https://api.github.com/users/jbochi/gists{/gist_id}", "starred_url": "https://api.github.com/users/jbochi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jbochi/subscriptions", "organizations_url": "https://api.github.com/users/jbochi/orgs", "repos_url": "https://api.github.com/users/jbochi/repos", "events_url": "https://api.github.com/users/jbochi/events{/privacy}", "received_events_url": "https://api.github.com/users/jbochi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks for adding @jbochi! Let us know when you're ready for a review.\r\n\r\nI've set the failed tests to re-run as it seems there was a transient connection error causing them to fail. ", "Thanks for re-running the tests, @amyeroberts . \r\n\r\nThe PR is ready for review. I don't think the previous failure was related to my changes.", "The tests passed this time. 😌 ", "I think we can transfer all MT models now and merge this PR.\r\n\r\nMaybe I will work on LM models later, but I am not sure.\r\n\r\nWhat do you think?\r\n\r\nOn Thu, Nov 16, 2023, 8:15 AM amyeroberts ***@***.***> wrote:\r\n\r\n> ***@***.**** commented on this pull request.\r\n> ------------------------------\r\n>\r\n> In docs/source/en/model_doc/madlad-400.md\r\n> <https://github.com/huggingface/transformers/pull/27471#discussion_r1395676523>\r\n> :\r\n>\r\n> > +- [jbochi/madlad400-3b-mt](https://huggingface.co/jbochi/madlad400-3b-mt)\r\n> +\r\n> +- [jbochi/madlad400-7b-mt](https://huggingface.co/jbochi/madlad400-7b-mt)\r\n> +\r\n> +- [jbochi/madlad400-7b-mt-bt](https://huggingface.co/jbochi/madlad400-7b-mt-bt)\r\n> +\r\n> +- [jbochi/madlad400-10b-mt](https://huggingface.co/jbochi/madlad400-10b-mt)\r\n>\r\n> OK - let me know when it's ready. We'll do this as the last step before\r\n> merging\r\n>\r\n> β€”\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/transformers/pull/27471#discussion_r1395676523>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AACHO2FLM2AV6ITIUQ63O23YEYGY5AVCNFSM6AAAAAA7JISLSCVHI2DSMVQWIX3LMV43YUDVNRWFEZLROVSXG5CSMV3GSZLXHMYTOMZUGM2DONBZGE>\r\n> .\r\n> You are receiving this because you were mentioned.Message ID:\r\n> ***@***.***>\r\n>\r\n", "Thank you for filling out the model cards so beautifully πŸ™ ", "Thanks for the review!", "Hi @jbochi. Apologies for the delay in getting back to you - I'm also learning about the intricacies of org weights too! For the model weights, would you prefer that we move the original checkpoint repos directly under google or duplicate, so that yours still exist under your profile? ", "Hey. No worries. I think it's better to move them (assuming people will be\r\nredirected if they go to mine).\r\n\r\nOn Thu, Nov 23, 2023, 4:50 AM amyeroberts ***@***.***> wrote:\r\n\r\n> Hi @jbochi <https://github.com/jbochi>. Apologies for the delay in\r\n> getting back to you - I'm also learning about the intricacies of org\r\n> weights too! For the model weights, would you prefer that we move the\r\n> original checkpoint repos directly under google or duplicate, so that yours\r\n> still exist under your profile?\r\n>\r\n> β€”\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/transformers/pull/27471#issuecomment-1824089430>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AACHO2AT3SW7LP3R6SLGTVTYF4L5VAVCNFSM6AAAAAA7JISLSCVHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMYTQMRUGA4DSNBTGA>\r\n> .\r\n> You are receiving this because you were mentioned.Message ID:\r\n> ***@***.***>\r\n>\r\n", "@jbochi If they're moved then they wouldn't exist under your profile any more at all. I believe it would error-out saying that the repo doesn't exist if someone tried: `AutoTokenizer.from_pretrained(\"jbochi/madlad400-3b-mt\")`. Is this still OK or would you rather we copy? ", "In this case, can we copy them instead? 
Thanks\r\n\r\nOn Fri, Nov 24, 2023, 12:45 PM amyeroberts ***@***.***> wrote:\r\n\r\n> @jbochi <https://github.com/jbochi> If they're moved then they wouldn't\r\n> exist under your profile any more at all. I believe it would error-out\r\n> saying that the repo doesn't exist if someone tried:\r\n> AutoTokenizer.from_pretrained(\"jbochi/madlad400-3b-mt\"). Is this still OK\r\n> or would you rather we copy?\r\n>\r\n> β€”\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/transformers/pull/27471#issuecomment-1825955848>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AACHO2CTUUN2QO6SMH62K63YGDMNLAVCNFSM6AAAAAA7JISLSCVHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMYTQMRVHE2TKOBUHA>\r\n> .\r\n> You are receiving this because you were mentioned.Message ID:\r\n> ***@***.***>\r\n>\r\n", "@jbochi Yep! All done. I've set the doc tests to re-run which should pass now the google checkpoints exist 🀞 ", "Yay! Thank you!\r\n\r\nOn Mon, Nov 27, 2023, 11:03β€―AM amyeroberts ***@***.***> wrote:\r\n\r\n> @jbochi <https://github.com/jbochi> Yep! All done. I've set the doc tests\r\n> to re-run which should pass now the google checkpoints exist 🀞\r\n>\r\n> β€”\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/transformers/pull/27471#issuecomment-1828133135>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AACHO2ARZDBRUO4SS2AAVBDYGS2VHAVCNFSM6AAAAAA7JISLSCVHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMYTQMRYGEZTGMJTGU>\r\n> .\r\n> You are receiving this because you were mentioned.Message ID:\r\n> ***@***.***>\r\n>\r\n", "@jbochi One last final update - the doctests are failing with a timeout. Could you add the model doc page `madlad-400.md` to the [not_doctested.txt](https://github.com/huggingface/transformers/blob/ce315081340fdf6846f16c321eb53878b6272d53/utils/not_doctested.txt#L4) page? ", "> @jbochi One last final update - the doctests are failing with a timeout. Could you add the model doc page `madlad-400.md` to the [not_doctested.txt](https://github.com/huggingface/transformers/blob/ce315081340fdf6846f16c321eb53878b6272d53/utils/not_doctested.txt#L4) page?\r\n\r\nSure. Done!", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27471). All of your documentation changes will be reflected on that endpoint.", "@jbochi Awesome - thanks again for adding! ", "Thanks for the review and all the help, @amyeroberts " ]
1,699
1,701
1,701
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #26696 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @ArthurZucker <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27471/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27471/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27471", "html_url": "https://github.com/huggingface/transformers/pull/27471", "diff_url": "https://github.com/huggingface/transformers/pull/27471.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27471.patch", "merged_at": 1701177590000 }
https://api.github.com/repos/huggingface/transformers/issues/27470
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27470/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27470/comments
https://api.github.com/repos/huggingface/transformers/issues/27470/events
https://github.com/huggingface/transformers/pull/27470
1,990,701,213
PR_kwDOCUB6oc5fTWKY
27,470
Fix docstring for `gradient_checkpointing_kwargs`
{ "login": "tomaszcichy98", "id": 107866759, "node_id": "U_kgDOBm3qhw", "avatar_url": "https://avatars.githubusercontent.com/u/107866759?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tomaszcichy98", "html_url": "https://github.com/tomaszcichy98", "followers_url": "https://api.github.com/users/tomaszcichy98/followers", "following_url": "https://api.github.com/users/tomaszcichy98/following{/other_user}", "gists_url": "https://api.github.com/users/tomaszcichy98/gists{/gist_id}", "starred_url": "https://api.github.com/users/tomaszcichy98/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tomaszcichy98/subscriptions", "organizations_url": "https://api.github.com/users/tomaszcichy98/orgs", "repos_url": "https://api.github.com/users/tomaszcichy98/repos", "events_url": "https://api.github.com/users/tomaszcichy98/events{/privacy}", "received_events_url": "https://api.github.com/users/tomaszcichy98/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27470). All of your documentation changes will be reflected on that endpoint." ]
1,699
1,699
1,699
CONTRIBUTOR
null
# What does this PR do? Docstring entry for `gradient_checkpointing_kwargs` was `gradient_checkpointing_args`. Fixes #27469 @stevhliu @MKhalusova
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27470/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27470/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27470", "html_url": "https://github.com/huggingface/transformers/pull/27470", "diff_url": "https://github.com/huggingface/transformers/pull/27470.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27470.patch", "merged_at": 1699889524000 }
https://api.github.com/repos/huggingface/transformers/issues/27469
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27469/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27469/comments
https://api.github.com/repos/huggingface/transformers/issues/27469/events
https://github.com/huggingface/transformers/issues/27469
1,990,677,125
I_kwDOCUB6oc52p1KF
27,469
Wrong argument name in the documentation for `transformers.TrainingArguments`
{ "login": "tomaszcichy98", "id": 107866759, "node_id": "U_kgDOBm3qhw", "avatar_url": "https://avatars.githubusercontent.com/u/107866759?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tomaszcichy98", "html_url": "https://github.com/tomaszcichy98", "followers_url": "https://api.github.com/users/tomaszcichy98/followers", "following_url": "https://api.github.com/users/tomaszcichy98/following{/other_user}", "gists_url": "https://api.github.com/users/tomaszcichy98/gists{/gist_id}", "starred_url": "https://api.github.com/users/tomaszcichy98/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tomaszcichy98/subscriptions", "organizations_url": "https://api.github.com/users/tomaszcichy98/orgs", "repos_url": "https://api.github.com/users/tomaszcichy98/repos", "events_url": "https://api.github.com/users/tomaszcichy98/events{/privacy}", "received_events_url": "https://api.github.com/users/tomaszcichy98/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,699
1,699
1,699
CONTRIBUTOR
null
### System Info - `transformers` version: 4.35.0 - Platform: Linux-5.15.0-1046-gcp-x86_64-with-glibc2.31 - Python version: 3.10.13 - Huggingface_hub version: 0.17.3 - Safetensors version: 0.3.3 - Accelerate version: 0.23.0 - Accelerate config: - compute_environment: LOCAL_MACHINE - distributed_type: DEEPSPEED - mixed_precision: bf16 - use_cpu: False - debug: False - num_processes: 4 - machine_rank: 0 - num_machines: 1 - rdzv_backend: static - same_network: True - main_training_function: main - deepspeed_config: {'gradient_accumulation_steps': 4, 'offload_optimizer_device': 'none', 'offload_param_device': 'none', 'zero3_init_flag': False, 'zero3_save_16bit_model': False, 'zero_stage': 3} - downcast_bf16: no - tpu_use_cluster: False - tpu_use_sudo: False - tpu_env: [] - PyTorch version (GPU?): 2.1.0+cu118 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: YES - Using distributed or parallel set-up in script?: YES ### Who can help? @stevhliu @MKhalusova ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction 1. Create training arguments with `gradient_checkpointing_args` as in the [documentation](https://huggingface.co/docs/transformers/v4.35.0/en/main_classes/trainer#transformers.TrainingArguments) `training_args = TrainingArguments(..., gradient_checkpointing=True, gradient_checkpointing_args={"use_reentrant": False})` 2. Get error `TypeError: TrainingArguments.__init__() got an unexpected keyword argument 'gradient_checkpointing_args'` ### Expected behavior In the documentation `gradient_checkpointing_args` should be `gradient_checkpointing_kwargs`. See: https://github.com/huggingface/transformers/blob/f1185a4a73a03d238afce1b40456588d22520dd2/src/transformers/training_args.py#L1137
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27469/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27469/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27468
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27468/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27468/comments
https://api.github.com/repos/huggingface/transformers/issues/27468/events
https://github.com/huggingface/transformers/pull/27468
1,990,655,650
PR_kwDOCUB6oc5fTMMS
27,468
OWLv2: bug fix in post_process_object_detection() when using cuda device
{ "login": "assafbot", "id": 125451756, "node_id": "U_kgDOB3o97A", "avatar_url": "https://avatars.githubusercontent.com/u/125451756?v=4", "gravatar_id": "", "url": "https://api.github.com/users/assafbot", "html_url": "https://github.com/assafbot", "followers_url": "https://api.github.com/users/assafbot/followers", "following_url": "https://api.github.com/users/assafbot/following{/other_user}", "gists_url": "https://api.github.com/users/assafbot/gists{/gist_id}", "starred_url": "https://api.github.com/users/assafbot/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/assafbot/subscriptions", "organizations_url": "https://api.github.com/users/assafbot/orgs", "repos_url": "https://api.github.com/users/assafbot/repos", "events_url": "https://api.github.com/users/assafbot/events{/privacy}", "received_events_url": "https://api.github.com/users/assafbot/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27468). All of your documentation changes will be reflected on that endpoint." ]
1,699
1,699
1,699
CONTRIBUTOR
null
Fix an issue in OWLv2 when calling post_process_object_detection() with data on cuda device. @NielsRogge, I don't know how to add you as a reviewer so I tagged you here
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27468/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27468/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27468", "html_url": "https://github.com/huggingface/transformers/pull/27468", "diff_url": "https://github.com/huggingface/transformers/pull/27468.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27468.patch", "merged_at": 1699889504000 }
https://api.github.com/repos/huggingface/transformers/issues/27467
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27467/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27467/comments
https://api.github.com/repos/huggingface/transformers/issues/27467/events
https://github.com/huggingface/transformers/pull/27467
1,990,598,109
PR_kwDOCUB6oc5fS_i7
27,467
[`AWQ` ] Addresses TODO for awq tests
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,699
1,699
1,699
CONTRIBUTOR
null
# What does this PR do? As per title, now that models from TheBloke have been converted we will include the model in the current testing suite to make sure we are always compatible with TheBloke weights on the Hub cc @amyeroberts
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27467/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27467/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27467", "html_url": "https://github.com/huggingface/transformers/pull/27467", "diff_url": "https://github.com/huggingface/transformers/pull/27467.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27467.patch", "merged_at": 1699895922000 }
https://api.github.com/repos/huggingface/transformers/issues/27466
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27466/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27466/comments
https://api.github.com/repos/huggingface/transformers/issues/27466/events
https://github.com/huggingface/transformers/pull/27466
1,990,587,232
PR_kwDOCUB6oc5fS9MA
27,466
[`Peft`] `modules_to_save` support for peft integration
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@amyeroberts I'll merge this PR after the patch release from PEFT !", "Link to patch release: https://github.com/huggingface/peft/releases/tag/v0.6.2" ]
1,699
1,699
1,699
CONTRIBUTOR
null
# What does this PR do? With https://github.com/huggingface/peft/pull/1112 being merged, we can now extend the PEFT integration to support `modules_to_save`. What is `modules_to_save`? It is a feature in PEFT that enables users to fine-tune extra parameters in addition to PEFT adapters; in practice, users can for example further fine-tune the lm_head in addition to LoRA adapters. Also added some tests and a few lines in the documentation. cc @amyeroberts @pacman100 @BenjaminBossan
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27466/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27466/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27466", "html_url": "https://github.com/huggingface/transformers/pull/27466", "diff_url": "https://github.com/huggingface/transformers/pull/27466.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27466.patch", "merged_at": 1699954377000 }
https://api.github.com/repos/huggingface/transformers/issues/27465
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27465/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27465/comments
https://api.github.com/repos/huggingface/transformers/issues/27465/events
https://github.com/huggingface/transformers/pull/27465
1,990,572,616
PR_kwDOCUB6oc5fS5_S
27,465
Install `python-Levenshtein` for `nougat` in CI image
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,699
1,699
1,699
COLLABORATOR
null
# What does this PR do? We already have this in https://github.com/huggingface/transformers/blob/8f577dca4f2e9153d152afffe209fee643a90124/.circleci/create_circleci_config.py#L475. This PR just adds the same thing to the CI image docker file to fix issues like this one in the doctest ``` nougat.md UNEXPECTED EXCEPTION: ImportError('\nNougatTokenizerFast requires the python-Levenshtein library but it was not found in your environment. You can install it with pip: pip\ninstall python-Levenshtein. Please note that you may need to restart your runtime after installation.\n') ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27465/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27465/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27465", "html_url": "https://github.com/huggingface/transformers/pull/27465", "diff_url": "https://github.com/huggingface/transformers/pull/27465.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27465.patch", "merged_at": 1699889894000 }
https://api.github.com/repos/huggingface/transformers/issues/27464
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27464/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27464/comments
https://api.github.com/repos/huggingface/transformers/issues/27464/events
https://github.com/huggingface/transformers/pull/27464
1,990,537,093
PR_kwDOCUB6oc5fSyCl
27,464
Add Fill-in-the-middle training objective example - PyTorch
{ "login": "tanaymeh", "id": 26519539, "node_id": "MDQ6VXNlcjI2NTE5NTM5", "avatar_url": "https://avatars.githubusercontent.com/u/26519539?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tanaymeh", "html_url": "https://github.com/tanaymeh", "followers_url": "https://api.github.com/users/tanaymeh/followers", "following_url": "https://api.github.com/users/tanaymeh/following{/other_user}", "gists_url": "https://api.github.com/users/tanaymeh/gists{/gist_id}", "starred_url": "https://api.github.com/users/tanaymeh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tanaymeh/subscriptions", "organizations_url": "https://api.github.com/users/tanaymeh/orgs", "repos_url": "https://api.github.com/users/tanaymeh/repos", "events_url": "https://api.github.com/users/tanaymeh/events{/privacy}", "received_events_url": "https://api.github.com/users/tanaymeh/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "This PR is currently in draft mode until I can get a GPU VM (shouldn't take more than a week at max) to test to if a model is training all fine using this script.", "@sayakpaul @ArthurZucker This PR is ready for review. I have added two scripts: `run_fim.py` and `run_fim_no_trainer.py`. Both have almost the same code structure as `run_clm.py` and `run_clm_no_trainer.py` respectively.\r\n\r\nI have adapted the FIM dataset transformation from the [SantaCoder repository](https://github.com/loubnabnl/santacoder-finetuning/blob/main/fim.py#L22C13-L83) with changes to make it work with dataset's `.map()` function.\r\n\r\nTraining in both scripts works but I have yet to train a proper model on a GPU to compare the performance. Let me know if I need to make any changes to the script.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27464). All of your documentation changes will be reflected on that endpoint.", "@ArthurZucker @pacman100 Are there any issues in this PR? Kindly let me know!", "> thanks for this! Could you also add maybe and example log from the run to the readme>?\r\n\r\n@ArthurZucker, Since the training log is almost entirely the same as `run_clm.py` (the only difference is the dataset transformation for adding FIM tokens), should I instead add the parts of the log concerned with the training part only?", "Sure πŸ€— ", "@ArthurZucker I have added the `pad_to_multiple_of` argument based on the device (for GPU it will be 8 and for TPUs, it will be 128), you can see it [here](https://github.com/tanaymeh/transformers/blob/98a5976cc555696c2ae0e4eb891b9e77d989c291/examples/pytorch/language-modeling/run_fim.py#L521-L533).\r\n\r\nI have added `attn_implementation` support in the models ([here](https://github.com/tanaymeh/transformers/blob/98a5976cc555696c2ae0e4eb891b9e77d989c291/examples/pytorch/language-modeling/run_fim.py#L504)) and the model's embedding resizing changes as you requested ([here](https://github.com/tanaymeh/transformers/blob/98a5976cc555696c2ae0e4eb891b9e77d989c291/examples/pytorch/language-modeling/run_fim.py#L529-L560)).\r\n\r\nPlease let me know if these are the correct changes or do I need to make any amends!", "Hey! Sorry for the delay, it's a big PR I'll review it later this week πŸ€— ", "@ArthurZucker Thanks for your review! I have added your suggested changes and also made sure all CIs are now green.\r\n\r\nJust waiting for @pacman100's review to get this merged!" ]
1,699
1,706
null
CONTRIBUTOR
null
# What does this PR do? This PR adds a Fill-in-the-middle training objective example to 🤗 transformers. The FIM objective was proposed in [Efficient Training of Language Models to Fill in the Middle](https://arxiv.org/abs/2207.14255). They showed that autoregressive language models can learn to infill text after applying a straightforward transformation to the dataset, which simply moves a span of text from the middle of a document to its end. As discussed in #27059 ## Who can review? @sayakpaul @ArthurZucker
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27464/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27464/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27464", "html_url": "https://github.com/huggingface/transformers/pull/27464", "diff_url": "https://github.com/huggingface/transformers/pull/27464.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27464.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/27463
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27463/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27463/comments
https://api.github.com/repos/huggingface/transformers/issues/27463/events
https://github.com/huggingface/transformers/pull/27463
1,990,498,453
PR_kwDOCUB6oc5fSppx
27,463
Add segmentation map processing to SAM Image Processor
{ "login": "rwood-97", "id": 72076688, "node_id": "MDQ6VXNlcjcyMDc2Njg4", "avatar_url": "https://avatars.githubusercontent.com/u/72076688?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rwood-97", "html_url": "https://github.com/rwood-97", "followers_url": "https://api.github.com/users/rwood-97/followers", "following_url": "https://api.github.com/users/rwood-97/following{/other_user}", "gists_url": "https://api.github.com/users/rwood-97/gists{/gist_id}", "starred_url": "https://api.github.com/users/rwood-97/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rwood-97/subscriptions", "organizations_url": "https://api.github.com/users/rwood-97/orgs", "repos_url": "https://api.github.com/users/rwood-97/repos", "events_url": "https://api.github.com/users/rwood-97/events{/privacy}", "received_events_url": "https://api.github.com/users/rwood-97/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@rwood-97 - thanks for opening this PR - let me know when it's ready for review! ", "@amyeroberts I've added a very small amount to the tests. I think they could be more comprehensive but I think at min this now tests a case of input with/without masks. Let me know if there are any others you'd want me to add.\r\n\r\nWill also do docs asap then will make as ready for review", "Hey, just looking through the current documentation on SAM and I think the most relevant place for me to add docs is probably [this tutorial](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/SAM/Fine_tune_SAM_(segment_anything)_on_a_custom_dataset.ipynb). I'll create an issue on that repo once this PR is done so I think this PR could be considered as ready now.", "@rwood-97 Regarding where to document, the best place would be to add an example on the [model doc page](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_doc/sam.md) and/or in the officially supported notebooks - [here](https://github.com/huggingface/notebooks/blob/main/examples/segment_anything.ipynb) and [here](https://github.com/huggingface/notebooks/blob/main/examples/automatic_mask_generation.ipynb). \r\n\r\nCould you add a small code snippet to the model doc page to show how to pass and use the masks? After that we can merge :) \r\n\r\nThe notebook you linked to isn't officially maintained by Hugging Face. You can certainly still open a PR to add to it if you'd like! ", "yep sure, I'll try find some time this week/next week for this ", "All done πŸ‘ ", "@rwood-97 Thanks again for all the work on this contribution! " ]
1,699
1,704
1,704
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> Fixes #27361 ## Before submitting - [ ] ~This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).~ - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @amyeroberts
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27463/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27463/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27463", "html_url": "https://github.com/huggingface/transformers/pull/27463", "diff_url": "https://github.com/huggingface/transformers/pull/27463.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27463.patch", "merged_at": 1704732036000 }
https://api.github.com/repos/huggingface/transformers/issues/27462
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27462/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27462/comments
https://api.github.com/repos/huggingface/transformers/issues/27462/events
https://github.com/huggingface/transformers/pull/27462
1,990,409,603
PR_kwDOCUB6oc5fSWOK
27,462
Fix 2 Wav2Vec2 related models' doctest
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,699
1,699
1,699
COLLABORATOR
null
# What does this PR do? Same (dataset) issue as in #27147, but this PR deals with some failing doctests.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27462/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27462/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27462", "html_url": "https://github.com/huggingface/transformers/pull/27462", "diff_url": "https://github.com/huggingface/transformers/pull/27462.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27462.patch", "merged_at": 1699875467000 }
https://api.github.com/repos/huggingface/transformers/issues/27461
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27461/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27461/comments
https://api.github.com/repos/huggingface/transformers/issues/27461/events
https://github.com/huggingface/transformers/pull/27461
1,990,393,631
PR_kwDOCUB6oc5fSSvO
27,461
Fixed typo in error message
{ "login": "cmcmaster1", "id": 30722043, "node_id": "MDQ6VXNlcjMwNzIyMDQz", "avatar_url": "https://avatars.githubusercontent.com/u/30722043?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cmcmaster1", "html_url": "https://github.com/cmcmaster1", "followers_url": "https://api.github.com/users/cmcmaster1/followers", "following_url": "https://api.github.com/users/cmcmaster1/following{/other_user}", "gists_url": "https://api.github.com/users/cmcmaster1/gists{/gist_id}", "starred_url": "https://api.github.com/users/cmcmaster1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cmcmaster1/subscriptions", "organizations_url": "https://api.github.com/users/cmcmaster1/orgs", "repos_url": "https://api.github.com/users/cmcmaster1/repos", "events_url": "https://api.github.com/users/cmcmaster1/events{/privacy}", "received_events_url": "https://api.github.com/users/cmcmaster1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27461). All of your documentation changes will be reflected on that endpoint." ]
1,699
1,699
1,699
CONTRIBUTOR
null
# What does this PR do? "past key much have a shape" -> "past key must have a shape" I was getting this error and the typo was bothering me. Still don't know why I'm getting the error, but it least it might bother me less.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27461/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27461/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27461", "html_url": "https://github.com/huggingface/transformers/pull/27461", "diff_url": "https://github.com/huggingface/transformers/pull/27461.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27461.patch", "merged_at": 1699875782000 }
https://api.github.com/repos/huggingface/transformers/issues/27460
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27460/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27460/comments
https://api.github.com/repos/huggingface/transformers/issues/27460/events
https://github.com/huggingface/transformers/pull/27460
1,990,374,881
PR_kwDOCUB6oc5fSOqP
27,460
Default to msgpack for safetensors
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Nice catches!" ]
1,699
1,699
1,699
MEMBER
null
Fix https://github.com/huggingface/transformers/issues/27416
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27460/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27460/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27460", "html_url": "https://github.com/huggingface/transformers/pull/27460", "diff_url": "https://github.com/huggingface/transformers/pull/27460.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27460.patch", "merged_at": 1699885022000 }
https://api.github.com/repos/huggingface/transformers/issues/27459
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27459/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27459/comments
https://api.github.com/repos/huggingface/transformers/issues/27459/events
https://github.com/huggingface/transformers/pull/27459
1,990,338,832
PR_kwDOCUB6oc5fSGz9
27,459
Fix line ending in `utils/not_doctested.txt`
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,699
1,699
1,699
COLLABORATOR
null
# What does this PR do? #27262 (by me) changed the line ending of this file to `\r\n` which is not good. This PR changes it back, and this would unblock https://github.com/huggingface/transformers/pull/26928#discussion_r1390413692
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27459/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27459/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27459", "html_url": "https://github.com/huggingface/transformers/pull/27459", "diff_url": "https://github.com/huggingface/transformers/pull/27459.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27459.patch", "merged_at": 1699875352000 }
https://api.github.com/repos/huggingface/transformers/issues/27458
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27458/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27458/comments
https://api.github.com/repos/huggingface/transformers/issues/27458/events
https://github.com/huggingface/transformers/pull/27458
1,990,100,375
PR_kwDOCUB6oc5fRSrI
27,458
Add eval_logits_to_cpu
{ "login": "EkhiAzur", "id": 85137108, "node_id": "MDQ6VXNlcjg1MTM3MTA4", "avatar_url": "https://avatars.githubusercontent.com/u/85137108?v=4", "gravatar_id": "", "url": "https://api.github.com/users/EkhiAzur", "html_url": "https://github.com/EkhiAzur", "followers_url": "https://api.github.com/users/EkhiAzur/followers", "following_url": "https://api.github.com/users/EkhiAzur/following{/other_user}", "gists_url": "https://api.github.com/users/EkhiAzur/gists{/gist_id}", "starred_url": "https://api.github.com/users/EkhiAzur/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/EkhiAzur/subscriptions", "organizations_url": "https://api.github.com/users/EkhiAzur/orgs", "repos_url": "https://api.github.com/users/EkhiAzur/repos", "events_url": "https://api.github.com/users/EkhiAzur/events{/privacy}", "received_events_url": "https://api.github.com/users/EkhiAzur/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @EkhiAzur, thanks for opening a PR. Could you provide some more details about the issue this is trying to solve? ", "Yes sorry, I missed that part.\r\n\r\nWith large generative models during evaluation if compute metrics function is provided, OOM errors can occur. This error happens because predictions logits are save in GPU during evaluation. Eval_accumulation_steps is not the best option to avoid this error because it really slows the run. To avoid that, I added a parameter to decide if logits are saved in the GPU or in CPU.\r\n\r\nThanks for you attention", "@EkhiAzur Thanks for explaining! Trainer already has very many arguments and so we want to be cautious about each one we add. The one being proposed here is highly specific and I suspect not the only (or best) way to tackle OOM when using trainer and accelerate. I'll let @muellerzr and @pacman100 chime in on whether this should be added and possible alternatives ", "@EkhiAzur I can't see a reason why things would break if we just did this all the time for the logits instead. Since it's after `gather` this should be safe", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27458). All of your documentation changes will be reflected on that endpoint.", "Hello, I realized that using `preprocess_logits_for_metrics` to reduce logits shape is better than the PR change. Sorry about that. \r\n\r\nHere is my solution if anyone needs it:\r\n\r\n\r\n```\r\ndef preprocess_logits_for_metrics(logits, labels):\r\n if type(logits)==tuple:\r\n logits = logits[0]\r\n logits = logits.argmax(axis=-1)\r\n return logits\r\n\r\n...\r\n\r\ntrainer = Trainer(\r\n model=model,\r\n args=trainingArgs,\r\n preprocess_logits_for_metrics = preprocess_logits_for_metrics,\r\n ...\r\n )\r\n\r\n```", "@EkhiAzur can we close this then? :) ", "Yes, thanks for your attention and help!" ]
1,699
1,702
1,702
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27458/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27458/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27458", "html_url": "https://github.com/huggingface/transformers/pull/27458", "diff_url": "https://github.com/huggingface/transformers/pull/27458.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27458.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/27457
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27457/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27457/comments
https://api.github.com/repos/huggingface/transformers/issues/27457/events
https://github.com/huggingface/transformers/pull/27457
1,990,041,403
PR_kwDOCUB6oc5fRF4l
27,457
Dynamic Resolution Input for CLIP
{ "login": "Starlento", "id": 33574105, "node_id": "MDQ6VXNlcjMzNTc0MTA1", "avatar_url": "https://avatars.githubusercontent.com/u/33574105?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Starlento", "html_url": "https://github.com/Starlento", "followers_url": "https://api.github.com/users/Starlento/followers", "following_url": "https://api.github.com/users/Starlento/following{/other_user}", "gists_url": "https://api.github.com/users/Starlento/gists{/gist_id}", "starred_url": "https://api.github.com/users/Starlento/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Starlento/subscriptions", "organizations_url": "https://api.github.com/users/Starlento/orgs", "repos_url": "https://api.github.com/users/Starlento/repos", "events_url": "https://api.github.com/users/Starlento/events{/privacy}", "received_events_url": "https://api.github.com/users/Starlento/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "For VIT, it supports dynamic resolution: https://discuss.huggingface.co/t/fine-tuning-vit-with-more-patches-higher-resolution/18731.\r\nFor Dinov2, it originally support this, so that I copy the code to CLIP model file.\r\nMaybe it should be a config which is setable to user. But I am not familar with transformers Config as I want to add a new member in the class...\r\n\r\nFor the performance, I just tried the example in hf model-card.\r\nFor the original 224 settings, the prob is `tensor([[0.9899, 0.0101]]`.\r\nAnd for 448 settings, the prob is `tensor([[0.9936, 0.0064]]`.", "And I am new to open-source community, so I apologize for not ticking anything above and any unproper behaviors...\r\nFeel free to comment and I am glad to learn.", "> Thanks for adding @Starlento!\r\n> \r\n> Could you protect this behaviour behind an `interpolate_pos_encoding` argument flag, as done in other models e.g. [like here for ViT](https://github.com/huggingface/transformers/blob/b86c54d9ff4504b7287d95a215f8d6fa9388761f/src/transformers/models/vit/modeling_vit.py#L136)?\r\n\r\nThat's a good idea. But it is somewhat ugly as it is required to involve all the related functions...\r\nI just update the code and two doc strings.", "To resolve the code quality tests, run `make fix-copies` and then `make fixup` and push the changes ", "> To resolve the code quality tests, run `make fix-copies` and then `make fixup` and push the changes\r\n\r\nThank you for the suggestion. I just run the commands, and I found it is trying to add the interpolate functions to other CLIP (e.g. chinese_clip, x_clip) files. And I checked that the implementation is not complete so I checkout those files.\r\n\r\n[check_repository_consistency](https://circleci.com/gh/huggingface/transformers/1000340?utm_campaign=vcs-integration-link&utm_medium=referral&utm_source=github-checks-link&utm_content=summary) - Failed\r\n[check_code_quality](https://circleci.com/gh/huggingface/transformers/1000342?utm_campaign=vcs-integration-link&utm_medium=referral&utm_source=github-checks-link&utm_content=summary) - Success\r\n\r\nShould I also solve the check_repository_consistency thing?", "> Should I also solve the check_repository_consistency thing?\r\n\r\n@Starlento Yep, we want the changes CLIP to applied across to any models copying its architecture. ", "> > Should I also solve the check_repository_consistency thing?\r\n> \r\n> @Starlento Yep, we want the changes CLIP to applied across to any models copying its architecture.\r\n\r\nThe thing I least wanted to happen still happened... I checked that the fixup and fix-copies only matters the first CI test. And for the failed `test_torch`, I found no reference in the repo and do not think I can have an environment to test...\r\nCould you kindly provide a guide for certain situation?\r\n", "@Starlento Don't worry! The reason is that not all of CLIPs architecture will have been copied across for all models. Looking at the diff, you'll need to copy directly the `interpolate_pos_encoding` function + any additional argument passing to `modeling_clipseg.py` ", "> @Starlento Don't worry! The reason is that not all of CLIPs architecture will have been copied across for all models. Looking at the diff, you'll need to copy directly the `interpolate_pos_encoding` function + any additional argument passing to `modeling_clipseg.py`\r\n\r\nYes, you are right. 
Actually there was some network issue that I cannot access the CI log.\r\nI found for clipseg, it already has a similar function `interpolate_position_embeddings`, so I keep the original one." ]
1,699
1,707
null
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27457/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27457/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27457", "html_url": "https://github.com/huggingface/transformers/pull/27457", "diff_url": "https://github.com/huggingface/transformers/pull/27457.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27457.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/27456
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27456/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27456/comments
https://api.github.com/repos/huggingface/transformers/issues/27456/events
https://github.com/huggingface/transformers/pull/27456
1,989,557,759
PR_kwDOCUB6oc5fPd3N
27,456
Docs for AutoBackbone & Backbone
{ "login": "merveenoyan", "id": 53175384, "node_id": "MDQ6VXNlcjUzMTc1Mzg0", "avatar_url": "https://avatars.githubusercontent.com/u/53175384?v=4", "gravatar_id": "", "url": "https://api.github.com/users/merveenoyan", "html_url": "https://github.com/merveenoyan", "followers_url": "https://api.github.com/users/merveenoyan/followers", "following_url": "https://api.github.com/users/merveenoyan/following{/other_user}", "gists_url": "https://api.github.com/users/merveenoyan/gists{/gist_id}", "starred_url": "https://api.github.com/users/merveenoyan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/merveenoyan/subscriptions", "organizations_url": "https://api.github.com/users/merveenoyan/orgs", "repos_url": "https://api.github.com/users/merveenoyan/repos", "events_url": "https://api.github.com/users/merveenoyan/events{/privacy}", "received_events_url": "https://api.github.com/users/merveenoyan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27456). All of your documentation changes will be reflected on that endpoint.", "@amyeroberts this didn't throw an error during inference:\r\n```python\r\nmodel = AutoBackbone.from_pretrained(\r\n\"microsoft/swin-tiny-patch4-window7-224\", out_indices=(0,1,2), out_features=[\"stage4\"]\r\n)\r\n```\r\n\r\nI addressed rest of your comments. πŸ€— ", "> @amyeroberts this didn't throw an error during inference:\r\n\r\nThanks for flagging - it should! I'll look into it πŸ‘€ ", "@MKhalusova I added illustrations and defined backbone/neck/head.", "> @amyeroberts this didn't throw an error during inference:\r\n> \r\n> ```python\r\n> model = AutoBackbone.from_pretrained(\r\n> \"microsoft/swin-tiny-patch4-window7-224\", out_indices=(0,1,2), out_features=[\"stage4\"]\r\n> )\r\n> ```\r\n> \r\n> I addressed rest of your comments. πŸ€—\r\n\r\nOK, so this is a bit tricky to resolve. The reason this is happening is: \r\n* The checkpoint `\"microsoft/swin-tiny-patch4-window7-224\"` was created before the swin backbone had been added. It therefore doesn't contain `out_features` or `out_indices` in the config. \r\n* When loading the model in `from_pretrained` the config is first created [from the json here](https://github.com/huggingface/transformers/blob/bd50402b56980ff17e957342ef69bd9b0dd45a7b/src/transformers/models/auto/configuration_auto.py#L1064) and then the kwargs (in this case `out_indices` and `out_features`) are [passed in here](https://github.com/huggingface/transformers/blob/bd50402b56980ff17e957342ef69bd9b0dd45a7b/src/transformers/models/auto/configuration_auto.py#L1081).\r\n* In the config's `from_dict` method, the kwargs are applied [one at a time](https://github.com/huggingface/transformers/blob/bd50402b56980ff17e957342ef69bd9b0dd45a7b/src/transformers/configuration_utils.py#L766). This means that they're not verified against each other (but are still verified if they're consistent with `stage_names`).\r\n\r\nThis is part of the reason I'm wanting to deprecate `out_features` because handling this is annoying. I'll see if I can do something reasonable in the `BackboneConfigMixin` which isn't too convoluted to resolve. ", "@amyeroberts since Maria has approved, can you merge? ", "@MKhalusova Can you merge this if you can? Unfortunately, this wasn't included in recent release as I hope it would :/ ", "Hi,\r\n\r\nThanks a lot for this PR! πŸ™ was wondering whether we could remove the out_features from the docs, since after offline discussion with @amyeroberts, we proposed to only favor `out_indices` from now on since it's easier to maintain. Could you open a follow-up PR regarding that?" ]
1,699
1,702
1,702
CONTRIBUTOR
null
These are the docs for the `AutoBackbone` class and backbones. I'm also drafting a notebook, but I will be off this week so it will be delayed.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27456/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27456/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27456", "html_url": "https://github.com/huggingface/transformers/pull/27456", "diff_url": "https://github.com/huggingface/transformers/pull/27456.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27456.patch", "merged_at": 1702300938000 }
https://api.github.com/repos/huggingface/transformers/issues/27455
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27455/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27455/comments
https://api.github.com/repos/huggingface/transformers/issues/27455/events
https://github.com/huggingface/transformers/pull/27455
1,989,331,403
PR_kwDOCUB6oc5fOwW_
27,455
Fixed typo in pipelines.md documentation
{ "login": "adismort14", "id": 104080429, "node_id": "U_kgDOBjQkLQ", "avatar_url": "https://avatars.githubusercontent.com/u/104080429?v=4", "gravatar_id": "", "url": "https://api.github.com/users/adismort14", "html_url": "https://github.com/adismort14", "followers_url": "https://api.github.com/users/adismort14/followers", "following_url": "https://api.github.com/users/adismort14/following{/other_user}", "gists_url": "https://api.github.com/users/adismort14/gists{/gist_id}", "starred_url": "https://api.github.com/users/adismort14/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/adismort14/subscriptions", "organizations_url": "https://api.github.com/users/adismort14/orgs", "repos_url": "https://api.github.com/users/adismort14/repos", "events_url": "https://api.github.com/users/adismort14/events{/privacy}", "received_events_url": "https://api.github.com/users/adismort14/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27455). All of your documentation changes will be reflected on that endpoint." ]
1,699
1,699
1,699
CONTRIBUTOR
null
# What does this PR do? This PR fixes a minor (very minor, to be frank) typo in the documentation that I found while skimming through it. My OCD won't let me ignore it. Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @stevhliu @MKhalusova
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27455/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27455/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27455", "html_url": "https://github.com/huggingface/transformers/pull/27455", "diff_url": "https://github.com/huggingface/transformers/pull/27455.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27455.patch", "merged_at": 1699897840000 }
https://api.github.com/repos/huggingface/transformers/issues/27454
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27454/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27454/comments
https://api.github.com/repos/huggingface/transformers/issues/27454/events
https://github.com/huggingface/transformers/pull/27454
1,989,297,903
PR_kwDOCUB6oc5fOpoC
27,454
Clap processor: remove wasteful np.stack operations
{ "login": "m-bain", "id": 36994049, "node_id": "MDQ6VXNlcjM2OTk0MDQ5", "avatar_url": "https://avatars.githubusercontent.com/u/36994049?v=4", "gravatar_id": "", "url": "https://api.github.com/users/m-bain", "html_url": "https://github.com/m-bain", "followers_url": "https://api.github.com/users/m-bain/followers", "following_url": "https://api.github.com/users/m-bain/following{/other_user}", "gists_url": "https://api.github.com/users/m-bain/gists{/gist_id}", "starred_url": "https://api.github.com/users/m-bain/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/m-bain/subscriptions", "organizations_url": "https://api.github.com/users/m-bain/orgs", "repos_url": "https://api.github.com/users/m-bain/repos", "events_url": "https://api.github.com/users/m-bain/events{/privacy}", "received_events_url": "https://api.github.com/users/m-bain/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27454). All of your documentation changes will be reflected on that endpoint.", "@amyeroberts hows this ?\r\n\r\n```python\r\nimport time\r\nimport numpy as np\r\n\r\nwaveform = np.random.rand(100_000)\r\nn_repeat = 10\r\n\r\nt1_p = time.time()\r\nprev_impl = np.stack(np.tile(waveform, n_repeat))\r\nt2_p = time.time()\r\n\r\nt1_n = time.time()\r\nnew_impl = np.tile(waveform, n_repeat)\r\nt2_n = time.time()\r\n\r\nassert (prev_impl == new_impl).all()\r\nprint(f\"Time to process [prev. impl.]: {t2_p-t1_p:.3f}s\")\r\nprint(f\"Time to process [new. impl.]: {t2_n-t1_n:.3f}s\")\r\n```\r\n```plaintext\r\nTime to process [prev. impl.]: 0.883s\r\nTime to process [new. impl.]: 0.001s\r\n```", "@m-bain Thanks!" ]
1,699
1,700
1,699
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Upon profiling, it showed some strange result that the ClapProcessor was taking **0.5s** to apply `_get_input_mel(...)` on short audio (less than 10s), whereas medium length audio (10s-20s) was taking only **0.02s** As it turns out there was a wasteful `np.stack` operation on the 1-D waveform numpy array, meaning that the 1-D array is unpacked then stacked back together again, with no effect. This PR removes this wasteful op and short audio is now also processed in **0.02s** ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @ArthurZucker @sanchit-gandhi <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27454/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27454/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27454", "html_url": "https://github.com/huggingface/transformers/pull/27454", "diff_url": "https://github.com/huggingface/transformers/pull/27454.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27454.patch", "merged_at": 1699958473000 }
https://api.github.com/repos/huggingface/transformers/issues/27453
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27453/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27453/comments
https://api.github.com/repos/huggingface/transformers/issues/27453/events
https://github.com/huggingface/transformers/issues/27453
1,989,157,561
I_kwDOCUB6oc52kCK5
27,453
Audio-MAE - ViTMAE for audio
{ "login": "justinluong", "id": 25112608, "node_id": "MDQ6VXNlcjI1MTEyNjA4", "avatar_url": "https://avatars.githubusercontent.com/u/25112608?v=4", "gravatar_id": "", "url": "https://api.github.com/users/justinluong", "html_url": "https://github.com/justinluong", "followers_url": "https://api.github.com/users/justinluong/followers", "following_url": "https://api.github.com/users/justinluong/following{/other_user}", "gists_url": "https://api.github.com/users/justinluong/gists{/gist_id}", "starred_url": "https://api.github.com/users/justinluong/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/justinluong/subscriptions", "organizations_url": "https://api.github.com/users/justinluong/orgs", "repos_url": "https://api.github.com/users/justinluong/repos", "events_url": "https://api.github.com/users/justinluong/events{/privacy}", "received_events_url": "https://api.github.com/users/justinluong/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
open
false
null
[]
[ "cc @sanchit-gandhi ", "> cc @sanchit-gandhi\r\n\r\nCan I pick this up? Would be a valuable learning task for me :)\r\n\r\nThanks", "I should have clarified in my initial post that my intention was to contribute this model personally, as I've been working with the model a lot recently. However, I'm definitely open to collaborate! Maybe we could work together on this @Pratyush-exe :)", "This model looks very interesting! I would love to collaborate, if that's ohkay with you @justinluong :)", "Hey @Pratyush-exe sorry for the late reply! Things have been quite busy at work recently. If you'd like, please feel free to pick this up instead as I think I won't have bandwidth to work on it for a while. All the best :)", "Sure @justinluong.\n\nWould love to pick this up.", "Hi @amyeroberts @sanchit-gandhi\r\nPlease assign this to me.\r\n\r\nThanks", "Hey, we usually don't assign, just open a PR and link this issue πŸ€— ", "I have been having problems in reconstructing the `kaldi.fbank` to audio file. The audio is very noisy.\r\nI am using `librosa.feature.inverse.mel_to_audio` for the conversion. I know fbank and mel_spectogram are not the same thing but that the only thing I found through search. \r\nAlso the results shown in the original repo is good. Any idea how that is done?\r\n\r\n", "Hi all πŸ€—, @ArthurZucker @justinluong \r\n \r\nI have started working on adding AudioMAE as my first contribution to hugging-face\r\n \r\nI made some notes on [AudioMAE Notes](https://siv-x-siv.vercel.app/notes/AudioMAE), and want to add that decoder alone uses local and hybrid attention, which is thrown during finetuning.", "Awesome to see so much interest in this model! Given AST is super popular on the Hub as the de facto audio classification model, the model has a permissive license, and the original implementation is somewhat difficult to run, I think this would be a valuable new model addition. Feel free to open a PR to start the contribution! You can start by copying the most related model (either MAE or AST) and then gradually update the code to bring it into alignment with Audio-MAE. Here's the full guide for contributing a model, which explains this process https://huggingface.co/docs/transformers/add_new_model\r\n\r\ncc @ylacombe as well" ]
1,699
1,706
null
NONE
null
### Model description This model is a self-supervised Vision Transformer that uses patch reconstruction on spectrograms as its pre-training task. It extends MAE (which is already on Hugging Face) for audio. This model would be a valuable addition as there doesn't seem to be a self-supervised ViT model on Hugging Face currently. AST is the closest and uses supervised pre-training. Conceptually, Audio-MAE is also simpler but achieves comparable performance in their paper. Some differences compared to the standard MAE model: - During pre-training, local and hybrid attention mechanisms can be used. - During fine-tuning, masking is also used, which differs from MAE. ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation **Implementation** https://github.com/facebookresearch/AudioMAE Created by Po-Yao Huang @berniebear on GitHub **Pre-trained Weights** Available in the GitHub repo **Paper (Masked Autoencoders that Listen)** https://arxiv.org/abs/2207.06405
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27453/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27453/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/27452
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27452/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27452/comments
https://api.github.com/repos/huggingface/transformers/issues/27452/events
https://github.com/huggingface/transformers/issues/27452
1,989,086,508
I_kwDOCUB6oc52jw0s
27,452
Whisper error while not using whisper: "generation_config.return_timestamps ??= false;"
{ "login": "davidtbo", "id": 14362389, "node_id": "MDQ6VXNlcjE0MzYyMzg5", "avatar_url": "https://avatars.githubusercontent.com/u/14362389?v=4", "gravatar_id": "", "url": "https://api.github.com/users/davidtbo", "html_url": "https://github.com/davidtbo", "followers_url": "https://api.github.com/users/davidtbo/followers", "following_url": "https://api.github.com/users/davidtbo/following{/other_user}", "gists_url": "https://api.github.com/users/davidtbo/gists{/gist_id}", "starred_url": "https://api.github.com/users/davidtbo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/davidtbo/subscriptions", "organizations_url": "https://api.github.com/users/davidtbo/orgs", "repos_url": "https://api.github.com/users/davidtbo/repos", "events_url": "https://api.github.com/users/davidtbo/events{/privacy}", "received_events_url": "https://api.github.com/users/davidtbo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @sanchit-gandhi ", "@xenova feel free to transfer this over to the `transformers.js` as it's not related to `transformers`", "> @xenova feel free to transfer this over to the transformers.js as it's not related to transformers\r\n\r\nI don't have the option to transfer, so perhaps the OP can close and reopen in the correct repo (which is http://github.com/xenova/transformers.js). Luckily this can be solved in a single reply:\r\n\r\n@davidtbo This syntax error is due to your usage of an outdated version of Node.js. Please upgrade to at least version 18. All versions before 18 have reached [EOL](https://endoflife.date/nodejs), so we won't be adding support for them in future.\r\n", "Thanks @xenova! Closing as complete as per the Node.js recommendation." ]
1,699
1,701
1,701
NONE
null
### System Info I'm trying to use Transformers.js with a feature extraction pipeline and I happen to be using it in React NodeGUI. I'm getting an error that's from your Whisper model code and I'm not trying to use Whisper. The sample code is below. When I do `npm run dev` it goes fine, then when I run `npm start` I get this error: `"generation_config.return_timestamps ??= false;"` The trace output points to a line in the compiled `index.js` that is part of the `WhisperForConditionalGeneration` class. I have no interest in Whisper. Is there a way to disable it somehow? Why is this happening if I'm not using that class? I appreciate your help figuring this out :) @ArthurZucker and @younesbelkada it looks like the instructions say I should tag you. Thanks for this great package! ### Who can help? ### Reproduction ``` import { Text, View, Button, } from "@nodegui/react-nodegui"; import React, { useEffect, useState } from "react"; import { pipeline, Pipeline } from '@xenova/transformers'; let modelName = 'all-mpnet-base-v2' export function EmbedderService() { const [embedder, setEmbedder] = useState<Pipeline>(); useEffect(() => { (async () => { setEmbedder(await pipeline( 'feature-extraction', modelName, { quantized: true} )) })() }, []) return ( <View style={containerStyle}> <Text style={textStyle} wordWrap={true}> Embedder Status: </Text> </View> ) } const containerStyle = ` flex: 1; justify-content: 'space-around'; `; const textStyle = ` padding-right: 20px; `; const btnStyle = ` margin-horizontal: 20px; height: 40px; `; ``` ### Expected behavior Builds without error instead of failing due to a model that I'm not using with this trace: ``` dist/index.js:69312 generation_config.return_timestamps ??= false; ^ SyntaxError: Unexpected token '=' at Object.compileFunction (vm.js:344:18) at wrapSafe (internal/modules/cjs/loader.js:1106:15) at Module._compile (internal/modules/cjs/loader.js:1140:27) at Object.Module._extensions..js (internal/modules/cjs/loader.js:1196:10) at Module.load (internal/modules/cjs/loader.js:1040:32) at Function.Module._load (internal/modules/cjs/loader.js:929:14) at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:71:12) at internal/main/run_main_module.js:17:47 ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27452/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27452/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27451
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27451/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27451/comments
https://api.github.com/repos/huggingface/transformers/issues/27451/events
https://github.com/huggingface/transformers/issues/27451
1,989,085,002
I_kwDOCUB6oc52jwdK
27,451
Adding Flash Attention Support for StableLMEpochForCausalLM
{ "login": "reletreby", "id": 83604672, "node_id": "MDQ6VXNlcjgzNjA0Njcy", "avatar_url": "https://avatars.githubusercontent.com/u/83604672?v=4", "gravatar_id": "", "url": "https://api.github.com/users/reletreby", "html_url": "https://github.com/reletreby", "followers_url": "https://api.github.com/users/reletreby/followers", "following_url": "https://api.github.com/users/reletreby/following{/other_user}", "gists_url": "https://api.github.com/users/reletreby/gists{/gist_id}", "starred_url": "https://api.github.com/users/reletreby/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/reletreby/subscriptions", "organizations_url": "https://api.github.com/users/reletreby/orgs", "repos_url": "https://api.github.com/users/reletreby/repos", "events_url": "https://api.github.com/users/reletreby/events{/privacy}", "received_events_url": "https://api.github.com/users/reletreby/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @reletreby, thanks for opening this feature request! \r\n\r\nThe stability LM model has its code on the hub. I'd suggest opening up a discussion to request and/or PR to add the feature there. Let us know if you have any questions about the implementation! ", "Sure! Will start a discussion. Thanks" ]
1,699
1,699
1,699
NONE
null
### Feature request Please add Flash Attention 2 support for StableLMEpochForCausalLM. This will help with fine-tuning the Stability LM 3B model. ### Motivation This will enable fine-tuning the Stability LM 3B model with Flash Attention 2. ### Your contribution Happy to help if needed!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27451/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27451/timeline
completed
null
null
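For reference, the flag requested in issue 27451 above already exists for natively supported architectures; the hedged sketch below shows what the requester is after. The Llama checkpoint is only a placeholder — at the time of the issue, `StableLMEpochForCausalLM` lives in remote code on the Hub and does not honor this flag.

```python
import torch
from transformers import AutoModelForCausalLM

# How the switch looks for architectures that already support Flash Attention 2;
# the request is for StableLMEpochForCausalLM to accept the same flag.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # placeholder checkpoint for illustration
    torch_dtype=torch.float16,
    use_flash_attention_2=True,
).to("cuda")
```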
https://api.github.com/repos/huggingface/transformers/issues/27450
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27450/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27450/comments
https://api.github.com/repos/huggingface/transformers/issues/27450/events
https://github.com/huggingface/transformers/pull/27450
1,988,999,259
PR_kwDOCUB6oc5fNuWb
27,450
Support ONNX export for causal LM sequence classifiers
{ "login": "dwyatte", "id": 2512762, "node_id": "MDQ6VXNlcjI1MTI3NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/2512762?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dwyatte", "html_url": "https://github.com/dwyatte", "followers_url": "https://api.github.com/users/dwyatte/followers", "following_url": "https://api.github.com/users/dwyatte/following{/other_user}", "gists_url": "https://api.github.com/users/dwyatte/gists{/gist_id}", "starred_url": "https://api.github.com/users/dwyatte/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dwyatte/subscriptions", "organizations_url": "https://api.github.com/users/dwyatte/orgs", "repos_url": "https://api.github.com/users/dwyatte/repos", "events_url": "https://api.github.com/users/dwyatte/events{/privacy}", "received_events_url": "https://api.github.com/users/dwyatte/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The failing test might just need a rebase to main otherwise I'll skip it on main and work on a fix ", "> The failing test might just need a rebase to main otherwise I'll skip it on main and work on a fix\r\n\r\n@ArthurZucker I rebased in https://github.com/huggingface/transformers/pull/27450/commits/807625026226edcd52c747f4c70bc37eddd9bf8e but looks like something is still up with CI. Perhaps different tests get selected based on files changed or between PRs/main\r\n\r\n* `torch_tests` from this PR (14627 tests selected): https://app.circleci.com/pipelines/github/huggingface/transformers/78297/workflows/e34f774a-5910-488e-942d-7121d1007bf1/jobs/998415\r\n* `torch_tests` on `main` https://github.com/huggingface/transformers/commit/78f6ed6c70b29c1560780e3869a7ad4c6b3d2710 (7619 tests selected): https://app.circleci.com/pipelines/github/huggingface/transformers/78269/workflows/3e9cb27d-42b0-4b00-bf5f-432eec81c4c0/jobs/997963\r\n\r\n", "@dwyatte Yes, the test fetcher selects a subset of the tests to run based on the files that are touched. In this case, the failing tests (I believe) are unreleated to your PR. The tests involving safetensors have had a patch pushed on main. Could you rebase on main to include these in the test runners?", "@amyeroberts I think there are some other problems on `main` still. Here's what's failing in the tests for this PR after the rebases in https://github.com/huggingface/transformers/commit/807625026226edcd52c747f4c70bc37eddd9bf8e and https://github.com/huggingface/transformers/commit/d8ab2c9d02c6d5a658145db87c1abc920fd7cea9\r\n\r\n```\r\nFAILED tests/models/switch_transformers/test_modeling_switch_transformers.py::SwitchTransformersModelTest::test_assisted_decoding_sample - RuntimeError: The size of tensor a (3) must match the size of tensor b (4) at non-singleton dimension 3\r\nFAILED tests/models/t5/test_modeling_t5.py::T5ModelTest::test_assisted_decoding_sample - RuntimeError: The size of tensor a (3) must match the size of tensor b (4) at non-singleton dimension 3\r\nFAILED tests/models/speech_to_text/test_modeling_speech_to_text.py::Speech2TextModelTest::test_tf_from_pt_safetensors - AssertionError: False is not true\r\n```", "@dwyatte Exactly same unrelated CI failures in #27351 . In addition to the failures related to `safetensors` (you already mentioned above), we also need to resolve the other CI failures caused by `test_assisted_decoding_sample` in `tests_torch`.", "Sorry both for the delays, I'll skip these 3 tests as well. cc @gante I'll look into the test_assisted_decoding_sample. ", "Hi, @ArthurZucker , regarding the failures caused by `test_assisted_decoding_sample`, there have already been some discussions in #26892 , these failures did not only happen for `switch_transformers` and `t5`, but also happened for `blenderbot` , `pegasus` and `umt5` in my previous CI checks. It seems to be a more general issue that may happen to more than these listed models. Thanks for the help in fixing them!", "just merged #27508 which should skip it for all models ", "Thanks @ArthurZucker that took care of the remaining failures. This is ready to merge" ]
1,699
1,700
1,700
CONTRIBUTOR
null
# What does this PR do? Partial fix for https://github.com/huggingface/optimum/issues/1527 in `optimum` when exporting causal LMs with sequence classification support to ONNX ONNX's argmax operator does not support int64, but that should not be needed here since these are just boolean tensors ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @ArthurZucker and @younesbelkada (CC @fxmarty)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27450/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27450/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27450", "html_url": "https://github.com/huggingface/transformers/pull/27450", "diff_url": "https://github.com/huggingface/transformers/pull/27450.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27450.patch", "merged_at": 1700128594000 }
https://api.github.com/repos/huggingface/transformers/issues/27449
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27449/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27449/comments
https://api.github.com/repos/huggingface/transformers/issues/27449/events
https://github.com/huggingface/transformers/issues/27449
1,988,956,402
I_kwDOCUB6oc52jRDy
27,449
Compute logits and past_key_values once for the initial context rather than `num_beams` times?
{ "login": "tom-p-reichel", "id": 43631024, "node_id": "MDQ6VXNlcjQzNjMxMDI0", "avatar_url": "https://avatars.githubusercontent.com/u/43631024?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tom-p-reichel", "html_url": "https://github.com/tom-p-reichel", "followers_url": "https://api.github.com/users/tom-p-reichel/followers", "following_url": "https://api.github.com/users/tom-p-reichel/following{/other_user}", "gists_url": "https://api.github.com/users/tom-p-reichel/gists{/gist_id}", "starred_url": "https://api.github.com/users/tom-p-reichel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tom-p-reichel/subscriptions", "organizations_url": "https://api.github.com/users/tom-p-reichel/orgs", "repos_url": "https://api.github.com/users/tom-p-reichel/repos", "events_url": "https://api.github.com/users/tom-p-reichel/events{/privacy}", "received_events_url": "https://api.github.com/users/tom-p-reichel/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @gante ", "Hi @tom-p-reichel πŸ‘‹ Thank you for opening this issue. It is something I really want to bring to `generate`! In fact, a few weeks ago at the PyTorch conference, they mentioned that isolating this stage (also known as prefill) is a significant source of speedups.\r\n\r\nThe devil is, as always, in the details. We can't move the input expansion inside the decoding functions (like `sample`), as the functions are part of our public API and it would be a breaking change. We can, however, create a `prefill` function in `GenerationMixin`, and call it before `# 10. go into different generation modes`. The prepared inputs in `prefill` would then be expanded on a need basis, before entering the decoding functions. This would preserve the API while enabling a faster prefill stage!\r\n\r\nWould you be interested in working on this? πŸ€— I'd be happy to provide pointers and reviews along the way!", "@gante Sure, I can give this a try. Will send some initial questions soon.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,699
1,703
1,703
CONTRIBUTOR
null
### Feature request It looks like (at least) the following methods: - sample - beam search - beam sample - group beam search all immediately expand their input batch size to either `num_beams` or `num_return_sequences`. This means that the initial context, which can be very long, will be run through the model `num_beams`/`num_return_sequences` times, which seems to be unnecessary and causes a large overhead, especially when the initial context is long. It seems to be possible to run the model once on the initial context within `beam_search` and related functions and broadcast the model's outputs to the desired batch size rather than expand the batch beforehand and run a full batch of the same context. ### Motivation I wrote a short hack in the contribution section of the report that fixes this redundancy, at least for beam search, and at least for decoder only models, etc-- it's not a complete PR, but it demonstrates the kind of change that should be made. I also tested the performance using this script: ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer torch.set_default_device("cuda") model = AutoModelForCausalLM.from_pretrained("PY007/TinyLlama-1.1B-step-50K-105b", load_in_4bit=True) tokenizer = AutoTokenizer.from_pretrained("PY007/TinyLlama-1.1B-step-50K-105b") inputs = tokenizer('''The unanimous Declaration of the thirteen united States of America, When in the Course of human events, it becomes necessary for one people to dissolve the political bands which have connected them with another, and to assume among the powers of the earth, the separate and equal station to which the Laws of Nature and of Nature's God entitle them, a decent respect to the opinions of mankind requires that they should declare the causes which impel them to the separation. We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.--That to secure these rights, Governments are instituted among Men, deriving their just powers from the consent of the governed, --That whenever any Form of Government becomes destructive of these ends, it is the Right of the People to alter or to abolish it, and to institute new Government, laying its foundation on such principles and organizing its powers in such form, as to them shall seem most likely to effect their Safety and Happiness. Prudence, indeed, will dictate that Governments long established should not be changed for light and transient causes; and accordingly all experience hath shewn, that mankind are more disposed to suffer, while evils are sufferable, than to right themselves by abolishing the forms to which they are accustomed. But when a long train of abuses and usurpations, pursuing invariably the same Object evinces a design to reduce them under absolute Despotism, it is their right, it is their duty, to throw off such Government, and to provide new Guards for their future security.--Such has been the patient sufferance of these Colonies; and such is now the necessity which constrains them to alter their former Systems of Government. The history of the present King of Great Britain is a history of repeated injuries and usurpations, all having in direct object the establishment of an absolute Tyranny over these States. To prove this, let Facts be submitted to a candid world. ''', return_tensors="pt", return_attention_mask=False) import time start = time.time() for _ in range(10): outputs = model.generate(**inputs, max_new_tokens=15, do_sample=False, num_beams=6, num_return_sequences=6) print("elapsed",(time.time()-start)/10) text = tokenizer.batch_decode(outputs[:,-15:]) print(text) ``` Before the diff below was applied, this was the output: ``` elapsed 4.5888751745224 ['We hold these truths to be self-evident, that all men', 'We hold these truths to be self evident, that all men are created', 'We hold these truths to be self-evident, that all Men', 'He has refused his Assent to Laws for establishing Judiciary', 'We hold these truths to be self-evident, that allmen', 'He has refused his Assent to Laws for establishing Judicial Systems'] ``` After the diff below was applied, this was the output: ``` elapsed 2.614666700363159 ['We hold these truths to be self-evident, that all men', 'We hold these truths to be self evident, that all men are created', 'We hold these truths to be self-evident, that all Men', 'He has refused his Assent to Laws for establishing Judiciary', 'We hold these truths to be self-evident, that allmen', 'He has refused his Assent to Laws for establishing Judicial Systems'] ``` So, the change seems to give a ~1.7x speedup for this specific case while leaving the results of `beam_search` unchanged. The speedup should be better for higher `num_beams` or longer contexts. ### Your contribution As an example, I wrote a short hack that changes the behavior of `beam_search` on the main branch of huggingface so that it does not recompute `logits` and `past_key_values` `num_beams` times: ```diff diff --git a/src/transformers/generation/utils.py b/src/transformers/generation/utils.py index 4dbfc3670..63f863c80 100644 --- a/src/transformers/generation/utils.py +++ b/src/transformers/generation/utils.py @@ -1821,13 +1821,6 @@ class GenerationMixin: num_beam_hyps_to_keep=generation_config.num_return_sequences, max_length=generation_config.max_length, ) - # 12. interleave input_ids with `num_beams` additional sequences per batch - input_ids, model_kwargs = self._expand_inputs_for_generation( - input_ids=input_ids, - expand_size=generation_config.num_beams, - is_encoder_decoder=self.config.is_encoder_decoder, - **model_kwargs, - ) # 13. run beam search return self.beam_search( input_ids, @@ -3142,12 +3135,7 @@ class GenerationMixin: batch_size = len(beam_scorer._beam_hyps) num_beams = beam_scorer.num_beams - batch_beam_size, cur_len = input_ids.shape - - if num_beams * batch_size != batch_beam_size: - raise ValueError( - f"Batch dimension of `input_ids` should be {num_beams * batch_size}, but is {batch_beam_size}." - ) + batch_beam_size, cur_len = batch_size*num_beams, input_ids.shape[1] # init attention / hidden states / scores tuples scores = () if (return_dict_in_generate and output_scores) else None @@ -3171,6 +3159,7 @@ class GenerationMixin: beam_scores[:, 1:] = -1e9 beam_scores = beam_scores.view((batch_size * num_beams,)) + expander = lambda x : x.expand([num_beams,*x.shape[1:]]) this_peer_finished = False # used by synced_gpus only while True: if synced_gpus: @@ -3192,6 +3181,13 @@ class GenerationMixin: output_hidden_states=output_hidden_states, ) + + if num_beams * batch_size != input_ids.shape[0]: + # ok, NOW expand the batch dimension + # AFTER the hard part is done + outputs["logits"] = expander(outputs["logits"]) + outputs["past_key_values"] = [[expander(y) for y in x] for x in outputs["past_key_values"]] + if synced_gpus and this_peer_finished: cur_len = cur_len + 1 continue # don't waste resources running the code we don't need @@ -3237,6 +3233,11 @@ class GenerationMixin: next_indices = torch.div(next_tokens, vocab_size, rounding_mode="floor") next_tokens = next_tokens % vocab_size + if input_ids.shape[0] != num_beams: + input_ids = expander(input_ids) + if "attention_mask" in model_kwargs: + model_kwargs["attention_mask"] = expander(model_kwargs["attention_mask"]) + # stateless beam_outputs = beam_scorer.process( input_ids, ``` a change would have to be made to each one of the affected functions. I may be able to expand this into a PR if desired.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27449/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27449/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27448
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27448/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27448/comments
https://api.github.com/repos/huggingface/transformers/issues/27448/events
https://github.com/huggingface/transformers/pull/27448
1,988,822,332
PR_kwDOCUB6oc5fNJ9l
27,448
Fix bug for T5x to PyTorch convert script with varying encoder and decoder layers
{ "login": "JamesJiang97", "id": 49048129, "node_id": "MDQ6VXNlcjQ5MDQ4MTI5", "avatar_url": "https://avatars.githubusercontent.com/u/49048129?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JamesJiang97", "html_url": "https://github.com/JamesJiang97", "followers_url": "https://api.github.com/users/JamesJiang97/followers", "following_url": "https://api.github.com/users/JamesJiang97/following{/other_user}", "gists_url": "https://api.github.com/users/JamesJiang97/gists{/gist_id}", "starred_url": "https://api.github.com/users/JamesJiang97/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JamesJiang97/subscriptions", "organizations_url": "https://api.github.com/users/JamesJiang97/orgs", "repos_url": "https://api.github.com/users/JamesJiang97/repos", "events_url": "https://api.github.com/users/JamesJiang97/events{/privacy}", "received_events_url": "https://api.github.com/users/JamesJiang97/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,699
1,700
1,700
CONTRIBUTOR
null
When I try to use [this script](https://github.com/huggingface/transformers/blob/main/src/transformers/models/t5/convert_t5x_checkpoint_to_pytorch.py) to convert ByT5 T5x model to PyTorch model, I get the following error : ``` Traceback (most recent call last): File "/home/jiang/t5x_home/transformers/src/transformers/models/t5/convert_t5x_checkpoint_to_pytorch.py", line 231, in <module> convert_t5x_checkpoint_to_pytorch( File "/home/jiang/t5x_home/transformers/src/transformers/models/t5/convert_t5x_checkpoint_to_pytorch.py", line 200, in convert_t5x_checkpoint_to_pytorch load_t5x_weights_in_t5(model, config, t5x_checkpoint_path, is_encoder_only) File "/home/jiang/t5x_home/transformers/src/transformers/models/t5/convert_t5x_checkpoint_to_pytorch.py", line 180, in load_t5x_weights_in_t5 converted = convert_t5x_to_pytorch(variables, num_layers=config.num_layers, is_encoder_only=is_encoder_only) File "/home/jiang/t5x_home/transformers/src/transformers/models/t5/convert_t5x_checkpoint_to_pytorch.py", line 117, in convert_t5x_to_pytorch layer_norm = t5x_layer_norm_lookup(old, i, "decoder", "pre_self_attention_layer_norm") File "/home/jiang/t5x_home/transformers/src/transformers/models/t5/convert_t5x_checkpoint_to_pytorch.py", line 69, in t5x_layer_norm_lookup return params[f"{prefix}/layers_{i}/{layer_name}/scale"] KeyError: 'decoder/layers_4/pre_self_attention_layer_norm/scale' ``` I believe this is because the script did not distinguish between the number of decoder layers and encoder layers, and in the ByT5 model, the number of decoder layers is different from that of encoder layers. I fixed this bug by passing a 'num_decoder_layers' parameter to the relevant functions.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27448/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27448/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27448", "html_url": "https://github.com/huggingface/transformers/pull/27448", "diff_url": "https://github.com/huggingface/transformers/pull/27448.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27448.patch", "merged_at": 1700074822000 }
https://api.github.com/repos/huggingface/transformers/issues/27447
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27447/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27447/comments
https://api.github.com/repos/huggingface/transformers/issues/27447/events
https://github.com/huggingface/transformers/pull/27447
1,988,809,179
PR_kwDOCUB6oc5fNHJC
27,447
[DataCollator] Warn on identical `eos_token_id` and `pad_token_id`
{ "login": "MustSave", "id": 58774251, "node_id": "MDQ6VXNlcjU4Nzc0MjUx", "avatar_url": "https://avatars.githubusercontent.com/u/58774251?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MustSave", "html_url": "https://github.com/MustSave", "followers_url": "https://api.github.com/users/MustSave/followers", "following_url": "https://api.github.com/users/MustSave/following{/other_user}", "gists_url": "https://api.github.com/users/MustSave/gists{/gist_id}", "starred_url": "https://api.github.com/users/MustSave/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MustSave/subscriptions", "organizations_url": "https://api.github.com/users/MustSave/orgs", "repos_url": "https://api.github.com/users/MustSave/repos", "events_url": "https://api.github.com/users/MustSave/events{/privacy}", "received_events_url": "https://api.github.com/users/MustSave/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,699
1,699
1,699
NONE
null
# What does this PR do? This PR emits a warning message when `pad_token_id` and `eos_token_id` are identical, to prevent unexpected behavior during multi-turn training. After training on multi-turn data with [DataCollatorForCompletionOnlyLM](https://github.com/huggingface/trl/blob/9e9f024399b76842ece3552884bbc4f304fd4153/trl/trainer/utils.py#L56), I encountered an issue where the model kept generating output even after the assistant's turn had completed. This happened because the tokenizer's eos token and pad token are identical by default, so the eos token was never properly trained. For instance, in the torch_call() function within [data_collator.py](https://github.com/huggingface/transformers/blob/7ee995fd9c692761c4601ddbffa2ac2ec9f27b0b/src/transformers/data/data_collator.py#L740C10-L740C10), the pad_token_id is converted to the ignore index (-100). ```python if self.mlm: batch["input_ids"], batch["labels"] = self.torch_mask_tokens( batch["input_ids"], special_tokens_mask=special_tokens_mask ) else: labels = batch["input_ids"].clone() if self.tokenizer.pad_token_id is not None: labels[labels == self.tokenizer.pad_token_id] = -100 batch["labels"] = labels ``` If the multi-turn data is formatted as shown below and `eos_token_id` equals `pad_token_id`, the eos token is never trained, so the model can keep generating both user and assistant turns without ever producing the end-of-sequence (eos) token. ``` <s>### User: user message 1 ### Assistant: assistant message 1</s> ### User: user message 2 ### Assistant: assistant message 2</s> ``` ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
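A hedged sketch of the kind of check this PR describes. The helper name and warning text are assumptions for illustration; the pull-request record below shows `merged_at` as null, so this is not necessarily what shipped in the library.

```python
# Illustration of the check described in this PR, not the actual transformers code.
# Warns when the pad token aliases the eos token, since the collator shown above would
# then mask every eos position to -100 and the model never learns to stop generating.
import warnings

from transformers import AutoTokenizer


def warn_on_shared_eos_pad(tokenizer):
    if tokenizer.pad_token_id is not None and tokenizer.pad_token_id == tokenizer.eos_token_id:
        warnings.warn(
            "pad_token_id equals eos_token_id: eos tokens will be replaced with -100 in the "
            "labels, so the model may never learn to emit the end-of-sequence token.",
            UserWarning,
        )


tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # common shortcut that triggers the problem
warn_on_shared_eos_pad(tokenizer)
```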
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27447/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27447/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27447", "html_url": "https://github.com/huggingface/transformers/pull/27447", "diff_url": "https://github.com/huggingface/transformers/pull/27447.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27447.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/27446
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27446/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27446/comments
https://api.github.com/repos/huggingface/transformers/issues/27446/events
https://github.com/huggingface/transformers/issues/27446
1,988,697,297
I_kwDOCUB6oc52iRzR
27,446
Whisper Large v3 Word Level Timestamps Error
{ "login": "souvikqb", "id": 135606016, "node_id": "U_kgDOCBUvAA", "avatar_url": "https://avatars.githubusercontent.com/u/135606016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/souvikqb", "html_url": "https://github.com/souvikqb", "followers_url": "https://api.github.com/users/souvikqb/followers", "following_url": "https://api.github.com/users/souvikqb/following{/other_user}", "gists_url": "https://api.github.com/users/souvikqb/gists{/gist_id}", "starred_url": "https://api.github.com/users/souvikqb/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/souvikqb/subscriptions", "organizations_url": "https://api.github.com/users/souvikqb/orgs", "repos_url": "https://api.github.com/users/souvikqb/repos", "events_url": "https://api.github.com/users/souvikqb/events{/privacy}", "received_events_url": "https://api.github.com/users/souvikqb/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I'm having the same issue, and seems to be related to the batch size. At least it works for me if i set `batch_size=1`.\r\n\r\nNevertheless, it would good to be able to use a higher batch size.", "Thanks for the ping - will be fixed by #26699. Will try and fix this asap!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Fixed by https://github.com/huggingface/transformers/pull/28114 - feel free to install Transformers on `main` to get the update: https://huggingface.co/docs/transformers/installation#install-from-source\r\n\r\nOtherwise, it'll be in the next release!" ]
1,699
1,705
1,702
NONE
null
### System Info - `transformers` version: 4.36.0.dev0 - Platform: Linux-5.15.0-75-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.19.0 - Safetensors version: 0.4.0 - Accelerate version: 0.24.1 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): 2.12.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes ### Who can help? @sanchit-gandhi ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ``` import torch from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline from datasets import load_dataset device = "cuda:0" if torch.cuda.is_available() else "cpu" torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32 model_id = "openai/whisper-large-v3" model = AutoModelForSpeechSeq2Seq.from_pretrained( model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True ) model.to(device) processor = AutoProcessor.from_pretrained(model_id) pipe = pipeline( "automatic-speech-recognition", model=model, tokenizer=processor.tokenizer, feature_extractor=processor.feature_extractor, max_new_tokens=128, chunk_length_s=30, batch_size=16, return_timestamps=True, torch_dtype=torch_dtype, device=device, ) dataset = load_dataset("distil-whisper/librispeech_long", "clean", split="validation") sample = dataset[0]["audio"] result = pipe(sample) print(result["text"]) sample = 'Denoised_Audio.wav' result = pipe(sample, return_timestamps="word") print(result["chunks"]) ``` Error - ![image](https://github.com/huggingface/transformers/assets/135606016/d1dc0e4f-d1ae-4021-9fe5-aac7e75c2119) ### Expected behavior Expected to get back word-level timestamps. Just to be clear, the file is **not** corrupted: it still produces the text transcript. The problem only occurs with word-level timestamps.
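The first comment on this issue (in the comments field earlier in this record) reports that the failure disappears when batching is disabled. Below is a hedged sketch of that workaround; it reuses the `model`, `processor`, `torch_dtype`, and `device` objects from the reproduction above and the reporter's local `Denoised_Audio.wav` file, and it is the commenter's stopgap rather than the eventual fix (the thread points to PR #28114 for that).

```python
# Sketch of the workaround reported in the comments: rebuild the pipeline with
# batch_size=1 before asking for word-level timestamps. Assumes the model, processor,
# torch_dtype, and device variables from the reproduction above are already defined.
from transformers import pipeline

pipe_single = pipeline(
    "automatic-speech-recognition",
    model=model,
    tokenizer=processor.tokenizer,
    feature_extractor=processor.feature_extractor,
    max_new_tokens=128,
    chunk_length_s=30,
    batch_size=1,  # reported workaround until the fix landed upstream
    return_timestamps=True,
    torch_dtype=torch_dtype,
    device=device,
)

result = pipe_single("Denoised_Audio.wav", return_timestamps="word")
print(result["chunks"])
```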
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27446/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27446/timeline
completed
null
null