Schema of the dataset, one row per column, with the feature type and the value statistics reported for it (⌀ marks columns that also contain null values):

| Column | Type | Stats |
| --- | --- | --- |
| url | stringlengths | 62–66 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 76–80 |
| comments_url | stringlengths | 71–75 |
| events_url | stringlengths | 69–73 |
| html_url | stringlengths | 50–56 |
| id | int64 | 377M–2.15B |
| node_id | stringlengths | 18–32 |
| number | int64 | 1–29.2k |
| title | stringlengths | 1–487 |
| user | dict | |
| labels | list | |
| state | stringclasses | 2 values |
| locked | bool | 2 classes |
| assignee | dict | |
| assignees | list | |
| comments | sequence | |
| created_at | int64 | 1.54k–1.71k |
| updated_at | int64 | 1.54k–1.71k |
| closed_at | int64 | 1.54k–1.71k ⌀ |
| author_association | stringclasses | 4 values |
| active_lock_reason | stringclasses | 2 values |
| body | stringlengths | 0–234k ⌀ |
| reactions | dict | |
| timeline_url | stringlengths | 71–75 |
| state_reason | stringclasses | 3 values |
| draft | bool | 2 classes |
| pull_request | dict | |
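The columns above describe a dump of GitHub issues and pull requests from the `huggingface/transformers` repository; the example rows below list their field values in the same column order. As a rough sketch of how such a dataset could be inspected with the `datasets` library (the dataset id here is a placeholder assumption, not a real repository name):

```python
# Rough sketch: load and inspect a GitHub-issues dataset with this schema.
# "user/transformers-github-issues" is a placeholder id, not an actual dataset.
from datasets import load_dataset

ds = load_dataset("user/transformers-github-issues", split="train")

row = ds[0]
print(row["number"], row["state"], row["title"])

# Rows describing pull requests carry a non-null `pull_request` dict;
# plain issues have None in that column.
pulls = ds.filter(lambda r: r["pull_request"] is not None)
print(f"{len(pulls)} of {len(ds)} rows are pull requests")
```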
https://api.github.com/repos/huggingface/transformers/issues/27748
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27748/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27748/comments
https://api.github.com/repos/huggingface/transformers/issues/27748/events
https://github.com/huggingface/transformers/pull/27748
2,014,816,598
PR_kwDOCUB6oc5gkxlV
27,748
Deprecate `LegacyIndex` used in `RagRetriever`
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27748). All of your documentation changes will be reflected on that endpoint.", "Hi @LysandreJik \r\n\r\nAfter #27776, this PR doesn't add much value. Let me know if we would like to close this or still keep this deprecation." ]
1,701
1,701
1,701
COLLABORATOR
null
# What does this PR do? A first step to reduce the security issue of pickle for `RAG`. In general, we can also ask users to set some environment variable to enable using the code block that contains `pickle.load` to have even better security. But that could be done in a follow up PR (which also applies to `TransfoXLTokenizer`)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27748/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27748/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27748", "html_url": "https://github.com/huggingface/transformers/pull/27748", "diff_url": "https://github.com/huggingface/transformers/pull/27748.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27748.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/27747
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27747/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27747/comments
https://api.github.com/repos/huggingface/transformers/issues/27747/events
https://github.com/huggingface/transformers/issues/27747
2,014,620,199
I_kwDOCUB6oc54FKon
27,747
List index out of range
{ "login": "MinnTrit", "id": 151976884, "node_id": "U_kgDOCQ77tA", "avatar_url": "https://avatars.githubusercontent.com/u/151976884?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MinnTrit", "html_url": "https://github.com/MinnTrit", "followers_url": "https://api.github.com/users/MinnTrit/followers", "following_url": "https://api.github.com/users/MinnTrit/following{/other_user}", "gists_url": "https://api.github.com/users/MinnTrit/gists{/gist_id}", "starred_url": "https://api.github.com/users/MinnTrit/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MinnTrit/subscriptions", "organizations_url": "https://api.github.com/users/MinnTrit/orgs", "repos_url": "https://api.github.com/users/MinnTrit/repos", "events_url": "https://api.github.com/users/MinnTrit/events{/privacy}", "received_events_url": "https://api.github.com/users/MinnTrit/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello ๐Ÿค— \r\nI would recommend you to try putting some breakpoints here and there to check what you are feeding to the model. \r\nAlso truncation and padding should be done with the ` truncation` and `padding` arguments of the ` __call__` method of the tokenizers. You should also set ` return_tensors = \"pt\" ` to directly have tensors (and batched tensors) ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,701
1,704
1,704
NONE
null
### System Info Hi everyone, I'm currently having the issue with my code. This is my code: ```python import transformers from transformers import AutoTokenizer, RobertaForSequenceClassification import pandas as pd import numpy as np import torch df = pd.read_excel(r'D:\Data Intense\Level 2\Day9\comments_df.xlsx') model = RobertaForSequenceClassification.from_pretrained("wonrax/phobert-base-vietnamese-sentiment") tokenizer = AutoTokenizer.from_pretrained("wonrax/phobert-base-vietnamese-sentiment", use_fast=False, vocab_size=64001) #Test with one sentence #sentence = 'Tรดi yรชu Viแป‡t Nam' def sentiment(sentence): max_sequence_length = model.config.max_position_embeddings encoding = tokenizer(sentence, add_special_tokens=True, padding='max_length', max_length = max_sequence_length, return_attention_mask=True) #Turn words to token IDs tokens_id = encoding['input_ids'] attention_mask = encoding['attention_mask'] #Check if the tokenized sentence length exceeds the model's maximum sequence length if len(tokens_id) > max_sequence_length: # Truncate the tokens and attention mask tokens_id = tokens_id[:max_sequence_length] attention_mask = attention_mask[:max_sequence_length] #Pad the tokens and attention mask if needed while len(tokens_id) < max_sequence_length: tokens_id.append(tokenizer.pad_token_id) attention_mask.append(0) #Turn tokens Id to the tensor input_ids = torch.tensor(tokens_id) attention_mask = torch.tensor(attention_mask) #Feed the inputs for the embedding layers torch.no_grad() output = model(input_ids.unsqueeze(0), attention_mask=attention_mask.unsqueeze(0)) #Take the result predicted_class = torch.argmax(output.logits.softmax(dim=-1)).item() return predicted_class df['sentiment'] = df['content'].apply(sentiment) ``` I understand that the maximum length of the token sequence must equal to the maximum length provided by the model, in this case, I have checked it, and the length is 256, I also added 2 condition to truncate (in case the length of the token is longer than the maximum length) and add the padding tokens (in case the length of the token is shorter than the maximum length). I've also tried to set out the vocabulary size at the beginning to make sure the model can work consistently within this range, then, I have this error below: ```--------------------------------------------------------------------------- IndexError Traceback (most recent call last) [d:\Hugging](file:///D:/Hugging) Face\Day2 - Transformer\Apply.ipynb Cell 2 line 5 [47](vscode-notebook-cell:/d%3A/Hugging%20Face/Day2%20-%20Transformer/Apply.ipynb#W2sZmlsZQ%3D%3D?line=46) predicted_class = torch.argmax(output.logits.softmax(dim=-1)).item() [49](vscode-notebook-cell:/d%3A/Hugging%20Face/Day2%20-%20Transformer/Apply.ipynb#W2sZmlsZQ%3D%3D?line=48) return predicted_class ---> [51](vscode-notebook-cell:/d%3A/Hugging%20Face/Day2%20-%20Transformer/Apply.ipynb#W2sZmlsZQ%3D%3D?line=50) df['sentiment'] = df['content'].apply(sentiment) File [c:\Users\PC\anaconda3\Lib\site-packages\pandas\core\series.py:4630](file:///C:/Users/PC/anaconda3/Lib/site-packages/pandas/core/series.py:4630), in Series.apply(self, func, convert_dtype, args, **kwargs) [4520](file:///C:/Users/PC/anaconda3/Lib/site-packages/pandas/core/series.py:4520) def apply( [4521](file:///C:/Users/PC/anaconda3/Lib/site-packages/pandas/core/series.py:4521) self, [4522](file:///C:/Users/PC/anaconda3/Lib/site-packages/pandas/core/series.py:4522) func: AggFuncType, (...) 
[4525](file:///C:/Users/PC/anaconda3/Lib/site-packages/pandas/core/series.py:4525) **kwargs, [4526](file:///C:/Users/PC/anaconda3/Lib/site-packages/pandas/core/series.py:4526) ) -> DataFrame | Series: [4527](file:///C:/Users/PC/anaconda3/Lib/site-packages/pandas/core/series.py:4527) """ [4528](file:///C:/Users/PC/anaconda3/Lib/site-packages/pandas/core/series.py:4528) Invoke function on values of Series. [4529](file:///C:/Users/PC/anaconda3/Lib/site-packages/pandas/core/series.py:4529) (...) [4628](file:///C:/Users/PC/anaconda3/Lib/site-packages/pandas/core/series.py:4628) dtype: float64 [4629](file:///C:/Users/PC/anaconda3/Lib/site-packages/pandas/core/series.py:4629) """ -> [4630](file:///C:/Users/PC/anaconda3/Lib/site-packages/pandas/core/series.py:4630) return SeriesApply(self, func, convert_dtype, args, kwargs).apply() File [c:\Users\PC\anaconda3\Lib\site-packages\pandas\core\apply.py:1025](file:///C:/Users/PC/anaconda3/Lib/site-packages/pandas/core/apply.py:1025), in SeriesApply.apply(self) [1022](file:///C:/Users/PC/anaconda3/Lib/site-packages/pandas/core/apply.py:1022) return self.apply_str() [1024](file:///C:/Users/PC/anaconda3/Lib/site-packages/pandas/core/apply.py:1024) # self.f is Callable ... [2231](file:///C:/Users/PC/anaconda3/Lib/site-packages/torch/nn/functional.py:2231) # remove once script supports set_grad_enabled [2232](file:///C:/Users/PC/anaconda3/Lib/site-packages/torch/nn/functional.py:2232) _no_grad_embedding_renorm_(weight, input, max_norm, norm_type) -> [2233](file:///C:/Users/PC/anaconda3/Lib/site-packages/torch/nn/functional.py:2233) return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) IndexError: index out of range in self Output is truncated. View as a [scrollable element](command:cellOutput.enableScrolling?e54bc3e5-d73c-439c-88f2-3de3b6fcfba2) or open in a [text editor](command:workbench.action.openLargeOutput?e54bc3e5-d73c-439c-88f2-3de3b6fcfba2). Adjust cell output [settings](command:workbench.action.openSettings?%5B%22%40tag%3AnotebookOutputLayout%22%5D)... ``` Does anybody know how to resolve this one, I'm a new learner in machine learning as well :< ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) 
- [X] My own task or dataset (give details below) ### Reproduction --------------------------------------------------------------------------- IndexError Traceback (most recent call last) [d:\Hugging](file:///D:/Hugging) Face\Day2 - Transformer\Apply.ipynb Cell 2 line 5 [47](vscode-notebook-cell:/d%3A/Hugging%20Face/Day2%20-%20Transformer/Apply.ipynb#W2sZmlsZQ%3D%3D?line=46) predicted_class = torch.argmax(output.logits.softmax(dim=-1)).item() [49](vscode-notebook-cell:/d%3A/Hugging%20Face/Day2%20-%20Transformer/Apply.ipynb#W2sZmlsZQ%3D%3D?line=48) return predicted_class ---> [51](vscode-notebook-cell:/d%3A/Hugging%20Face/Day2%20-%20Transformer/Apply.ipynb#W2sZmlsZQ%3D%3D?line=50) df['sentiment'] = df['content'].apply(sentiment) File [c:\Users\PC\anaconda3\Lib\site-packages\pandas\core\series.py:4630](file:///C:/Users/PC/anaconda3/Lib/site-packages/pandas/core/series.py:4630), in Series.apply(self, func, convert_dtype, args, **kwargs) [4520](file:///C:/Users/PC/anaconda3/Lib/site-packages/pandas/core/series.py:4520) def apply( [4521](file:///C:/Users/PC/anaconda3/Lib/site-packages/pandas/core/series.py:4521) self, [4522](file:///C:/Users/PC/anaconda3/Lib/site-packages/pandas/core/series.py:4522) func: AggFuncType, (...) [4525](file:///C:/Users/PC/anaconda3/Lib/site-packages/pandas/core/series.py:4525) **kwargs, [4526](file:///C:/Users/PC/anaconda3/Lib/site-packages/pandas/core/series.py:4526) ) -> DataFrame | Series: [4527](file:///C:/Users/PC/anaconda3/Lib/site-packages/pandas/core/series.py:4527) """ [4528](file:///C:/Users/PC/anaconda3/Lib/site-packages/pandas/core/series.py:4528) Invoke function on values of Series. [4529](file:///C:/Users/PC/anaconda3/Lib/site-packages/pandas/core/series.py:4529) (...) [4628](file:///C:/Users/PC/anaconda3/Lib/site-packages/pandas/core/series.py:4628) dtype: float64 [4629](file:///C:/Users/PC/anaconda3/Lib/site-packages/pandas/core/series.py:4629) """ -> [4630](file:///C:/Users/PC/anaconda3/Lib/site-packages/pandas/core/series.py:4630) return SeriesApply(self, func, convert_dtype, args, kwargs).apply() File [c:\Users\PC\anaconda3\Lib\site-packages\pandas\core\apply.py:1025](file:///C:/Users/PC/anaconda3/Lib/site-packages/pandas/core/apply.py:1025), in SeriesApply.apply(self) [1022](file:///C:/Users/PC/anaconda3/Lib/site-packages/pandas/core/apply.py:1022) return self.apply_str() [1024](file:///C:/Users/PC/anaconda3/Lib/site-packages/pandas/core/apply.py:1024) # self.f is Callable ... [2231](file:///C:/Users/PC/anaconda3/Lib/site-packages/torch/nn/functional.py:2231) # remove once script supports set_grad_enabled [2232](file:///C:/Users/PC/anaconda3/Lib/site-packages/torch/nn/functional.py:2232) _no_grad_embedding_renorm_(weight, input, max_norm, norm_type) -> [2233](file:///C:/Users/PC/anaconda3/Lib/site-packages/torch/nn/functional.py:2233) return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) IndexError: index out of range in self Output is truncated. View as a [scrollable element](command:cellOutput.enableScrolling?e54bc3e5-d73c-439c-88f2-3de3b6fcfba2) or open in a [text editor](command:workbench.action.openLargeOutput?e54bc3e5-d73c-439c-88f2-3de3b6fcfba2). Adjust cell output [settings](command:workbench.action.openSettings?%5B%22%40tag%3AnotebookOutputLayout%22%5D)... ### Expected behavior I
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27747/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27747/timeline
completed
null
null
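The first comment on the issue above (#27747) suggests letting the tokenizer handle truncation, padding, and tensor conversion instead of doing it by hand. A hedged rewrite of the reporter's `sentiment` helper along those lines; the model name comes from the issue, while the `- 2` margin for special tokens is an assumption, not something stated in the thread:

```python
# Sketch of the suggested fix: pass truncation/padding/return_tensors to the
# tokenizer and run inference under torch.no_grad().
import torch
from transformers import AutoTokenizer, RobertaForSequenceClassification

model_id = "wonrax/phobert-base-vietnamese-sentiment"
model = RobertaForSequenceClassification.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False)

def sentiment(sentence: str) -> int:
    inputs = tokenizer(
        sentence,
        truncation=True,
        padding=True,
        max_length=model.config.max_position_embeddings - 2,  # leave room for special tokens
        return_tensors="pt",
    )
    with torch.no_grad():
        logits = model(**inputs).logits
    return logits.softmax(dim=-1).argmax().item()
```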
https://api.github.com/repos/huggingface/transformers/issues/27746
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27746/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27746/comments
https://api.github.com/repos/huggingface/transformers/issues/27746/events
https://github.com/huggingface/transformers/pull/27746
2,014,522,949
PR_kwDOCUB6oc5gjwsS
27,746
Move tensors to same device to enable IDEFICS naive MP training
{ "login": "willemsenbram", "id": 22574774, "node_id": "MDQ6VXNlcjIyNTc0Nzc0", "avatar_url": "https://avatars.githubusercontent.com/u/22574774?v=4", "gravatar_id": "", "url": "https://api.github.com/users/willemsenbram", "html_url": "https://github.com/willemsenbram", "followers_url": "https://api.github.com/users/willemsenbram/followers", "following_url": "https://api.github.com/users/willemsenbram/following{/other_user}", "gists_url": "https://api.github.com/users/willemsenbram/gists{/gist_id}", "starred_url": "https://api.github.com/users/willemsenbram/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/willemsenbram/subscriptions", "organizations_url": "https://api.github.com/users/willemsenbram/orgs", "repos_url": "https://api.github.com/users/willemsenbram/repos", "events_url": "https://api.github.com/users/willemsenbram/events{/privacy}", "received_events_url": "https://api.github.com/users/willemsenbram/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "For the test I suggest to do it in a follow-up PR to test that for all architectures, through a test that runs on the daily CI" ]
1,701
1,701
1,701
CONTRIBUTOR
null
# What does this PR do? Fixes #27736 `logits`, `labels`, and `attention_mask` are expected to be on the same device. This PR moves these tensors to the same device, making IDEFICS training compatible with naive model parallelism. See also #22561 @amyeroberts @ArthurZucker @younesbelkada
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27746/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27746/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27746", "html_url": "https://github.com/huggingface/transformers/pull/27746", "diff_url": "https://github.com/huggingface/transformers/pull/27746.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27746.patch", "merged_at": 1701785206000 }
https://api.github.com/repos/huggingface/transformers/issues/27745
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27745/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27745/comments
https://api.github.com/repos/huggingface/transformers/issues/27745/events
https://github.com/huggingface/transformers/pull/27745
2,014,464,762
PR_kwDOCUB6oc5gjj6S
27,745
Enforce pin memory disabling when using cpu only
{ "login": "qgallouedec", "id": 45557362, "node_id": "MDQ6VXNlcjQ1NTU3MzYy", "avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4", "gravatar_id": "", "url": "https://api.github.com/users/qgallouedec", "html_url": "https://github.com/qgallouedec", "followers_url": "https://api.github.com/users/qgallouedec/followers", "following_url": "https://api.github.com/users/qgallouedec/following{/other_user}", "gists_url": "https://api.github.com/users/qgallouedec/gists{/gist_id}", "starred_url": "https://api.github.com/users/qgallouedec/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/qgallouedec/subscriptions", "organizations_url": "https://api.github.com/users/qgallouedec/orgs", "repos_url": "https://api.github.com/users/qgallouedec/repos", "events_url": "https://api.github.com/users/qgallouedec/events{/privacy}", "received_events_url": "https://api.github.com/users/qgallouedec/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27745). All of your documentation changes will be reflected on that endpoint." ]
1,701
1,701
1,701
CONTRIBUTOR
null
# What does this PR do? Fixes #26556 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @muellerzr I would skip writing new test, it's really hard to test, plus it's a minor fix
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27745/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27745/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27745", "html_url": "https://github.com/huggingface/transformers/pull/27745", "diff_url": "https://github.com/huggingface/transformers/pull/27745.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27745.patch", "merged_at": 1701187387000 }
https://api.github.com/repos/huggingface/transformers/issues/27744
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27744/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27744/comments
https://api.github.com/repos/huggingface/transformers/issues/27744/events
https://github.com/huggingface/transformers/issues/27744
2,014,404,231
I_kwDOCUB6oc54EV6H
27,744
Trainer fails when using torchrun for distributed run of transformer model wrapped with PEFT
{ "login": "Ahmed-Roushdy", "id": 68569076, "node_id": "MDQ6VXNlcjY4NTY5MDc2", "avatar_url": "https://avatars.githubusercontent.com/u/68569076?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Ahmed-Roushdy", "html_url": "https://github.com/Ahmed-Roushdy", "followers_url": "https://api.github.com/users/Ahmed-Roushdy/followers", "following_url": "https://api.github.com/users/Ahmed-Roushdy/following{/other_user}", "gists_url": "https://api.github.com/users/Ahmed-Roushdy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Ahmed-Roushdy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ahmed-Roushdy/subscriptions", "organizations_url": "https://api.github.com/users/Ahmed-Roushdy/orgs", "repos_url": "https://api.github.com/users/Ahmed-Roushdy/repos", "events_url": "https://api.github.com/users/Ahmed-Roushdy/events{/privacy}", "received_events_url": "https://api.github.com/users/Ahmed-Roushdy/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "cc @pacman100 or @BenjaminBossan ", "@Ahmed-Roushdy did you see these threads?\r\n\r\n- https://github.com/pytorch/pytorch/issues/100945\r\n- https://github.com/pytorch/pytorch/issues/104690\r\n\r\nPerhaps the info there could help. You should not have to use a nightly PyTorch build if you use the latest PyTorch version.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,701
1,707
1,707
NONE
null
### System Info - `transformers` version: 4.33.3 [1/1854] - Platform: Linux-5.8.0-59-generic-x86_64-with-glibc2.31 - Python version: 3.9.18 - Huggingface_hub version: 0.19.2 - Safetensors version: 0.4.0 - Accelerate version: 0.24.1 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: yes ### Who can help? @muellerzr ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction The following code train LLaMA2 transformer model with PEFT using torchrun The code snippet ```ruby model_args, data_args, training_args = parser.parse_args_into_dataclasses() print(training_args) print('Start Loading Model') model = transformers.AutoModelForCausalLM.from_pretrained( model_args.model_name_or_path, cache_dir=training_args.cache_dir, ) config = LoraConfig( r=8, lora_alpha=16, lora_dropout=0.1, bias="none", task_type="CAUSAL_LM", ) model = get_peft_model(model, config) model.print_trainable_parameters() trainer = Trainer(model=model, tokenizer=tokenizer, args=training_args, **data_module) ``` scripet to run ``` torchrun \ --standalone \ --nnodes=1 \ --nproc-per-node=7 \ train.py \ --model_name_or_path "meta-llama/Llama-2-7b-hf" \ --bf16 True \ --output_dir checkpoints/dist-LLaMa-7B \ --num_train_epochs 3 \ --per_device_train_batch_size 1 \ --per_device_eval_batch_size 1 \ --gradient_accumulation_steps 8 \ --fsdp "full_shard auto_wrap" \ --fsdp_transformer_layer_cls_to_wrap 'LlamaDecoderLayer' \ --tf32 True ``` I got the following error ``` Start building the trainer module Traceback (most recent call last): File "/vault/aelkordy/NLP_projects/pruning/MAmmoTH/train.py", line 285, in <module> train() File "/vault/aelkordy/NLP_projects/pruning/MAmmoTH/train.py", line 278, in train trainer.train() File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/transformers/trainer.py", line 1556, in train return inner_training_loop( File "/home//.conda/envs/mamoth/lib/python3.9/site-packages/transformers/trainer.py", line 1675, in _inner_training_loop self.model = self.accelerator.prepare(self.model) File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/accelerate/accelerator.py", line 1288, in prepare result = tuple( File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/accelerate/accelerator.py", line 1289, in <genexpr> self._prepare_one(obj, first_pass=True, device_placement=d) for obj, d in zip(args, device_placement) File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/accelerate/accelerator.py", line 1094, in _prepare_one return self.prepare_model(obj, device_placement=device_placement) File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/accelerate/accelerator.py", line 1464, in prepare_model model = FSDP(model, **kwargs) File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 391, in __init__ _auto_wrap(auto_wrap_kwargs, fsdp_kwargs, FullyShardedDataParallel) File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/fsdp/_wrap_utils.py", line 73, in _auto_wrap _recursive_wrap(**auto_wrap_kwargs, 
**fsdp_kwargs) File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/fsdp/wrap.py", line 370, in _recursive_wrap wrapped_child, num_wrapped_params = _recursive_wrap( File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/fsdp/wrap.py", line 370, in _recursive_wrap wrapped_child, num_wrapped_params = _recursive_wrap( File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/fsdp/wrap.py", line 370, in _recursive_wrap wrapped_child, num_wrapped_params = _recursive_wrap( [Previous line repeated 2 more times] File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/fsdp/wrap.py", line 388, in _recursive_wrap return _wrap(module, wrapper_cls, **kwargs), nonwrapped_numel File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/fsdp/wrap.py", line 317, in _wrap return wrapper_cls(module, **kwargs) File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 408, in __init__ _init_param_handle_from_module( File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/fsdp/_init_utils.py", line 429, in _init_param_handle_from_module _init_param_handle_from_params(state, managed_params, fully_sharded_module) File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/fsdp/_init_utils.py", line 525, in _init_param_handle_from_params handle = FlatParamHandle( File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/fsdp/flat_param.py", line 366, in __init__ self._init_flat_param(params, fully_sharded_module, use_orig_params) File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/fsdp/flat_param.py", line 440, in _init_flat_param raise ValueError( ValueError: `FlatParameter` requires uniform `requires_grad` Traceback (most recent call last): File "/vault/aelkordy/NLP_projects/pruning/MAmmoTH/train.py", line 285, in <module> Traceback (most recent call last): train() File "/vault/aelkordy/NLP_projects/pruning/MAmmoTH/train.py", line 285, in <module> File "/vault/aelkordy/NLP_projects/pruning/MAmmoTH/train.py", line 278, in train trainer.train() File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/transformers/trainer.py", line 1556, in train train() File "/vault/aelkordy/NLP_projects/pruning/MAmmoTH/train.py", line 278, in train trainer.train() File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/transformers/trainer.py", line 1556, in train return inner_training_loop( File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/transformers/trainer.py", line 1675, in _inner_training_loop return inner_training_loop( File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/transformers/trainer.py", line 1675, in _inner_training_loop self.model = self.accelerator.prepare(self.model) File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/accelerate/accelerator.py", line 1288, in prepare result = tuple( File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/accelerate/accelerator.py", line 1289, in <genexpr> self.model = self.accelerator.prepare(self.model) File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/accelerate/accelerator.py", line 1288, in prepare result = tuple( File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/accelerate/accelerator.py", line 1289, in <genexpr> 
self._prepare_one(obj, first_pass=True, device_placement=d) for obj, d in zip(args, device_placement) File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/accelerate/accelerator.py", line 1094, in _prepare_one self._prepare_one(obj, first_pass=True, device_placement=d) for obj, d in zip(args, device_placement) File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/accelerate/accelerator.py", line 1094, in _prepare_one return self.prepare_model(obj, device_placement=device_placement) File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/accelerate/accelerator.py", line 1464, in prepare_model return self.prepare_model(obj, device_placement=device_placement) File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/accelerate/accelerator.py", line 1464, in prepare_model model = FSDP(model, **kwargs) File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 391, in __init__ _auto_wrap(auto_wrap_kwargs, fsdp_kwargs, FullyShardedDataParallel) File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/fsdp/_wrap_utils.py", line 73, in _auto_wrap model = FSDP(model, **kwargs)_recursive_wrap(**auto_wrap_kwargs, **fsdp_kwargs) File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 391, in __init__ File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/fsdp/wrap.py", line 370, in _recursive_wrap wrapped_child, num_wrapped_params = _recursive_wrap( File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/fsdp/wrap.py", line 370, in _recursive_wrap _auto_wrap(auto_wrap_kwargs, fsdp_kwargs, FullyShardedDataParallel) File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/fsdp/_wrap_utils.py", line 73, in _auto_wrap _recursive_wrap(**auto_wrap_kwargs, **fsdp_kwargs) File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/fsdp/wrap.py", line 370, in _recursive_wrap wrapped_child, num_wrapped_params = _recursive_wrap( File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/fsdp/wrap.py", line 370, in _recursive_wrap wrapped_child, num_wrapped_params = _recursive_wrap( File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/fsdp/wrap.py", line 370, in _recursive_wrap wrapped_child, num_wrapped_params = _recursive_wrap( [Previous line repeated 2 more times] File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/fsdp/wrap.py", line 388, in _recursive_wrap wrapped_child, num_wrapped_params = _recursive_wrap( File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/fsdp/wrap.py", line 370, in _recursive_wrap return _wrap(module, wrapper_cls, **kwargs), nonwrapped_numel File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/fsdp/wrap.py", line 317, in _wrap wrapped_child, num_wrapped_params = _recursive_wrap( [Previous line repeated 2 more times] File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/fsdp/wrap.py", line 388, in _recursive_wrap return wrapper_cls(module, **kwargs) File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 408, in __init__ return _wrap(module, wrapper_cls, **kwargs), nonwrapped_numel File 
"/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/fsdp/wrap.py", line 317, in _wrap _init_param_handle_from_module( File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/fsdp/_init_utils.py", line 429, in _init_param_handle_from_module return wrapper_cls(module, **kwargs) File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 408, in __init__ _init_param_handle_from_params(state, managed_params, fully_sharded_module) File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/fsdp/_init_utils.py", line 525, in _init_param_handle_from_params _init_param_handle_from_module( File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/fsdp/_init_utils.py", line 429, in _init_param_handle_from_module handle = FlatParamHandle( File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/fsdp/flat_param.py", line 366, in __init__ _init_param_handle_from_params(state, managed_params, fully_sharded_module) File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/fsdp/_init_utils.py", line 525, in _init_param_handle_from_params self._init_flat_param(params, fully_sharded_module, use_orig_params) File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/fsdp/flat_param.py", line 440, in _init_flat_param handle = FlatParamHandle( File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/fsdp/flat_param.py", line 366, in __init__ raise ValueError( ValueError: `FlatParameter` requires uniform `requires_grad` self._init_flat_param(params, fully_sharded_module, use_orig_params) File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/fsdp/flat_param.py", line 440, in _init_flat_param raise ValueError( ValueError: `FlatParameter` requires uniform `requires_grad` ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 1854106) of binary: /home/aelkordy/.conda/envs/mamoth/bin/python Traceback (most recent call last): File "/home/aelkordy/.conda/envs/mamoth/bin/torchrun", line 8, in <module> sys.exit(main()) File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper return f(*args, **kwargs) File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/run.py", line 794, in main run(args) File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/run.py", line 785, in run elastic_launch( File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/launcher/api.py", line 134, in __call__ return launch_agent(self._config, self._entrypoint, list(args)) File "/home/aelkordy/.conda/envs/mamoth/lib/python3.9/site-packages/torch/distributed/launcher/api.py", line 250, in launch_agent raise ChildFailedError( torch.distributed.elastic.multiprocessing.errors.ChildFailedError: ============================================================ train.py FAILED ------------------------------------------------------------ Failures: [1]: time : 2023-11-16_10:52:13 host : g1lmd1 rank : 1 (local_rank: 1) exitcode : 1 (pid: 1854107) error_file: <N/A> traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html [2]: time : 2023-11-16_10:52:13 host : g1lmd1 rank : 2 (local_rank: 2) 
exitcode : 1 (pid: 1854108) error_file: <N/A> traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html ------------------------------------------------------------ Root Cause (first observed failure): [0]: time : 2023-11-16_10:52:13 host : g1lmd1 rank : 0 (local_rank: 0) exitcode : 1 (pid: 1854106) error_file: <N/A> traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html ``` ### Expected behavior Successful run of the distribuited training of LLaMA2 with PEFT similar to LLaMA2 without parameter efficient fine-tuning methods
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27744/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27744/timeline
completed
null
null
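For the `FlatParameter requires uniform requires_grad` failure reported above (#27744), a commonly suggested workaround in recent PyTorch releases is wrapping with `use_orig_params=True`, which lets frozen base weights and trainable LoRA weights share a flat parameter. The sketch below omits process-group setup and assumes `model` is the PEFT-wrapped model from the reproduction; it is an illustration, not the exact fix adopted in the thread:

```python
# Sketch only: FSDP's flat parameters normally require uniform requires_grad,
# which a LoRA/PEFT model breaks; use_orig_params=True relaxes that constraint.
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

fsdp_model = FSDP(
    model,                 # PEFT-wrapped model from the reproduction above
    use_orig_params=True,  # allow mixed frozen/trainable parameters per flat param
)
```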
https://api.github.com/repos/huggingface/transformers/issues/27743
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27743/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27743/comments
https://api.github.com/repos/huggingface/transformers/issues/27743/events
https://github.com/huggingface/transformers/pull/27743
2,014,400,635
PR_kwDOCUB6oc5gjVvs
27,743
restructure AMD scheduled CI
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27743). All of your documentation changes will be reflected on that endpoint." ]
1,701
1,701
1,701
COLLABORATOR
null
# What does this PR do? So far, the AMD scheduled CI is run as a single workflow with `mi210` and `mi250` both in it (each has ~500 jobs): see [here](https://github.com/huggingface/transformers/actions/runs/6999845840) <img width="791" alt="Screenshot 2023-11-28 140000" src="https://github.com/huggingface/transformers/assets/2521628/b16b47f4-d5bf-426f-9eb2-e964e09455ee"> This causes 2 issues: - the workflow run page is too large to display (A unicorn image with `This page is taking too long to load.`) - the artifact produced by the runs of `mi210` and `mi250` are mixed (overwritten by each other), so the report might be inaccurate. This PR restructure AMD scheduled CI to make `mi210` and `mi250` run in 2 workflow run, so avoid the above 2 issues.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27743/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27743/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27743", "html_url": "https://github.com/huggingface/transformers/pull/27743", "diff_url": "https://github.com/huggingface/transformers/pull/27743.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27743.patch", "merged_at": 1701700325000 }
https://api.github.com/repos/huggingface/transformers/issues/27742
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27742/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27742/comments
https://api.github.com/repos/huggingface/transformers/issues/27742/events
https://github.com/huggingface/transformers/pull/27742
2,014,334,077
PR_kwDOCUB6oc5gjHJV
27,742
Add Swinv2 backbone
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27742). All of your documentation changes will be reflected on that endpoint." ]
1,701
1,703
1,703
CONTRIBUTOR
null
# What does this PR do? This PR is a continuation of #25799. It adds `Swinv2Backbone`, such that it can be combined with the DPT framework (known as [MiDaS 3.1](https://github.com/isl-org/MiDaS/releases)). It also required some updates to `modeling_swinv2.py` which are fully backwards compatible (all slow integration tests passing). To do: - [x] verify image processor before pushing models to the hub - [x] transfer checkpoints, add integration test - [x] investigate how MiDas does forward pass with BEiT backbone and keeping the aspect ratio
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27742/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27742/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27742", "html_url": "https://github.com/huggingface/transformers/pull/27742", "diff_url": "https://github.com/huggingface/transformers/pull/27742.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27742.patch", "merged_at": 1703243576000 }
https://api.github.com/repos/huggingface/transformers/issues/27741
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27741/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27741/comments
https://api.github.com/repos/huggingface/transformers/issues/27741/events
https://github.com/huggingface/transformers/issues/27741
2,014,328,953
I_kwDOCUB6oc54EDh5
27,741
FuyuForCausalLM.forward() got an unexpected keyword argument 'labels'
{ "login": "Nyandwi", "id": 52796597, "node_id": "MDQ6VXNlcjUyNzk2NTk3", "avatar_url": "https://avatars.githubusercontent.com/u/52796597?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Nyandwi", "html_url": "https://github.com/Nyandwi", "followers_url": "https://api.github.com/users/Nyandwi/followers", "following_url": "https://api.github.com/users/Nyandwi/following{/other_user}", "gists_url": "https://api.github.com/users/Nyandwi/gists{/gist_id}", "starred_url": "https://api.github.com/users/Nyandwi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Nyandwi/subscriptions", "organizations_url": "https://api.github.com/users/Nyandwi/orgs", "repos_url": "https://api.github.com/users/Nyandwi/repos", "events_url": "https://api.github.com/users/Nyandwi/events{/privacy}", "received_events_url": "https://api.github.com/users/Nyandwi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey! Yes this was updated recently, could you print\r\n```python \r\nimport transformers\r\nprint(transformers.__versions__)\r\n```\r\n", "Thanks for quick reply. I have `4.36.0.dev0`. ", "It is also the version I get when I install from the source again(now).", "That's strange see [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/fuyu/modeling_fuyu.py#L222)", "I cleared the whole things and pip caches and pulled again from the source. Now I can see the labels in forward signature. Thanks again!" ]
1,701
1,701
1,701
NONE
null
### System Info - `transformers` version: 4.36.0.dev0 - Platform: Linux-4.18.0-372.32.1.el8_6.x86_64-x86_64-with-glibc2.28 - Python version: 3.10.13 - Huggingface_hub version: 0.17.3 - Safetensors version: 0.4.0 - Accelerate version: 0.24.1 - Accelerate config: not found - PyTorch version (GPU?): 2.1.0+cu118 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script? True - Using distributed or parallel set-up in script? False ### Who can help? @ArthurZucker ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction The fuyu model created with the code below doesn't seem to have `labels` argument in forward yet labels seems supported in model source code. ```py outputs = model(**inputs, labels=labels_input) ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) Cell In[19], line 3 ----> 3 outputs = model(**inputs, labels=labels_input) File ~/miniconda3/envs/fuyu/lib/python3.10/site-packages/torch/nn/modules/module.py:1518, in Module._wrapped_call_impl(self, *args, **kwargs) 1516 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc] 1517 else: -> 1518 return self._call_impl(*args, **kwargs) File ~/miniconda3/envs/fuyu/lib/python3.10/site-packages/torch/nn/modules/module.py:1527, in Module._call_impl(self, *args, **kwargs) 1522 # If we don't have any hooks, we want to skip the rest of the logic in 1523 # this function, and just call forward. 1524 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks 1525 or _global_backward_pre_hooks or _global_backward_hooks 1526 or _global_forward_hooks or _global_forward_pre_hooks): -> 1527 return forward_call(*args, **kwargs) 1529 try: 1530 result = None TypeError: FuyuForCausalLM.forward() got an unexpected keyword argument 'labels' The forward signature in my environment doesn't have labels yet I am using the dev version updated very recently. ```py @add_start_docstrings_to_model_forward(FUYU_INPUTS_DOCSTRING) def forward( self, input_ids: torch.LongTensor = None, image_patches: torch.Tensor = None, # [batch_size, num_total_patches, patch_size_ x patch_size x num_channels ] image_patches_indices: torch.Tensor = None, attention_mask: Optional[torch.Tensor] = None, position_ids: Optional[torch.LongTensor] = None, past_key_values: Optional[List[torch.FloatTensor]] = None, inputs_embeds: Optional[torch.FloatTensor] = None, use_cache: Optional[bool] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, ) -> Union[Tuple, BaseModelOutputWithPast]: ``` ### Expected behavior According to the source code, I believe the model should take the labels. Or maybe there is something that I am not aware of, or maybe it has something to do with other packages. I tried to update the transformers to latest dev version and the issue persisted. Would appreciate your support regarding the issue!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27741/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27741/timeline
completed
null
null
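The thread above (#27741) was resolved by reinstalling so that the imported `transformers` actually matched the source checkout. A quick sanity check along the lines suggested in the comments, printing the installed version and inspecting the forward signature:

```python
# Confirm which transformers build is imported and whether FuyuForCausalLM.forward
# already accepts `labels`, as discussed in the thread.
import inspect
import transformers
from transformers import FuyuForCausalLM

print(transformers.__version__, transformers.__file__)
print("labels" in inspect.signature(FuyuForCausalLM.forward).parameters)
```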
https://api.github.com/repos/huggingface/transformers/issues/27740
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27740/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27740/comments
https://api.github.com/repos/huggingface/transformers/issues/27740/events
https://github.com/huggingface/transformers/pull/27740
2,014,199,375
PR_kwDOCUB6oc5gipx0
27,740
Docs: Fix broken cross-references, i.e. `~transformer.` -> `~transformers.`
{ "login": "tomaarsen", "id": 37621491, "node_id": "MDQ6VXNlcjM3NjIxNDkx", "avatar_url": "https://avatars.githubusercontent.com/u/37621491?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tomaarsen", "html_url": "https://github.com/tomaarsen", "followers_url": "https://api.github.com/users/tomaarsen/followers", "following_url": "https://api.github.com/users/tomaarsen/following{/other_user}", "gists_url": "https://api.github.com/users/tomaarsen/gists{/gist_id}", "starred_url": "https://api.github.com/users/tomaarsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tomaarsen/subscriptions", "organizations_url": "https://api.github.com/users/tomaarsen/orgs", "repos_url": "https://api.github.com/users/tomaarsen/repos", "events_url": "https://api.github.com/users/tomaarsen/events{/privacy}", "received_events_url": "https://api.github.com/users/tomaarsen/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Gladly! Thank you for your work on the docs! It's always an underappreciated part of any OS project." ]
1,701
1,701
1,701
MEMBER
null
# What does this PR do? I noticed that some of the docs use incorrect cross-references, e.g. here: https://huggingface.co/docs/transformers/v4.35.2/en/main_classes/trainer#transformers.Trainer.add_callback Then I noticed these use `~transformer.foo` rather than `~transformers.foo`. I've replaced all `~transformer.` with `~transformers.` in this PR. ## Before submitting - [x] This PR fixes a typo or improves the docs ## Who can review? Documentation: @stevhliu and @MKhalusova - Tom Aarsen
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27740/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27740/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27740", "html_url": "https://github.com/huggingface/transformers/pull/27740", "diff_url": "https://github.com/huggingface/transformers/pull/27740.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27740.patch", "merged_at": 1701189644000 }
https://api.github.com/repos/huggingface/transformers/issues/27739
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27739/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27739/comments
https://api.github.com/repos/huggingface/transformers/issues/27739/events
https://github.com/huggingface/transformers/issues/27739
2,014,160,509
I_kwDOCUB6oc54DaZ9
27,739
RuntimeError: Could not infer dtype of JpegImageFile
{ "login": "realbigi", "id": 96737615, "node_id": "U_kgDOBcQZTw", "avatar_url": "https://avatars.githubusercontent.com/u/96737615?v=4", "gravatar_id": "", "url": "https://api.github.com/users/realbigi", "html_url": "https://github.com/realbigi", "followers_url": "https://api.github.com/users/realbigi/followers", "following_url": "https://api.github.com/users/realbigi/following{/other_user}", "gists_url": "https://api.github.com/users/realbigi/gists{/gist_id}", "starred_url": "https://api.github.com/users/realbigi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/realbigi/subscriptions", "organizations_url": "https://api.github.com/users/realbigi/orgs", "repos_url": "https://api.github.com/users/realbigi/repos", "events_url": "https://api.github.com/users/realbigi/events{/privacy}", "received_events_url": "https://api.github.com/users/realbigi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @realbigi, thanks for raising an issue! \r\n\r\nThe error itself is arising because torch can't decode jpeg files. From the notebook, it's not immediately obvious what's happening, as the images should be opened as PIL.Image.Image. I can see there was an error reading in the dataset and some of the cells have been run out of order. Because of the statefulness of notebooks, it is therefore difficult to debug wrt to the error message. Could you update the notebook so that each cell has been executed in order? ", "Hi @amyeroberts \r\n\r\nI just executed the notebook in order and reuploaded it. But you should be able to run it anyways as I uploaded the data as well to GitHub. Could the problem arise because there are webp images instead of jpeg's? Altough I don't think that is the problem because they were converted to PIL objects right?", "@amyeroberts \r\n\r\nI must have made a mistake while copying the code. I downloaded the original notebook and loaded my data and it seems to work. ", "Thanks for clarifying ! ๐Ÿค— ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,701
1,704
1,704
NONE
null
### System Info transformers, windows 11, python 3.11.6 I have been trying to replicate this https://huggingface.co/docs/transformers/tasks/image_classification image classification with PyTorch. But when I run the skript I get this error: RuntimeError: Could not infer dtype of JpegImageFile Does anyone have an idea what the problem could be? ### Who can help? @amyeroberts ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction https://github.com/realbigi/Classifier_2_Pytorch_Image_Classification ### Expected behavior It should train on the data without a mistake
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27739/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27739/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27738
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27738/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27738/comments
https://api.github.com/repos/huggingface/transformers/issues/27738/events
https://github.com/huggingface/transformers/pull/27738
2,014,053,752
PR_kwDOCUB6oc5giKMW
27,738
single word should be set to False
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,701
1,701
1,701
COLLABORATOR
null
# What does this PR do? Fixes an issue reported on the hub: ```python from transformers import AutoTokenizer, MT5Tokenizer tokenizer = MT5Tokenizer.from_pretrained("google/mt5-base", extra_ids = 100) ``` (this is rarely tested) triggers this ```diff # for legacy purpose, we keep this. Will be removed and tests updated. (when `added_tokens_decoder` is not passed as kwargs) self._added_tokens_decoder = {} for i in range(len(extra_tokens)): self._added_tokens_decoder[len(self.sp_model) - 1 + extra_ids - i] = -AddedToken(f"<extra_id_{i}>", single_word=True, lstrip=True, rstrip=True, special=True, normalized=False) +AddedToken(f"<extra_id_{i}>", single_word=False, lstrip=True, rstrip=True, special=True, normalized=False) ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27738/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27738/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27738", "html_url": "https://github.com/huggingface/transformers/pull/27738", "diff_url": "https://github.com/huggingface/transformers/pull/27738.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27738.patch", "merged_at": 1701698211000 }
https://api.github.com/repos/huggingface/transformers/issues/27737
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27737/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27737/comments
https://api.github.com/repos/huggingface/transformers/issues/27737/events
https://github.com/huggingface/transformers/issues/27737
2,013,567,783
I_kwDOCUB6oc54BJsn
27,737
How to save the generated output of BarkModel to an npz file?
{ "login": "chet-chen", "id": 16471378, "node_id": "MDQ6VXNlcjE2NDcxMzc4", "avatar_url": "https://avatars.githubusercontent.com/u/16471378?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chet-chen", "html_url": "https://github.com/chet-chen", "followers_url": "https://api.github.com/users/chet-chen/followers", "following_url": "https://api.github.com/users/chet-chen/following{/other_user}", "gists_url": "https://api.github.com/users/chet-chen/gists{/gist_id}", "starred_url": "https://api.github.com/users/chet-chen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chet-chen/subscriptions", "organizations_url": "https://api.github.com/users/chet-chen/orgs", "repos_url": "https://api.github.com/users/chet-chen/repos", "events_url": "https://api.github.com/users/chet-chen/events{/privacy}", "received_events_url": "https://api.github.com/users/chet-chen/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @ylacombe ", "Hey @chet-chen, let me take a look!\r\nFrom the back of my mind, I think it requires to add something similar to `output_full = True` to BarkModel. It could be great that you do this if you add time, WDYT ?", "@ylacombe I'm relatively new to Python, and I'm facing a challenge in converting the results generated by the BarkSemanticModel, BarkCoarseModel, and BarkFineModel models into a numpy.ndarray with the right shape. These models output torch.LongTensor, and I need help transforming them.\r\n\r\nhttps://github.com/huggingface/transformers/blob/80377eb018c077dba434bc8e7912bcaed3a64d09/src/transformers/models/bark/modeling_bark.py#L1820-L1854", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,701
1,704
1,704
NONE
null
Hello there! I'm using the BarkModel from Hugging Face Transformers and I'm wondering how to save the generated results to an npz file. I'd like to use these saved results as history prompts for the next generation. In the [suno-ai/bark](https://github.com/suno-ai/bark) , when using the [`semantic_to_waveform`](https://github.com/suno-ai/bark/blob/main/bark/api.py#L35) method, I can pass `output_full = True`. This allows me to save the output to an npz file using `numpy.savez`. However, as I transition to using the BarkModel within the transformers framework, I am uncertain about the equivalent process. Could you kindly provide guidance on how to save the generated results of the BarkModel to an npz file in the Transformers library? Any assistance or code examples you could offer would be greatly appreciated. Thank you for your time and support.
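For illustration only, a sketch of the npz round trip the question is about; the key names follow the history-prompt convention that the original suno-ai/bark voice files appear to use, and the tensors below are random stand-ins, since `BarkModel.generate` does not expose the intermediate semantic/coarse/fine outputs at the time of this issue:

```python
import numpy as np
import torch

# Random stand-ins for the semantic, coarse and fine token tensors produced
# by the sub-models; the shapes here are illustrative, not the real Bark shapes.
semantic = torch.randint(0, 10_000, (1, 256))
coarse = torch.randint(0, 1_024, (2, 512))
fine = torch.randint(0, 1_024, (8, 512))

np.savez(
    "history_prompt.npz",
    semantic_prompt=semantic.squeeze(0).cpu().numpy(),
    coarse_prompt=coarse.cpu().numpy(),
    fine_prompt=fine.cpu().numpy(),
)

loaded = np.load("history_prompt.npz")
print({key: value.shape for key, value in loaded.items()})
```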
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27737/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27737/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27736
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27736/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27736/comments
https://api.github.com/repos/huggingface/transformers/issues/27736/events
https://github.com/huggingface/transformers/issues/27736
2,013,402,394
I_kwDOCUB6oc54AhUa
27,736
RuntimeError(s) when attempting multi-GPU fine-tuning of IDEFICS with naive model parallelism
{ "login": "willemsenbram", "id": 22574774, "node_id": "MDQ6VXNlcjIyNTc0Nzc0", "avatar_url": "https://avatars.githubusercontent.com/u/22574774?v=4", "gravatar_id": "", "url": "https://api.github.com/users/willemsenbram", "html_url": "https://github.com/willemsenbram", "followers_url": "https://api.github.com/users/willemsenbram/followers", "following_url": "https://api.github.com/users/willemsenbram/following{/other_user}", "gists_url": "https://api.github.com/users/willemsenbram/gists{/gist_id}", "starred_url": "https://api.github.com/users/willemsenbram/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/willemsenbram/subscriptions", "organizations_url": "https://api.github.com/users/willemsenbram/orgs", "repos_url": "https://api.github.com/users/willemsenbram/repos", "events_url": "https://api.github.com/users/willemsenbram/events{/privacy}", "received_events_url": "https://api.github.com/users/willemsenbram/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @willemsenbram - thanks for raising this issue! \r\n\r\nThe proposed solution sounds good to me - it's consistent with other model implementations. If you open a PR we'll be happy to review :) ", "Hi @amyeroberts, thanks for checking! I've opened the [PR](https://github.com/huggingface/transformers/pull/27746)" ]
1,701
1,701
1,701
CONTRIBUTOR
null
### System Info - `transformers` version: 4.34.0 - Platform: Linux-5.4.0-148-generic-x86_64-with-glibc2.31 - Python version: 3.10.13 - Huggingface_hub version: 0.17.3 - Safetensors version: 0.4.0 - Accelerate version: 0.24.1 - Accelerate config: not found - PyTorch version (GPU?): 2.1.0 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: yes ### Who can help? @ArthurZucker @younesbelkada @amyeroberts ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Related issue: [#22561](https://github.com/huggingface/transformers/issues/22561) The problems can be reproduced with [the IDEFICS PEFT demo](https://github.com/huggingface/notebooks/blob/main/examples/idefics/finetune_image_captioning_peft.ipynb). Using `device_map="auto"` may result in `RuntimeError: Expected all tensors to be on the same device, but found at least two devices`. On my end, the error disappears when setting `CUDA_VISIBLE_DEVICES` to be only those that have no other processes already running, but it seems like having other processes use the same GPU is not always an issue: is it possible that the auto mapping can cause an improper split of the model when having to work around limited resource availability on devices with running processes? With a working device mapping, the next problem is `RuntimeError: indices should be either on cpu or on the same device as the indexed tensor`. Relevant part of the traceback: ``` File "/home/user/miniconda3/envs/idefics/lib/python3.10/site-packages/transformers/models/idefics/modeling_idefics.py", line 1516, in forward shift_logits = logits[..., :-1, :][shift_attention_mask != 0].contiguous() ``` `logits`, `labels`, and `attention_mask` are expected to be on the same device. A possible workaround is moving them all to the same device (in [modeling_idefics.py](https://github.com/huggingface/transformers/blob/ce315081340fdf6846f16c321eb53878b6272d53/src/transformers/models/idefics/modeling_idefics.py#L1512-L1515)), like so: ```diff if labels is not None: + labels = labels.to(logits.device) # Shift so that tokens < n predict n if attention_mask is not None: - shift_attention_mask = attention_mask[..., 1:] + shift_attention_mask = attention_mask[..., 1:].to(logits.device) ``` Can someone confirm that this is proper? If so, I can create a PR. ### Expected behavior Proper spread of the model over available GPUs when using `device_map="auto"` and have `logits`, `labels`, and `attention_mask` be on the same device to enable training with naive model parallelism.
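As a hedged aside on the first error, one way to keep the automatic device map away from GPUs that already host other processes is to cap the memory it may claim per device; the checkpoint name and the memory budgets below are placeholders:

```python
import torch
from transformers import IdeficsForVisionText2Text

model = IdeficsForVisionText2Text.from_pretrained(
    "HuggingFaceM4/idefics-9b",                           # placeholder checkpoint
    device_map="auto",
    torch_dtype=torch.bfloat16,
    max_memory={0: "20GiB", 1: "20GiB", "cpu": "64GiB"},  # placeholder budgets
)
```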
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27736/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27736/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27735
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27735/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27735/comments
https://api.github.com/repos/huggingface/transformers/issues/27735/events
https://github.com/huggingface/transformers/pull/27735
2,013,261,833
PR_kwDOCUB6oc5gfdp6
27,735
Adding SegGPT
{ "login": "EduardoPach", "id": 69953243, "node_id": "MDQ6VXNlcjY5OTUzMjQz", "avatar_url": "https://avatars.githubusercontent.com/u/69953243?v=4", "gravatar_id": "", "url": "https://api.github.com/users/EduardoPach", "html_url": "https://github.com/EduardoPach", "followers_url": "https://api.github.com/users/EduardoPach/followers", "following_url": "https://api.github.com/users/EduardoPach/following{/other_user}", "gists_url": "https://api.github.com/users/EduardoPach/gists{/gist_id}", "starred_url": "https://api.github.com/users/EduardoPach/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/EduardoPach/subscriptions", "organizations_url": "https://api.github.com/users/EduardoPach/orgs", "repos_url": "https://api.github.com/users/EduardoPach/repos", "events_url": "https://api.github.com/users/EduardoPach/events{/privacy}", "received_events_url": "https://api.github.com/users/EduardoPach/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "@NielsRogge just pinging you here", "Hey @NielsRogge, could you take a look at the PR? Code quality is failing, but is unrelated and the documentation tests I don't know what is going on because it's failing on an example that it's not even in the `modeling_seggpt.py`", "Looks like we're nearly there. Just one comment, could we make the current `post_process_masks` method consistent with the existing `post_process_semantic_segmentation` and `post_process_instance_segmentation` methods in the library (perhaps also leveraging the same name)? These return a set of binary masks.\r\n\r\nI have this notebook to showcase inference: https://colab.research.google.com/drive/1MZ0quroT0E2c5mnJjmjdDo_FrDXTUCQD. ", "> Looks like we're nearly there. Just one comment, could we make the current `post_process_masks` method consistent with the existing `post_process_semantic_segmentation` and `post_process_instance_segmentation` methods in the library (perhaps also leveraging the same name)? These return a set of binary masks.\r\n> \r\n> I have this notebook to showcase inference: https://colab.research.google.com/drive/1MZ0quroT0E2c5mnJjmjdDo_FrDXTUCQD.\r\n\r\nSome of the modifications I did might break your inference example. Instead of using `post_process_masks` you should now use `post_process_semantic_segmentation` here is an example of my own https://colab.research.google.com/drive/1UfgLOOVHZJhdxIzfIc6Z1njdOkGTFTJT?usp=sharing", "> Awesome work! Thanks for all the work adding and iterating on this\r\n> \r\n> Just some tiny nits. Please do not just mark comments as resolved when they are not - if you don't think the suggestion should be applied then comment with why on the comment before marking.\r\n> \r\n> Main thing left to do is update all the checkpoints and make sure all the model tests (incl slow) pass with these checkpoints. Once that's all done it'll be ready to merge :)\r\n\r\n@amyeroberts what do you mean by updating the checkpoints? Move to the appropriate org or re-upload them (it's just one) with the conversion script. ", "@EduardoPach The checkpoints should be uploaded under the correct org - @NielsRogge can help you with that. Then all the checkpoint references in this PR need to be updated to point to the right location " ]
1,701
1,708
null
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This PR adds `SegGPT` to the transformers library - [Paper](https://arxiv.org/pdf/2304.03284.pdf) - [Code](https://github.com/baaivision/Painter/tree/main/SegGPT/SegGPT_inference) - [Checkpoint](https://huggingface.co/BAAI/SegGPT/blob/main/seggpt_vit_large.pth) Fixes https://github.com/huggingface/transformers/issues/27514 ## TO-DOs - [x] Finish Implementation - [x] Write conversion script - [x] Finish `ImageProcessor` - [x] Make all tests green
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27735/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27735/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27735", "html_url": "https://github.com/huggingface/transformers/pull/27735", "diff_url": "https://github.com/huggingface/transformers/pull/27735.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27735.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/27734
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27734/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27734/comments
https://api.github.com/repos/huggingface/transformers/issues/27734/events
https://github.com/huggingface/transformers/pull/27734
2,013,072,983
PR_kwDOCUB6oc5gezhS
27,734
Deberta can now be exported to TorchScript
{ "login": "Szustarol", "id": 61427290, "node_id": "MDQ6VXNlcjYxNDI3Mjkw", "avatar_url": "https://avatars.githubusercontent.com/u/61427290?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Szustarol", "html_url": "https://github.com/Szustarol", "followers_url": "https://api.github.com/users/Szustarol/followers", "following_url": "https://api.github.com/users/Szustarol/following{/other_user}", "gists_url": "https://api.github.com/users/Szustarol/gists{/gist_id}", "starred_url": "https://api.github.com/users/Szustarol/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Szustarol/subscriptions", "organizations_url": "https://api.github.com/users/Szustarol/orgs", "repos_url": "https://api.github.com/users/Szustarol/repos", "events_url": "https://api.github.com/users/Szustarol/events{/privacy}", "received_events_url": "https://api.github.com/users/Szustarol/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Great work @Szustarol ! I think this solution still pins the device for the traced model. One solution is to create an ONNX model instead. Perhaps you might have a better solution? ", "Thanks for the response @demq, I would be more than happy to expand on my solution, however I am not quite sure if I get your suggestion right.\r\nI wanted to fix the issue of the Deberta model being untracable with PyTorch `torch.jit.trace` api, do you think I am missing something here? Also I believe that, if the device is pinned, it is an effect of `torch.jit.trace` usage, but the only place where this happens is the testing code where it should not be a problem, since it is run in a single test setup and not actually saved anywhere. Unless you mean a complete reimplementation of the Deberta model to not use calls that lead to a pinned device (if this actually happens, I can check for that tomorrow)?\r\nI might have misunderstood something since I'm just learning the ropes of ๐Ÿค— Transformers, I am sorry in advance!", "> Thanks for the response @demq, I would be more than happy to expand on my solution, however I am not quite sure if I get your suggestion right. I wanted to fix the issue of the Deberta model being untracable with PyTorch `torch.jit.trace` api, do you think I am missing something here? Also I believe that, if the device is pinned, it is an effect of `torch.jit.trace` usage, but the only place where this happens is the testing code where it should not be a problem, since it is run in a single test setup and not actually saved anywhere. Unless you mean a complete reimplementation of the Deberta model to not use calls that lead to a pinned device (if this actually happens, I can check for that tomorrow)? I might have misunderstood something since I'm just learning the ropes of ๐Ÿค— Transformers, I am sorry in advance!\r\n\r\nYes - your solution here correctly addresses the open issue. You are absolutely correct that the device pinning is caused by jit.trace(), and it is a separate issue from what you have addressed here. Previously, I experienced the issue of device pinning on a traced (using the decorator trick on XSoftmax) fine-tuned Deberta model, so we had to export to ONNX instead of torch script to get around it. \r\n\r\nMy comment was inspired by seeing how quick you were to submit a solution to this issue :) I would imagine one way to ensure no device pinning is by exporting the model through jit.script() instead of jit.trace().\r\n\r\n", "Okay, I think now I understand your point, thanks! I will surely have a look into it and report if I can find a solution. I think one way to fix this is to check what parts of the code cause model pinning and try to get rid of them which is what I will try. Should we open an issue for this?", "Okay I have done some extensive research and testing on this subject mentioned by @demq and it appears to me that during the tracing process all devices in the `forward` calls are pinned, so no `.to(device=...)` or `tensor(..., device=some_tensor.device)` lines should be present in the traced code. \r\nSince there is no review yet I decided to solve it in the same PR.\r\nLuckily we can intertwine traced and scripted code and the solution is to move all of the device-dependent tensor creations to a separate scripted callable which is exactly what I did. 
\r\nSadly I cannot provide a test for this case since I can never be sure what devices are available on a testbench, but if someone wants to try this out, it can be done with this snippet of code, which traces the model on the GPU, but executes it on CPU:\r\n\r\n```py\r\nimport torch\r\nimport io\r\nfrom transformers import AutoTokenizer, AutoModel\r\n\r\n\r\n# tokenizer = AutoTokenizer.from_pretrained(\"microsoft/deberta-v2-xlarge\")\r\n# model = AutoModel.from_pretrained(\"microsoft/deberta-v2-xlarge\", torchscript=True).to(\"cuda\")\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"microsoft/deberta-base\")\r\nmodel = AutoModel.from_pretrained(\"microsoft/deberta-base\", torchscript=True).to(\"cuda\")\r\n\r\ntokenized_dict = tokenizer(\r\n [\"Is this working\",], [\"Not yet\",], \r\n return_tensors=\"pt\"\r\n).to('cuda')\r\ninput_tuple = (tokenized_dict['input_ids'], tokenized_dict['attention_mask'])\r\n\r\n\r\ntraced_model = torch.jit.trace(model, input_tuple)\r\n\r\nmodel_bytes = io.BytesIO()\r\ntorch.jit.save(traced_model, model_bytes)\r\nmodel_bytes.seek(0)\r\n\r\nprint(\"######### tracing and saving done\")\r\n\r\nloaded_model = torch.jit.load(model_bytes, map_location='cpu')\r\ntokenized_dict = tokenized_dict.to('cpu')\r\ninput_tuple = (tokenized_dict['input_ids'], tokenized_dict['attention_mask'])\r\nprint(loaded_model(*input_tuple)[0].device) # outputs cpu\r\n\r\n```\r\n\r\nI would love to hear some feedback on this code, as with the extension of this task by the device problem I feel like the model modifications have now become quite extensive. ", "\r\n\r\nThanks for the great update. I can confirm that tracing works now without device pinning in my local linux env: I have used slightly changed test script:\r\n```\r\nimport torch\r\nimport io\r\nfrom transformers import AutoTokenizer, AutoModel\r\n\r\n# tokenizer = AutoTokenizer.from_pretrained(\"microsoft/deberta-v2-xlarge\")\r\n# model = AutoModel.from_pretrained(\"microsoft/deberta-v2-xlarge\", torchscript=True).to(\"cuda\")\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"microsoft/deberta-base\")\r\nmodel = AutoModel.from_pretrained(\"microsoft/deberta-base\", torchscript=True).to(\"cuda\")\r\n\r\nencodings_cuda = tokenizer(\r\n [\"The DeBerta tracing works without device pinning!\"],\r\n return_token_type_ids=False,\r\n return_tensors=\"pt\"\r\n).to(\"cuda\")\r\n\r\ntraced_model = torch.jit.trace(model, list(encodings_cuda.values()))\r\nprint(f\"{traced_model(*encodings_cuda.values())[0].device=}\")\r\n\r\nmodel_bytes = io.BytesIO()\r\ntorch.jit.save(traced_model, model_bytes)\r\nmodel_bytes.seek(0)\r\n\r\nprint(\"######### tracing and saving done\")\r\n\r\nloaded_model = torch.jit.load(model_bytes, map_location='cpu')\r\nencodings_cpu = encodings_cuda.copy().to('cpu')\r\nprint(f\"{loaded_model(*encodings_cpu.values())[0].device=}\") # outputs cpu\r\n```\r\n\r\nIt works fine, but I get the following warnings during the jit.trace():\r\n```\r\n>>> traced_model = torch.jit.trace(model, list(encodings_cuda.values()))\r\n./transformers/src/transformers/models/deberta/modeling_deberta.py:694: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. 
In any other case, this might cause the trace to be incorrect.\r\n scale = torch.sqrt(torch.tensor(query_layer.size(-1), dtype=torch.float) * scale_factor)\r\n./transformers/src/transformers/models/deberta/modeling_deberta.py:694: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).\r\n scale = torch.sqrt(torch.tensor(query_layer.size(-1), dtype=torch.float) * scale_factor)\r\n./transformers/src/transformers/models/deberta/modeling_deberta.py:733: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n att_span = min(max(query_layer.size(-2), key_layer.size(-2)), self.max_relative_positions)\r\n./transformers/src/transformers/models/deberta/modeling_deberta.py:754: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.\r\n pos_query_layer /= torch.sqrt(torch.tensor(pos_query_layer.size(-1), dtype=torch.float) * scale_factor)\r\n./transformers/src/transformers/models/deberta/modeling_deberta.py:754: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).\r\n pos_query_layer /= torch.sqrt(torch.tensor(pos_query_layer.size(-1), dtype=torch.float) * scale_factor)\r\n./transformers/src/transformers/models/deberta/modeling_deberta.py:755: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n if query_layer.size(-2) != key_layer.size(-2):\r\n./transformers/src/transformers/models/deberta/modeling_deberta.py:765: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n if query_layer.size(-2) != key_layer.size(-2):\r\n./transformers/src/transformers/models/deberta/modeling_deberta.py:140: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.\r\n output = input.masked_fill(rmask, torch.tensor(torch.finfo(input.dtype).min))\r\n```\r\n", "Yes of course you are right, first time seeing those errors I have erroneously assumed that the tensor shape will be constant for a given config which might be true for `size(-1)` but is certainly not true for `size(-2)`. Those parts will have to be exported as a script as well. I will take care of it as soon as possible.\r\nThank you for bringing this to my attention!", "This issue has been automatically marked as stale because it has not had recent activity. 
If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Hey! Do you need a review on this? ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "The reason it took so long is that while trying to fix the issue mentioned above, with constant size pinning, I have noticed that the HF tests use a HPFProxy object during tracing, which is not compatible with `torch.jit.script` - at the same time the usage of script is required to get of the constant sizes being pinned during model tracing. As I have found no way to reliably resolve this issue I ask here for counsel. \r\nPlease see the test run below to see what test I am referring to.", "> The reason it took so long is that while trying to fix the issue mentioned above, with constant size pinning, I have noticed that the HF tests use a HPFProxy object during tracing, which is not compatible with `torch.jit.script` - at the same time the usage of script is required to get of the constant sizes being pinned during model tracing. As I have found no way to reliably resolve this issue I ask here for counsel. Please see the test run below to see what test I am referring to.\r\n\r\nThe latest version seems to work fine for me @Szustarol , thank you very much for your efforts! I can trace a model on a cpu without any warnings and run inference on GPUs, there is no apparent device pinning or warnings on constant size pinning.\r\n\r\nPerhaps it is worth to ask @ArthurZucker to review the PR and merge it to master?", "Sure let me review it! " ]
1,701
1,707
null
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #20815 Generally, `torch.autograd.Functions` cannot be traced in Torch, as per this open issue: https://github.com/pytorch/pytorch/issues/32822 This issue is thus more of a PyTorch problem, but nevertheless can be resolved. ๐Ÿค— Transformers' implementation is basically the same as the original https://github.com/microsoft/DeBERTa, which was tracable with a dirty trick of using a tracing context: https://github.com/microsoft/DeBERTa/blob/4d7fe0bd4fb3c7d4f4005a7cafabde9800372098/DeBERTa/utils/jit_tracing.py#L10C1-L17C6 Of course such a solution is not applicable here as it would conflict with the existing API and usage of the ๐Ÿค— Transformers. I have decided to explore a bit the recent development in PyTorch and it seems `is_tracing` is now publicly accessible through `torch.jit` (though it is not yet documented), which gets rid of the context problem. So I have basically implemented the original solution but with the newly available `is_tracing` call. I have also added tests to check if the traced model outputs the same tensors as the model that is being traced. This was not mentioned in the issue but I have applied the same changes to Deberta_v2 since it is obviously also affected. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @ArthurZucker <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27734/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27734/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27734", "html_url": "https://github.com/huggingface/transformers/pull/27734", "diff_url": "https://github.com/huggingface/transformers/pull/27734.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27734.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/27733
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27733/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27733/comments
https://api.github.com/repos/huggingface/transformers/issues/27733/events
https://github.com/huggingface/transformers/issues/27733
2,012,763,875
I_kwDOCUB6oc53-Fbj
27,733
ZERO loss while finetuning Llama2 usin SFT trainer and the use of collator
{ "login": "Sosycs", "id": 6597399, "node_id": "MDQ6VXNlcjY1OTczOTk=", "avatar_url": "https://avatars.githubusercontent.com/u/6597399?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Sosycs", "html_url": "https://github.com/Sosycs", "followers_url": "https://api.github.com/users/Sosycs/followers", "following_url": "https://api.github.com/users/Sosycs/following{/other_user}", "gists_url": "https://api.github.com/users/Sosycs/gists{/gist_id}", "starred_url": "https://api.github.com/users/Sosycs/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Sosycs/subscriptions", "organizations_url": "https://api.github.com/users/Sosycs/orgs", "repos_url": "https://api.github.com/users/Sosycs/repos", "events_url": "https://api.github.com/users/Sosycs/events{/privacy}", "received_events_url": "https://api.github.com/users/Sosycs/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey ๐Ÿค— thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discuss.huggingface.co/) instead? I'm sure the community will be of help!\r\n\r\nThanks!", "Sure!\r\nand Sorry for the inconvenience", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Hey, closing this as https://github.com/huggingface/transformers/pull/28142#issuecomment-1869513914 answers it." ]
1,701
1,704
1,704
NONE
null
Hello everyone, my code is: ``` response_template = "Answer: [/INST]" collator = DataCollatorForCompletionOnlyLM(response_template=response_template_tokenized , tokenizer=tokenizer) example = """<s>[INST] <<SYS>> Please select the correct answer from the given multiple Options based on the given Context: <</SYS>> Context: Geology is the study of the Earths solid material and structures and the processes that create them. Some ideas geologists might consider include how rocks and landforms are created or the composition of rocks, minerals, or various landforms. Geologists consider how natural processes create and destroy materials on Earth, and how humans can use Earth materials as resources, among other topics. Geologists study rocks in the field to learn what they can from them. Question: Earth science is the study of Options:(A) solid Earth (B) Earths oceans (C) Earths atmosphere (D) all of the above Answer: [/INST] D </s>""" example_encoded = tokenizer(example) collator([example_encoded]) ``` So I'm using the collator to only compute the loss on the predicted answer of the Llama2 model as pointed by @BayesRulez (thanks to you!). but what I am getting is zero for the loss on every training step. this output is printed while fine-tuning: ``` Context: Your sense of taste is controlled by sensory neurons, or nerve cells, on your tongue that sense the chemicals in food. The neurons are grouped in bundles within taste buds. Each taste bud actually has a pore that opens out to the surface of the tongue enabling molecules and ions taken into the mouth to reach the receptor cells inside. There are five different types of taste neurons on the tongue. Each type detects a different taste. The tastes are: 1. Sweet, which is produced by the presence of sugars, such as the common table sugar sucrose, and a few other substances. 2. Salty, which is produced primarily by the presence of sodium ions. Common salt is sodium chloride, NaCl. The use of salt can donate the sodium ion producing this taste. 3. Sour, which is the taste that detects acidity. The most common food group that contains naturally sour foods is fruit, such as lemon, grape, orange, and sometimes melon. Children show a greater enjoyment of sour flavors than adults, and sour candy such as Lemon Drops, Shock Tarts and sour versions of Skittles and Starburst, is popular. Many of these candies contain citric acid. 4. Bitter is an unpleasant, sharp, or disagreeable taste. Common bitter foods and beverages include coffee, unsweetened cocoa, beer (due to hops), olives, and citrus peel. 5. Umami, which is a meaty or savory taste. This taste can be found in fish, shellfish, cured meats, mushrooms, cheese, tomatoes, grains, and beans. A single taste bud contains 50100 taste cells representing all 5 taste sensations. A stimulated taste receptor cell triggers action potentials in a nearby sensory neuron, which send messages to the brain about the taste. The brain then decides what tastes you are sensing. Question: which taste will be associated with citrus fruits? Options:(A) sweet (B) sour (C) salty (D) bitter Answer: [/INST] B </s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s></s> This instance will be ignored in loss calculation. Note, if this happens often, consider increasing the `max_seq_length`. 
warnings.warn( /usr/local/lib/python3.10/dist-packages/trl/trainer/utils.py:120: UserWarning: Could not find response key `Answer: [/INST]` in the following instance: <s><s> [INST] <<SYS>> Please select the correct answer from the given multiple Options based on the given Context: <</SYS>> ``` Is there something I'm missing and need to be fixed?
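One commonly suggested workaround, sketched here with a placeholder checkpoint and without claiming it resolves this exact dataset: pass the response template to the collator as token ids encoded without special tokens, so the id sequence matches how the template is actually tokenized inside the full prompt:

```python
from transformers import AutoTokenizer
from trl import DataCollatorForCompletionOnlyLM

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")  # placeholder checkpoint

response_template = "Answer: [/INST]"
# Encoding without special tokens keeps the ids identical to the ones that
# appear mid-sequence in the training examples.
response_template_ids = tokenizer.encode(response_template, add_special_tokens=False)

collator = DataCollatorForCompletionOnlyLM(
    response_template=response_template_ids,
    tokenizer=tokenizer,
)
```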
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27733/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27733/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27732
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27732/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27732/comments
https://api.github.com/repos/huggingface/transformers/issues/27732/events
https://github.com/huggingface/transformers/pull/27732
2,012,732,723
PR_kwDOCUB6oc5gdo2g
27,732
Fix AMD Push CI not triggered
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,701
1,701
1,701
COLLABORATOR
null
# What does this PR do? #26940 introduced a bug, and AMD Push CI is not triggered in the past week. See https://github.com/huggingface/transformers/actions/workflows/self-push-amd-mi210-caller.yml
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27732/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27732/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27732", "html_url": "https://github.com/huggingface/transformers/pull/27732", "diff_url": "https://github.com/huggingface/transformers/pull/27732.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27732.patch", "merged_at": 1701160221000 }
https://api.github.com/repos/huggingface/transformers/issues/27731
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27731/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27731/comments
https://api.github.com/repos/huggingface/transformers/issues/27731/events
https://github.com/huggingface/transformers/pull/27731
2,012,630,069
PR_kwDOCUB6oc5gdShK
27,731
Use proper weights name when logging in `PreTrainedModel.save_pretrained`
{ "login": "michaelbenayoun", "id": 25418079, "node_id": "MDQ6VXNlcjI1NDE4MDc5", "avatar_url": "https://avatars.githubusercontent.com/u/25418079?v=4", "gravatar_id": "", "url": "https://api.github.com/users/michaelbenayoun", "html_url": "https://github.com/michaelbenayoun", "followers_url": "https://api.github.com/users/michaelbenayoun/followers", "following_url": "https://api.github.com/users/michaelbenayoun/following{/other_user}", "gists_url": "https://api.github.com/users/michaelbenayoun/gists{/gist_id}", "starred_url": "https://api.github.com/users/michaelbenayoun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/michaelbenayoun/subscriptions", "organizations_url": "https://api.github.com/users/michaelbenayoun/orgs", "repos_url": "https://api.github.com/users/michaelbenayoun/repos", "events_url": "https://api.github.com/users/michaelbenayoun/events{/privacy}", "received_events_url": "https://api.github.com/users/michaelbenayoun/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Could you rebase and make sure CIs are green? ๐Ÿค— ", "I think the issue is fixed in the current `main` branch now.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27731). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,701
1,705
1,705
MEMBER
null
# What does this PR do? It seems that `WEIGHTS_NAME` is used in the log message even when `safe_serialization=True`. This PR fixes that by using `weights_name`, which is defined as follows: ``` if not _hf_peft_config_loaded: weights_name = SAFE_WEIGHTS_NAME if safe_serialization else WEIGHTS_NAME weights_name = _add_variant(weights_name, variant) else: weights_name = ADAPTER_SAFE_WEIGHTS_NAME if safe_serialization else ADAPTER_WEIGHTS_NAME ``` [Link to the code](https://github.com/huggingface/transformers/blob/main/src/transformers/modeling_utils.py#L2189-L2193).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27731/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27731/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27731", "html_url": "https://github.com/huggingface/transformers/pull/27731", "diff_url": "https://github.com/huggingface/transformers/pull/27731.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27731.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/27730
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27730/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27730/comments
https://api.github.com/repos/huggingface/transformers/issues/27730/events
https://github.com/huggingface/transformers/issues/27730
2,012,579,385
I_kwDOCUB6oc539YY5
27,730
Add flag for easily finetuning heads / linear probing to AutoModelforSequenceClassification
{ "login": "0amp", "id": 28636996, "node_id": "MDQ6VXNlcjI4NjM2OTk2", "avatar_url": "https://avatars.githubusercontent.com/u/28636996?v=4", "gravatar_id": "", "url": "https://api.github.com/users/0amp", "html_url": "https://github.com/0amp", "followers_url": "https://api.github.com/users/0amp/followers", "following_url": "https://api.github.com/users/0amp/following{/other_user}", "gists_url": "https://api.github.com/users/0amp/gists{/gist_id}", "starred_url": "https://api.github.com/users/0amp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/0amp/subscriptions", "organizations_url": "https://api.github.com/users/0amp/orgs", "repos_url": "https://api.github.com/users/0amp/repos", "events_url": "https://api.github.com/users/0amp/events{/privacy}", "received_events_url": "https://api.github.com/users/0amp/received_events", "type": "User", "site_admin": false }
[ { "id": 2648621985, "node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request", "name": "Feature request", "color": "FBCA04", "default": false, "description": "Request for a new feature" } ]
open
false
null
[]
[]
1,701
1,704
null
NONE
null
### Feature request Previous work has shown that last layer linear probing is cheaper and often generalizes better than normal finetuning (see [1](https://arxiv.org/pdf/2202.10054.pdf)). I imagine this could be implemented as a flag to AutoModelforSequenceClassification so that only the last layer classification head is trained. I believe this can be done manually by setting all the parameters except the last one to not track gradients, but a flag may be easier and encourage adoption. It may also be nice to have linear probing available at earlier layers (eg halfway through the model). This could be done through using the output_hidden_states flag during a forward pass. Mid layer linear probing can occasionally be more effective and is a widely used technique in the interpretability literature (see [2](https://direct.mit.edu/coli/article/48/1/207/107571/Probing-Classifiers-Promises-Shortcomings-and), [3](https://proceedings.neurips.cc/paper_files/paper/2019/file/159c1ffe5b61b41b3c4d8f4c2150f6c4-Paper.pdf), [4](https://arxiv.org/abs/2311.03658#:~:text=Informally%2C%20the%20%27linear%20representation%20hypothesis,directions%20in%20some%20representation%20space.), [5](https://arxiv.org/abs/2310.01405), and many others). Alternatively, if this were implemented generally for AutoModel (or maybe AutoModelforCausalLM?), it could use a wider variety of models. This feature could also be paired with an update that automatically allows models to be used for sequence classification by appending a final linear layer. ### Motivation Finetuning a head is extremely memory efficient and extremely fast (order of 1k parameters for most models, linear probes generally train in seconds, the main bottleneck will just be the forward pass) and oftentimes performs close to finetuning for classification tasks. It has also been shown to perform better OOD. ### Your contribution I can provide feedback and testing, have not looked deep enough to know how to fully implement this
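For reference, a minimal sketch of what the requested flag would have to do, written against the existing API (checkpoint and label count are placeholders): freeze the backbone so only the classification head receives gradients:

```python
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # placeholders
)

# Freeze every backbone parameter; only the modules outside base_model
# (i.e. the classification head) keep requires_grad=True.
for param in model.base_model.parameters():
    param.requires_grad = False

trainable = [name for name, p in model.named_parameters() if p.requires_grad]
print(trainable)  # typically just the classifier weight and bias
```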
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27730/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27730/timeline
reopened
null
null
https://api.github.com/repos/huggingface/transformers/issues/27729
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27729/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27729/comments
https://api.github.com/repos/huggingface/transformers/issues/27729/events
https://github.com/huggingface/transformers/pull/27729
2,012,556,720
PR_kwDOCUB6oc5gdCbI
27,729
Improve forward signature test
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,701
1,701
1,701
CONTRIBUTOR
null
# What does this PR do? As a follow-up of #27681, I'd like to make sure that every new backbone that is added to the library follows the same API. Hence, this PR extends the `test_forward_signature` test to make sure this is tested. Rather than only checking the first keyword argument, it checks all of them.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27729/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27729/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27729", "html_url": "https://github.com/huggingface/transformers/pull/27729", "diff_url": "https://github.com/huggingface/transformers/pull/27729.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27729.patch", "merged_at": 1701671903000 }
https://api.github.com/repos/huggingface/transformers/issues/27728
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27728/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27728/comments
https://api.github.com/repos/huggingface/transformers/issues/27728/events
https://github.com/huggingface/transformers/issues/27728
2,012,537,779
I_kwDOCUB6oc539OOz
27,728
`hub_strategy`'s documentation for `checkpoint` option is wrong and misleading
{ "login": "omermazig", "id": 95534441, "node_id": "U_kgDOBbG9aQ", "avatar_url": "https://avatars.githubusercontent.com/u/95534441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/omermazig", "html_url": "https://github.com/omermazig", "followers_url": "https://api.github.com/users/omermazig/followers", "following_url": "https://api.github.com/users/omermazig/following{/other_user}", "gists_url": "https://api.github.com/users/omermazig/gists{/gist_id}", "starred_url": "https://api.github.com/users/omermazig/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/omermazig/subscriptions", "organizations_url": "https://api.github.com/users/omermazig/orgs", "repos_url": "https://api.github.com/users/omermazig/repos", "events_url": "https://api.github.com/users/omermazig/events{/privacy}", "received_events_url": "https://api.github.com/users/omermazig/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "A PR to the docs would be great, I also believe that if we use `save_total_limit`, it should take into account *not* deleting the one that has the best metric, and looking at the oldest scale to delete before that. I can look into this if it's a bit too complex on your end ๐Ÿ˜„ ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,701
1,704
1,704
NONE
null
### System Info The documentation for [hub_strategy](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments.hub_strategy) seems to be mistaken, because it stated: > "checkpoint": like "every_save" but the latest checkpoint is also pushed in a subfolder named last-checkpoint, allowing you to resume training easily with trainer.train(resume_from_checkpoint="last-checkpoint"). But when I did that, I got: > ValueError: 'videomae-finetuned/checkpoint-3000' is not in list Where checkpoint 3000 was my **best** checkpoint (according to my `metric_for_best_model`) So for `resume_from_checkpoint` to work, I found that I need to have both the last checkpoint AND the best checkpoint available in the output dir of the `Trainer`, and pass resume_from_checkpoint=True. ### Who can help? @muellerzr and @pacman100 ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction On any training process using the trainer, pass `hub_strategy="checkpoint"`, get the saved checkpoint from the hub, and run: `trainer.train(resume_from_checkpoint="local/path/to/checkpoint"` ### Expected behavior Other than fixing the docs, I think there should be a "best_and_last" option, which updates both the best (if changed) and last checkpoints in the hub, and that way really support resuming from a checkpoint. A more complex solution would be to incorporate it with the `save_total_limit` mechanism - since both are "keeping" checkpoints - but that's a bigger change.
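A hedged sketch of the workaround described above; the model, datasets and metric name are placeholders assumed to be defined elsewhere, and the resume call relies on the pushed `last-checkpoint` folder having been copied back into the output directory as a regular `checkpoint-<step>` folder:

```python
from transformers import TrainingArguments, Trainer

args = TrainingArguments(
    output_dir="videomae-finetuned",
    push_to_hub=True,
    hub_strategy="checkpoint",          # also pushes last-checkpoint/ on every save
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,
    metric_for_best_model="accuracy",   # placeholder metric
)

# model, train_ds and eval_ds are assumed to be defined elsewhere.
trainer = Trainer(model=model, args=args, train_dataset=train_ds, eval_dataset=eval_ds)

# Passing True lets the Trainer discover the newest checkpoint-* folder in
# output_dir instead of being pointed at a specific (possibly missing) one.
trainer.train(resume_from_checkpoint=True)
```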
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27728/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27728/timeline
completed
null
null
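The issue above boils down to a resume pattern that the current docs do not spell out. The sketch below is a hedged illustration of the combination the reporter found to work, not the documentation fix itself: the output directory name is taken from the report, while `model`, `train_dataset` and the save settings are placeholders assumed to be defined elsewhere.

```python
# Minimal sketch (assumptions: `model` and `train_dataset` exist in scope).
from transformers import Trainer, TrainingArguments

args = TrainingArguments(
    output_dir="videomae-finetuned",   # local dir, also used as the Hub repo name
    push_to_hub=True,
    hub_strategy="checkpoint",         # mirrors the latest checkpoint to last-checkpoint/
    save_strategy="steps",
    save_steps=500,
)
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)

# Passing a bare folder name raised the ValueError quoted above; what worked in
# practice was keeping the checkpoint folders inside output_dir and letting the
# Trainer locate the latest one itself.
trainer.train(resume_from_checkpoint=True)
```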
https://api.github.com/repos/huggingface/transformers/issues/27727
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27727/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27727/comments
https://api.github.com/repos/huggingface/transformers/issues/27727/events
https://github.com/huggingface/transformers/issues/27727
2,012,212,416
I_kwDOCUB6oc537-zA
27,727
Streaming support in automatic-speech-recognition pipeline
{ "login": "CoderHam", "id": 11223643, "node_id": "MDQ6VXNlcjExMjIzNjQz", "avatar_url": "https://avatars.githubusercontent.com/u/11223643?v=4", "gravatar_id": "", "url": "https://api.github.com/users/CoderHam", "html_url": "https://github.com/CoderHam", "followers_url": "https://api.github.com/users/CoderHam/followers", "following_url": "https://api.github.com/users/CoderHam/following{/other_user}", "gists_url": "https://api.github.com/users/CoderHam/gists{/gist_id}", "starred_url": "https://api.github.com/users/CoderHam/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/CoderHam/subscriptions", "organizations_url": "https://api.github.com/users/CoderHam/orgs", "repos_url": "https://api.github.com/users/CoderHam/repos", "events_url": "https://api.github.com/users/CoderHam/events{/privacy}", "received_events_url": "https://api.github.com/users/CoderHam/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey! This seems to already be supported see this: #21196 does this not work for you problem? ", "Shoot I wasn't aware this is already a feature. Yes I can make it work for me with the changes from that PR\r\n\r\nClosing this ticket but if there are official docs for the above I'd appreciate a pointer to it. Thank you for your quick response ๐Ÿ˜" ]
1,701
1,701
1,701
NONE
null
### Feature request I'd like to request for the ability to stream back chunks of audio transcripts instead of having to wait for the entire audio to be processed. For real time use cases, it helps to be able to have small chunks of audio (10 seconds) and have their transcriptions returned as soon as they are available. Think live closed captioning with whisper. This will open up the real world applications that we can use whisper and similar models for. ### Motivation When I was working on transcribing long form videos of lectures/talks for a personal project I noticed that I need to wait for the entire audio to be transcribed before I can pass the same down to a downstream application. **Details of my use case / project:** * Take a video as input * Play the video with closed captions using whisper * During the video if the user asks a question, pause the lecture and use an LLM (plus some custom logic) to answer the users question. * Resume the video after answering the user's question I am concerned that breaking the audio into 10 second pieces will have lower throughput and underutilize the GPU. Is there something I have not considered here? ### Your contribution I have dug through the code and narrowed the location of the bulk of the changes down to the [following lines](https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/automatic_speech_recognition.py#L681-L693) in the pipeline for "automatic-speech-recognition" Even though the outputs are accumulated much earlier and appended to the `chunks` object, we only get the results once the audio is completely processed. Using a yield operator or a similar solution will allow us to return the processed chunks as soon as they are available. I would be happy to make a PR for this is functionality folks are interested. For now I have a messy hack that updates the ASR pipeline to fit my needs.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27727/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27727/timeline
completed
null
null
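For the streaming request above (closed as already supported via #21196), a user-side generator is the simplest way to get partial transcripts today. The sketch below is an illustration under assumptions rather than a built-in pipeline feature: the model name, the 10-second window and the dummy input are arbitrary choices. Batching the windows with `batch_size` is one way to address the throughput concern raised in the issue.

```python
import numpy as np
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="openai/whisper-tiny")

def stream_transcripts(audio: np.ndarray, sampling_rate: int, window_s: float = 10.0):
    """Yield a transcript for each fixed-size window as soon as it is ready."""
    step = int(window_s * sampling_rate)
    for start in range(0, len(audio), step):
        window = audio[start : start + step]
        # Each call returns {"text": ...}; yielding here lets downstream code
        # (live captions, an LLM hand-off, ...) consume partial results early.
        yield asr({"raw": window, "sampling_rate": sampling_rate})["text"]

if __name__ == "__main__":
    dummy = np.zeros(16_000 * 30, dtype=np.float32)  # 30 s placeholder signal
    for piece in stream_transcripts(dummy, sampling_rate=16_000):
        print(piece)
```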
https://api.github.com/repos/huggingface/transformers/issues/27726
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27726/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27726/comments
https://api.github.com/repos/huggingface/transformers/issues/27726/events
https://github.com/huggingface/transformers/issues/27726
2,012,080,747
I_kwDOCUB6oc537epr
27,726
How to load PixArtAlphaPipeline in 8bit?
{ "login": "FurkanGozukara", "id": 19240467, "node_id": "MDQ6VXNlcjE5MjQwNDY3", "avatar_url": "https://avatars.githubusercontent.com/u/19240467?v=4", "gravatar_id": "", "url": "https://api.github.com/users/FurkanGozukara", "html_url": "https://github.com/FurkanGozukara", "followers_url": "https://api.github.com/users/FurkanGozukara/followers", "following_url": "https://api.github.com/users/FurkanGozukara/following{/other_user}", "gists_url": "https://api.github.com/users/FurkanGozukara/gists{/gist_id}", "starred_url": "https://api.github.com/users/FurkanGozukara/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/FurkanGozukara/subscriptions", "organizations_url": "https://api.github.com/users/FurkanGozukara/orgs", "repos_url": "https://api.github.com/users/FurkanGozukara/repos", "events_url": "https://api.github.com/users/FurkanGozukara/events{/privacy}", "received_events_url": "https://api.github.com/users/FurkanGozukara/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello, @sayakpaul and @SunMarc should know more about this. Untagging myself. Thank you.", "You need to point to the exact line of code that breaks stuff and I am unable to see what that is. \r\n\r\n```python\r\ntext_encoder = T5EncoderModel.from_pretrained(\r\n \"PixArt-alpha/PixArt-XL-2-1024-MS\",\r\n subfolder=\"text_encoder\",\r\n load_in_8bit=True,\r\n device_map=\"auto\",\r\n\r\n)\r\n```\r\n\r\nis totally possible given `bitsandbytes` is installed. ", "> You need to point to the exact line of code that breaks stuff and I am unable to see what that is.\r\n> \r\n> ```python\r\n> text_encoder = T5EncoderModel.from_pretrained(\r\n> \"PixArt-alpha/PixArt-XL-2-1024-MS\",\r\n> subfolder=\"text_encoder\",\r\n> load_in_8bit=True,\r\n> device_map=\"auto\",\r\n> \r\n> )\r\n> ```\r\n> \r\n> is totally possible given `bitsandbytes` is installed.\r\n\r\nthank you for reply\r\n\r\nbitsandbytes properly installed\r\n\r\neverything loads no error\r\n\r\nduring inference exactly breaking line like this\r\n\r\n **File \"G:\\pixArt installer\\PixArt-alpha\\app_8bit.py\", line 176, in generate**\r\n\r\nthe real error happens in below\r\n\r\n```\r\n File \"G:\\pixArt installer\\PixArt-alpha\\venv\\lib\\site-packages\\torch\\utils\\_contextlib.py\", line 115, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"G:\\pixArt installer\\PixArt-alpha\\venv\\lib\\site-packages\\diffusers\\pipelines\\pixart_alpha\\pipeline_pixart_alpha.py\", line 731, in __call__\r\n ASPECT_RATIO_1024_BIN if self.transformer.config.sample_size == 128 else ASPECT_RATIO_512_BIN\r\nAttributeError: 'NoneType' object has no attribute 'config'\r\n```\r\n\r\nuse_resolution_binning is set True\r\n\r\nhere full code\r\n\r\n```\r\n\r\nif torch.cuda.is_available():\r\n\ttext_encoder = T5EncoderModel.from_pretrained(\r\n\t\t\"PixArt-alpha/PixArt-XL-2-1024-MS\",\r\n\t\tsubfolder=\"text_encoder\",\r\n\t\tload_in_8bit=True,\r\n\t\tdevice_map=\"auto\",\r\n\r\n\t)\r\n\tpipe = PixArtAlphaPipeline.from_pretrained(\r\n\t\t\"PixArt-alpha/PixArt-XL-2-1024-MS\",\r\n\t\ttext_encoder=text_encoder,\r\n\t\ttransformer=None,\r\n\t\tdevice_map=\"auto\"\r\n)\r\n\r\n\r\ndef create_output_folders():\r\n base_dir = \"outputs\"\r\n today = datetime.now().strftime(\"%Y-%m-%d\")\r\n folder_path = os.path.join(base_dir, today)\r\n if not os.path.exists(folder_path):\r\n os.makedirs(folder_path)\r\n return folder_path\r\n\r\n# Modified save_image function\r\ndef save_image(img):\r\n folder_path = create_output_folders()\r\n unique_name = str(uuid.uuid4()) + \".png\"\r\n file_path = os.path.join(folder_path, unique_name)\r\n img.save(file_path)\r\n return file_path\r\n\r\n# Modified randomize_seed_fn function\r\ndef randomize_seed_fn(seed: int, randomize_seed: bool) -> int:\r\n if randomize_seed:\r\n seed = random.randint(0, MAX_SEED)\r\n return seed\r\n\r\n# Modified generate function to include batch count\r\ndef generate(\r\n prompt: str,\r\n negative_prompt: str = \"\",\r\n style: str = DEFAULT_STYLE_NAME,\r\n use_negative_prompt: bool = False,\r\n seed: int = 0,\r\n width: int = 1024,\r\n height: int = 1024,\r\n schedule: str = DEFAULT_SCHEDULE_NAME,\r\n dpms_guidance_scale: float = 4.5,\r\n sas_guidance_scale: float = 3,\r\n dpms_inference_steps: int = 20,\r\n sas_inference_steps: int = 25,\r\n randomize_seed: bool = False,\r\n batch_count: str = \"1\",\r\n use_resolution_binning: bool = True,\r\n progress=gr.Progress(track_tqdm=True),\r\n):\r\n image_paths = []\r\n print(f\"batch_count {batch_count}\")\r\n batch_count_int = int(batch_count)\r\n 
for _ in range(batch_count_int):\r\n seed = int(randomize_seed_fn(seed, randomize_seed))\r\n generator = torch.Generator().manual_seed(seed)\r\n\r\n if schedule == 'DPM-Solver':\r\n if not isinstance(pipe.scheduler, DPMSolverMultistepScheduler):\r\n pipe.scheduler = DPMSolverMultistepScheduler()\r\n num_inference_steps = dpms_inference_steps\r\n guidance_scale = dpms_guidance_scale\r\n elif schedule == \"SA-Solver\":\r\n if not isinstance(pipe.scheduler, SASolverScheduler):\r\n pipe.scheduler = SASolverScheduler.from_config(pipe.scheduler.config, algorithm_type='data_prediction', tau_func=lambda t: 1 if 200 <= t <= 800 else 0, predictor_order=2, corrector_order=2)\r\n num_inference_steps = sas_inference_steps\r\n guidance_scale = sas_guidance_scale\r\n else:\r\n raise ValueError(f\"Unknown schedule: {schedule}\")\r\n\r\n if not use_negative_prompt:\r\n negative_prompt = None # type: ignore\r\n prompt, negative_prompt = apply_style(style, prompt, negative_prompt)\r\n\r\n images = pipe(\r\n prompt=prompt,\r\n width=width,\r\n height=height,\r\n guidance_scale=guidance_scale,\r\n num_inference_steps=num_inference_steps,\r\n generator=generator,\r\n num_images_per_prompt=NUM_IMAGES_PER_PROMPT,\r\n use_resolution_binning=use_resolution_binning,\r\n output_type=\"pil\",\r\n ).images\r\n\r\n image_paths.extend([save_image(img) for img in images])\r\n\r\n return image_paths, seed\r\n```\r\n\r\nThe error of above code is shown below\r\n\r\n```\r\nTo create a public link, set `share=True` in `launch()`.\r\nbatch_count 1\r\nTraceback (most recent call last):\r\n File \"G:\\pixArt installer\\PixArt-alpha\\venv\\lib\\site-packages\\gradio\\queueing.py\", line 427, in call_prediction\r\n output = await route_utils.call_process_api(\r\n File \"G:\\pixArt installer\\PixArt-alpha\\venv\\lib\\site-packages\\gradio\\route_utils.py\", line 232, in call_process_api\r\n output = await app.get_blocks().process_api(\r\n File \"G:\\pixArt installer\\PixArt-alpha\\venv\\lib\\site-packages\\gradio\\blocks.py\", line 1484, in process_api\r\n result = await self.call_function(\r\n File \"G:\\pixArt installer\\PixArt-alpha\\venv\\lib\\site-packages\\gradio\\blocks.py\", line 1106, in call_function\r\n prediction = await anyio.to_thread.run_sync(\r\n File \"G:\\pixArt installer\\PixArt-alpha\\venv\\lib\\site-packages\\anyio\\to_thread.py\", line 33, in run_sync\r\n return await get_asynclib().run_sync_in_worker_thread(\r\n File \"G:\\pixArt installer\\PixArt-alpha\\venv\\lib\\site-packages\\anyio\\_backends\\_asyncio.py\", line 877, in run_sync_in_worker_thread\r\n return await future\r\n File \"G:\\pixArt installer\\PixArt-alpha\\venv\\lib\\site-packages\\anyio\\_backends\\_asyncio.py\", line 807, in run\r\n result = context.run(func, *args)\r\n File \"G:\\pixArt installer\\PixArt-alpha\\venv\\lib\\site-packages\\gradio\\utils.py\", line 665, in wrapper\r\n response = f(*args, **kwargs)\r\n File \"G:\\pixArt installer\\PixArt-alpha\\venv\\lib\\site-packages\\gradio\\utils.py\", line 665, in wrapper\r\n response = f(*args, **kwargs)\r\n File \"G:\\pixArt installer\\PixArt-alpha\\app_8bit.py\", line 176, in generate\r\n images = pipe(\r\n File \"G:\\pixArt installer\\PixArt-alpha\\venv\\lib\\site-packages\\torch\\utils\\_contextlib.py\", line 115, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"G:\\pixArt installer\\PixArt-alpha\\venv\\lib\\site-packages\\diffusers\\pipelines\\pixart_alpha\\pipeline_pixart_alpha.py\", line 731, in __call__\r\n ASPECT_RATIO_1024_BIN if self.transformer.config.sample_size 
== 128 else ASPECT_RATIO_512_BIN\r\nAttributeError: 'NoneType' object has no attribute 'config'\r\nTraceback (most recent call last):\r\n File \"G:\\pixArt installer\\PixArt-alpha\\venv\\lib\\site-packages\\gradio\\queueing.py\", line 427, in call_prediction\r\n output = await route_utils.call_process_api(\r\n File \"G:\\pixArt installer\\PixArt-alpha\\venv\\lib\\site-packages\\gradio\\route_utils.py\", line 232, in call_process_api\r\n output = await app.get_blocks().process_api(\r\n File \"G:\\pixArt installer\\PixArt-alpha\\venv\\lib\\site-packages\\gradio\\blocks.py\", line 1484, in process_api\r\n result = await self.call_function(\r\n File \"G:\\pixArt installer\\PixArt-alpha\\venv\\lib\\site-packages\\gradio\\blocks.py\", line 1106, in call_function\r\n prediction = await anyio.to_thread.run_sync(\r\n File \"G:\\pixArt installer\\PixArt-alpha\\venv\\lib\\site-packages\\anyio\\to_thread.py\", line 33, in run_sync\r\n return await get_asynclib().run_sync_in_worker_thread(\r\n File \"G:\\pixArt installer\\PixArt-alpha\\venv\\lib\\site-packages\\anyio\\_backends\\_asyncio.py\", line 877, in run_sync_in_worker_thread\r\n return await future\r\n File \"G:\\pixArt installer\\PixArt-alpha\\venv\\lib\\site-packages\\anyio\\_backends\\_asyncio.py\", line 807, in run\r\n result = context.run(func, *args)\r\n File \"G:\\pixArt installer\\PixArt-alpha\\venv\\lib\\site-packages\\gradio\\utils.py\", line 665, in wrapper\r\n response = f(*args, **kwargs)\r\n File \"G:\\pixArt installer\\PixArt-alpha\\venv\\lib\\site-packages\\gradio\\utils.py\", line 665, in wrapper\r\n response = f(*args, **kwargs)\r\n File \"G:\\pixArt installer\\PixArt-alpha\\app_8bit.py\", line 176, in generate\r\n images = pipe(\r\n File \"G:\\pixArt installer\\PixArt-alpha\\venv\\lib\\site-packages\\torch\\utils\\_contextlib.py\", line 115, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"G:\\pixArt installer\\PixArt-alpha\\venv\\lib\\site-packages\\diffusers\\pipelines\\pixart_alpha\\pipeline_pixart_alpha.py\", line 731, in __call__\r\n ASPECT_RATIO_1024_BIN if self.transformer.config.sample_size == 128 else ASPECT_RATIO_512_BIN\r\nAttributeError: 'NoneType' object has no attribute 'config'\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"G:\\pixArt installer\\PixArt-alpha\\venv\\lib\\site-packages\\gradio\\queueing.py\", line 472, in process_events\r\n response = await self.call_prediction(awake_events, batch)\r\n File \"G:\\pixArt installer\\PixArt-alpha\\venv\\lib\\site-packages\\gradio\\queueing.py\", line 436, in call_prediction\r\n raise Exception(str(error) if show_error else None) from error\r\nException: None\r\n```\r\n", "@sayakpaul sorry i had given incorrect code above (it was working one) fixed now", "The error happens on this file : https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pixart_alpha/pipeline_pixart_alpha.py\r\n\r\nLine 731\r\n\r\n```\r\n if use_resolution_binning:\r\n aspect_ratio_bin = (\r\n ASPECT_RATIO_1024_BIN if self.transformer.config.sample_size == 128 else ASPECT_RATIO_512_BIN\r\n )\r\n orig_height, orig_width = height, width\r\n height, width = self.classify_height_width_bin(height, width, ratios=aspect_ratio_bin)\r\n```\r\n\r\nWhen use_resolution_binning is disabled a new error occurs\r\n\r\n```\r\n File \"G:\\pixArt installer\\PixArt-alpha\\venv\\lib\\site-packages\\diffusers\\pipelines\\pixart_alpha\\pipeline_pixart_alpha.py\", line 790, in __call__\r\n latent_channels 
= self.transformer.config.in_channels\r\nAttributeError: 'NoneType' object has no attribute 'config'\r\n```\r\n\r\nLine 790\r\n\r\n```\r\n # 5. Prepare latents.\r\n latent_channels = self.transformer.config.in_channels\r\n latents = self.prepare_latents(\r\n```\r\n\r\n**So basically when I try 8 bit there is no self.transformer .** \r\n\r\nok now i understand my error\r\n\r\nyou are loading without transformer computing latents deleting and reloading \r\n\r\nwow this would be so slow i assume\r\n\r\n**also your example code deletes and loads everything for every image is that really expected?** \r\n\r\nit would be immensely slow\r\n\r\n====\r\n\r\nok i made it work but still using 13,989 VRAM compared to 16,222 VRAM :)\r\n\r\nI guess only way to work it with low VRAM is keep deleting and loading again which would be super slow. thank you", "ok here comparison\r\n\r\n16 bit \r\n\r\n![image](https://github.com/huggingface/transformers/assets/19240467/e740037f-4d3c-4dbb-bdf2-5e0273df397d)\r\n\r\n8 bit text encoder\r\n\r\n![image](https://github.com/huggingface/transformers/assets/19240467/c1869d8c-6e60-4734-b92c-ad95dced3205)\r\n\r\n16 bit 512 model\r\n\r\n![image](https://github.com/huggingface/transformers/assets/19240467/a52ee303-8796-4002-a730-298f955275a6)\r\n\r\n8 bit 512 model\r\n\r\n![image](https://github.com/huggingface/transformers/assets/19240467/925b8ea4-6520-484b-88a3-a370356cde40)\r\n\r\n512 model and 1024 model VRAM usages are almost same :(\r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,701
1,704
1,704
NONE
null
I know there is example but I couldn't make it work. I am trying to make an auto installer and gradio interface for Pix Art Alpha Pipeline so common people can install and use on their Windows PCs Currently my below code working and I want to make it load in 8 bit is that possible? ``` if torch.cuda.is_available(): pipe = PixArtAlphaPipeline.from_pretrained( "PixArt-alpha/PixArt-XL-2-1024-MS", torch_dtype=torch.float16, use_safetensors=True, ) if ENABLE_CPU_OFFLOAD: pipe.enable_model_cpu_offload() else: pipe.to(device) print("Loaded on Device!") # speed-up T5 pipe.text_encoder.to_bettertransformer() if USE_TORCH_COMPILE: pipe.transformer = torch.compile(pipe.transformer, mode="reduce-overhead", fullgraph=True) print("Model Compiled!") ``` ``` seed = int(randomize_seed_fn(seed, randomize_seed)) generator = torch.Generator().manual_seed(seed) if schedule == 'DPM-Solver': if not isinstance(pipe.scheduler, DPMSolverMultistepScheduler): pipe.scheduler = DPMSolverMultistepScheduler() num_inference_steps = dpms_inference_steps guidance_scale = dpms_guidance_scale elif schedule == "SA-Solver": if not isinstance(pipe.scheduler, SASolverScheduler): pipe.scheduler = SASolverScheduler.from_config(pipe.scheduler.config, algorithm_type='data_prediction', tau_func=lambda t: 1 if 200 <= t <= 800 else 0, predictor_order=2, corrector_order=2) num_inference_steps = sas_inference_steps guidance_scale = sas_guidance_scale else: raise ValueError(f"Unknown schedule: {schedule}") if not use_negative_prompt: negative_prompt = None # type: ignore prompt, negative_prompt = apply_style(style, prompt, negative_prompt) images = pipe( prompt=prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps, generator=generator, num_images_per_prompt=NUM_IMAGES_PER_PROMPT, use_resolution_binning=use_resolution_binning, output_type="pil", ).images ``` ### Who can help? @sayakpaul @Narsil @SunMarc @younesbelkada @gante I tried below but it broken the app ``` text_encoder = T5EncoderModel.from_pretrained( "PixArt-alpha/PixArt-XL-2-1024-MS", subfolder="text_encoder", load_in_8bit=True, device_map="auto", ) pipe = PixArtAlphaPipeline.from_pretrained( "PixArt-alpha/PixArt-XL-2-1024-MS", text_encoder=text_encoder, transformer=None, device_map="auto" ) ``` The error I am getting is like below ``` Downloading shards: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 2/2 [00:00<?, ?it/s] bin G:\pixArt installer\PixArt-alpha\venv\lib\site-packages\bitsandbytes\libbitsandbytes_cuda118.dll Loading checkpoint shards: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 2/2 [00:06<00:00, 3.09s/it] Loading pipeline components...: 0%| | 0/4 [00:00<?, ?it/s]Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained. 
Loading pipeline components...: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 4/4 [00:00<00:00, 9.50it/s] Running on local URL: http://127.0.0.1:7860 To create a public link, set `share=True` in `launch()`. batch_count 1 Traceback (most recent call last): File "G:\pixArt installer\PixArt-alpha\venv\lib\site-packages\gradio\queueing.py", line 427, in call_prediction output = await route_utils.call_process_api( File "G:\pixArt installer\PixArt-alpha\venv\lib\site-packages\gradio\route_utils.py", line 232, in call_process_api output = await app.get_blocks().process_api( File "G:\pixArt installer\PixArt-alpha\venv\lib\site-packages\gradio\blocks.py", line 1484, in process_api result = await self.call_function( File "G:\pixArt installer\PixArt-alpha\venv\lib\site-packages\gradio\blocks.py", line 1106, in call_function prediction = await anyio.to_thread.run_sync( File "G:\pixArt installer\PixArt-alpha\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync return await get_asynclib().run_sync_in_worker_thread( File "G:\pixArt installer\PixArt-alpha\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread return await future File "G:\pixArt installer\PixArt-alpha\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run result = context.run(func, *args) File "G:\pixArt installer\PixArt-alpha\venv\lib\site-packages\gradio\utils.py", line 665, in wrapper response = f(*args, **kwargs) File "G:\pixArt installer\PixArt-alpha\venv\lib\site-packages\gradio\utils.py", line 665, in wrapper response = f(*args, **kwargs) File "G:\pixArt installer\PixArt-alpha\app.py", line 176, in generate images = pipe( File "G:\pixArt installer\PixArt-alpha\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context return func(*args, **kwargs) File "G:\pixArt installer\PixArt-alpha\venv\lib\site-packages\diffusers\pipelines\pixart_alpha\pipeline_pixart_alpha.py", line 731, in __call__ ASPECT_RATIO_1024_BIN if self.transformer.config.sample_size == 128 else ASPECT_RATIO_512_BIN AttributeError: 'NoneType' object has no attribute 'config' Traceback (most recent call last): File "G:\pixArt installer\PixArt-alpha\venv\lib\site-packages\gradio\queueing.py", line 427, in call_prediction output = await route_utils.call_process_api( File "G:\pixArt installer\PixArt-alpha\venv\lib\site-packages\gradio\route_utils.py", line 232, in call_process_api output = await app.get_blocks().process_api( File "G:\pixArt installer\PixArt-alpha\venv\lib\site-packages\gradio\blocks.py", line 1484, in process_api result = await self.call_function( File "G:\pixArt installer\PixArt-alpha\venv\lib\site-packages\gradio\blocks.py", line 1106, in call_function prediction = await anyio.to_thread.run_sync( File "G:\pixArt installer\PixArt-alpha\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync return await get_asynclib().run_sync_in_worker_thread( File "G:\pixArt installer\PixArt-alpha\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread return await future File "G:\pixArt installer\PixArt-alpha\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run result = context.run(func, *args) File "G:\pixArt installer\PixArt-alpha\venv\lib\site-packages\gradio\utils.py", line 665, in wrapper response = f(*args, **kwargs) 
File "G:\pixArt installer\PixArt-alpha\venv\lib\site-packages\gradio\utils.py", line 665, in wrapper response = f(*args, **kwargs) File "G:\pixArt installer\PixArt-alpha\app.py", line 176, in generate images = pipe( File "G:\pixArt installer\PixArt-alpha\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context return func(*args, **kwargs) File "G:\pixArt installer\PixArt-alpha\venv\lib\site-packages\diffusers\pipelines\pixart_alpha\pipeline_pixart_alpha.py", line 731, in __call__ ASPECT_RATIO_1024_BIN if self.transformer.config.sample_size == 128 else ASPECT_RATIO_512_BIN AttributeError: 'NoneType' object has no attribute 'config' The above exception was the direct cause of the following exception: Traceback (most recent call last): File "G:\pixArt installer\PixArt-alpha\venv\lib\site-packages\gradio\queueing.py", line 472, in process_events response = await self.call_prediction(awake_events, batch) File "G:\pixArt installer\PixArt-alpha\venv\lib\site-packages\gradio\queueing.py", line 436, in call_prediction raise Exception(str(error) if show_error else None) from error Exception: None ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27726/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27726/timeline
completed
null
null
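What eventually worked in the thread above was keeping the transformer inside the pipeline and quantizing only the T5 text encoder; passing `transformer=None` is only meant for a separate prompt-encoding stage, which is why `self.transformer` was `None` at inference time. The sketch below reflects that working setup and is hedged: the prompt, step count and the `.to("cuda")` placement are assumptions, and `bitsandbytes` must be installed.

```python
import torch
from diffusers import PixArtAlphaPipeline
from transformers import T5EncoderModel

# 8-bit text encoder (the big memory consumer); transformer and VAE stay in fp16.
text_encoder = T5EncoderModel.from_pretrained(
    "PixArt-alpha/PixArt-XL-2-1024-MS",
    subfolder="text_encoder",
    load_in_8bit=True,
    device_map="auto",
)
pipe = PixArtAlphaPipeline.from_pretrained(
    "PixArt-alpha/PixArt-XL-2-1024-MS",
    text_encoder=text_encoder,
    torch_dtype=torch.float16,
)
pipe.to("cuda")  # the 8-bit encoder stays where accelerate placed it

image = pipe("a small cactus wearing a straw hat", num_inference_steps=20).images[0]
image.save("pixart.png")
```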
https://api.github.com/repos/huggingface/transformers/issues/27725
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27725/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27725/comments
https://api.github.com/repos/huggingface/transformers/issues/27725/events
https://github.com/huggingface/transformers/pull/27725
2,012,055,894
PR_kwDOCUB6oc5gbVD8
27,725
Fix oneformer instance segmentation RuntimeError
{ "login": "yhshin11", "id": 5031800, "node_id": "MDQ6VXNlcjUwMzE4MDA=", "avatar_url": "https://avatars.githubusercontent.com/u/5031800?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yhshin11", "html_url": "https://github.com/yhshin11", "followers_url": "https://api.github.com/users/yhshin11/followers", "following_url": "https://api.github.com/users/yhshin11/following{/other_user}", "gists_url": "https://api.github.com/users/yhshin11/gists{/gist_id}", "starred_url": "https://api.github.com/users/yhshin11/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yhshin11/subscriptions", "organizations_url": "https://api.github.com/users/yhshin11/orgs", "repos_url": "https://api.github.com/users/yhshin11/repos", "events_url": "https://api.github.com/users/yhshin11/events{/privacy}", "received_events_url": "https://api.github.com/users/yhshin11/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27725). All of your documentation changes will be reflected on that endpoint." ]
1,701
1,701
1,701
CONTRIBUTOR
null
# What does this PR do? Fixes RuntimeError when running on GPU due to inconsistent devices in OneFormerImageProcessor. Same issue was fixed in [image_processing_mask2former.py](https://github.com/huggingface/transformers/blame/514de24abfd4416aeba6a6455ad5920f57f3567d/src/transformers/models/mask2former/image_processing_mask2former.py#L1071) but not in image_processing_oneformer.py
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27725/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27725/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27725", "html_url": "https://github.com/huggingface/transformers/pull/27725", "diff_url": "https://github.com/huggingface/transformers/pull/27725.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27725.patch", "merged_at": 1701093599000 }
https://api.github.com/repos/huggingface/transformers/issues/27723
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27723/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27723/comments
https://api.github.com/repos/huggingface/transformers/issues/27723/events
https://github.com/huggingface/transformers/pull/27723
2,011,968,032
PR_kwDOCUB6oc5gbBta
27,723
[Table Transformer] Convert more checkpoints
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27723). All of your documentation changes will be reflected on that endpoint.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,701
1,707
null
CONTRIBUTOR
null
# What does this PR do? Microsoft released some new Table Transformer (TATR) checkpoints, I've converted them and pushed them to the hub: * https://huggingface.co/microsoft/table-transformer-structure-recognition-v1.1-fin * https://huggingface.co/microsoft/table-transformer-structure-recognition-v1.1-pub * https://huggingface.co/microsoft/table-transformer-structure-recognition-v1.1-all
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27723/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27723/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27723", "html_url": "https://github.com/huggingface/transformers/pull/27723", "diff_url": "https://github.com/huggingface/transformers/pull/27723.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27723.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/27722
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27722/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27722/comments
https://api.github.com/repos/huggingface/transformers/issues/27722/events
https://github.com/huggingface/transformers/issues/27722
2,011,952,478
I_kwDOCUB6oc536_Ve
27,722
Adding support for prompt lookup decoding (variant of assisted generation)
{ "login": "apoorvumang", "id": 1957903, "node_id": "MDQ6VXNlcjE5NTc5MDM=", "avatar_url": "https://avatars.githubusercontent.com/u/1957903?v=4", "gravatar_id": "", "url": "https://api.github.com/users/apoorvumang", "html_url": "https://github.com/apoorvumang", "followers_url": "https://api.github.com/users/apoorvumang/followers", "following_url": "https://api.github.com/users/apoorvumang/following{/other_user}", "gists_url": "https://api.github.com/users/apoorvumang/gists{/gist_id}", "starred_url": "https://api.github.com/users/apoorvumang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/apoorvumang/subscriptions", "organizations_url": "https://api.github.com/users/apoorvumang/orgs", "repos_url": "https://api.github.com/users/apoorvumang/repos", "events_url": "https://api.github.com/users/apoorvumang/events{/privacy}", "received_events_url": "https://api.github.com/users/apoorvumang/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "tagging @gante since you have recently worked on lookahead decoding", "Hi @apoorvumang ๐Ÿ‘‹ \r\n\r\nFirst of all, thank you for creating this clever strategy and for sharing it openly! It's simple and elegant, which makes it really great.\r\n\r\nI've been thinking about it from a software point of view. The core functionality (`find_candidate_pred_tokens`) is simple, and it reuses the core of `assisted_generation`. I'm also seeing more techniques that speed up LLMs through the generation of candidate sequences. As such, here's my proposal:\r\n1. I'll open a PR today to refactor the contents of `assisted_generation` into a generalist decoding technique that accepts an arbitrary function to generate candidates. `assisted_generation` would be a variant of this function, as is your technique.\r\n2. In parallel, you can work to add your technique on top of the generalist decoding technique with candidates:\r\n a. You'll have to define the controlling parameters of your technique in the `GenerationConfig` class, defaulting to `None`\r\n b. When the parameters above are non-`None`, your technique would get triggered in `generate`, using the same pattern\r\n c. After the code is in a nearly ready state, we'll do some benchmarks over different tasks and share in social media\r\n \r\nDoes it sound good to you? ๐Ÿค— \r\n\r\n(LMK if you'd like further pointers)", "Sounds good! I will try to read up and implement it\r\n\r\nAgree on the `assisted_generation` refactoring as well - maybe we could even have user provided `assistant_function`? (but that's a software decision I'm not qualified to make)", "@apoorvumang #27750 has the new `assisted_decoding` structure. It is still subject to review, but you can now have a concrete idea of what I had in mind :)\r\n\r\nAdding your technique on top of it should be straightforward!", "Thanks @gante ! Will look into the the refactored code now. I think I should be able to get something running by tonight (IST)", "I have made a working implementation here, based off of #27750 : https://github.com/apoorvumang/transformers/tree/prompt_lookup_decoding . Should I start a PR with it?", "Also, if you suggest any benchmarks/benchmarking code, I can help with that. I have access to A100 40GB GPU and M1 Max 32GB @gante ", "@apoorvumang Yes, open a PR! I can add a few suggestions even before #27750 is merged :)\r\n\r\nMy advice for benchmarks would be the following: users love it when a certain method works well with little to no hyperparameters. At the moment, I see two hyperparameters -- `prompt_lookup_num_tokens` and `prompt_lookup_max_matching_ngram`. I'd run a few benchmarks over a few datasets changing these hyperparameters to find whether we can:\r\na) get away with only one hyperparameter OR\r\nb) set an update heuristic that gets the best hyperparameters for the input at hand (through the `update_candidate_strategy` method)\r\n\r\nIf you find a way to make a) or b) work, the technique would become more user-friendly, and thus with a higher chance of being correctly used. For us, `transformers` maintainers, having fewer flags is also great!\r\n\r\nAfter we settle on a final implementation, I can validate the benchmarks on different devices (e.g. a T4, a 3090, ...). Given the simplicity of the technique, I suspect the results will be mostly hardware agnostic on GPU :)", "Started PR here: https://github.com/huggingface/transformers/pull/27775/commits . 
Please do leave suggestions @gante \r\n\r\nI will start some benchmarking on my side to find optimal hyperparameters (or update schedules). Maybe both of these can be best tuned using just a default value + update schedule, and if user wants to really change default value they can go instantiate and provide a `PromptLookupCandidateGenerator` with new params.\r\n\r\nWill get back once I start some tests. I will be trying on some standard summarization, QA and maybe look for a code editing sort of dataset.", "![image](https://github.com/huggingface/transformers/assets/1957903/4838bd83-3cc6-42e1-80f7-faeadedcbcc0)\r\n\r\nThere is significant difference between greedy and sampling when summarizing, but there are still gains. Proper analysis of the phenomenon would be a paper-worthy effort probably.\r\n\r\nI will try to run a similar thing for code editing as well. If you think there's something I could try pls let me know.\r\n\r\nOne question @gante : Is the most popular method greedy or sampling (I would assume greedy since its the default, but I know sampling is better for quality)? If I could optimize for only one of these, which one should be the 'default'?", "> If I could optimize for only one of these, which one should be the 'default'?\r\n\r\nNaive question/input here.. but assuming you can figure the optimisations, and they don't apply equally to both, would it be possible to have 2 settings for it? One when used with greedy and one when used with sampling? Even if that's handled automagically under the hood (or even presumably if it's exposed to users, it would be simpler than having to know the exact hyperparameters to tune?)", "Thanks! Yes it can ofc - `_get_candidate_generator` has access to `generation_config`, which can be passed on here to check for stuff like this.\r\n\r\nAny other thoughts/ideas @0xdevalias ?", "> Thanks! Yes it can ofc\r\n\r\n@apoorvumang Awesome :)\r\n\r\n> Any other thoughts/ideas?\r\n\r\n@apoorvumang None at this stage; was more of a 'drive by random brain spark' type moment :)\r\n", "@apoorvumang @0xdevalias the preliminary results seem to point out that there is no obvious parameterization ๐Ÿค” Let's wait to see the results for coding!\r\n\r\nRegarding sampling vs greedy: greedy is the default for legacy reasons, sampling is by far the most popular with chat LLMs :) tasks like summarization, translation, or automatic speech recognition tend to use greedy decoding or beam search, though.\r\n\r\nFinally, regarding default values: we'll have to default the values to `None`, so we can detect whether the user wants to use it or not. We have a few default values for legacy reasons, but the defaults should be set at a model level (with the `generation_config.json`). This does not prevent us, however, from suggesting values in the parameters' docstring ๐Ÿค— ", "Here's using mt-bench, only 2nd turn code\r\n<img width=\"1214\" alt=\"image\" src=\"https://github.com/huggingface/transformers/assets/1957903/277abc15-5f69-4a0b-b19f-31867e5973d8\">\r\n", "All 80 samples from mt-bench, 2nd turn only. \r\n<img width=\"1214\" alt=\"image\" src=\"https://github.com/huggingface/transformers/assets/1957903/80a71c7f-d92e-45c9-b873-c56adee5b9c5\">\r\n", "> All 80 samples from mt-bench, 2nd turn only. 
<img alt=\"image\" width=\"1214\" src=\"https://private-user-images.githubusercontent.com/1957903/287409701-80a71c7f-d92e-45c9-b873-c56adee5b9c5.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTEiLCJleHAiOjE3MDE5ODA3NDQsIm5iZiI6MTcwMTk4MDQ0NCwicGF0aCI6Ii8xOTU3OTAzLzI4NzQwOTcwMS04MGE3MWM3Zi1kOTJlLTQ1YzktYjg3My1jNTZhZGVlNWI5YzUucG5nP1gtQW16LUFsZ29yaXRobT1BV1M0LUhNQUMtU0hBMjU2JlgtQW16LUNyZWRlbnRpYWw9QUtJQUlXTkpZQVg0Q1NWRUg1M0ElMkYyMDIzMTIwNyUyRnVzLWVhc3QtMSUyRnMzJTJGYXdzNF9yZXF1ZXN0JlgtQW16LURhdGU9MjAyMzEyMDdUMjAyMDQ0WiZYLUFtei1FeHBpcmVzPTMwMCZYLUFtei1TaWduYXR1cmU9NGEyY2EzMTE1YjkxMjBlODViZDA0NGZhMmZjODkzZGY1OGY2ZDE0NjE4ZGIxOWJkNGYwNGEzYmZmMTEwMWUwMCZYLUFtei1TaWduZWRIZWFkZXJzPWhvc3QmYWN0b3JfaWQ9MCZrZXlfaWQ9MCZyZXBvX2lkPTAifQ.sFuGA90vD_k4XOlejmyUN2I25Frelu515MRvlmgFWh8\">\r\n\r\nHi @apoorvumang โ€“ Thanks for sharing your great work!\r\n\r\nTwo quick questions:\r\n1. What temperature did you use in \"Sampling baseline\" and \"Sampling PLD\"?\r\n2. How should we interpret the black-colored lines that go below 0? (What is their minimal tokens per second rate?)", "@keyboardAnt the error bars are usually the standard deviation of the measurement, which is a centered (and symmetric) moment -- it does not denote the minimum/maximum of a measurement, nor a range between percentiles.\r\n\r\nAs such, I'm reading it as a long-tailed distribution. Some speedups are huge (e.g. 5x), while most are moderate (e.g. 1.5x) ", "Hi @keyboardAnt , thank you!\r\n\r\n1. Default temperature, so probably 1.0\r\n2. As @gante said, the black coloured lines are standard deviation, not min or max. I didn't save the exact data for these so can't share that. But for places where it seems to be less than 0, its probably because of very high variance in speedups (1x to 10x).\r\n\r\nHere's an example of this phenomenon, courtesy ChatGPT\r\n<img width=\"734\" alt=\"image\" src=\"https://github.com/huggingface/transformers/assets/1957903/1801131d-3842-4846-99c3-b6d9385d3fc6\">\r\n\r\n\r\nPS: Sorry for the delay in working on this PR - I will try to work on it this weekend", "@gante, @apoorvumang, yes. Because of the high variance, we better consider the minimal tokens/sec rate. This could ensure the long tail is one-sided. Otherwise, it might suggest a slowdown.", "@keyboardAnt Could you please expand on what you mean? Like we should look for configs with a good lower bound for tokens/sec rather than a good average?", "@apoorvumang, my suggestion is to measure `speedup`. That is\r\n\r\n```txt\r\nspeedup := (The ratio of tokens per second with PLD) / (The ratio of tokens per second without PLD)\r\n```\r\nwhere with-PLD and without-PLD share the same variables (e.g., prompt, target model, GPU device). We want to show that `speedup >> 1` in most cases, and to rule out the possibility that `speedup < 1` (i.e., a slowdown). The visualizations you shared do not rule out the possibility that `speedup < 1`.\r\n\r\nWe must measure `speedup` in varied configurations so we can better understand it. Each configuration has a unique prompt, target model, or `(max_matching_ngram, num_token_output)` hyperparameter. Visualizing the distribution of `speedup` and calculating its harmonic mean can help.", "This issue has been automatically marked as stale because it has not had recent activity. 
If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,701
1,705
1,705
CONTRIBUTOR
null
### Feature request Support the recently proposed prompt lookup decoding method, which replaces the draft model with string matching in the prompt. Code: https://github.com/apoorvumang/prompt-lookup-decoding ### Motivation - The method gives significant speedups in input-grounded tasks (2x-4x) - Applicable to all decoder models, supports sampling - Easy to implement - we can just modify assisted generation to also accept a function in place of the assistant model (rather than an LLM) ### Your contribution I have a not-so-well-written implementation [here](https://github.com/apoorvumang/prompt-lookup-decoding/blob/main/demo-pld.ipynb) (Python notebook). I can contribute to making it better, but will need help since it's my first time.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27722/reactions", "total_count": 7, "+1": 0, "-1": 0, "laugh": 0, "hooray": 7, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27722/timeline
completed
null
null
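The core trick discussed in the issue above fits in one small function. The sketch below is a re-implementation from the description, not the code that was eventually merged; the hyperparameter names `max_matching_ngram` and `num_pred_tokens` follow the thread, and the exact window bookkeeping (most-recent-match-first scan, capping at the sequence end) is an assumption. In the refactor described in the thread, a function like this would supply the candidate tokens, with `update_candidate_strategy` as the place to adapt the two hyperparameters.

```python
import torch

def find_candidate_pred_tokens(input_ids: torch.Tensor,
                               max_matching_ngram: int = 3,
                               num_pred_tokens: int = 10) -> torch.Tensor:
    """Propose draft tokens by copying what followed an earlier occurrence of the
    current n-gram suffix somewhere in the prompt (no draft model involved)."""
    seq = input_ids[0]
    seq_len = seq.shape[0]
    for ngram_size in range(max_matching_ngram, 0, -1):
        if seq_len <= ngram_size:
            continue
        suffix = seq[-ngram_size:]
        # Scan earlier positions, most recent first, for the same n-gram.
        for start in range(seq_len - ngram_size - 1, -1, -1):
            if torch.equal(seq[start : start + ngram_size], suffix):
                begin = start + ngram_size
                end = min(begin + num_pred_tokens, seq_len)
                if end > begin:
                    return seq[begin:end]
    return seq.new_empty(0)  # no match: caller falls back to normal decoding
```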
https://api.github.com/repos/huggingface/transformers/issues/27721
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27721/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27721/comments
https://api.github.com/repos/huggingface/transformers/issues/27721/events
https://github.com/huggingface/transformers/pull/27721
2,011,939,210
PR_kwDOCUB6oc5ga7du
27,721
Log a warning in `TransfoXLTokenizer.__init__`
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,701
1,701
1,701
COLLABORATOR
null
# What does this PR do? I missed this part before merging #27607 https://github.com/huggingface/transformers/pull/27607#pullrequestreview-1747452403
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27721/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27721/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27721", "html_url": "https://github.com/huggingface/transformers/pull/27721", "diff_url": "https://github.com/huggingface/transformers/pull/27721.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27721.patch", "merged_at": 1701164645000 }
https://api.github.com/repos/huggingface/transformers/issues/27720
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27720/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27720/comments
https://api.github.com/repos/huggingface/transformers/issues/27720/events
https://github.com/huggingface/transformers/pull/27720
2,011,907,053
PR_kwDOCUB6oc5ga0hp
27,720
Add common processor tests
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27720). All of your documentation changes will be reflected on that endpoint.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "@NielsRogge If you are OK with that, let's put the tests in another PR #27761 and close this one?", "Isn't the other PR for saving processors? Or will it also add common tests?", "Yes, that is a PR about saving processors. But we want to have tests to make sure it works as we want it. (to avoid later changes of files on the Hub - which is always almost impossible). \r\n\r\n> add common tests?\r\n\r\nYes, probably with a different version than this PR" ]
1,701
1,706
null
CONTRIBUTOR
null
# What does this PR do? Multimodal processors currently don't have common tests. This PR aims to work towards having a common API for our multimodal processors, making sure they all have the same inputs and outputs (e.g. making sure text+vision processors accept `text` as first kwarg, then `images`, etc.). As a first work, I refactor the CLIP and BLIP-2 processor tests to leverage the common ones.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27720/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27720/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27720", "html_url": "https://github.com/huggingface/transformers/pull/27720", "diff_url": "https://github.com/huggingface/transformers/pull/27720.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27720.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/27719
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27719/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27719/comments
https://api.github.com/repos/huggingface/transformers/issues/27719/events
https://github.com/huggingface/transformers/issues/27719
2,011,845,097
I_kwDOCUB6oc536lHp
27,719
Batch QuestionAnsweringPipeline prediction with different postprocess_params (e.g. max_answer_len)
{ "login": "KatHaruto", "id": 74958594, "node_id": "MDQ6VXNlcjc0OTU4NTk0", "avatar_url": "https://avatars.githubusercontent.com/u/74958594?v=4", "gravatar_id": "", "url": "https://api.github.com/users/KatHaruto", "html_url": "https://github.com/KatHaruto", "followers_url": "https://api.github.com/users/KatHaruto/followers", "following_url": "https://api.github.com/users/KatHaruto/following{/other_user}", "gists_url": "https://api.github.com/users/KatHaruto/gists{/gist_id}", "starred_url": "https://api.github.com/users/KatHaruto/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/KatHaruto/subscriptions", "organizations_url": "https://api.github.com/users/KatHaruto/orgs", "repos_url": "https://api.github.com/users/KatHaruto/repos", "events_url": "https://api.github.com/users/KatHaruto/events{/privacy}", "received_events_url": "https://api.github.com/users/KatHaruto/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey! ๐Ÿค— I don't think this is possible out of the box as it's a custom usage! Feel free to share your solution here or ask on the [forum](https://discuss.huggingface.co/)! " ]
1,701
1,701
1,701
NONE
null
Is it possible to change `QuestionAnsweringPipeline.postprocess` parameters such as `max_answer_len` for each question in batch prediction? In my use case, `max_answer_len` is an important parameter for the output I want. I would like to use batch prediction to improve performance, but currently I can only set a single set of preprocess/postprocess parameters (`postprocess_params`) per batch.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27719/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27719/timeline
completed
null
null
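Since one call to the pipeline applies a single set of postprocess kwargs, the usual workaround is to group questions by the `max_answer_len` they need and batch each group separately. This is a hedged sketch, not a built-in feature; the model id and the example data are placeholders.

```python
from collections import defaultdict
from transformers import pipeline

qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

examples = [
    {"question": "Who wrote it?", "context": "The book was written by Ada.", "max_answer_len": 5},
    {"question": "When was it written?", "context": "It was written in 1984 in London.", "max_answer_len": 15},
]

# Group by the postprocess parameter, then run one batched call per group.
groups = defaultdict(list)
for ex in examples:
    groups[ex["max_answer_len"]].append(ex)

answers = []
for max_len, batch in groups.items():
    outputs = qa(
        question=[ex["question"] for ex in batch],
        context=[ex["context"] for ex in batch],
        max_answer_len=max_len,
        batch_size=len(batch),
    )
    outputs = outputs if isinstance(outputs, list) else [outputs]
    answers.extend(outputs)

print(answers)
```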
https://api.github.com/repos/huggingface/transformers/issues/27718
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27718/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27718/comments
https://api.github.com/repos/huggingface/transformers/issues/27718/events
https://github.com/huggingface/transformers/pull/27718
2,011,737,709
PR_kwDOCUB6oc5gaQWx
27,718
Add CogVLM
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "A cleaner implementation I'm working on is here: https://github.com/NielsRogge/transformers/tree/add_cogvlm_cleaner. It implements the model like llava, by adding the image tokens inside the model, rather than creating them in the processor class.", "Closing this one in favor of the PR above." ]
1,701
1,703
1,703
CONTRIBUTOR
null
# What does this PR do? This PR adds CogVLM natively into the Transformers library (it's already usable with `trust_remote_code=True`, but with this PR one can run it without the xformers, einops and triton dependencies). To do: - [x] remove triton dependency for rotary embeddings (or make `FastRotaryEmbedding` optional) - [x] wait for #27690 to be merged - [ ] decide on attributes to be saved for multimodal processors: see https://github.com/huggingface/transformers/pull/27761
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27718/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27718/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27718", "html_url": "https://github.com/huggingface/transformers/pull/27718", "diff_url": "https://github.com/huggingface/transformers/pull/27718.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27718.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/27717
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27717/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27717/comments
https://api.github.com/repos/huggingface/transformers/issues/27717/events
https://github.com/huggingface/transformers/pull/27717
2,011,646,997
PR_kwDOCUB6oc5gZ82R
27,717
[`NllbTokenizer`] refactor with added tokens decoder
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27717). All of your documentation changes will be reflected on that endpoint.", "@amyeroberts failing test is flaky, no idea why it's behaving that way will check main in the mean time", "Regarding failing tests - there's an internal thread here: https://huggingface.slack.com/archives/C01NE71C4F7/p1705082667041509 cc @ydshieh ", "`=========================================================================== 97 passed, 4 skipped, 9 warnings in 75.13s (0:01:15) ===========================================================================`\r\nslow tests all pass ready to merge! \r\n\r\nThis breaks the initialization as you will not always have the extra langages. But it's important to use the new `added_tokens_decoder` behaviour for that!" ]
1,701
1,707
1,707
COLLABORATOR
null
# What does this PR do? Fixes #26497 by making the list of languages optional. By default these languages are still added, but they can be suppressed in both the fast and slow tokenizer by setting `additional_special_tokens=None`.
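A usage sketch of the behaviour described above; the checkpoint id is only an example, and the suppression applies once this PR's change is in the installed version:

```python
from transformers import AutoTokenizer

# Default behaviour: the NLLB language codes are added as additional special tokens.
tok_default = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M")

# Per the PR description, passing additional_special_tokens=None suppresses the
# automatic language-code tokens in both the slow and fast tokenizers.
tok_plain = AutoTokenizer.from_pretrained(
    "facebook/nllb-200-distilled-600M",
    additional_special_tokens=None,
)
```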
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27717/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27717/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27717", "html_url": "https://github.com/huggingface/transformers/pull/27717", "diff_url": "https://github.com/huggingface/transformers/pull/27717.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27717.patch", "merged_at": 1707792560000 }
https://api.github.com/repos/huggingface/transformers/issues/27716
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27716/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27716/comments
https://api.github.com/repos/huggingface/transformers/issues/27716/events
https://github.com/huggingface/transformers/pull/27716
2,011,528,153
PR_kwDOCUB6oc5gZiju
27,716
Fix unsupported setting of self._n_gpu in training_args on XPU devices
{ "login": "Liangliang-Ma", "id": 17159645, "node_id": "MDQ6VXNlcjE3MTU5NjQ1", "avatar_url": "https://avatars.githubusercontent.com/u/17159645?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Liangliang-Ma", "html_url": "https://github.com/Liangliang-Ma", "followers_url": "https://api.github.com/users/Liangliang-Ma/followers", "following_url": "https://api.github.com/users/Liangliang-Ma/following{/other_user}", "gists_url": "https://api.github.com/users/Liangliang-Ma/gists{/gist_id}", "starred_url": "https://api.github.com/users/Liangliang-Ma/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Liangliang-Ma/subscriptions", "organizations_url": "https://api.github.com/users/Liangliang-Ma/orgs", "repos_url": "https://api.github.com/users/Liangliang-Ma/repos", "events_url": "https://api.github.com/users/Liangliang-Ma/events{/privacy}", "received_events_url": "https://api.github.com/users/Liangliang-Ma/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "> Hey! This seems to have been introduce in #25714, and thus I am not convinced that the fix is as simple as that! Would you mind also sharing a reproducer of the issue? Might be related to specific hard / soft versions\r\n\r\n Hi, @ArthurZucker! I have discussed with @abhilash1910 about the issue before this PR and we thought it can be fixed like this. This could be reproduced in a multi intel GPUs env, for the `distributedType `should be set to `MULTI_XPU` [in accelerator](https://github.com/huggingface/accelerate/blob/main/src/accelerate/state.py#L230). I think most example scripts using Trainer and more than one gpu should be reproducer.", "@ArthurZucker Seems CI got block by Internet issue. Could you please check with that? Thanks", "Seems like I can't would you mind merging with main to trigger it?! ", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27716). All of your documentation changes will be reflected on that endpoint.", "Thanks both ๐Ÿค— " ]
1,701
1,701
1,701
CONTRIBUTOR
null
In the current training_args, self._n_gpu is set to the device count on XPU devices, which causes a crash. In Trainer, if self.args.n_gpu is greater than one, the model is wrapped in torch.nn.DataParallel. But IPEX (intel_extension_for_pytorch) does not support DataParallel and suggests using DDP instead. So to make the Hugging Face Trainer work on Intel devices, this fix should be applied.
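For context, a simplified sketch of the Trainer branch this change avoids on XPU (not the actual Trainer source):

```python
import torch.nn as nn


def wrap_for_multi_gpu(model: nn.Module, n_gpu: int) -> nn.Module:
    # Simplified version of the branch the fix avoids: with n_gpu > 1 the
    # Trainer wraps the model in nn.DataParallel, which IPEX does not support.
    # Keeping self._n_gpu at 1 on XPU skips this wrapping and leaves multi-GPU
    # training to DDP (e.g. a distributed launch with one process per device).
    if n_gpu > 1:
        model = nn.DataParallel(model)
    return model
```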
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27716/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27716/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27716", "html_url": "https://github.com/huggingface/transformers/pull/27716", "diff_url": "https://github.com/huggingface/transformers/pull/27716.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27716.patch", "merged_at": 1701423255000 }
https://api.github.com/repos/huggingface/transformers/issues/27715
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27715/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27715/comments
https://api.github.com/repos/huggingface/transformers/issues/27715/events
https://github.com/huggingface/transformers/pull/27715
2,011,342,832
PR_kwDOCUB6oc5gY7QJ
27,715
Enhancing Code Readability and Maintainability with Simplified Activation Function Selection.
{ "login": "hi-sushanta", "id": 93595990, "node_id": "U_kgDOBZQpVg", "avatar_url": "https://avatars.githubusercontent.com/u/93595990?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hi-sushanta", "html_url": "https://github.com/hi-sushanta", "followers_url": "https://api.github.com/users/hi-sushanta/followers", "following_url": "https://api.github.com/users/hi-sushanta/following{/other_user}", "gists_url": "https://api.github.com/users/hi-sushanta/gists{/gist_id}", "starred_url": "https://api.github.com/users/hi-sushanta/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hi-sushanta/subscriptions", "organizations_url": "https://api.github.com/users/hi-sushanta/orgs", "repos_url": "https://api.github.com/users/hi-sushanta/repos", "events_url": "https://api.github.com/users/hi-sushanta/events{/privacy}", "received_events_url": "https://api.github.com/users/hi-sushanta/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Yes, because it is defined once globally and can be used throughout the entire codebase, improving code clarity and reducing redundancy.\r\n\r\n\r\n", "Sorry ๐Ÿค— I mean that these are choices and we chose to have `nn.functional` rather than `F` which for us is less readable ", "Thank you for your feedback. I understand that you prefer using nn.functional in your codebase. I chose to use F in this instance for two reasons:\r\n\r\n**Conciseness:** Using F is more concise and reduces code clutter, especially when using multiple activation functions.\r\n\r\n**Global accessibility:** Defining F globally makes it accessible throughout the codebase, eliminating the need to import nn.functional every time it's needed.", "If you approve of my changes, I'd be grateful if you could review them.", "Sorry but no, I think we just disagree on this matter but it's alright ๐Ÿค— ", "I understand and respect your position, even though we have different perspectives on this. \r\nThanks for considering my input.", "If it looks like's good then please merge them.", "please can you review this PR?", "Hey, as I mentioned before, this is not planned ๐Ÿค— ", "If these other changes are ok.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27715). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "While I wouldn't propose reverting all my changes at this point, I understand some revisions might be needed. Would it be beneficial to close this pull request and open a new one with the refinements we discussed?" ]
1,701
1,704
1,704
CONTRIBUTOR
null
This code optimization enhances code readability and maintainability by utilizing aliases, simplified activation function selection, and consistent function definitions. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27715/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27715/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27715", "html_url": "https://github.com/huggingface/transformers/pull/27715", "diff_url": "https://github.com/huggingface/transformers/pull/27715.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27715.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/27714
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27714/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27714/comments
https://api.github.com/repos/huggingface/transformers/issues/27714/events
https://github.com/huggingface/transformers/issues/27714
2,011,276,844
I_kwDOCUB6oc534aYs
27,714
Overflow error: Can't convert negative int to unsigned [finetuning Bart]
{ "login": "matsuobasho", "id": 13874772, "node_id": "MDQ6VXNlcjEzODc0Nzcy", "avatar_url": "https://avatars.githubusercontent.com/u/13874772?v=4", "gravatar_id": "", "url": "https://api.github.com/users/matsuobasho", "html_url": "https://github.com/matsuobasho", "followers_url": "https://api.github.com/users/matsuobasho/followers", "following_url": "https://api.github.com/users/matsuobasho/following{/other_user}", "gists_url": "https://api.github.com/users/matsuobasho/gists{/gist_id}", "starred_url": "https://api.github.com/users/matsuobasho/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/matsuobasho/subscriptions", "organizations_url": "https://api.github.com/users/matsuobasho/orgs", "repos_url": "https://api.github.com/users/matsuobasho/repos", "events_url": "https://api.github.com/users/matsuobasho/events{/privacy}", "received_events_url": "https://api.github.com/users/matsuobasho/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Not a bug, merely has to do with `compute_metrics` not having a step to convert the -100 padded label values to `pad_token_id`. Adding that line gets rid of the error.\r\n```\r\ndef compute_metrics(eval_preds):\r\n preds, labels = eval_preds \r\n if isinstance(preds, tuple):\r\n preds = preds[0]\r\n\r\n preds = np.where(preds != -100, preds, tokenizer.pad_token_id)\r\n\r\n decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)\r\n\r\n # Replace -100s in the labels as we can't decode them\r\n labels = np.where(labels != -100, labels, tokenizer.pad_token_id)\r\n decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)\r\n\r\n\r\n decoded_preds = [pred.strip() for pred in decoded_preds]\r\n decoded_labels = [[label.strip()] for label in decoded_labels]\r\n\r\n result = metric.compute(predictions=decoded_preds, references=decoded_labels)\r\n return {\"bleu\": result[\"score\"]}\r\n``` ", "Closing issue" ]
1,701
1,701
1,701
NONE
null
### System Info Platform: Windows 10 Device: cpu Python: 3.9.6 Transformers: 4.35.2 Datasets: 2.15.0 Torch: 2.1.1 ### Who can help? @muellerzr @pacman100 ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```python MAX_OUTPUT_SEQ = 35 BATCH_SIZE = 2 DIR_NAME = './test_dir' DEVICE = 'cpu' EPOCHS = 3 def tokenize_function(tok, seq_length, example): inp = tok(example['input_seq'], padding=True, truncation=True) outp = tok(example['orig_sent'], padding="max_length", max_length=seq_length) res = { 'input_ids': inp['input_ids'], 'attention_mask': inp['attention_mask'], 'decoder_input_ids': outp['input_ids'], 'labels': outp['input_ids'], 'decoder_attention_mask': outp['attention_mask'] } return res checkpoint = "facebook/bart-large" tokenizer = AutoTokenizer.from_pretrained(checkpoint) model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint) # not including the dataset creation here ds = datasets.Dataset.from_dict({ "input_seq": input_filtered, "orig_sent": orig_sent_filtered }) data_prepped = ds.train_test_split() def compute_metrics(eval_preds): preds, labels = eval_preds # shape during debug is 9 x 32 # In case the model returns more than the prediction logits if isinstance(preds, tuple): preds = preds[0] decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True) # Replace -100s in the labels as we can't decode them labels = np.where(labels != -100, labels, tokenizer.pad_token_id) decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True) # Some simple post-processing decoded_preds = [pred.strip() for pred in decoded_preds] decoded_labels = [[label.strip()] for label in decoded_labels] result = metric.compute(predictions=decoded_preds, references=decoded_labels) return {"bleu": result["score"]} data_tokenized = data_prepped.map( partial(funcs.tokenize_function, tokenizer, MAX_OUTPUT_SEQ), batched=True, batch_size=BATCH_SIZE, remove_columns=['input_seq', 'orig_sent']) training_args = Seq2SeqTrainingArguments( output_dir=DIR_NAME, evaluation_strategy='epoch', logging_strategy='epoch', gradient_checkpointing=True, num_train_epochs=EPOCHS, predict_with_generate = True, generation_max_length=MAX_OUTPUT_SEQ, per_device_train_batch_size=BATCH_SIZE) trainer = Seq2SeqTrainer(model, training_args, train_dataset=data_tokenized["train"], eval_dataset=data_tokenized["test"], data_collator=data_collator, compute_metrics = compute_metrics, tokenizer=tokenizer) trainer.train() ``` Error I get: ``` OverflowError Traceback (most recent call last) c:\Users\Alf\project\notebooks\test_w_compute_metrics.ipynb Cell 8 line 2 1 training_args = Seq2SeqTrainingArguments( 2 output_dir=DIR_NAME, 3 #fp16=True, (...) 9 generation_max_length=MAX_OUTPUT_SEQ, 10 per_device_train_batch_size=BATCH_SIZE) 12 trainer = Seq2SeqTrainer(model, 13 training_args, 14 train_dataset=data_tokenized["train"], (...) 17 compute_metrics = compute_metrics, 18 tokenizer=tokenizer) ---> 20 trainer.train() File c:\Users\Alf\.virtualenvs\env_name\site-packages\transformers\trainer.py:1555, in Trainer.train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs) 1553 hf_hub_utils.enable_progress_bars() 1554 else: -> 1555 return inner_training_loop( 1556 args=args, 1557 resume_from_checkpoint=resume_from_checkpoint, 1558 trial=trial, 1559 ignore_keys_for_eval=ignore_keys_for_eval, ... 
630 else self.clean_up_tokenization_spaces 631 ) 632 if clean_up_tokenization_spaces: OverflowError: can't convert negative int to unsigned ``` Important to note that there is no error if I comment out the `generation_max_length` argument in `Seq2SeqTrainingArguments`. Also note that this is the same error as in [this ](https://github.com/huggingface/transformers/issues/7517)closed issue from 3 years ago, but my datasets is updated. ### Expected behavior Trainer runs without errors.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27714/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27714/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27713
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27713/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27713/comments
https://api.github.com/repos/huggingface/transformers/issues/27713/events
https://github.com/huggingface/transformers/pull/27713
2,011,261,063
PR_kwDOCUB6oc5gYqML
27,713
Added Guarani Language Code to the Whisper Model
{ "login": "mfidabel", "id": 18636378, "node_id": "MDQ6VXNlcjE4NjM2Mzc4", "avatar_url": "https://avatars.githubusercontent.com/u/18636378?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mfidabel", "html_url": "https://github.com/mfidabel", "followers_url": "https://api.github.com/users/mfidabel/followers", "following_url": "https://api.github.com/users/mfidabel/following{/other_user}", "gists_url": "https://api.github.com/users/mfidabel/gists{/gist_id}", "starred_url": "https://api.github.com/users/mfidabel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mfidabel/subscriptions", "organizations_url": "https://api.github.com/users/mfidabel/orgs", "repos_url": "https://api.github.com/users/mfidabel/repos", "events_url": "https://api.github.com/users/mfidabel/events{/privacy}", "received_events_url": "https://api.github.com/users/mfidabel/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @mfidabel! The dictionary [`LANGUAGES`](https://github.com/huggingface/transformers/blob/9270ab082740a55344a851049a0b69673b6cbdc5/src/transformers/models/whisper/tokenization_whisper.py#L94) is defined as the languages that the pre-trained Whisper model supports. Since Guarani is not supported by the pre-trained Whisper model, we should avoid adding it to this dictionary. Note that you can still fine-tune Whisper for Guarani without this change. For a detailed guide, refer to this tutorial: https://huggingface.co/learn/audio-course/chapter5/fine-tuning", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "> Hey @mfidabel! The dictionary [`LANGUAGES`](https://github.com/huggingface/transformers/blob/9270ab082740a55344a851049a0b69673b6cbdc5/src/transformers/models/whisper/tokenization_whisper.py#L94) is defined as the languages that the pre-trained Whisper model supports. Since Guarani is not supported by the pre-trained Whisper model, we should avoid adding it to this dictionary. Note that you can still fine-tune Whisper for Guarani without this change. For a detailed guide, refer to this tutorial: https://huggingface.co/learn/audio-course/chapter5/fine-tuning\r\n\r\nHi, I finetuned the model using [PEFT](https://huggingface.co/docs/peft/task_guides/int8-asr#evaluate) and when trying to do the inference part, I get the following error:\r\n\r\n```\r\nValueError: Unsupported language: guarani. Language should be one of: ['english', 'chinese', 'german', 'spanish', 'russian', 'korean', 'french', 'japanese', 'portuguese', 'turkish', 'polish', 'catalan', 'dutch', 'arabic', 'swedish', 'italian', 'indonesian', 'hindi', 'finnish', 'vietnamese', 'hebrew', 'ukrainian', 'greek', 'malay', 'czech', 'romanian', 'danish', 'hungarian', 'tamil', 'norwegian', 'thai', 'urdu', 'croatian', 'bulgarian', 'lithuanian', 'latin', 'maori', 'malayalam', 'welsh', 'slovak', 'telugu', 'persian', 'latvian', 'bengali', 'serbian', 'azerbaijani', 'slovenian', 'kannada', 'estonian', 'macedonian', 'breton', 'basque', 'icelandic', 'armenian', 'nepali', 'mongolian', 'bosnian', 'kazakh', 'albanian', 'swahili', 'galician', 'marathi', 'punjabi', 'sinhala', 'khmer', 'shona', 'yoruba', 'somali', 'afrikaans', 'occitan', 'georgian', 'belarusian', 'tajik', 'sindhi', 'gujarati', 'amharic', 'yiddish', 'lao', 'uzbek', 'faroese', 'haitian creole', 'pashto', 'turkmen', 'nynorsk', 'maltese', 'sanskrit', 'luxembourgish', 'myanmar', 'tibetan', 'tagalog', 'malagasy', 'assamese', 'tatar', 'hawaiian', 'lingala', 'hausa', 'bashkir', 'javanese', 'sundanese', 'cantonese', 'burmese', 'valencian', 'flemish', 'haitian', 'letzeburgesch', 'pushto', 'panjabi', 'moldavian', 'moldovan', 'sinhalese', 'castilian', 'mandarin'].\r\n```\r\n\r\nthis happens when trying to get the forced_decoder_ids: \r\n\r\n```python\r\ntokenizer = WhisperTokenizer.from_pretrained(peft_config.base_model_name_or_path, language=language, task=task)\r\nprocessor = WhisperProcessor.from_pretrained(peft_config.base_model_name_or_path, language=language, task=task)\r\nfeature_extractor = processor.feature_extractor\r\nforced_decoder_ids = processor.get_decoder_prompt_ids(language=language, task=task)\r\n```\r\n\r\nAdding Guarani works for now, let me know how can i fix this @sanchit-gandhi 
.\r\n\r\nWill try using the pipeline function, to see if it works for me\r\n", "Using the pipeline function as recommended does work but it doesn't give good results. \r\n\r\nFor example:\r\n\r\n**Ground Truth:** \"omba'apóva ñanduti iporãve hag̃ua\"\r\n**pipeline(\"automatic-speech-recognition\"):** \" On ba'a po'ba ñanduti iponawe hawa.\"\r\n**Following the int8 tutorial**: \"omba'apóva ñanduti iporãve hag̃ua\"\r\n\r\n\r\n" ]
1,701
1,707
1,704
NONE
null
# What does this PR do? This Pull Request adds the Guarani Language Code to the whisper tokenizer so it can be fine-tuned for the Guaranรญ Language. The change is so small that it shouldn't break anything. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. This wasn't discussed - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). No updates were necessary - [x] Did you write any new necessary tests? No test were necessary ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sanchit-gandhi <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27713/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27713/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27713", "html_url": "https://github.com/huggingface/transformers/pull/27713", "diff_url": "https://github.com/huggingface/transformers/pull/27713.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27713.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/27712
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27712/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27712/comments
https://api.github.com/repos/huggingface/transformers/issues/27712/events
https://github.com/huggingface/transformers/issues/27712
2,011,204,646
I_kwDOCUB6oc534Iwm
27,712
Add support for llama.cpp
{ "login": "oobabooga", "id": 112222186, "node_id": "U_kgDOBrBf6g", "avatar_url": "https://avatars.githubusercontent.com/u/112222186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/oobabooga", "html_url": "https://github.com/oobabooga", "followers_url": "https://api.github.com/users/oobabooga/followers", "following_url": "https://api.github.com/users/oobabooga/following{/other_user}", "gists_url": "https://api.github.com/users/oobabooga/gists{/gist_id}", "starred_url": "https://api.github.com/users/oobabooga/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/oobabooga/subscriptions", "organizations_url": "https://api.github.com/users/oobabooga/orgs", "repos_url": "https://api.github.com/users/oobabooga/repos", "events_url": "https://api.github.com/users/oobabooga/events{/privacy}", "received_events_url": "https://api.github.com/users/oobabooga/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Hi @oobabooga !\r\nApologies for my late reply \r\nIn general we are very interested in adding new quantization schemes in HF transformers. Currently, we're waiting to merge https://github.com/huggingface/transformers/pull/26610 in order to make the support for new quantization methods easier for anyone in the future. \r\nWe had some internal discussion about adding Llama.cpp inference support in transformers and currently we feel that the LlamaCpp library is quite fast moving to be added in HF transformers making it quite challenging to maintain overall. This is debatable, so feel free to let us know what do you think about it and we can consider adding Llama.cpp after #26610 gets merged", "@oobabooga, It should be possible to create externally a subclass to `HFTransformers` with `llama.cpp` support, independent from `GptqHfQuantizer` class. It could be hosted outside `transformers`.", "Just discussed offline with @ArthurZucker - indeed you can import the auto mapping that live here: https://github.com/huggingface/transformers/blob/main/src/transformers/quantizers/auto.py and add the new quantizers that would firstly live inside text-generation-webui - if we see that everything is quite stable and not subject to a lot of breaking change we can port the quantizers back in transformers core. How does that sound @oobabooga ? I can also work on a PoC PR in your repo as well ", "@younesbelkada a PR kickstarting that addition in text-generation-webui would be extremely appreciated. I am not familiar enough with the transformers internals to do it myself -- in particular, porting the llama.cpp cache to transformers has been a blocker in my attempts.\r\n\r\nllama_cpp_python has a very complete and comprehensive API. The necessary functions should all be in this file:\r\n\r\nhttps://github.com/abetlen/llama-cpp-python/blob/da003d87681f02475eedb6937443e5f07db889b0/llama_cpp/llama_cpp.py#L1291\r\n\r\nAfter your PoC, I should be able to maintain the code afterwards and accept PRs so it becomes more stable over time." ]
1,701
1,706
null
CONTRIBUTOR
null
### Feature request I would like to request [llama.cpp](https://github.com/ggerganov/llama.cpp) as a new model backend in the transformers library. ### Motivation llama.cpp offers: 1) Excellent performance in scenarios where memory bandwidth is an issue, namely CPU inference and GPU + CPU inference. 2) Support for a wide range of GPU vendors and models. 3) Adequate quantization accuracy -- I have compared the perplexities of 4-bit GGUF models to GPTQ, AWQ, EXL2, and bitsandbytes and found them to be competitive ([link](https://oobabooga.github.io/blog/posts/gptq-awq-exl2-llamacpp/)). By making the transformers library compatible with GGUF models, the llama.cpp performance on consumer hardware could hopefully be integrated with the features available in transformers and its surrounding ecosystem. In particular, it would be interesting to see the following working seamlessly with llama.cpp: * [Assisted generation](https://huggingface.co/blog/assisted-generation) (speculative decoding) * [StreamingLLM](https://github.com/huggingface/transformers/pull/26681) ### Your contribution I have implemented a "llamacpp_HF" wrapper in the file below: https://github.com/oobabooga/text-generation-webui/blob/main/modules/llamacpp_hf.py It makes it possible to use the transformers `model.generate` with llama.cpp models, and it exemplifies how to make forward calls in llama.cpp and get the logits. It works for perplexity evaluation when `logits_all=True` is passed while loading the model. I additionally implemented some prefix-matching logic and a hacky way to recognize forward calls for negative prompts to make CFG functional. For the llama.cpp transformers integration, I recommend the following: * Relying on the llama-cpp-python library: https://github.com/abetlen/llama-cpp-python/ * Requiring the user to manually install llama-cpp-python with the appropriate command for their hardware rather than adding it as a direct requirement to transformers. I believe that's how it already works for GPTQ models, where AutoGPTQ has to be installed manually. * In the `from_pretrained` call, having a `LlamaCppConfig` object that takes as input arbitrary kwargs that later on get passed to the `llama_cpp.Llama` model loading call. That would be similar to the `BitsAndBytesConfig` object that is passed to `from_pretrained` when `load_in_4bit=True` is used. Some important parameters are `n_gpu_layers` and `n_ctx`; it would be interesting to make this future-proof and allow arbitrary kwargs to be passed to `LlamaCppConfig`. I'll tag @younesbelkada who worked with RWKV and AWQ integration in transformers and may find this interesting.
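For reference, a minimal sketch of the underlying llama-cpp-python call that the proposed `LlamaCppConfig` would forward its kwargs to; the GGUF file path is a placeholder:

```python
from llama_cpp import Llama

# n_gpu_layers and n_ctx are the llama_cpp.Llama options called out above;
# logits_all=True is what the llamacpp_HF wrapper needs for perplexity evaluation.
llm = Llama(
    model_path="mistral-7b-v0.1.Q4_K_M.gguf",  # placeholder local GGUF file
    n_gpu_layers=35,
    n_ctx=4096,
    logits_all=True,
)

out = llm("The capital of France is", max_tokens=8)
print(out["choices"][0]["text"])
```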
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27712/reactions", "total_count": 9, "+1": 9, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27712/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/27711
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27711/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27711/comments
https://api.github.com/repos/huggingface/transformers/issues/27711/events
https://github.com/huggingface/transformers/issues/27711
2,011,195,770
I_kwDOCUB6oc534Gl6
27,711
Inquiry about the difference between two Approaches to Mask Infilling using BART Model in the official document
{ "login": "Hyfred", "id": 38806779, "node_id": "MDQ6VXNlcjM4ODA2Nzc5", "avatar_url": "https://avatars.githubusercontent.com/u/38806779?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Hyfred", "html_url": "https://github.com/Hyfred", "followers_url": "https://api.github.com/users/Hyfred/followers", "following_url": "https://api.github.com/users/Hyfred/following{/other_user}", "gists_url": "https://api.github.com/users/Hyfred/gists{/gist_id}", "starred_url": "https://api.github.com/users/Hyfred/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Hyfred/subscriptions", "organizations_url": "https://api.github.com/users/Hyfred/orgs", "repos_url": "https://api.github.com/users/Hyfred/repos", "events_url": "https://api.github.com/users/Hyfred/events{/privacy}", "received_events_url": "https://api.github.com/users/Hyfred/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey! Sorry I didn't really have time to dig and must say that I didn't know the generate pipeline could fill up the hole for you! \r\nThe `beam-search` algorithm is used for that, which for me would mean that you explore more with this technique. Otherwise it should be similar ๐Ÿ˜‰ ", "By default the generate method here also uses beam search, so don't think there is any big difference, but it's more convenient to use generate rather than having to compute the mask position (more so if you have 2 mask or more!). ๐Ÿค— ", "> By default the generate method here also uses beam search, so don't think there is any big difference, but it's more convenient to use generate rather than having to compute the mask position (more so if you have 2 mask or more!). ๐Ÿค—\r\n\r\nYes, I think you're right. when I use the 'bart_base' version, there is no difference between those two decoding strategies. Because although the bart is an encoder-decoder framework, the pre-train task 'infilling the blank' hasn't utilized the generation ability. ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,701
1,705
1,705
NONE
null
I came across two different methods of using the BART model for mask infilling at this URL: https://huggingface.co/docs/transformers/main/en/model_doc/bart#bart The first method involves taking the logit corresponding to the mask word in the decoding last hidden representation and using softmax to directly predict which word in the vocabulary it belongs to. ```python from transformers import AutoTokenizer, BartForConditionalGeneration tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base") model = BartForConditionalGeneration.from_pretrained("facebook/bart-base") TXT = "My friends are <mask> but they eat too many carbs." input_ids = tokenizer([TXT], return_tensors="pt")["input_ids"] logits = model(input_ids).logits masked_index = (input_ids[0] == tokenizer.mask_token_id).nonzero().item() probs = logits[0, masked_index].softmax(dim=0) values, predictions = probs.topk(5) tokenizer.decode(predictions).split() ``` The second method involves using the .generate function to restore the masked word in a sentence. ```python from transformers import BartForConditionalGeneration, BartTokenizer model = BartForConditionalGeneration.from_pretrained("facebook/bart-large", forced_bos_token_id=0) tok = BartTokenizer.from_pretrained("facebook/bart-large") example_english_phrase = "UN Chief Says There Is No <mask> in Syria" batch = tok(example_english_phrase, return_tensors="pt") generated_ids = model.generate(batch["input_ids"]) assert tok.batch_decode(generated_ids, skip_special_tokens=True) == [ "UN Chief Says There Is No Plan to Stop Chemical Weapons in Syria" ] ``` I'm keen on understanding the differences between these two approaches. Unfortunately, I haven't found any relevant explanations. Additionally, I'd like to ascertain whether both methods rely on context to predict the masked word rather than solely considering the words before the mask. Any insights or documentation clarifying these would be greatly appreciated!!
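As a small follow-up to the discussion above: since both approaches end up using beam search by default, the decoding strategy can also be made explicit in the second approach. A sketch reusing `model`, `tok` and `batch` from the second snippet (the `num_beams` value is arbitrary):

```python
# Make the beam-search setting explicit rather than relying on the
# checkpoint's generation defaults.
generated_ids = model.generate(batch["input_ids"], num_beams=4, max_new_tokens=20)
print(tok.batch_decode(generated_ids, skip_special_tokens=True))
```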
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27711/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27711/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27710
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27710/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27710/comments
https://api.github.com/repos/huggingface/transformers/issues/27710/events
https://github.com/huggingface/transformers/issues/27710
2,011,127,681
I_kwDOCUB6oc5331-B
27,710
Resume training with deepspeed resets learning rate
{ "login": "BiEchi", "id": 60613238, "node_id": "MDQ6VXNlcjYwNjEzMjM4", "avatar_url": "https://avatars.githubusercontent.com/u/60613238?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BiEchi", "html_url": "https://github.com/BiEchi", "followers_url": "https://api.github.com/users/BiEchi/followers", "following_url": "https://api.github.com/users/BiEchi/following{/other_user}", "gists_url": "https://api.github.com/users/BiEchi/gists{/gist_id}", "starred_url": "https://api.github.com/users/BiEchi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BiEchi/subscriptions", "organizations_url": "https://api.github.com/users/BiEchi/orgs", "repos_url": "https://api.github.com/users/BiEchi/repos", "events_url": "https://api.github.com/users/BiEchi/events{/privacy}", "received_events_url": "https://api.github.com/users/BiEchi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Solved, we should add the specification for scheduler in the config json for deepspeed as well." ]
1,701
1,701
1,701
NONE
null
### System Info - `transformers` version: 4.32.0.dev0 - Platform: Linux-3.10.0-1160.31.1.el7.x86_64-x86_64-with-glibc2.17 - Python version: 3.9.0 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - Accelerate version: 0.21.0 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1+cu117 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @pacman100 ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Launch script uses `run_mlm.py`: ``` python transformers/examples/pytorch/language-modeling/run_mlm.py \ --resume_from_checkpoint $CKPT_TO_RESUME \ --config_name $MODEL \ --tokenizer_name $TOKENIZER \ --dataset_name $PT_DATASET \ --max_steps $MAX_STEPS \ --preprocessing_num_workers 32 \ --logging_steps $LOG_STEPS \ --cache_dir $CACHE_DIR \ --warmup_steps $WARMUP_STEPS \ --learning_rate $PT_PEAK_LR \ --lr_scheduler_type $PT_LR_DECAY \ --output_dir $OUTPUT_DIR \ --overwrite_output_dir \ --deepspeed $DS_CONFIG ``` Deepspeed config: ``` { "zero_optimization": { "stage": 2, "allgather_partitions": true, "allgather_bucket_size": 2e8, "overlap_comm": true, "reduce_scatter": true, "reduce_bucket_size": 2e8, "contiguous_gradients": true }, "fp16": { "enabled": false }, "bf16": { "enabled": true }, "train_micro_batch_size_per_gpu": "auto", "gradient_accumulation_steps": "auto", "train_batch_size": "auto" } ``` When resuming from deepspeed checkpoint, learning rate resets: <img width="655" alt="image" src="https://github.com/huggingface/transformers/assets/60613238/6d9b3043-1b46-4505-bfeb-81eec0736aed"> If I comment out the deepspeed config (which means that Trainer gets rid of DS), the problem resolves: <img width="654" alt="image" src="https://github.com/huggingface/transformers/assets/60613238/1ca4d189-f58c-4237-9246-6d6d0bfc8f6b"> Is there anything wrong with my deepspeed config? ### Expected behavior The learning rate shoudn't reset.
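Per the resolution in the comments, the missing piece was a `scheduler` section in the DeepSpeed config, so that the scheduler state is saved in and restored from the DeepSpeed checkpoint. A sketch of the kind of addition meant, using the "auto" placeholders the HF integration fills in from `TrainingArguments` (the exact type and parameters should match your setup):

```python
import json

with open("ds_config.json") as f:
    ds_config = json.load(f)

# Declare the LR scheduler on the DeepSpeed side as well.
ds_config["scheduler"] = {
    "type": "WarmupDecayLR",
    "params": {
        "warmup_min_lr": "auto",
        "warmup_max_lr": "auto",
        "warmup_num_steps": "auto",
        "total_num_steps": "auto",
    },
}

with open("ds_config.json", "w") as f:
    json.dump(ds_config, f, indent=2)
```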
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27710/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27710/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27709
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27709/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27709/comments
https://api.github.com/repos/huggingface/transformers/issues/27709/events
https://github.com/huggingface/transformers/pull/27709
2,011,104,926
PR_kwDOCUB6oc5gYLFL
27,709
[`from_pretrained`] Make from_pretrained fast again
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "- explicit overwrite breaks fx and is not that faster \r\n- Non initialized is not always zeros (failing tests) just make sure itโ€™s not initialized ", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27709). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,701
1,702
1,702
COLLABORATOR
null
# What does this PR do? Skips all layer initialization when loading from pretrained without accelerate. From ~20 seconds to 5 seconds for a 7B model like Llama. The weights are effectively initialized in `init_weights` of the pretrained method. All internal calls are skipped. - [x] Check that if a linear layer is missing it will be initialized! (Loading `AutoModelForCausalLM` from `AutoModel`) fixes #26258 and fixes #18505; `model = XXXX.from_pretrained(model_id, torch_dtype=torch.float16, low_cpu_mem_usage=True)` might fail
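For reference, a minimal timing sketch of the plain (non-accelerate) load path this PR speeds up; the checkpoint id is a placeholder and absolute numbers depend on hardware:

```python
import time

import torch
from transformers import AutoModelForCausalLM

start = time.perf_counter()
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # placeholder 7B checkpoint
    torch_dtype=torch.float16,
)
print(f"Loaded in {time.perf_counter() - start:.1f}s")
```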
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27709/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27709/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27709", "html_url": "https://github.com/huggingface/transformers/pull/27709", "diff_url": "https://github.com/huggingface/transformers/pull/27709.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27709.patch", "merged_at": 1702294698000 }
https://api.github.com/repos/huggingface/transformers/issues/27708
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27708/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27708/comments
https://api.github.com/repos/huggingface/transformers/issues/27708/events
https://github.com/huggingface/transformers/issues/27708
2,010,989,582
I_kwDOCUB6oc533UQO
27,708
`load_in_4bit=True` works only with models in `safetensors` format
{ "login": "danielkorat", "id": 32893314, "node_id": "MDQ6VXNlcjMyODkzMzE0", "avatar_url": "https://avatars.githubusercontent.com/u/32893314?v=4", "gravatar_id": "", "url": "https://api.github.com/users/danielkorat", "html_url": "https://github.com/danielkorat", "followers_url": "https://api.github.com/users/danielkorat/followers", "following_url": "https://api.github.com/users/danielkorat/following{/other_user}", "gists_url": "https://api.github.com/users/danielkorat/gists{/gist_id}", "starred_url": "https://api.github.com/users/danielkorat/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/danielkorat/subscriptions", "organizations_url": "https://api.github.com/users/danielkorat/orgs", "repos_url": "https://api.github.com/users/danielkorat/repos", "events_url": "https://api.github.com/users/danielkorat/events{/privacy}", "received_events_url": "https://api.github.com/users/danielkorat/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey! Thanks for reporting. I would not be surprised that `safetensors` supports loading each parameters indivudally, casting them which is not supported by torch. I think that using smaller shards is usually what we do in this case, feel free to use: https://huggingface.co/ybelkada/Mistral-7B-v0.1-bf16-sharded", "Thanks @ArthurZucker \r\nSince this is a very general use case, and also happens out of the blue for many existing models like `Intel/neural-chat-7b-v3-1` as well, I think there should be some fix or at least a warning. Thanks", "We changed the default sharding size to make sure colab supports this. ", "Hi @danielkorat \r\nAs @ArthurZucker stated we changed the default max shard size to support google colab inference easily, for previous models, unfortunately one needs to manually shard the model and push it on the hub under a new model id. ", "I see.\nI solved it in my case by merging the safetensors PR, under the same model_id (`Intel/neural-chat-7b-v3-1`). \nI think users like me and others I've seen might be running into this issue right now in Colab, without any explanation of the cause. Would be nice if there was at least a warning when loading very large shards.\n@younesbelkada ", "Hi @danielkorat \r\nThanks! \r\nHmmm I think this might introduce a lot of verbosity in transformers as this would only be relevant in the case people use google colab", "Then you can probably close this, thanks.", "Thank you @danielkorat !" ]
1,700
1,701
1,701
CONTRIBUTOR
null
### System Info Hi, When testing on `Google Colab (Free Tier T4 GPU)`, this code crashes with RAM OOM [(Notebook)](https://colab.research.google.com/drive/1zAzdcH_KRQuc_0zWBEzYuaV1h4ERgzPy?usp=sharing): ```python AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1", load_in_4bit=True, device_map="auto") ``` ![image](https://github.com/huggingface/transformers/assets/32893314/11f62cdc-b94c-4e68-9783-c4d25904f194) However, when loading the safetensors revision, it works: ```python AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1", load_in_4bit=True, device_map="auto", revision="/refs/pr/91") ``` ```bash transformers==4.35.2 accelerate==0.24.1 bitsandbytes==0.41.2.post2 ``` cc @ArthurZucker @younesbelkada ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Colab link that reproduces the problem: [Notebook](https://colab.research.google.com/drive/1zAzdcH_KRQuc_0zWBEzYuaV1h4ERgzPy?usp=sharing) ### Expected behavior Successful loading of the model onto GPU.
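Following the maintainers' explanation, one way to make an older checkpoint Colab-friendly is to re-save it with smaller safetensors shards and push the result to a new repo. A sketch (output directory and repo id are placeholders; this needs a machine with enough RAM to load the full model once):

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
model.save_pretrained(
    "Mistral-7B-v0.1-sharded",   # placeholder output directory
    max_shard_size="2GB",        # smaller shards avoid the RAM spike at load time
    safe_serialization=True,     # write .safetensors files
)
# model.push_to_hub("your-username/Mistral-7B-v0.1-sharded")  # optional; placeholder repo id
```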
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27708/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27708/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27707
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27707/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27707/comments
https://api.github.com/repos/huggingface/transformers/issues/27707/events
https://github.com/huggingface/transformers/issues/27707
2,010,981,157
I_kwDOCUB6oc533SMl
27,707
When labels are integers instead of floats, training crashes on MPS / CUDA. Could this be validated and fail with a better error higher up the stack?
{ "login": "przem8k", "id": 1824302, "node_id": "MDQ6VXNlcjE4MjQzMDI=", "avatar_url": "https://avatars.githubusercontent.com/u/1824302?v=4", "gravatar_id": "", "url": "https://api.github.com/users/przem8k", "html_url": "https://github.com/przem8k", "followers_url": "https://api.github.com/users/przem8k/followers", "following_url": "https://api.github.com/users/przem8k/following{/other_user}", "gists_url": "https://api.github.com/users/przem8k/gists{/gist_id}", "starred_url": "https://api.github.com/users/przem8k/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/przem8k/subscriptions", "organizations_url": "https://api.github.com/users/przem8k/orgs", "repos_url": "https://api.github.com/users/przem8k/repos", "events_url": "https://api.github.com/users/przem8k/events{/privacy}", "received_events_url": "https://api.github.com/users/przem8k/received_events", "type": "User", "site_admin": false }
[ { "id": 5616426447, "node_id": "LA_kwDOCUB6oc8AAAABTsPdzw", "url": "https://api.github.com/repos/huggingface/transformers/labels/solved", "name": "solved", "color": "B1D6DC", "default": false, "description": "" } ]
closed
false
null
[]
[ "Hey! Thanks for raising the issue! ๐Ÿค— \r\nIt's kind of hard to have architecture specific errors for something that works very well anywhere else! specifically given that it makes more sense for labels to be int rather than floats ๐Ÿ˜… So not really in favor of this\r\n", "Hi @ArthurZucker , thanks for taking the look! This actually does *not* work on CUDA as well, resulting in:\r\n\r\n```\r\nRuntimeError: \"mse_cuda\" not implemented for 'Long'\r\n```\r\n\r\nSee https://www.kaggle.com/przem8k/transformers-issue-27707-re-when-label-is-int\r\n\r\nGiven that it breaks on both Metal and CUDA I assumed it's not supported. Do you think the issue may be specific to the microsoft/deberta-v3-small model ?", "Hello @przem8k, let's take a step back here and understand the loss function implemented in https://huggingface.co/microsoft/deberta-v3-small. If we go to the modeling file and check below lines:\r\n\r\nhttps://github.com/huggingface/transformers/blob/bd50402b56980ff17e957342ef69bd9b0dd45a7b/src/transformers/models/deberta_v2/modeling_deberta_v2.py#L1321\r\n\r\nWe observe that it is using MSELoss as the `num_labels` is 1 and as such it is thinking of this task as regression task instead of classification. For regression tasks, the label is a float and so is the prediction.\r\n\r\nNow, if you change the `num_labels=2` as shown below which is the default and fits your usecase of binary classification. In this case, training happens as expected because it now uses `CrossEntropyLoss` which accepts integer labels.\r\n```\r\nmodel = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)\r\n```\r\n\r\n![Screenshot 2023-11-29 at 1 17 27โ€ฏPM](https://github.com/huggingface/transformers/assets/13534540/1f152758-3aa5-4d13-9280-6df27f691c79)\r\n\r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "@pacman100 that makes a lot of sense, thank you for taking the time to point this out!\r\n\r\nI think what ultimately confused me was 'num_labels' -> I thought it's the number of resulting labels (in this case we only apply one label), but I now understand it's the number of different possible **label values**. I took some notes [here](https://pnote.eu/notes/transformers-fine-tuning-crash/).\r\n\r\nThank you again!" ]
1,700
1,705
1,704
NONE
null
### System Info - `transformers` version: 4.31.0 - Platform: macOS-14.1.1-arm64-arm-64bit - Python version: 3.11.4 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - Accelerate version: 0.21.0 - Accelerate config: not found - PyTorch version (GPU?): 2.1.0.dev20230804 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes MPS - Using distributed or parallel set-up in script?: No ### Who can help? @muellerzr @pacman100 ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```python import pandas as pd from datasets import Dataset from transformers import AutoModelForSequenceClassification, AutoTokenizer from io import StringIO from transformers import TrainingArguments, Trainer sample_data = """input,labels bazinga,0 please-just-work,1 """ df = pd.read_csv(StringIO(sample_data)) ds = Dataset.from_pandas(df) model_name = "microsoft/deberta-v3-small" tokenizer = AutoTokenizer.from_pretrained(model_name) def tokenizer_func(x): return tokenizer(x["input"]) ds_tokenized = ds.map(tokenizer_func, batched=True) dds = ds_tokenized.train_test_split(0.2, seed=42) bs = 16 epochs = 4 lr = 8e-5 args = TrainingArguments( "outputs", learning_rate=lr, warmup_ratio=0.1, lr_scheduler_type="cosine", fp16=False, evaluation_strategy="epoch", per_device_train_batch_size=bs, per_device_eval_batch_size=bs * 2, num_train_epochs=epochs, weight_decay=0.01, report_to="none", ) model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=1) trainer = Trainer( model, args, train_dataset=dds["train"], eval_dataset=dds["test"], tokenizer=tokenizer, ) # This crashes trainer.train() ``` ### Expected behavior When running on Apple Sillicon Mac, the repro above crashes with MPS crash: ``` 2023-11-25 09:29:20.582 Python[15924:310024] Error getting visible function: (null) Function square_i64 was not found in the library /AppleInternal/Library/BuildRoots/495c257e-668e-11ee-93ce-926038f30c31/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShaders/MPSCore/Utility/MPSKernelDAG.mm:805: failed assertion `Error getting visible function: ``` EDIT: When running on CUDA, the error is `RuntimeError: "mse_cuda" not implemented for 'Long'`, see [this notebook](https://www.kaggle.com/przem8k/transformers-issue-27707-re-when-label-is-int/edit) after much head-banging I realized that the issue is just that in my sample data, labels are integers instead of floats. If integer labels are not supported, could this be validated and fail with a better error higher up the stack ? Thanks a lot for all the awesome work on transformers ๐Ÿฅณ, I'm having a lot of fun learning the library !
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27707/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27707/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27706
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27706/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27706/comments
https://api.github.com/repos/huggingface/transformers/issues/27706/events
https://github.com/huggingface/transformers/issues/27706
2,010,978,026
I_kwDOCUB6oc533Rbq
27,706
implement TemplateConstraints in class transformers.Constraint
{ "login": "MrzEsma", "id": 55921249, "node_id": "MDQ6VXNlcjU1OTIxMjQ5", "avatar_url": "https://avatars.githubusercontent.com/u/55921249?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MrzEsma", "html_url": "https://github.com/MrzEsma", "followers_url": "https://api.github.com/users/MrzEsma/followers", "following_url": "https://api.github.com/users/MrzEsma/following{/other_user}", "gists_url": "https://api.github.com/users/MrzEsma/gists{/gist_id}", "starred_url": "https://api.github.com/users/MrzEsma/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MrzEsma/subscriptions", "organizations_url": "https://api.github.com/users/MrzEsma/orgs", "repos_url": "https://api.github.com/users/MrzEsma/repos", "events_url": "https://api.github.com/users/MrzEsma/events{/privacy}", "received_events_url": "https://api.github.com/users/MrzEsma/received_events", "type": "User", "site_admin": false }
[ { "id": 2648621985, "node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request", "name": "Feature request", "color": "FBCA04", "default": false, "description": "Request for a new feature" } ]
open
false
null
[]
[ "Hey! This might be implemented if the community is found needing this feature, and if it makes sens in the overall design of the API so I can't give you any estimate of the time this might take!\r\nLeaving this issue open and pinging @gante ! ", "Hello @gante,\r\n\r\nI saw that @ArthurZucker brought you into the conversation about adding `TemplateConstraints`. I'm keen on this feature for its potential to enable function calling in open source LLMs. Could we discuss its feasibility?\r\n\r\n", "Hi @MrzEsma ๐Ÿ‘‹ \r\n\r\nIt is my understanding that constrained beam search is not very used, being limited mostly to experimental settings. It is also very expensive to maintain, so I want to avoid expanding it unless there is a significant demand. As such, I'm going to decline the offer for now :)\r\n\r\nI'll do my usual bargain: if this comment reaches 10 reactions, then it means other users have been looking for similar features. In that case, I'd be happy to revisit my decision!", "Still experimenting but had a similar need recently but I need to experiment a bit more to organize the thoughts.", "> Hi @MrzEsma ๐Ÿ‘‹\r\n> \r\n> It is my understanding that constrained beam search is not very used, being limited mostly to experimental settings. It is also very expensive to maintain, so I want to avoid expanding it unless there is a significant demand. As such, I'm going to decline the offer for now :)\r\n> \r\n> I'll do my usual bargain: if this comment reaches 10 reactions, then it means other users have been looking for similar features. In that case, I'd be happy to revisit my decision!\r\n\r\nLooks like we've hit the magic number 10 - time for our feature to take flight? :)", "Alright, I'll keep up my work -- happy to accept a PR if someone is willing to work on it :)" ]
1,700
1,704
null
NONE
null
### Feature request Hello, I recently came across your [blog post](https://huggingface.co/blog/constrained-beam-search) on constrained beam search and I am thrilled to see such implementations being made available. They are quite impressive! Currently, I find myself in need of the feature `TemplateConstraints`. It would greatly benefit my ongoing projects. Could you please provide an estimate on when this feature might be implemented, or if it is already in the pipeline? Thank you for your hard work and for providing the community with such valuable tools. I look forward to your response. Best regards, ### Motivation My motivation is that I want to implement something like function calling in ChatGPT and this feature opens many doors. ### Your contribution I don't know about it.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27706/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27706/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/27705
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27705/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27705/comments
https://api.github.com/repos/huggingface/transformers/issues/27705/events
https://github.com/huggingface/transformers/issues/27705
2,010,941,935
I_kwDOCUB6oc533Inv
27,705
Bug in the code of owlv2 algorithm
{ "login": "zhongwenkun886", "id": 42397285, "node_id": "MDQ6VXNlcjQyMzk3Mjg1", "avatar_url": "https://avatars.githubusercontent.com/u/42397285?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zhongwenkun886", "html_url": "https://github.com/zhongwenkun886", "followers_url": "https://api.github.com/users/zhongwenkun886/followers", "following_url": "https://api.github.com/users/zhongwenkun886/following{/other_user}", "gists_url": "https://api.github.com/users/zhongwenkun886/gists{/gist_id}", "starred_url": "https://api.github.com/users/zhongwenkun886/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zhongwenkun886/subscriptions", "organizations_url": "https://api.github.com/users/zhongwenkun886/orgs", "repos_url": "https://api.github.com/users/zhongwenkun886/repos", "events_url": "https://api.github.com/users/zhongwenkun886/events{/privacy}", "received_events_url": "https://api.github.com/users/zhongwenkun886/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "See #27205, which will be fixed by #27698. See also my [demo notebook](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/OWLv2/Zero_and_one_shot_object_detection_with_OWLv2.ipynb), you need to provide the `target_sizes` of the padded image rather than the original one for visualization.", "thanks" ]
1,700
1,701
1,701
NONE
null
### System Info python, linux ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction The preprocessing of the image is padding the image to a square image (padding to the right and bottom region), but it is not considered in post processing, so when the input image is not a square image, the location of the result is wrong. ### Expected behavior The preprocessing of the image is padding the image to a square image (padding to the right and bottom region), but it is not considered in post processing, so when the input image is not a square image, the location of the result is wrong.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27705/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27705/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27704
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27704/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27704/comments
https://api.github.com/repos/huggingface/transformers/issues/27704/events
https://github.com/huggingface/transformers/issues/27704
2,010,854,927
I_kwDOCUB6oc532zYP
27,704
Stopping criteria does not work for Llama-2-13B
{ "login": "Eichhof", "id": 6844011, "node_id": "MDQ6VXNlcjY4NDQwMTE=", "avatar_url": "https://avatars.githubusercontent.com/u/6844011?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Eichhof", "html_url": "https://github.com/Eichhof", "followers_url": "https://api.github.com/users/Eichhof/followers", "following_url": "https://api.github.com/users/Eichhof/following{/other_user}", "gists_url": "https://api.github.com/users/Eichhof/gists{/gist_id}", "starred_url": "https://api.github.com/users/Eichhof/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Eichhof/subscriptions", "organizations_url": "https://api.github.com/users/Eichhof/orgs", "repos_url": "https://api.github.com/users/Eichhof/repos", "events_url": "https://api.github.com/users/Eichhof/events{/privacy}", "received_events_url": "https://api.github.com/users/Eichhof/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey! ๐Ÿค— \r\nI don't have access to `StoppingCriteriaSub` (missing form the reproducer) but this is very similar to #23852, and #26959, which most probably has the answers you are looking for. \r\nNow what you need to check thoroughly is not the strings that are decoder, but the token ids that you feed to logit processor. You should be using `tokenizer.convert_tokens_to_ids` to check if these are indeed tokens or not, then you should make sure you encode the raw tokens without the prefix space this is added to the tokens. For this we'll add a `add_prefix_space` option that you can set to `False` soon, in the meantime you should just use `[tokenizer._tokenize(word) for word in subword]`", "Dear Arthur\r\nThank you for your response. So I'm using `###` to separate turns in a conversation. I will check if `###` is a single token as you proposed. What is `add_prefix_space` exactly doing? Currently I'm separating turns as follows: `I'm feeling good, how about you?### Human: I'm also feeling good.### Chatbot: That's good.` So there is no white space before `###` but a white space after. Is that good or should I also add one white space before?", "Sentencepiece based tokenizers like Llama or T5 always add a prefix space to the input tokens. This means that when you are trying to get the encoding for `###` you are actually getting the encoding for ` ###` which is why it does not stop. ", "So is it better to have also a white space before `###` in my training data? This mean `I'm feeling good, how about you? ### Human: I'm also feeling good. ### Chatbot: That's good.` instead of `I'm feeling good, how about you?### Human: I'm also feeling good.### Chatbot: That's good.`\r\n\r\nYou proposed to use `[tokenizer._tokenize(word) for word in subword]`. How and where in my code (first post) should I use this?", "No no, it's better to just use `tokenizer._tokenize` for a slow tokenizer instead of `tokenizer.tokenize` to get the actual tokens. If `' #'` is encoded as `' ','#'` then you are good, otherwise `' #'` can be tokenized a `' #'` which is a token itself", "I'm a bit confused. The follwing response `Would you like to chat about something interesting?### Human: Yes please.` gets encoded by the model as following:\r\n\r\n```\r\nx = tokenizer._tokenize('Would you like to chat about something interesting?### Human: Yes please.')\r\nprint(x)\r\ny = tokenizer.convert_tokens_to_ids(x)\r\nprint(y)\r\n['W', 'ould', 'โ–you', 'โ–like', 'โ–to', 'โ–chat', 'โ–about', 'โ–something', 'โ–interesting', '?', '##', '#', 'โ–Human', ':', 'โ–Yes', 'โ–please', '.']\r\n[29956, 483, 366, 763, 304, 13563, 1048, 1554, 8031, 29973, 2277, 29937, 12968, 29901, 3869, 3113, 29889]\r\n```\r\n\r\nI have tried to add `model.config.eos_token_id = 2277`. I also tried to use the following stopping criteria:\r\n\r\n```\r\nfrom transformers import StoppingCriteria\r\nclass EosListStoppingCriteria(StoppingCriteria):\r\n def __init__(self, eos_sequence=[2277, 29937]):\r\n self.eos_sequence = eos_sequence\r\n\r\n def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:\r\n last_ids = input_ids[:, -len(self.eos_sequence):].tolist()\r\n return self.eos_sequence in last_ids\r\n generation_config = GenerationConfig( ... 
stopping_criteria=[EosListStoppingCriteria()])\r\n prompt = tokenizer(text, return_tensors='pt', truncation=\"only_first\", max_length=4096)\r\n prompt = {key: value.to(\"cuda\") for key, value in prompt.items()}\r\n out = model.generate(**prompt, generation_config=generation_config)\r\n res = tokenizer.decode(out[0])\r\n```\r\n\r\nBoth approaches do not work and the model is not stopping producing output at `###`.", "You need to account for all possible token combinations or juste check the ids that are generated by the model. \r\nDoes it not stop when you set ` 2277, 29937` in the custom logits processor on the linked issues?", "I have found the error:\r\n\r\nThe following works:\r\n\r\n```\r\n generation_config = GenerationConfig(\r\n min_length=self.min_length,\r\n max_new_tokens=max_new_tokens,\r\n do_sample=True,\r\n top_k=top_k,\r\n top_p=top_p,\r\n temperature=temperature,\r\n repetition_penalty=1.1,\r\n no_repeat_ngram_size=no_repeat_ngram_size,\r\n use_cache=True,\r\n pad_token_id=self.tokenizer.eos_token_id,\r\n max_time=5.0\r\n )\r\nout = self.model.generate(**prompt, generation_config=generation_config, stopping_criteria=[EosListStoppingCriteria()])\r\n```\r\n\r\nBut when I provide the `stopping_criteria` as part of the `GenerationConfig `it does not stop anymore. Should I not use the `GenerationConfig ` and provide all parameters directly in the generate method? I am now unsure if the parameters set in `GenerationConfig` are used at all or if I was using default values (and did not recognize it).", "Hi @Eichhof ๐Ÿ‘‹ Thank you for opening this issue\r\n\r\n`stopping_criteria` is not part of `GenerationConfig` and should be passed separately :) \r\n\r\nThe issue on our end is the lack of an informative exception, which would have enabled you to catch and fix the issue immediately! I will open a PR that will catch these sorts of issues ๐Ÿค— " ]
1,700
1,701
1,701
NONE
null
### System Info - `transformers` version: 4.35.0 - Platform: Linux-5.15.0-89-generic-x86_64-with-glibc2.35 - Python version: 3.9.0 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.4.0 - Accelerate version: 0.24.1 - Accelerate config: not found - PyTorch version (GPU?): 2.1.0+cu121 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help? @ArthurZucker @younesbelkada @gante @ArthurZucker ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I'm using LLama-2 13B with the following stopping criteria: ``` stop_words = ["Human:", "Chatbot:", "###"] stop_words_ids = [tokenizer(stop_word, return_tensors='pt')['input_ids'].squeeze() for stop_word in stop_words] stopping_criteria = StoppingCriteriaList([StoppingCriteriaSub(stops=stop_words_ids)]) generation_config = GenerationConfig( ... stopping_criteria=stopping_criteria ) prompt = tokenizer(text, return_tensors='pt', truncation="only_first", max_length=4096) prompt = {key: value.to("cuda") for key, value in prompt.items()} out = model.generate(**prompt, generation_config=generation_config) res = tokenizer.decode(out[0]) ``` The model does not stop at the provided stop words. For example, if I have a response of the model `I'm feeling good, how about you?### Human: I'm also feeling good.### Chatbot: That's good.` the model should stop generating at the first `###`. Why does this not work and how can this be fixed? I have fine-tuned the model (with Axolotl) on a dataset so that the model produces responses as shown above. ### Expected behavior The model should stop producing output at the first occurrence of a stop word.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27704/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27704/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27703
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27703/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27703/comments
https://api.github.com/repos/huggingface/transformers/issues/27703/events
https://github.com/huggingface/transformers/issues/27703
2,010,735,754
I_kwDOCUB6oc532WSK
27,703
speech recognition with speecht5
{ "login": "poojitharamachandra", "id": 39840406, "node_id": "MDQ6VXNlcjM5ODQwNDA2", "avatar_url": "https://avatars.githubusercontent.com/u/39840406?v=4", "gravatar_id": "", "url": "https://api.github.com/users/poojitharamachandra", "html_url": "https://github.com/poojitharamachandra", "followers_url": "https://api.github.com/users/poojitharamachandra/followers", "following_url": "https://api.github.com/users/poojitharamachandra/following{/other_user}", "gists_url": "https://api.github.com/users/poojitharamachandra/gists{/gist_id}", "starred_url": "https://api.github.com/users/poojitharamachandra/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/poojitharamachandra/subscriptions", "organizations_url": "https://api.github.com/users/poojitharamachandra/orgs", "repos_url": "https://api.github.com/users/poojitharamachandra/repos", "events_url": "https://api.github.com/users/poojitharamachandra/events{/privacy}", "received_events_url": "https://api.github.com/users/poojitharamachandra/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey! Seems like you are not using an officially shared snippet / an external library (call to `sd.rec`) to make sure we can help you, would you mind sharing the full snippet? \r\n", "```python\r\nimport sounddevice as sd\r\nfrom transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5ForSpeechToText\r\n\r\nprocessor = SpeechT5Processor.from_pretrained(\"microsoftt5_tts\")\r\nmodel = SpeechT5ForSpeechToText.from_pretrained(\"microsoftt5_tts\")\r\n\r\nduration = 10\r\nsampling_rate = 16000\r\naudio = sd.rec(int(sampling_rate * duration), samplerate=sampling_rate, channels=1)\r\ninput_features = processor(audio=audio,sampling_rate=sampling_rate, return_tensors=\"pt\")\r\nwith torch.no_grad():\r\n output = model(**input_features)\r\ndecoded_text = processor.decode(output, skip_special_tokens=True)\r\n```", "Thanks, but this does not run. \r\n```\r\nOSError: microsoftt5_tts is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'\r\nIf this is a private repository, make sure to pass a token having permission to this repo either by logging in with `huggingface-cli login` or by passing `token=<your_token>`\r\n```\r\nI could of course check online to see what are the closest names but the point of a reproducer is that I can reproduce. \r\n\r\nWould recommend you to make sure the shape you are feeding to the processor is correct. \r\nHere is what the doc mentions:\r\n\r\n> The sequence or batch of sequences to be processed. Each sequence can be a numpy array, a list of float values, a list of numpy arrays or a list of list of float values. This outputs waveform features. Must mono channel audio, not stereo, i.e. single float per timestep.\r\n\r\nHere is an example of a working snippet: https://github.com/huggingface/transformers/blob/d8e1ed17ee7e640a1d5ba999345c71d4039a5a34/tests/models/speecht5/test_modeling_speecht5.py#L766 ", "do u have any suggestions on how to convert .wav file to numpy array suitable for the model?", "The automatic speech recognition pipeline supports passing wav files (as path to file) and uses `ffmpeg` see [here](https://github.com/younesbelkada/transformers/blob/ff3ae4e11eee4a4d695dd5937324a524cc29d092/src/transformers/pipelines/automatic_speech_recognition.py#L423). An snippet is available [here](https://github.com/younesbelkada/transformers/blob/ff3ae4e11eee4a4d695dd5937324a524cc29d092/docs/source/en/pipeline_tutorial.md#L171)", "It looks like you are loading the TTS model, but trying to perform ASR. Here's a code snippet for running inference with the ASR model: https://huggingface.co/microsoft/speecht5_asr#how-to-get-started-with-the-model\r\n\r\nOr with the pipeline:\r\n```python\r\nfrom transformers import pipeline\r\n\r\npipe = pipeline(\"automatic-speech-recognition\", model=\"microsoft/speecht5_asr\")\r\npipe(\"https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/1.flac\") # replace input with the path to your audio\r\n```", "> It looks like you are loading the TTS model, but trying to perform ASR. 
Here's a code snippet for running inference with the ASR model: https://huggingface.co/microsoft/speecht5_asr#how-to-get-started-with-the-model\r\n> \r\n> Or with the pipeline:\r\n> \r\n> ```python\r\n> from transformers import pipeline\r\n> \r\n> pipe = pipeline(\"automatic-speech-recognition\", model=\"microsoft/speecht5_asr\")\r\n> pipe(\"https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/1.flac\") # replace input with the path to your audio\r\n> ```\r\n\r\nthis creates too much noise in the generated text", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,700
1,704
1,704
NONE
null
### System Info ```python processor = SpeechT5Processor.from_pretrained("microsoftt5_tts") model = SpeechT5ForSpeechToText.from_pretrained("microsoftt5_tts") duration = 10 sampling_rate = 16000 audio = sd.rec(int(sampling_rate * duration), samplerate=sampling_rate, channels=1) input_features = processor(audio=audio,sampling_rate=sampling_rate, return_tensors="pt") with torch.no_grad(): output = model(**input_features) decoded_text = processor.decode(output, skip_special_tokens=True) ---------------> output = model(**input_features) RuntimeError: Calculated padded input size per channel: (1). Kernel size: (10). Kernel size can't be greater than actual input size ``` how can i solve this error? @sanchit-gandhi ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction run the above code snippet ### Expected behavior expected to convert speech to text
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27703/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27703/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27702
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27702/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27702/comments
https://api.github.com/repos/huggingface/transformers/issues/27702/events
https://github.com/huggingface/transformers/issues/27702
2,010,524,705
I_kwDOCUB6oc531iwh
27,702
Issue with Fine-tuning LLM for Classification
{ "login": "NickL77", "id": 8673939, "node_id": "MDQ6VXNlcjg2NzM5Mzk=", "avatar_url": "https://avatars.githubusercontent.com/u/8673939?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NickL77", "html_url": "https://github.com/NickL77", "followers_url": "https://api.github.com/users/NickL77/followers", "following_url": "https://api.github.com/users/NickL77/following{/other_user}", "gists_url": "https://api.github.com/users/NickL77/gists{/gist_id}", "starred_url": "https://api.github.com/users/NickL77/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NickL77/subscriptions", "organizations_url": "https://api.github.com/users/NickL77/orgs", "repos_url": "https://api.github.com/users/NickL77/repos", "events_url": "https://api.github.com/users/NickL77/events{/privacy}", "received_events_url": "https://api.github.com/users/NickL77/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey ๐Ÿค— thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discuss.huggingface.co/) instead?\r\n\r\nRegarding quantization, you cannot train a fully quantized model it's just not supported (not supported by any library AFAIK) because it's way to unstable (gradient computation, overflows etc). @younesbelkada will probably explain it better than me when he comes back from holidays! \r\n\r\nI don't know which training script you are using but would recommend you to check [this](https://colab.research.google.com/drive/1jCkpikz0J2o20FBQmYmAGdiKmJGOMo-o?usp=sharing) from the `peft` library. \r\n\r\nThanks!", "Hi @NickL77 \r\nThanks a lot for the issue, as @ArthurZucker mentioned, you cannot perform pure fine-tuning on quantized models. To fully leverage quantization, you need to train adapters on top of the quantized models, using for example PEFT library. \r\nCheck out for example this documentation section: https://huggingface.co/docs/transformers/peft on how to train adapters using PEFT and transformers", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,700
1,704
1,704
NONE
null
### System Info ``` - `transformers` version: 4.34.0 - Platform: Linux-6.5.4-76060504-generic-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.19.4 - Safetensors version: 0.4.0 - Accelerate version: 0.23.0 - Accelerate config: not found - PyTorch version (GPU?): 2.1.0+cu121 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ``` ### Who can help? @younesbelkada @muellerzr @pacman100 @ArthurZucker ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I am trying to fine-tune Mistral 7B on a classification task. I've deduced I need to use `AutoModelForSequenceClassification` and load it via: ``` bnb_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_quant_type="nf4", bnb_4bit_use_double_quant=True, ) mistral_model = AutoModelForSequenceClassification.from_pretrained("mistralai/Mistral-7B-v0.1", quantization_config=bnb_config, load_in_4bit=True, device_map="auto") for name, param in mistral_model.named_parameters(): # Freeze all parameters if param.dtype in [torch.float16, torch.float32, torch.float64]: param.requires_grad = False # Unfreeze the last two layers in 'layers' and 'score' if name.startswith('model.layers') and (int(name.split('.')[2]) >= 30): param.requires_grad = True elif name.startswith('score'): param.requires_grad = True ``` When quantizing using bnb, I get the following error when running the `trainer`: ``` File "site-packages/transformers/trainer.py", line 412, in Trainer.__init__ if _is_quantized_and_base_model and not _is_peft_model: raise ValueError( "You cannot perform fine-tuning on purely quantized models. Please attach trainable adapters on top of" " the quantized model to correctly perform fine-tuning. Please see: https://huggingface.co/docs/transformers/peft" " for more details" ) ValueError: You cannot perform fine-tuning on purely quantized models. Please attach trainable adapters on top of the quantized model to correctly perform fine-tuning. Please see: https://huggingface.co/docs/transformers/peft for more details ``` I understand that it's best to fine-tune using a peft method, but here I'm freezing most layers and keeping a only a few layers trainable. Is there a reason it is not recommended/supported to do this? Or is there an alternative method? ----------------------------- If I load the model without quantizing via: ``` mistral_model = AutoModelForSequenceClassification.from_pretrained("mistralai/Mistral-7B-v0.1", device_map="auto") ``` I then get the following error: ``` File "site-packages/transformers/trainer.py", line 515, in Trainer.__init__ self._move_model_to_device(model, args.device) File "site-packages/transformers/trainer.py", line 739, in Trainer._move_model_to_device model = model.to(device) File "site-packages/accelerate/big_modeling.py", line 415, in <method-name> raise RuntimeError("You can't move a model that has some modules offloaded to cpu or disk.") RuntimeError: You can't move a model that has some modules offloaded to cpu or disk. ``` ### Expected behavior Ideally I can train the classifier with quantization, but I'm ok with using smaller batch sizes to get it working without quantization.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27702/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27702/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27701
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27701/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27701/comments
https://api.github.com/repos/huggingface/transformers/issues/27701/events
https://github.com/huggingface/transformers/issues/27701
2,010,509,759
I_kwDOCUB6oc531fG_
27,701
type object 'OPTDecoder' has no attribute '_prepare_decoder_attention_mask'.
{ "login": "muzi0111", "id": 151991120, "node_id": "U_kgDOCQ8zUA", "avatar_url": "https://avatars.githubusercontent.com/u/151991120?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muzi0111", "html_url": "https://github.com/muzi0111", "followers_url": "https://api.github.com/users/muzi0111/followers", "following_url": "https://api.github.com/users/muzi0111/following{/other_user}", "gists_url": "https://api.github.com/users/muzi0111/gists{/gist_id}", "starred_url": "https://api.github.com/users/muzi0111/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muzi0111/subscriptions", "organizations_url": "https://api.github.com/users/muzi0111/orgs", "repos_url": "https://api.github.com/users/muzi0111/repos", "events_url": "https://api.github.com/users/muzi0111/events{/privacy}", "received_events_url": "https://api.github.com/users/muzi0111/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey! I think that version 4.34.0 should work. This attribute was removed in #27086 and it was a private method so not a breaking change", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,700
1,704
1,704
NONE
null
I'm currently facing the following issue: I'm getting an AttributeError: type object 'OPTDecoder' has no attribute '_prepare_decoder_attention_mask'. Which version of Transformers should I install? The latest version does not seem to have this attribute.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27701/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27701/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27700
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27700/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27700/comments
https://api.github.com/repos/huggingface/transformers/issues/27700/events
https://github.com/huggingface/transformers/pull/27700
2,010,450,877
PR_kwDOCUB6oc5gWIvf
27,700
Fix precision errors from casting rotary parameters to FP16 with AMP
{ "login": "kevinhu", "id": 6051736, "node_id": "MDQ6VXNlcjYwNTE3MzY=", "avatar_url": "https://avatars.githubusercontent.com/u/6051736?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kevinhu", "html_url": "https://github.com/kevinhu", "followers_url": "https://api.github.com/users/kevinhu/followers", "following_url": "https://api.github.com/users/kevinhu/following{/other_user}", "gists_url": "https://api.github.com/users/kevinhu/gists{/gist_id}", "starred_url": "https://api.github.com/users/kevinhu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kevinhu/subscriptions", "organizations_url": "https://api.github.com/users/kevinhu/orgs", "repos_url": "https://api.github.com/users/kevinhu/repos", "events_url": "https://api.github.com/users/kevinhu/events{/privacy}", "received_events_url": "https://api.github.com/users/kevinhu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27700). All of your documentation changes will be reflected on that endpoint.", "FYI @gante and @Rocketknight1 if we see anything failing. I ran slow tests locally and it was all good" ]
1,700
1,701
1,701
CONTRIBUTOR
null
# What does this PR do? When training with AMP, using `einsum` to multiply `t` and `self.inv_freq` will introduce precision errors because it casts the result to FP16. This can be avoided by using `torch.outer` instead, as originally mentioned here: https://github.com/Dao-AILab/flash-attention/blob/2c3baba4a63c4007c8a132c5380edc9430f88a22/flash_attn/layers/rotary.py#L396C1-L398C45 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27700/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27700/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27700", "html_url": "https://github.com/huggingface/transformers/pull/27700", "diff_url": "https://github.com/huggingface/transformers/pull/27700.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27700.patch", "merged_at": 1701271850000 }
https://api.github.com/repos/huggingface/transformers/issues/27699
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27699/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27699/comments
https://api.github.com/repos/huggingface/transformers/issues/27699/events
https://github.com/huggingface/transformers/issues/27699
2,010,325,647
I_kwDOCUB6oc530yKP
27,699
TypeError: Llama.create_completion() got an unexpected keyword argument 'min_p'
{ "login": "mclassen", "id": 711016, "node_id": "MDQ6VXNlcjcxMTAxNg==", "avatar_url": "https://avatars.githubusercontent.com/u/711016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mclassen", "html_url": "https://github.com/mclassen", "followers_url": "https://api.github.com/users/mclassen/followers", "following_url": "https://api.github.com/users/mclassen/following{/other_user}", "gists_url": "https://api.github.com/users/mclassen/gists{/gist_id}", "starred_url": "https://api.github.com/users/mclassen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mclassen/subscriptions", "organizations_url": "https://api.github.com/users/mclassen/orgs", "repos_url": "https://api.github.com/users/mclassen/repos", "events_url": "https://api.github.com/users/mclassen/events{/privacy}", "received_events_url": "https://api.github.com/users/mclassen/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Me to, just getting started will follow ", "Run the set up again. when you updated there was a new requirements.txt. I had the same issue and running the setup again fixed it right up.", "I've also been getting this error for several days now. Tried resetting the pod, creating new pods with different gpus or different models. Always getting the same error.", "Hey! ๐Ÿค— seems like the issue is with `llama_cpp` rather than transformers ๐Ÿ˜… ", "The issue is the template installing the wrong requirement.\r\n\r\nhttps://github.com/TheBlokeAI/dockerLLM/issues/12", "Still broken. Cannot use GGUF with ooba.", "I encountered this error when running textgeneration-web-ui. \r\n\r\n1. Stop the server\r\n2. Run update_linux.sh\r\n3. Start the server\r\n\r\nFixed.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,700
1,705
1,705
NONE
null
### System Info I should have the latest version. When executing a query against a GGUF model, I get this error: Traceback (most recent call last): File "/workspace/text-generation-webui/modules/callbacks.py", line 57, in gentask ret = self.mfunc(callback=_callback, *args, **self.kwargs) File "/workspace/text-generation-webui/modules/llamacpp_model.py", line 141, in generate completion_chunks = self.model.create_completion( TypeError: Llama.create_completion() got an unexpected keyword argument 'min_p' ### Who can help? _No response_ ### Information - [x] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction 1. Create a standard TheBloke runpod template for TextGen WebUI 2. download TheBloke/Phind-CodeLlama-34B-v2-GGUF, doesn't really matter which quantized file specifically 3. any query should trigger the TypeError 4. the output is empty ### Expected behavior 1. There should be no error. 2. The output should show the result of the query.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27699/reactions", "total_count": 2, "+1": 1, "-1": 0, "laugh": 1, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27699/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27698
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27698/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27698/comments
https://api.github.com/repos/huggingface/transformers/issues/27698/events
https://github.com/huggingface/transformers/pull/27698
2,010,168,139
PR_kwDOCUB6oc5gVRQv
27,698
Fix owlv2 code snippet
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,700
1,701
1,701
CONTRIBUTOR
null
# What does this PR do? Fixes #27205
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27698/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27698/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27698", "html_url": "https://github.com/huggingface/transformers/pull/27698", "diff_url": "https://github.com/huggingface/transformers/pull/27698.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27698.patch", "merged_at": 1701098947000 }
https://api.github.com/repos/huggingface/transformers/issues/27697
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27697/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27697/comments
https://api.github.com/repos/huggingface/transformers/issues/27697/events
https://github.com/huggingface/transformers/issues/27697
2,010,076,389
I_kwDOCUB6oc53z1Tl
27,697
using SFT for finetuning Llama2, TypeError: TextEncodeInput must be Union[TextInputSequence, Tuple[InputSequence, InputSequence]]
{ "login": "Sosycs", "id": 6597399, "node_id": "MDQ6VXNlcjY1OTczOTk=", "avatar_url": "https://avatars.githubusercontent.com/u/6597399?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Sosycs", "html_url": "https://github.com/Sosycs", "followers_url": "https://api.github.com/users/Sosycs/followers", "following_url": "https://api.github.com/users/Sosycs/following{/other_user}", "gists_url": "https://api.github.com/users/Sosycs/gists{/gist_id}", "starred_url": "https://api.github.com/users/Sosycs/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Sosycs/subscriptions", "organizations_url": "https://api.github.com/users/Sosycs/orgs", "repos_url": "https://api.github.com/users/Sosycs/repos", "events_url": "https://api.github.com/users/Sosycs/events{/privacy}", "received_events_url": "https://api.github.com/users/Sosycs/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @Sosycs, I'm thoroughly impressed with your code's performance on my local machine. I strongly recommend reviewing the package version to ensure compatibility with the latest updates \r\n\r\n**Package-Version:**\r\n> *Transformers:* 4.36.0.dev0\r\n> *Trl :* 0.7.4\r\n\r\n**My-Code:**\r\n```\r\nfrom transformers.models.llama import LlamaTokenizerFast\r\nfrom trl import DataCollatorForCompletionOnlyLM\r\ntokenizer = LlamaTokenizerFast.from_pretrained(\"hf-internal-testing/llama-tokenizer\",)\r\nresponse_template = \"Answer: [/INST]\"\r\n\r\n# Work-around for context-sensitive tokenizers\r\nresponse_template_tokenized = tokenizer.encode(f\"\\n{response_template}\", add_special_tokens=False)[2:]\r\n\r\ncollator = DataCollatorForCompletionOnlyLM(response_template=response_template_tokenized , tokenizer=tokenizer)\r\n\r\nexample = \"\"\"<s>[INST] <<SYS>> Please select the correct answer from the given multiple Options based on the given Context: <</SYS>> \r\nContext: Abrasion is another type of mechanical weathering. With abrasion, one rock bumps against another rock. Gravity causes abrasion as a rock tumbles down a slope. Moving water causes abrasion it moves rocks so that they bump against one another (Figure 9.3). Strong winds cause abrasion by blasting sand against rock surfaces. Finally, the ice in glaciers cause abrasion. Pieces of rock embedded in ice at the bottom of a glacier scrape against the rock below. If you have ever collected beach glass or pebbles from a stream, you have witnessed the work of abrasion. \r\nQuestion: Gravity causes erosion by all of the following except Options:(A) glaciers (B) moving air (C) flowing water (D) mass movement \r\nAnswer: [/INST]\"\"\"\r\ntokenizer.add_special_tokens({'pad_token': '[PAD]'})\r\nexample_encoded = tokenizer(example)\r\nprint(collator([example_encoded]))\r\n```\r\n**Output:**\r\n```\r\nYou're using a LlamaTokenizerFast tokenizer. 
Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.\r\n{'input_ids': tensor([[ 1, 1, 29961, 25580, 29962, 3532, 14816, 29903, 6778, 3529,\r\n 1831, 278, 1959, 1234, 515, 278, 2183, 2999, 25186, 2729,\r\n 373, 278, 2183, 15228, 29901, 529, 829, 14816, 29903, 6778,\r\n 29871, 13, 2677, 29901, 27782, 7002, 338, 1790, 1134, 310,\r\n 28310, 14826, 292, 29889, 2973, 633, 3417, 291, 29892, 697,\r\n 7679, 289, 17204, 2750, 1790, 7679, 29889, 4989, 17037, 9946,\r\n 633, 3417, 291, 408, 263, 7679, 260, 3774, 793, 1623,\r\n 263, 24968, 29889, 14104, 292, 4094, 9946, 633, 3417, 291,\r\n 372, 16229, 23150, 577, 393, 896, 289, 3427, 2750, 697,\r\n 1790, 313, 13080, 545, 29871, 29929, 29889, 29941, 467, 3767,\r\n 549, 8805, 29879, 4556, 633, 3417, 291, 491, 1999, 579,\r\n 292, 11982, 2750, 7679, 28001, 29889, 9788, 29892, 278, 14890,\r\n 297, 14751, 455, 414, 4556, 633, 3417, 291, 29889, 26005,\r\n 778, 310, 7679, 15685, 297, 14890, 472, 278, 5970, 310,\r\n 263, 14751, 13241, 24559, 412, 2750, 278, 7679, 2400, 29889,\r\n 960, 366, 505, 3926, 16531, 25695, 12917, 470, 282, 774,\r\n 7586, 515, 263, 4840, 29892, 366, 505, 16277, 287, 278,\r\n 664, 310, 633, 3417, 291, 29889, 29871, 13, 16492, 29901,\r\n 4989, 17037, 9946, 604, 359, 291, 491, 599, 310, 278,\r\n 1494, 5174, 25186, 5919, 29909, 29897, 14751, 455, 414, 313,\r\n 29933, 29897, 8401, 4799, 313, 29907, 29897, 4972, 292, 4094,\r\n 313, 29928, 29897, 4158, 10298, 29871, 13, 22550, 29901, 518,\r\n 29914, 25580, 29962]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1]]), 'labels': tensor([[-100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,\r\n -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,\r\n -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,\r\n -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,\r\n -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,\r\n -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,\r\n -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,\r\n -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,\r\n -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,\r\n -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,\r\n -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,\r\n -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,\r\n -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,\r\n -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,\r\n -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,\r\n -100, -100, -100, -100, -100, -100, -100, 
-100, -100, -100, -100, -100,\r\n -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,\r\n -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,\r\n -100, -100, -100, -100, -100, -100, -100]])}\r\n```\r\n\r\n\r\n", "Thank you very much @hi-sushanta. updating the libraries worked!\r\nbut I have another question regarding the labels.\r\nwhen I use the example as:\r\n`example = \"\"\"<s>[INST] <<SYS>> Please select the correct answer from the given multiple Options based on the given Context: <</SYS>> Context: Oceanography is the study of the oceans. The word oceanology might be more accurate, since ology is the study of. Graph is to write and refers to map making. But mapping the oceans is how oceanography started. More than 70% of Earths surface is covered with water. Almost all of that water is in the oceans. Scientists have visited the deepest parts of the ocean in submarines. Remote vehicles go where humans cant. Yet much of the ocean remains unexplored. Some people call the ocean the last frontier. Humans have had a big impact on the oceans. Populations of fish and other marine species have been overfished. Contaminants are polluting the waters. Global warming is melting the thick ice caps and warming the water. Warmer water expands and, along with water from the melting ice caps, causes sea levels to rise. There are many branches of oceanography. Physical oceanography is the study of water movement, like waves and ocean currents (Figure 1.13). Marine geology looks at rocks and structures in the ocean basins. Chemical oceanography studies the natural elements in ocean water. Marine biology looks at marine life. Question: Chemical oceanography is the study of the Options:(A) human pollution of ocean water (B) naturally occurring elements in ocean water (C) rising levels of ocean water (D) rocks on the ocean floor Answer: [/INST] B </s>\"\"\"\r\n`\r\n\r\nthe labels are all -100 no matter what comes after the response_template. while in the origional code it works if I add the answer after the response_template. Do I need to rewrite the example in a different way?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,700
1,704
1,704
NONE
null
Hello, I am experiencing the following error: ``` You're using a LlamaTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding. --------------------------------------------------------------------------- TypeError Traceback (most recent call last) [<ipython-input-35-bf084d2d746f>](https://localhost:8080/#) in <cell line: 15>() 13 example_encoded = tokenizer(example) 14 ---> 15 collator([example_encoded]) 5 frames [/usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils_fast.py](https://localhost:8080/#) in _batch_encode_plus(self, batch_text_or_text_pairs, add_special_tokens, padding_strategy, truncation_strategy, max_length, stride, is_split_into_words, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose) 423 ) 424 --> 425 encodings = self._tokenizer.encode_batch( 426 batch_text_or_text_pairs, 427 add_special_tokens=add_special_tokens, TypeError: TextEncodeInput must be Union[TextInputSequence, Tuple[InputSequence, InputSequence]] ``` My Code is: ``` response_template = "Answer: [/INST]" # Work-around for context-sensitive tokenizers response_template_tokenized = tokenizer.encode(f"\n{response_template}", add_special_tokens=False)[2:] collator = DataCollatorForCompletionOnlyLM(response_template=response_template_tokenized , tokenizer=tokenizer) example = """<s>[INST] <<SYS>> Please select the correct answer from the given multiple Options based on the given Context: <</SYS>> Context: Abrasion is another type of mechanical weathering. With abrasion, one rock bumps against another rock. Gravity causes abrasion as a rock tumbles down a slope. Moving water causes abrasion it moves rocks so that they bump against one another (Figure 9.3). Strong winds cause abrasion by blasting sand against rock surfaces. Finally, the ice in glaciers cause abrasion. Pieces of rock embedded in ice at the bottom of a glacier scrape against the rock below. If you have ever collected beach glass or pebbles from a stream, you have witnessed the work of abrasion. Question: Gravity causes erosion by all of the following except Options:(A) glaciers (B) moving air (C) flowing water (D) mass movement Answer: [/INST]""" example_encoded = tokenizer(example) collator([example_encoded]) ``` I have tried encode plus and splitting by "\n" before tokenizing but did not solve the error.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27697/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27697/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27696
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27696/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27696/comments
https://api.github.com/repos/huggingface/transformers/issues/27696/events
https://github.com/huggingface/transformers/pull/27696
2,010,011,896
PR_kwDOCUB6oc5gUuoL
27,696
Fix Past CI
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,700
1,701
1,701
COLLABORATOR
null
# What does this PR do? Since mid-November, we have hundreds of following failures in each past CI > (line 1108) NotImplementedError: Loading a dataset cached in a LocalFileSystem is not supported. It is caused by `fsspec==2023.10.0` with an old `datasets`. This PR just updates `datasets` (at CI runtime) to avoid, and fix thousand failures in total in past CI 🚀 🤣
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27696/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27696/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27696", "html_url": "https://github.com/huggingface/transformers/pull/27696", "diff_url": "https://github.com/huggingface/transformers/pull/27696.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27696.patch", "merged_at": 1701072719000 }
https://api.github.com/repos/huggingface/transformers/issues/27695
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27695/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27695/comments
https://api.github.com/repos/huggingface/transformers/issues/27695/events
https://github.com/huggingface/transformers/pull/27695
2,009,945,094
PR_kwDOCUB6oc5gUgDr
27,695
Fix `TVPModelTest`
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,700
1,700
1,700
COLLABORATOR
null
# What does this PR do? Just device issue.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27695/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27695/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27695", "html_url": "https://github.com/huggingface/transformers/pull/27695", "diff_url": "https://github.com/huggingface/transformers/pull/27695.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27695.patch", "merged_at": 1700851671000 }
https://api.github.com/repos/huggingface/transformers/issues/27694
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27694/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27694/comments
https://api.github.com/repos/huggingface/transformers/issues/27694/events
https://github.com/huggingface/transformers/pull/27694
2,009,916,536
PR_kwDOCUB6oc5gUZ6O
27,694
Introduce SegGPT model
{ "login": "raghavanone", "id": 115454562, "node_id": "U_kgDOBuGyYg", "avatar_url": "https://avatars.githubusercontent.com/u/115454562?v=4", "gravatar_id": "", "url": "https://api.github.com/users/raghavanone", "html_url": "https://github.com/raghavanone", "followers_url": "https://api.github.com/users/raghavanone/followers", "following_url": "https://api.github.com/users/raghavanone/following{/other_user}", "gists_url": "https://api.github.com/users/raghavanone/gists{/gist_id}", "starred_url": "https://api.github.com/users/raghavanone/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/raghavanone/subscriptions", "organizations_url": "https://api.github.com/users/raghavanone/orgs", "repos_url": "https://api.github.com/users/raghavanone/repos", "events_url": "https://api.github.com/users/raghavanone/events{/privacy}", "received_events_url": "https://api.github.com/users/raghavanone/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @amyeroberts and @niels", "Hi @raghavanone - thanks for opening this PR! Let us know when it's ready for review ", "@raghavanone Hey, I also was doing the implementation of `SegGPT` to add to the library (you can find my work here https://github.com/huggingface/transformers/pull/27735) I believe my current implementation follows a bit more the library standards (@NielsRogge if you could take a look at my PR to make sure my claim is right). If you want to we can collaborate on my PR", "@EduardoPach my implementations is close to completion,the test pass and conversion script is also complete.I don't think it is wise to abandon this." ]
1,700
1,701
1,701
CONTRIBUTOR
null
Introduce SegGPT model. #27514
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27694/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27694/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27694", "html_url": "https://github.com/huggingface/transformers/pull/27694", "diff_url": "https://github.com/huggingface/transformers/pull/27694.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27694.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/27693
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27693/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27693/comments
https://api.github.com/repos/huggingface/transformers/issues/27693/events
https://github.com/huggingface/transformers/pull/27693
2,009,819,879
PR_kwDOCUB6oc5gUE04
27,693
Trigger corresponding pipeline tests if `tests/utils/tiny_model_summary.json` is modified
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,700
1,701
1,701
COLLABORATOR
null
# What does this PR do? Trigger corresponding pipeline tests if `tests/utils/tiny_model_summary.json` is modified. It happened several times that we merged a PR and pipeline testing failed, because the CI triggered on those PRs didn't cover all files (as `tests/utils/tiny_model_summary.json` is not a python file)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27693/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27693/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27693", "html_url": "https://github.com/huggingface/transformers/pull/27693", "diff_url": "https://github.com/huggingface/transformers/pull/27693.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27693.patch", "merged_at": 1701188481000 }
https://api.github.com/repos/huggingface/transformers/issues/27692
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27692/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27692/comments
https://api.github.com/repos/huggingface/transformers/issues/27692/events
https://github.com/huggingface/transformers/issues/27692
2,009,677,525
I_kwDOCUB6oc53yT7V
27,692
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0! (when checking argument for argument weight in method wrapper_CUDA__native_layer_norm)
{ "login": "nirdoshrawal009", "id": 102213412, "node_id": "U_kgDOBhenJA", "avatar_url": "https://avatars.githubusercontent.com/u/102213412?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nirdoshrawal009", "html_url": "https://github.com/nirdoshrawal009", "followers_url": "https://api.github.com/users/nirdoshrawal009/followers", "following_url": "https://api.github.com/users/nirdoshrawal009/following{/other_user}", "gists_url": "https://api.github.com/users/nirdoshrawal009/gists{/gist_id}", "starred_url": "https://api.github.com/users/nirdoshrawal009/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nirdoshrawal009/subscriptions", "organizations_url": "https://api.github.com/users/nirdoshrawal009/orgs", "repos_url": "https://api.github.com/users/nirdoshrawal009/repos", "events_url": "https://api.github.com/users/nirdoshrawal009/events{/privacy}", "received_events_url": "https://api.github.com/users/nirdoshrawal009/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @nirdoshrawal009, can you provide a minimal reproducer ? Thanks ", "Meet the same problem, here is a minimal reproducer of mine:\r\n\r\n```python\r\nfrom transformers import pipeline\r\n\r\n# In my case, audio is passed through HTTP requests\r\ndef process(audio):\r\n\tpipe = pipeline(\"automatic-speech-recognition\", model='openai/whisper-large-v2', device_map='auto')\r\n\treturn pipe(audio)\r\n```\r\n\r\ndependency version:\r\n\r\n- transformers: 4.32.1\r\n- accelerate: 0.24.0", "Have you solved this problem? \r\n\r\n> ### System Info\r\n> I want to finetune Falcon 7b models using SFTTrainer from Transformers library. I have set the device_map = 'auto' while loading the model and cuda_visible_devices = '0,1' But getting this error.\r\n> \r\n> ### Who can help?\r\n> _No response_\r\n> \r\n> ### Information\r\n> * [ ] The official example scripts\r\n> * [ ] My own modified scripts\r\n> \r\n> ### Tasks\r\n> * [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\r\n> * [ ] My own task or dataset (give details below)\r\n> \r\n> ### Reproduction\r\n> RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0! (when checking argument for argument weight in method wrapper_CUDA__native_layer_norm)\r\n> \r\n> ### Expected behavior\r\n> I want to finetune Falcon 7b models using SFTTrainer from Transformers library. I have set the device_map = 'auto' while loading the model and cuda_visible_devices = '0,1,2,3' But getting this error.\r\n\r\n", "Hi @jrt-20, i can't fix the issue since i am unable to reproduce the error. If you are willing to give me a minimal reproducer, I can help you ! ", "> Hi @jrt-20, i can't fix the issue since i am unable to reproduce the error. If you are willing to give me a minimal reproducer, I can help you !\r\nMany thanks for your kind and warm help๏ผŒthe code and the log are as follows:\r\n```python\r\nfrom transformers import GPT2Tokenizer, GPT2Model\r\n\r\ntokenizer = GPT2Tokenizer.from_pretrained('gpt2')\r\nmodel = GPT2Model.from_pretrained(\"gpt2\",device_map='auto').to(\"cuda\")\r\ntext = \"Replace me by any text you'd like.\"\r\nencoded_input = tokenizer(text, return_tensors='pt')\r\nencoded_input = encoded_input.to(\"cuda\")\r\noutput = model(**encoded_input)\r\nprint(output)\r\n```\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"test.py\", line 8, in <module>\r\n output = model(**encoded_input)\r\n File \"/root/miniconda3/envs/sgp_jrt/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 1194, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/root/miniconda3/envs/sgp_jrt/lib/python3.7/site-packages/accelerate/hooks.py\", line 165, in new_forward\r\n output = old_forward(*args, **kwargs)\r\n File \"/root/miniconda3/envs/sgp_jrt/lib/python3.7/site-packages/transformers/models/gpt2/modeling_gpt2.py\", line 846, in forward\r\n inputs_embeds = self.wte(input_ids)\r\n File \"/root/miniconda3/envs/sgp_jrt/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 1194, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/root/miniconda3/envs/sgp_jrt/lib/python3.7/site-packages/accelerate/hooks.py\", line 165, in new_forward\r\n output = old_forward(*args, **kwargs)\r\n File \"/root/miniconda3/envs/sgp_jrt/lib/python3.7/site-packages/torch/nn/modules/sparse.py\", line 162, in forward\r\n self.norm_type, self.scale_grad_by_freq, self.sparse)\r\n File 
\"/root/miniconda3/envs/sgp_jrt/lib/python3.7/site-packages/torch/nn/functional.py\", line 2210, in embedding\r\n return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)\r\nRuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:7! (when checking argument for argument index in method wrapper__index_select)\r\n```", "Hi @jrt-20, you should move a model when you load it with device_map=\"auto\" because splitting the model across all available gpus. Either do `model = GPT2Model.from_pretrained(\"gpt2\", device_map='auto')` or `model = GPT2Model.from_pretrained(\"gpt2\", device_map='cuda')`. I think that you should have received a warning in the log. ", "lgtm,I have solved this problem", "Glad that you solved the problem ! Closing the issue. " ]
1,700
1,703
1,703
NONE
null
### System Info I want to finetune Falcon 7b models using SFTTrainer from Transformers library. I have set the device_map = 'auto' while loading the model and cuda_visible_devices = '0,1' But getting this error. ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0! (when checking argument for argument weight in method wrapper_CUDA__native_layer_norm) ### Expected behavior I want to finetune Falcon 7b models using SFTTrainer from Transformers library. I have set the device_map = 'auto' while loading the model and cuda_visible_devices = '0,1,2,3' But getting this error.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27692/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27692/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27691
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27691/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27691/comments
https://api.github.com/repos/huggingface/transformers/issues/27691/events
https://github.com/huggingface/transformers/pull/27691
2,009,497,162
PR_kwDOCUB6oc5gS-NX
27,691
Reorder the code on the Hub to explicit that sharing on the Hub isn't a requirement
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,700
1,701
1,701
MEMBER
null
cc @julien-c @gary149
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27691/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27691/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27691", "html_url": "https://github.com/huggingface/transformers/pull/27691", "diff_url": "https://github.com/huggingface/transformers/pull/27691.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27691.patch", "merged_at": 1701074299000 }
https://api.github.com/repos/huggingface/transformers/issues/27690
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27690/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27690/comments
https://api.github.com/repos/huggingface/transformers/issues/27690/events
https://github.com/huggingface/transformers/pull/27690
2,009,488,459
PR_kwDOCUB6oc5gS8Ut
27,690
Make image processors more general
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27690). All of your documentation changes will be reflected on that endpoint.", "Thanks for opening this PR! \r\n\r\n>Ideally it should also allow for the following (cc @amyeroberts)\r\n> \r\n> size = {\"longest_edge\": ...}\r\n> size = {\"shortest_edge\": ..., \"longest_edge\": ...}\r\n\r\nAs discussed on slack, enabling all image processors to accept `{\"height\": h, \"width\": w}` is a change I'm very much pro. Enabling all to accept `{\"shortest_edge\": x, \"longest_edge\": y}` isn't something I think we should as this behaviour isn't well-defined and varies between models. ", "cc @ydshieh this is failing for KOSMOS-2 - it would be great to revert that change for KOSMOS-2 (i.e. remove the \"use_square_size\" attribute) since this is introduced for legacy behaviour of `size` (which used to be an integer and is now a dictionary).", "It did but it's also very convenient not to have to use copied from when the only difference was this, so reverting IMO is not the best solution. Let's rather adapt / make it compatible ", "Reverting #26965 means we well have to add `Kosmos2ImageProcessor` (while currently it is `CLIPImageProcessor` with `use_square_size=True`). Probably it will go without problem, but still kind of breaking changes.", "@ydshieh would it also be possible to just set size to `{\"height\": ..., \"width\" ...}`? With this PR, image processors are made more general, making sure square sizes are supported", "Hi, could you give more details on\r\n\r\n> set size to {\"height\": ..., \"width\" ...}\r\n\r\n(where should I set this, for example.)\r\n\r\nDo you mean on the config file?", "Yes I meant in the `preprocessor_config.json`. Given that the model is only one month old, we could still update them. Alternatively, I've added a backwards compatibility in this PR, only for `CLIPImageProcessor`. Let me know what you think works best.", "We can keep your backwards compatibility code, and I could try to open Hub PRs to update them.\r\nLet's also hear what the core maintainers say of course.", "It's required to merge #27718 ", "Merge now as full (slow) CI on 4 models touched by this PR pass", "cc @younesbelkada we have a regression on this " ]
1,700
1,701
1,701
CONTRIBUTOR
null
# What does this PR do? This PR undoes https://github.com/huggingface/transformers/pull/26965 and instead makes image processors more general, by removing the hardcoded "shortest_side" arguments, allowing to pass the following to image processors: ``` size = {"height": ..., "width": ...} size = {"shortest_edge": ...} ``` Ideally it should also allow for the following (cc @amyeroberts) ```: size = {"longest_edge": ...} size = {"shortest_edge": ..., "longest_edge": ...} ``` Image processors should not be limited to only `{"shortest_edge": ...}` for instance, as `CLIPImageProcessor` is at the moment. The `size` argument is now always a dictionary containing [one of these 4 possibilities](https://github.com/huggingface/transformers/blob/7293fdc5b9cc809c2aa2ceb84f903ad47e5c06f0/src/transformers/image_processing_utils.py#L663); hence it would be great to support them all.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27690/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27690/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27690", "html_url": "https://github.com/huggingface/transformers/pull/27690", "diff_url": "https://github.com/huggingface/transformers/pull/27690.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27690.patch", "merged_at": 1701769539000 }
https://api.github.com/repos/huggingface/transformers/issues/27689
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27689/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27689/comments
https://api.github.com/repos/huggingface/transformers/issues/27689/events
https://github.com/huggingface/transformers/pull/27689
2,009,430,217
PR_kwDOCUB6oc5gSvrw
27,689
fix warning
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,700
1,701
1,701
COLLABORATOR
null
# What does this PR do? Reverts a change from #27519 to fix #27678 cc @Rocketknight1
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27689/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27689/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27689", "html_url": "https://github.com/huggingface/transformers/pull/27689", "diff_url": "https://github.com/huggingface/transformers/pull/27689.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27689.patch", "merged_at": 1701072880000 }
https://api.github.com/repos/huggingface/transformers/issues/27688
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27688/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27688/comments
https://api.github.com/repos/huggingface/transformers/issues/27688/events
https://github.com/huggingface/transformers/issues/27688
2,009,422,069
I_kwDOCUB6oc53xVj1
27,688
Remote code improvements
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "1. was resolved by @MKhalusova in https://github.com/huggingface/transformers/pull/27213, it will be in the next release :hugs: ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,700
1,703
1,703
MEMBER
null
Originally posted by @Jackmin801 on the awesome [_jinaai/jina-embeddings-v2-base-en_](https://huggingface.co/jinaai/jina-embeddings-v2-base-en/discussions/5#654102670a2101c338f3d737) repository. > 1. Documentation > The clarity of the transformers [documentation on custom models](https://huggingface.co/docs/transformers/custom_models) could be improved by describing the syntax of `auto_map` in *config.json*. We never used the `register` functions and would just directly modify the *config.json*. The behaviour that allowed one to use code on the Hub from another repo using "--" doesn't seem to be documented anywhere but we figured out that it was possible because `save_pretrained` saves in this format when using a custom model. The feature does seem to be pretty new though (I believe ~6 months ago?) so maybe that is why it hasn't been too well documented yet. But we think that if it was better communicated to users that it was possible to do this, more people would develop on the Hub as we did. > > 2. Promote `trust_remote_code` to environment variable > Some downstream libraries currently do not support passing the `trust_remote_code` argument. Notable to our work was `sentence_transformers`, despite quite a few requests for this [[1](https://github.com/UKPLab/sentence-transformers/issues/1473), [2](https://github.com/UKPLab/sentence-transformers/issues/2272), [3](https://github.com/UKPLab/sentence-transformers/pull/2274)]. This leads to us needing to [monkeypatching the model loading logic in the libraries](https://github.com/simonw/llm-sentence-transformers/pull/10/files) to be able to use our model. If `trust_remote_code` could be read from an environment variable e.g. `HUGGINGFACE_TRUST_REMOTE_CODE`, it would make it such that one only need set the environment variable to enable loading custom models. This would make the use of custom models much easier to adopt throughout the ecosystem. > > 3. Silent failure when `trust_remote_code` is not set to True. > When `trust_remote_code` is not set to True for our model, the behaviour seems to be that it loads the classic BERT implementation from transformers and [throws a bunch of warnings from re-initialised weights](https://x.com/simonw/status/1716644983917392330?s=20). This is not ideal because if a downstream evaluation script forgot to set the arg, it would generate inaccurate results and the only way of knowing that something was wrong was to scroll through the output logs and see if this warning appeared or print the model and see if the model has the right name. If instead, it would error and ask the user to set the `trust_remote_code` arg, it would be more easily caught and save us quite some head scratching and communication overhead in the team. Definitely agree with all the points above; would be awesome to work on this.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27688/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27688/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27687
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27687/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27687/comments
https://api.github.com/repos/huggingface/transformers/issues/27687/events
https://github.com/huggingface/transformers/pull/27687
2,009,285,159
PR_kwDOCUB6oc5gSQYH
27,687
Skip pipeline tests for 2 models for now
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,700
1,700
1,700
COLLABORATOR
null
# What does this PR do? Skip 2 pipeline tests for 2 models for now.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27687/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27687/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27687", "html_url": "https://github.com/huggingface/transformers/pull/27687", "diff_url": "https://github.com/huggingface/transformers/pull/27687.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27687.patch", "merged_at": 1700815400000 }
https://api.github.com/repos/huggingface/transformers/issues/27686
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27686/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27686/comments
https://api.github.com/repos/huggingface/transformers/issues/27686/events
https://github.com/huggingface/transformers/issues/27686
2,009,178,905
I_kwDOCUB6oc53waMZ
27,686
'LlamaTokenizerFast' object has no attribute 'prefix_id'
{ "login": "Kushalamummigatti", "id": 62338340, "node_id": "MDQ6VXNlcjYyMzM4MzQw", "avatar_url": "https://avatars.githubusercontent.com/u/62338340?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Kushalamummigatti", "html_url": "https://github.com/Kushalamummigatti", "followers_url": "https://api.github.com/users/Kushalamummigatti/followers", "following_url": "https://api.github.com/users/Kushalamummigatti/following{/other_user}", "gists_url": "https://api.github.com/users/Kushalamummigatti/gists{/gist_id}", "starred_url": "https://api.github.com/users/Kushalamummigatti/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Kushalamummigatti/subscriptions", "organizations_url": "https://api.github.com/users/Kushalamummigatti/orgs", "repos_url": "https://api.github.com/users/Kushalamummigatti/repos", "events_url": "https://api.github.com/users/Kushalamummigatti/events{/privacy}", "received_events_url": "https://api.github.com/users/Kushalamummigatti/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello, ๐Ÿค— from the look of it, you are not using the `CodeLlamaTokenizer` but the `LlamaTokenizer`. You code does not include a reproducer so I cannot really help you here. ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,700
1,704
1,704
NONE
null
### System Info Using transformer version `transformers` version: 4.34.1 - Platform: Linux-5.10.0-26-cloud-amd64-x86_64-with-glibc2.31 - Python version: 3.10.10 - Huggingface_hub version: 0.19.4 - Safetensors version: 0.3.3 - Accelerate version: 0.23.0 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> @ArthurZucker ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Am trying to use codellama tokenization function present in llama/generation.py for a custom problem. But am facing this issue: 'LlamaTokenizerFast' object has no attribute 'prefix_id' Please help. def infilling_prompt_tokens( tokenizer: Tokenizer, pre: str, suf: str, suffix_first: bool = False, ) -> List[int]: """ Format and encode an infilling problem. If `suffix_first` is set, format in suffix-prefix-middle format. """ assert tokenizer.prefix_id is not None assert tokenizer.middle_id is not None assert tokenizer.suffix_id is not None if suffix_first: # format as "<PRE> <SUF>{suf} <MID> {pre}" return ( [tokenizer.bos_id, tokenizer.prefix_id, tokenizer.suffix_id] + tokenizer.encode_infilling(suf) + [tokenizer.middle_id] + tokenizer.encode(pre, bos=False, eos=False) ) else: # format as "<PRE> {pre} <SUF>{suf} <MID>" return ( [tokenizer.bos_id, tokenizer.prefix_id] + tokenizer.encode(pre, bos=False, eos=False) + [tokenizer.suffix_id] + tokenizer.encode_infilling(suf) + [tokenizer.middle_id] ) ### Expected behavior The function should be able to reproduce on any custom format.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27686/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27686/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27685
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27685/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27685/comments
https://api.github.com/repos/huggingface/transformers/issues/27685/events
https://github.com/huggingface/transformers/pull/27685
2,008,971,092
PR_kwDOCUB6oc5gROJF
27,685
Add Flash Attention 2 to Persimmon
{ "login": "jeromeku", "id": 2455711, "node_id": "MDQ6VXNlcjI0NTU3MTE=", "avatar_url": "https://avatars.githubusercontent.com/u/2455711?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jeromeku", "html_url": "https://github.com/jeromeku", "followers_url": "https://api.github.com/users/jeromeku/followers", "following_url": "https://api.github.com/users/jeromeku/following{/other_user}", "gists_url": "https://api.github.com/users/jeromeku/gists{/gist_id}", "starred_url": "https://api.github.com/users/jeromeku/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jeromeku/subscriptions", "organizations_url": "https://api.github.com/users/jeromeku/orgs", "repos_url": "https://api.github.com/users/jeromeku/repos", "events_url": "https://api.github.com/users/jeromeku/events{/privacy}", "received_events_url": "https://api.github.com/users/jeromeku/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "@younesbelkada \r\n\r\nRe-based, installed `ruff==0.1.5`, and re-ran `make style`, still getting test failure for `PhiModelTest.test_pipeline_text_generation`.", "cc @molbap as younes is offline" ]
1,700
1,706
null
NONE
null
# What does this PR do? Integrates FA2 to Persimmon per #26350, #27052 (former branch was messed up after trying to rebase, so PR'ing a new branch). ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? @younesbelkada @ArthurZucker ## Notes - Fixed comments per #27052. Requesting new PR as former branch was messed up after trying to `rebase` on `main`. - Tried making changes as suggested in #27661 for `generate_padding_right` test. However, `Persimmon` tokenizer configs do not have either `eos` or `pad` tokens (both are set to `null` see [here](https://huggingface.co/adept/persimmon-8b-chat/blob/main/tokenizer_config.json)), so simply copying the `LlamaModelTest` `generate_padding_right` test override does not work. - Also tried running `dummy inputs` on the full pretrained model for the `generate_padding_right` test, no luck either -- this is left as the current implementation in `test_persimmon_modeling.py`. - Ran some additional experiments on the `generate_padding_test` for other models for FA2 -- see [comments](https://github.com/huggingface/transformers/pull/27052#issuecomment-1820156930). - Marking `generate_padding_right` test as `skip` for now. - Files other than those related to `persimmon` were changed in this PR due to fixes from running `make {quality, style, fixup}`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27685/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27685/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27685", "html_url": "https://github.com/huggingface/transformers/pull/27685", "diff_url": "https://github.com/huggingface/transformers/pull/27685.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27685.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/27684
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27684/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27684/comments
https://api.github.com/repos/huggingface/transformers/issues/27684/events
https://github.com/huggingface/transformers/issues/27684
2,008,953,546
I_kwDOCUB6oc53vjLK
27,684
transformers/utils/generic.py contains function deprecated in latest PyTorch
{ "login": "rationalism", "id": 813306, "node_id": "MDQ6VXNlcjgxMzMwNg==", "avatar_url": "https://avatars.githubusercontent.com/u/813306?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rationalism", "html_url": "https://github.com/rationalism", "followers_url": "https://api.github.com/users/rationalism/followers", "following_url": "https://api.github.com/users/rationalism/following{/other_user}", "gists_url": "https://api.github.com/users/rationalism/gists{/gist_id}", "starred_url": "https://api.github.com/users/rationalism/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rationalism/subscriptions", "organizations_url": "https://api.github.com/users/rationalism/orgs", "repos_url": "https://api.github.com/users/rationalism/repos", "events_url": "https://api.github.com/users/rationalism/events{/privacy}", "received_events_url": "https://api.github.com/users/rationalism/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey! Thanks for opening this ๐Ÿค— \r\nWould be down to update this but we need to make sure we support other versions of `torch` (you are using the latest). \r\nWould you like to open a PR and makes sure this works on previous versions as well? ", "+1", "+1", "same issue ", "#27803 fixes this! Make sure to use the `main` branch" ]
1,700
1,704
1,704
NONE
null
### System Info - `transformers` version: 4.35.2 - Platform: Linux-5.19.0-50-generic-x86_64-with-glibc2.35 - Python version: 3.10.13 - Huggingface_hub version: 0.17.3 - Safetensors version: 0.3.1 - Accelerate version: 0.24.1 - Accelerate config: - compute_environment: LOCAL_MACHINE - distributed_type: NO - mixed_precision: bf16 - use_cpu: False - debug: False - num_processes: 1 - machine_rank: 0 - num_machines: 1 - gpu_ids: 0 - rdzv_backend: static - same_network: True - main_training_function: main - downcast_bf16: no - tpu_use_cluster: False - tpu_use_sudo: False - tpu_env: [] - dynamo_config: {'dynamo_backend': 'INDUCTOR'} - PyTorch version (GPU?): 2.2.0.dev20231123+cu121 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Run any Transformers code (eg. I just ran `transformers-cli env`) with the latest version of PyTorch (2.2.0.dev20231123+cu121). ### Expected behavior It should run without warnings. Instead, this warning appears due to a function call having been deprecated. Monkey-patching "_register_pytree_node" with "register_pytree_node" fixes the bug. /home/alyssa/anaconda3/envs/lm_fun/lib/python3.10/site-packages/transformers/utils/generic.py:441: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead. _torch_pytree._register_pytree_node(
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27684/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27684/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27683
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27683/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27683/comments
https://api.github.com/repos/huggingface/transformers/issues/27683/events
https://github.com/huggingface/transformers/pull/27683
2,008,857,050
PR_kwDOCUB6oc5gQ2Pu
27,683
tokenizer_kwargs in text-generation pipeline __call__()
{ "login": "thedamnedrhino", "id": 8396998, "node_id": "MDQ6VXNlcjgzOTY5OTg=", "avatar_url": "https://avatars.githubusercontent.com/u/8396998?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thedamnedrhino", "html_url": "https://github.com/thedamnedrhino", "followers_url": "https://api.github.com/users/thedamnedrhino/followers", "following_url": "https://api.github.com/users/thedamnedrhino/following{/other_user}", "gists_url": "https://api.github.com/users/thedamnedrhino/gists{/gist_id}", "starred_url": "https://api.github.com/users/thedamnedrhino/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thedamnedrhino/subscriptions", "organizations_url": "https://api.github.com/users/thedamnedrhino/orgs", "repos_url": "https://api.github.com/users/thedamnedrhino/repos", "events_url": "https://api.github.com/users/thedamnedrhino/events{/privacy}", "received_events_url": "https://api.github.com/users/thedamnedrhino/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@ArthurZucker what do you think about adding a `truncation` (and `max_length`) arg to the pipeline call/constructor instead of this? For all text pipelines. ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "(I answered on the linked PR! ๐Ÿค— )", "What is the solution? Nothing seems to be working for me.", "It's in #28362. Use something like this:\r\n```\r\ntext_generator_pipeline(\r\n test_str,\r\n do_sample=False,\r\n return_full_text=False,\r\n truncation=True,\r\n max_length=3,\r\n )\r\n```" ]
1,700
1,707
1,704
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #25994 for `text-generation` pipelines. The primary use case is to have truncation in the pipeline. This is useful in e.g. RAG when the relevant documents are too long. ## Who can review? @nmcahill @Narsil @ArthurZucker <!-- Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27683/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27683/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27683", "html_url": "https://github.com/huggingface/transformers/pull/27683", "diff_url": "https://github.com/huggingface/transformers/pull/27683.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27683.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/27682
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27682/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27682/comments
https://api.github.com/repos/huggingface/transformers/issues/27682/events
https://github.com/huggingface/transformers/issues/27682
2,008,854,973
I_kwDOCUB6oc53vLG9
27,682
Mistral with Flash atteniton v2 give error on long sequence input and max_new_tokens
{ "login": "binarycrayon", "id": 10211, "node_id": "MDQ6VXNlcjEwMjEx", "avatar_url": "https://avatars.githubusercontent.com/u/10211?v=4", "gravatar_id": "", "url": "https://api.github.com/users/binarycrayon", "html_url": "https://github.com/binarycrayon", "followers_url": "https://api.github.com/users/binarycrayon/followers", "following_url": "https://api.github.com/users/binarycrayon/following{/other_user}", "gists_url": "https://api.github.com/users/binarycrayon/gists{/gist_id}", "starred_url": "https://api.github.com/users/binarycrayon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/binarycrayon/subscriptions", "organizations_url": "https://api.github.com/users/binarycrayon/orgs", "repos_url": "https://api.github.com/users/binarycrayon/repos", "events_url": "https://api.github.com/users/binarycrayon/events{/privacy}", "received_events_url": "https://api.github.com/users/binarycrayon/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @binarycrayon \r\nThis issue should be fixed with https://github.com/huggingface/transformers/pull/27548", "Thank you, subscribed. I will close this once it's merged" ]
1,700
1,701
1,701
NONE
null
### System Info

## System information
WSL on Windows 11

## Hardware
RTX3090Ti

## Software
Python: 3.10.13
Transformers: 4.35

## Input sequence length
An input of 15901 characters throws the error; 13266 characters was fine.

## Model initialization
```
model_id = "mistralai/Mistral-7B-v0.1"

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
    load_in_8bit=True,
    device_map="cuda:0",
    use_flash_attention_2=True)
```

### Who can help?

@younesbelkada @ArthurZucker

### Information

- [ ] The official example scripts
- [X] My own modified scripts

### Tasks

- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)

### Reproduction

```
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
    device_map="cuda:0",
    use_flash_attention_2=True
)
# model.config.pad_token_id = tokenizer.pad_token_id

text = "<long text here>"
inputs = tokenizer(text, return_tensors="pt")
# print(model.device)  # Check model device
inputs = {k: v.to(model.device) for k, v in inputs.items()}

output = model.generate(**inputs, max_new_tokens=128)
```

### Expected behavior

I expected inference to work properly. The error occurs when the input has 15901 characters; it works fine when the input has 13266 characters.

## Issue
```
ValueError                                Traceback (most recent call last)
/notebooks/Untitled.ipynb Cell 15 line 8

File ~/dev/alignment-handbook/CondaENV/env/lib/python3.10/site-packages/torch/utils/_contextlib.py:115, in context_decorator.<locals>.decorate_context(*args, **kwargs)
--> 115 return func(*args, **kwargs)

File ~/dev/alignment-handbook/CondaENV/env/lib/python3.10/site-packages/transformers/generation/utils.py:1754, in GenerationMixin.generate(...)
--> 1754 return self.greedy_search(
             input_ids,
             logits_processor=logits_processor,
             stopping_criteria=stopping_criteria,
             pad_token_id=generation_config.pad_token_id,
             eos_token_id=generation_config.eos_token_id,
             output_scores=generation_config.output_scores,
             return_dict_in_generate=generation_config.return_dict_in_generate,
             synced_gpus=synced_gpus,
             streamer=streamer,
             **model_kwargs,
         )

File ~/dev/alignment-handbook/CondaENV/env/lib/python3.10/site-packages/transformers/generation/utils.py:2615, in GenerationMixin.greedy_search(...)
--> 2615 outputs = self(
             **model_inputs,
             return_dict=True,
             output_attentions=output_attentions,
             output_hidden_states=output_hidden_states,
         )

File ~/dev/alignment-handbook/CondaENV/env/lib/python3.10/site-packages/torch/nn/modules/module.py:1518, in Module._wrapped_call_impl(self, *args, **kwargs)
--> 1518 return self._call_impl(*args, **kwargs)

File ~/dev/alignment-handbook/CondaENV/env/lib/python3.10/site-packages/torch/nn/modules/module.py:1527, in Module._call_impl(self, *args, **kwargs)
--> 1527 return forward_call(*args, **kwargs)

File ~/dev/alignment-handbook/CondaENV/env/lib/python3.10/site-packages/accelerate/hooks.py:165, in add_hook_to_module.<locals>.new_forward(*args, **kwargs)
--> 165 output = old_forward(*args, **kwargs)

File ~/dev/alignment-handbook/CondaENV/env/lib/python3.10/site-packages/transformers/models/mistral/modeling_mistral.py:1007, in MistralForCausalLM.forward(...)
--> 1007 outputs = self.model(
              input_ids=input_ids,
              attention_mask=attention_mask,
              position_ids=position_ids,
              past_key_values=past_key_values,
              inputs_embeds=inputs_embeds,
              use_cache=use_cache,
              output_attentions=output_attentions,
              output_hidden_states=output_hidden_states,
              return_dict=return_dict,
          )

    [Module._wrapped_call_impl -> Module._call_impl -> accelerate add_hook_to_module.<locals>.new_forward, as above]

File ~/dev/alignment-handbook/CondaENV/env/lib/python3.10/site-packages/transformers/models/mistral/modeling_mistral.py:895, in MistralModel.forward(...)
--> 895 layer_outputs = decoder_layer(
            hidden_states,
            attention_mask=attention_mask,
            position_ids=position_ids,
            past_key_value=past_key_value,
            output_attentions=output_attentions,
            use_cache=use_cache,
        )

    [Module._wrapped_call_impl -> Module._call_impl -> accelerate add_hook_to_module.<locals>.new_forward, as above]

File ~/dev/alignment-handbook/CondaENV/env/lib/python3.10/site-packages/transformers/models/mistral/modeling_mistral.py:624, in MistralDecoderLayer.forward(...)
--> 624 hidden_states, self_attn_weights, present_key_value = self.self_attn(
            hidden_states=hidden_states,
            attention_mask=attention_mask,
            position_ids=position_ids,
            past_key_value=past_key_value,
            output_attentions=output_attentions,
            use_cache=use_cache,
        )

    [Module._wrapped_call_impl -> Module._call_impl -> accelerate add_hook_to_module.<locals>.new_forward, as above]

File ~/dev/alignment-handbook/CondaENV/env/lib/python3.10/site-packages/transformers/models/mistral/modeling_mistral.py:376, in MistralFlashAttention2.forward(...)
    373     past_value = past_value[:, :, slicing_tokens:, :].contiguous()
    375 if past_key.shape[-2] != self.config.sliding_window - 1:
--> 376     raise ValueError(
    377         f"past key must have a shape of (`batch_size, num_heads, self.config.sliding_window-1, head_dim`), got"
    378         f" {past_key.shape}"
    379     )
    381 past_key_value = (past_key, past_value)
    383 if attention_mask is not None:

ValueError: past key must have a shape of (`batch_size, num_heads, self.config.sliding_window-1, head_dim`), got torch.Size([1, 8, 3628, 128])
```
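The failing check lives in the FlashAttention-2 sliding-window cache path, and the comment on this issue points to PR #27548 as the fix. Until that fix is available in an installed release, one possible workaround — an assumption on my part, not something stated in the issue — is to load the model without `use_flash_attention_2`, so the cache-slicing branch that raises the `ValueError` is never executed (at the cost of more memory for long prompts):

```python
# Hypothetical workaround sketch: fall back to the default (non-FlashAttention-2)
# attention implementation for long prompts until the upstream fix is released.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="cuda:0",
    # use_flash_attention_2=True  # omitted on purpose: this path triggers the error above
)

inputs = tokenizer("<long text here>", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
```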
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27682/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27682/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27681
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27681/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27681/comments
https://api.github.com/repos/huggingface/transformers/issues/27681/events
https://github.com/huggingface/transformers/pull/27681
2,008,640,581
PR_kwDOCUB6oc5gQHM9
27,681
Update forward signature test for vision models
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "`main_input_name` is defined in `PreTrainedModel` (and the corresponding TF/Flax counterpart), so\r\n\r\n> all (NLP) models have this properly set\r\n\r\nis True" ]
1,700
1,701
1,701
CONTRIBUTOR
null
# What does this PR do? This PR makes sure that vision-only models don't need to overwrite `test_forward_signature`. Instead, `model.main_input_name` is leveraged.
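As a rough illustration of what the PR body above describes — not the actual test code from the PR — the idea is that a shared test can compare the first argument of `forward` against `model.main_input_name` instead of hard-coding `pixel_values` per vision model:

```python
# Minimal sketch (uses ViT for concreteness; the real test is written generically).
import inspect

from transformers import ViTConfig, ViTModel

model = ViTModel(ViTConfig())
signature = inspect.signature(model.forward)
arg_names = list(signature.parameters.keys())

# For vision models the main input is "pixel_values" rather than "input_ids".
assert arg_names[0] == model.main_input_name
```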
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27681/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27681/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27681", "html_url": "https://github.com/huggingface/transformers/pull/27681", "diff_url": "https://github.com/huggingface/transformers/pull/27681.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27681.patch", "merged_at": 1701096497000 }
https://api.github.com/repos/huggingface/transformers/issues/27680
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27680/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27680/comments
https://api.github.com/repos/huggingface/transformers/issues/27680/events
https://github.com/huggingface/transformers/pull/27680
2,008,640,225
PR_kwDOCUB6oc5gQHIE
27,680
Fix sampling method to handle all -inf scores in next_token_scores
{ "login": "Saibo-creator", "id": 53392976, "node_id": "MDQ6VXNlcjUzMzkyOTc2", "avatar_url": "https://avatars.githubusercontent.com/u/53392976?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Saibo-creator", "html_url": "https://github.com/Saibo-creator", "followers_url": "https://api.github.com/users/Saibo-creator/followers", "following_url": "https://api.github.com/users/Saibo-creator/following{/other_user}", "gists_url": "https://api.github.com/users/Saibo-creator/gists{/gist_id}", "starred_url": "https://api.github.com/users/Saibo-creator/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Saibo-creator/subscriptions", "organizations_url": "https://api.github.com/users/Saibo-creator/orgs", "repos_url": "https://api.github.com/users/Saibo-creator/repos", "events_url": "https://api.github.com/users/Saibo-creator/events{/privacy}", "received_events_url": "https://api.github.com/users/Saibo-creator/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@Saibo-creator see related comment [here](https://github.com/huggingface/transformers/issues/27676#issuecomment-1831735403) :)", "check PR #27797 which fixes the relevant issues" ]
1,700
1,701
1,701
CONTRIBUTOR
null
This commit addresses a bug in the token generation process where next_token_scores are all -inf, leading to NaN probabilities after softmax. This typically happens in the constrained generation process where no tokens are allowed to be generated anymore. The fix involves adjusting probabilities in such scenarios to ensure pad_token_id has a probability of 1, thus enabling correct sampling and avoiding runtime errors. Also includes error handling for cases where pad_token_id is not defined but required. # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #27676 #13707 #15169 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @gante Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
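The PR description above talks about giving `pad_token_id` all of the probability mass when every score in a row is `-inf`. The snippet below is only a sketch of that idea, not the PR's actual diff; the helper name and its exact placement in the sampling loop are assumptions:

```python
import torch


def recover_fully_masked_rows(next_token_scores: torch.Tensor, pad_token_id) -> torch.Tensor:
    """Hypothetical helper: rows that are entirely -inf would softmax to NaN and
    crash torch.multinomial, so route all their probability mass to the pad token."""
    fully_masked = torch.all(next_token_scores == float("-inf"), dim=-1)
    if fully_masked.any():
        if pad_token_id is None:
            raise ValueError("`pad_token_id` must be defined to recover from fully masked scores.")
        next_token_scores = next_token_scores.clone()
        # exp(-inf) = 0 for every other token and exp(0) = 1 for pad, so pad gets probability 1.
        next_token_scores[fully_masked, pad_token_id] = 0.0
    return next_token_scores


# Usage inside a sampling step (sketch):
scores = torch.full((2, 5), float("-inf"))
scores = recover_fully_masked_rows(scores, pad_token_id=0)
probs = torch.nn.functional.softmax(scores, dim=-1)
next_tokens = torch.multinomial(probs, num_samples=1)
```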
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27680/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27680/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27680", "html_url": "https://github.com/huggingface/transformers/pull/27680", "diff_url": "https://github.com/huggingface/transformers/pull/27680.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27680.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/27679
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27679/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27679/comments
https://api.github.com/repos/huggingface/transformers/issues/27679/events
https://github.com/huggingface/transformers/pull/27679
2,008,532,360
PR_kwDOCUB6oc5gPvoi
27,679
using env var to skip check_imports err
{ "login": "wqh17101", "id": 26429138, "node_id": "MDQ6VXNlcjI2NDI5MTM4", "avatar_url": "https://avatars.githubusercontent.com/u/26429138?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wqh17101", "html_url": "https://github.com/wqh17101", "followers_url": "https://api.github.com/users/wqh17101/followers", "following_url": "https://api.github.com/users/wqh17101/following{/other_user}", "gists_url": "https://api.github.com/users/wqh17101/gists{/gist_id}", "starred_url": "https://api.github.com/users/wqh17101/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wqh17101/subscriptions", "organizations_url": "https://api.github.com/users/wqh17101/orgs", "repos_url": "https://api.github.com/users/wqh17101/repos", "events_url": "https://api.github.com/users/wqh17101/events{/privacy}", "received_events_url": "https://api.github.com/users/wqh17101/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@amyeroberts", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27679). All of your documentation changes will be reflected on that endpoint.", "Hi @wqh17101 - thanks for opening this PR with a solution! As this is a highly specific fix (adding in an environment variable to control logic for a single issue), it's not something we're going to merge in right now. \r\n\r\nThe great thing about open source is that you can modify the original repo as you wish with your own fork to include this code. ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,700
1,704
1,704
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #27554 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27679/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27679/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27679", "html_url": "https://github.com/huggingface/transformers/pull/27679", "diff_url": "https://github.com/huggingface/transformers/pull/27679.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27679.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/27678
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27678/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27678/comments
https://api.github.com/repos/huggingface/transformers/issues/27678/events
https://github.com/huggingface/transformers/issues/27678
2,008,463,663
I_kwDOCUB6oc53trkv
27,678
AttributeError: module 'transformers.utils.logging' has no attribute 'warning'
{ "login": "benzom", "id": 31667178, "node_id": "MDQ6VXNlcjMxNjY3MTc4", "avatar_url": "https://avatars.githubusercontent.com/u/31667178?v=4", "gravatar_id": "", "url": "https://api.github.com/users/benzom", "html_url": "https://github.com/benzom", "followers_url": "https://api.github.com/users/benzom/followers", "following_url": "https://api.github.com/users/benzom/following{/other_user}", "gists_url": "https://api.github.com/users/benzom/gists{/gist_id}", "starred_url": "https://api.github.com/users/benzom/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/benzom/subscriptions", "organizations_url": "https://api.github.com/users/benzom/orgs", "repos_url": "https://api.github.com/users/benzom/repos", "events_url": "https://api.github.com/users/benzom/events{/privacy}", "received_events_url": "https://api.github.com/users/benzom/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey! Indeed, opening a PR for a fix! ๐Ÿค— thanks for reporting" ]
1,700
1,701
1,701
NONE
null
### System Info Hi everyone. I'm trying to run example from here https://github.com/huggingface/transformers/tree/main/examples/pytorch Transformers library was installed from the source as it was requested during the first run accelerate==0.24.1 torch==1.13.0a0+936e930 The running command: ``` accelerate launch run_summarization_no_trainer.py \ --model_name_or_path t5-small \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir tst-summarization ``` Full error: ``` Traceback (most recent call last): File "run_summarization_no_trainer.py", line 782, in <module> Traceback (most recent call last): File "run_summarization_no_trainer.py", line 782, in <module> main() File "run_summarization_no_trainer.py", line 705, in main generated_tokens = accelerator.unwrap_model(model).generate( File "/usr/local/lib/python3.8/dist-packages/torch/autograd/grad_mode.py", line 27, in decorate_context return func(*args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/transformers/generation/utils.py", line 1565, in generate generation_config.validate() File "/usr/local/lib/python3.8/dist-packages/transformers/generation/configuration_utils.py", line 413, in validate logging.warning("`num_beams` is set to None - defaulting to 1.", UserWarning) AttributeError: module 'transformers.utils.logging' has no attribute 'warning' main() File "run_summarization_no_trainer.py", line 705, in main generated_tokens = accelerator.unwrap_model(model).generate( File "/usr/local/lib/python3.8/dist-packages/torch/autograd/grad_mode.py", line 27, in decorate_context return func(*args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/transformers/generation/utils.py", line 1565, in generate generation_config.validate() File "/usr/local/lib/python3.8/dist-packages/transformers/generation/configuration_utils.py", line 413, in validate logging.warning("`num_beams` is set to None - defaulting to 1.", UserWarning) AttributeError: module 'transformers.utils.logging' has no attribute 'warning' 33%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ | 35/105 [00:05<00:11, 6.08it/s] ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 4120986) of binary: /usr/bin/python Traceback (most recent call last): File "/usr/local/bin/accelerate", line 8, in <module> sys.exit(main()) File "/usr/local/lib/python3.8/dist-packages/accelerate/commands/accelerate_cli.py", line 47, in main args.func(args) File "/usr/local/lib/python3.8/dist-packages/accelerate/commands/launch.py", line 985, in launch_command multi_gpu_launcher(args) File "/usr/local/lib/python3.8/dist-packages/accelerate/commands/launch.py", line 654, in multi_gpu_launcher distrib_run.run(args) File "/usr/local/lib/python3.8/dist-packages/torch/distributed/run.py", line 753, in run elastic_launch( File "/usr/local/lib/python3.8/dist-packages/torch/distributed/launcher/api.py", line 132, in __call__ return launch_agent(self._config, self._entrypoint, list(args)) File "/usr/local/lib/python3.8/dist-packages/torch/distributed/launcher/api.py", line 246, in launch_agent raise ChildFailedError( torch.distributed.elastic.multiprocessing.errors.ChildFailedError: ============================================================ run_summarization_no_trainer.py FAILED ------------------------------------------------------------ Failures: [1]: time : 2023-11-23_18:24:03 host : 99dgx-02.mtsai.superpod.local rank : 1 (local_rank: 1) exitcode : 1 (pid: 4120987) error_file: <N/A> traceback : To enable traceback see: 
https://pytorch.org/docs/stable/elastic/errors.html ------------------------------------------------------------ Root Cause (first observed failure): [0]: time : 2023-11-23_18:24:03 host : 99dgx-02.mtsai.superpod.local rank : 0 (local_rank: 0) exitcode : 1 (pid: 4120986) error_file: <N/A> traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html ============================================================ ``` ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction accelerate launch run_summarization_no_trainer.py \ --model_name_or_path t5-small \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir tst-summarization ### Expected behavior Model is training, no errors occur
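For reference, the traceback above fails because `transformers.utils.logging` is a thin wrapper module that exposes `get_logger` rather than a module-level `warning` function. The sketch below shows the pattern the library normally uses; the eventual upstream fix may differ in detail:

```python
from transformers.utils import logging

logger = logging.get_logger(__name__)

# Module-level `logging.warning(...)` does not exist on this wrapper, but a logger
# object obtained via `get_logger` does support `.warning(...)`.
logger.warning("`num_beams` is set to None - defaulting to 1.")
```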
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27678/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27678/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27677
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27677/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27677/comments
https://api.github.com/repos/huggingface/transformers/issues/27677/events
https://github.com/huggingface/transformers/pull/27677
2,008,433,180
PR_kwDOCUB6oc5gPZzA
27,677
Add ChatGLM model.
{ "login": "xunkai55", "id": 4828553, "node_id": "MDQ6VXNlcjQ4Mjg1NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4828553?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xunkai55", "html_url": "https://github.com/xunkai55", "followers_url": "https://api.github.com/users/xunkai55/followers", "following_url": "https://api.github.com/users/xunkai55/following{/other_user}", "gists_url": "https://api.github.com/users/xunkai55/gists{/gist_id}", "starred_url": "https://api.github.com/users/xunkai55/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xunkai55/subscriptions", "organizations_url": "https://api.github.com/users/xunkai55/orgs", "repos_url": "https://api.github.com/users/xunkai55/repos", "events_url": "https://api.github.com/users/xunkai55/events{/privacy}", "received_events_url": "https://api.github.com/users/xunkai55/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey! Thanks for opening a PR ๐Ÿค— I think the implementation of chatGLM should follow the comment I posted [here](https://github.com/huggingface/transformers/pull/27267#issuecomment-1802207501). See #27267 which also wanted to add support for ChatGM ๐Ÿ˜‰ ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,700
1,704
1,704
NONE
null
# What does this PR do? Add ChatGLM model support in HF Transformers repo. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? @ArthurZucker
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27677/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27677/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27677", "html_url": "https://github.com/huggingface/transformers/pull/27677", "diff_url": "https://github.com/huggingface/transformers/pull/27677.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27677.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/27676
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27676/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27676/comments
https://api.github.com/repos/huggingface/transformers/issues/27676/events
https://github.com/huggingface/transformers/issues/27676
2,008,428,965
I_kwDOCUB6oc53tjGl
27,676
RuntimeError with prefix_allowed_tokens_fn and do_sample=True When Allowed Tokens List is Empty
{ "login": "Saibo-creator", "id": 53392976, "node_id": "MDQ6VXNlcjUzMzkyOTc2", "avatar_url": "https://avatars.githubusercontent.com/u/53392976?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Saibo-creator", "html_url": "https://github.com/Saibo-creator", "followers_url": "https://api.github.com/users/Saibo-creator/followers", "following_url": "https://api.github.com/users/Saibo-creator/following{/other_user}", "gists_url": "https://api.github.com/users/Saibo-creator/gists{/gist_id}", "starred_url": "https://api.github.com/users/Saibo-creator/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Saibo-creator/subscriptions", "organizations_url": "https://api.github.com/users/Saibo-creator/orgs", "repos_url": "https://api.github.com/users/Saibo-creator/repos", "events_url": "https://api.github.com/users/Saibo-creator/events{/privacy}", "received_events_url": "https://api.github.com/users/Saibo-creator/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey! Thanks for reporting, this would require use to check that the output of `self._prefix_allowed_tokens_fn(batch_id, sent)` on each token is not `[]` before applying the mask. It does make sense because we don't specify that the list cannot be empty. \r\nWould you like to open a PR for a fix? (meaning something like:\r\n```diff \r\n\r\n @add_start_docstrings(LOGITS_PROCESSOR_INPUTS_DOCSTRING)\r\n def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:\r\n mask = torch.full_like(scores, -math.inf)\r\n for batch_id, beam_sent in enumerate(input_ids.view(-1, self._num_beams, input_ids.shape[-1])):\r\n for beam_id, sent in enumerate(beam_sent):\r\n- mask[batch_id * self._num_beams + beam_id, self._prefix_allowed_tokens_fn(batch_id, sent)] = 0\r\n+ allowed_tokens = self._prefix_allowed_tokens_fn(batch_id, sent)\r\n+ if len(allowed_tokens) > 0:\r\n+ mask[batch_id * self._num_beams + beam_id, allowed_tokens] = 0\r\n\r\n return scores + mask\r\n\r\n```", "Ok, I will raise a fix for it. ", "Hi there @Saibo-creator! ๐Ÿ‘‹ (cc @ArthurZucker)\r\n\r\nI echo Patrick's comment in a related issue [here](https://github.com/huggingface/transformers/issues/15169#issuecomment-1018617055): this is not an issue of `generate`, but rather an issue of the user-defined `prefix_allowed_tokens_fn`. As such, we shouldn't attempt to fix the problem for the user, as it might result in unexpected behavior. \r\n\r\nInstead, we should raise an informative exception: the user has set an unfeasible set of constraints in `prefix_allowed_tokens_fn` :)", "yep raising an exception sounds good as well! Less silent changes on our side ๐Ÿค— ", "Hey, thanks for your responses. \r\nI agree with you about rasing an execption. \r\n\r\nI would like to summarize and double check that our motivations align.\r\n\r\nAs the example I gave above, a widely demanded usage of LLM is to generate json object reliably.\r\n\r\nForce LLM to generate structured objects like json means we need to constrain the LLM's generation process, including let it stop when necessary, e.g. when the json object is complete. \r\n\r\nThere we can have two ways to handle it:\r\n1. the constraints such as `prefix_allowed_tokens_fn` should return an allowed set of `{EOS}` instead of `{}` to let the generation stop\r\n2. the constraints such as `prefix_allowed_tokens_fn` should return an empty set of tokens `{}`\r\n\r\nFrom discussion above, it seems everyone agree that the option 1 is better and option 2 should be considered an exception.\r\n\r\nDo you agree ?\r\n\r\n\r\n\r\n\r\n", "@Saibo-creator agreed. \r\n\r\nI would even go beyond the case of constrained generation here: if, at any part of text generation, the set of possible tokens is `{}`, then there is a logical error. We have been adopting this idea in other parts of text generation :)" ]
1,700
1,702
1,702
CONTRIBUTOR
null
### System Info - `transformers` version: 4.36.0.dev0 - Platform: macOS-13.4.1-arm64-arm-64bit - Python version: 3.10.13 - Huggingface_hub version: 0.19.3 - Safetensors version: 0.3.3 - Accelerate version: 0.23.0 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @gante ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch def main(): model_id = "gpt2" tokenizer = AutoTokenizer.from_pretrained(model_id) tokenizer.pad_token = tokenizer.eos_token model = AutoModelForCausalLM.from_pretrained(model_id) prefix = "Hello" input_ids = tokenizer([prefix], add_special_tokens=False, return_tensors="pt", padding=True)["input_ids"] def empty_prefix_allowed_tokens_fn(batch_id, sent): return [] try: output = model.generate( input_ids, do_sample=False, max_length=10, num_beams=1, prefix_allowed_tokens_fn=empty_prefix_allowed_tokens_fn, num_return_sequences=1 ) generations = tokenizer.batch_decode(output, skip_special_tokens=True) print(generations) except RuntimeError as e: print("RuntimeError encountered:", e) if __name__ == '__main__': main() ``` While the example above may seems to be very artificial, but actually this is a very common problem. Suppose you want to use LLM to generate a json object, c.f. #27557, and the constraints will return an empty set of allowed tokens once the json object is complete such as `{"ip": "127.0.0.1"}`, then this issue will occur. ### Expected behavior I expect the sampling should behave like in greedy search, i.e. when the allowed token list is empty, the model will return PAD instead of throwing an error. Related issues - [model.generate with prefix_allowed_tokens_fn throws RuntimeError: probability tensor contains either inf, nan or element < 0 #15169](https://github.com/huggingface/transformers/issues/15169) - [RunTimeError when using prefix_allowed_tokens_fn and top-k/top-p sampling in model.generate #13707](https://github.com/huggingface/transformers/issues/13707) The above two issues are about the same problem as ours, but no fix has been done so far.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27676/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27676/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27675
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27675/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27675/comments
https://api.github.com/repos/huggingface/transformers/issues/27675/events
https://github.com/huggingface/transformers/pull/27675
2,008,356,275
PR_kwDOCUB6oc5gPIwM
27,675
Fix semantic error in evaluation section
{ "login": "anihm136", "id": 49116134, "node_id": "MDQ6VXNlcjQ5MTE2MTM0", "avatar_url": "https://avatars.githubusercontent.com/u/49116134?v=4", "gravatar_id": "", "url": "https://api.github.com/users/anihm136", "html_url": "https://github.com/anihm136", "followers_url": "https://api.github.com/users/anihm136/followers", "following_url": "https://api.github.com/users/anihm136/following{/other_user}", "gists_url": "https://api.github.com/users/anihm136/gists{/gist_id}", "starred_url": "https://api.github.com/users/anihm136/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anihm136/subscriptions", "organizations_url": "https://api.github.com/users/anihm136/orgs", "repos_url": "https://api.github.com/users/anihm136/repos", "events_url": "https://api.github.com/users/anihm136/events{/privacy}", "received_events_url": "https://api.github.com/users/anihm136/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,700
1,700
1,700
CONTRIBUTOR
null
# What does this PR do? Change "convert predictions to logits" to "convert logits to predictions" to fix semantic error in the evaluation section. Logits need to be converted to predictions to evaluate the accuracy, not the other way round ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. cc: @stevehliu
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27675/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27675/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27675", "html_url": "https://github.com/huggingface/transformers/pull/27675", "diff_url": "https://github.com/huggingface/transformers/pull/27675.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27675.patch", "merged_at": 1700826077000 }
https://api.github.com/repos/huggingface/transformers/issues/27674
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27674/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27674/comments
https://api.github.com/repos/huggingface/transformers/issues/27674/events
https://github.com/huggingface/transformers/pull/27674
2,008,350,483
PR_kwDOCUB6oc5gPHeh
27,674
Update tiny model creation script
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,700
1,701
1,701
COLLABORATOR
null
# What does this PR do? A few fixes and improvements I found necessary while working on #27388. See the comments alongside the changes.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27674/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27674/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27674", "html_url": "https://github.com/huggingface/transformers/pull/27674", "diff_url": "https://github.com/huggingface/transformers/pull/27674.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27674.patch", "merged_at": 1701162334000 }
https://api.github.com/repos/huggingface/transformers/issues/27673
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27673/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27673/comments
https://api.github.com/repos/huggingface/transformers/issues/27673/events
https://github.com/huggingface/transformers/pull/27673
2,008,285,315
PR_kwDOCUB6oc5gO5Mr
27,673
Translating en/model_doc folder docs to Japanese (from `blip` to `clap`) 🇯🇵
{ "login": "rajveer43", "id": 64583161, "node_id": "MDQ6VXNlcjY0NTgzMTYx", "avatar_url": "https://avatars.githubusercontent.com/u/64583161?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rajveer43", "html_url": "https://github.com/rajveer43", "followers_url": "https://api.github.com/users/rajveer43/followers", "following_url": "https://api.github.com/users/rajveer43/following{/other_user}", "gists_url": "https://api.github.com/users/rajveer43/gists{/gist_id}", "starred_url": "https://api.github.com/users/rajveer43/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rajveer43/subscriptions", "organizations_url": "https://api.github.com/users/rajveer43/orgs", "repos_url": "https://api.github.com/users/rajveer43/repos", "events_url": "https://api.github.com/users/rajveer43/events{/privacy}", "received_events_url": "https://api.github.com/users/rajveer43/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27673). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "> LGTM! Can you try running `make fixup` and `pip uninstall black && pip install -U ruff==0.1.5` to fix the failing CI test?\r\n\r\nsure." ]
1,700
1,701
1,701
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #27669 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Documentation: @stevhliu and @MKhalusova
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27673/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27673/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27673", "html_url": "https://github.com/huggingface/transformers/pull/27673", "diff_url": "https://github.com/huggingface/transformers/pull/27673.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27673.patch", "merged_at": 1701887902000 }
https://api.github.com/repos/huggingface/transformers/issues/27672
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27672/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27672/comments
https://api.github.com/repos/huggingface/transformers/issues/27672/events
https://github.com/huggingface/transformers/pull/27672
2,008,284,054
PR_kwDOCUB6oc5gO460
27,672
Update TVP arxiv link
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,700
1,700
1,700
COLLABORATOR
null
# What does this PR do? TVP on the doc page wasn't rendering the architecture image which highlighted two issues: * Link in HTML tag had been copied from TVLT. This PR updates to point to the TVP paper * TVP architecture pic hadn't been added ([resolved here](https://huggingface.co/datasets/huggingface/documentation-images/blob/main/transformers/model_doc/tvp_architecture.png))
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27672/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27672/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27672", "html_url": "https://github.com/huggingface/transformers/pull/27672", "diff_url": "https://github.com/huggingface/transformers/pull/27672.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27672.patch", "merged_at": 1700758937000 }
https://api.github.com/repos/huggingface/transformers/issues/27671
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27671/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27671/comments
https://api.github.com/repos/huggingface/transformers/issues/27671/events
https://github.com/huggingface/transformers/pull/27671
2,008,210,958
PR_kwDOCUB6oc5gOo4T
27,671
[WIP][Splinter] Fixes #16627 by implementing the test cases for splinter
{ "login": "nileshkokane01", "id": 8201108, "node_id": "MDQ6VXNlcjgyMDExMDg=", "avatar_url": "https://avatars.githubusercontent.com/u/8201108?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nileshkokane01", "html_url": "https://github.com/nileshkokane01", "followers_url": "https://api.github.com/users/nileshkokane01/followers", "following_url": "https://api.github.com/users/nileshkokane01/following{/other_user}", "gists_url": "https://api.github.com/users/nileshkokane01/gists{/gist_id}", "starred_url": "https://api.github.com/users/nileshkokane01/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nileshkokane01/subscriptions", "organizations_url": "https://api.github.com/users/nileshkokane01/orgs", "repos_url": "https://api.github.com/users/nileshkokane01/repos", "events_url": "https://api.github.com/users/nileshkokane01/events{/privacy}", "received_events_url": "https://api.github.com/users/nileshkokane01/received_events", "type": "User", "site_admin": false }
[ { "id": 2796628563, "node_id": "MDU6TGFiZWwyNzk2NjI4NTYz", "url": "https://api.github.com/repos/huggingface/transformers/labels/WIP", "name": "WIP", "color": "234C99", "default": false, "description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress" } ]
open
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "@ArthurZucker ,\r\n\r\nThanks, I didn't really worked on it yet, it was an initial draft.\r\n\r\nI will try taking this up as soon as I get time. Would that be fine with you? ", "Absolutely no worries" ]
1,700
1,706
null
CONTRIBUTOR
null
# What does this PR do? Fixes #16627 by implementing test cases for splinter <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (16627 ) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @ArthurZucker Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27671/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27671/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27671", "html_url": "https://github.com/huggingface/transformers/pull/27671", "diff_url": "https://github.com/huggingface/transformers/pull/27671.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27671.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/27670
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27670/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27670/comments
https://api.github.com/repos/huggingface/transformers/issues/27670/events
https://github.com/huggingface/transformers/issues/27670
2,008,200,663
I_kwDOCUB6oc53srXX
27,670
Min P style sampling - an alternative to Top P/TopK
{ "login": "kalomaze", "id": 66376113, "node_id": "MDQ6VXNlcjY2Mzc2MTEz", "avatar_url": "https://avatars.githubusercontent.com/u/66376113?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kalomaze", "html_url": "https://github.com/kalomaze", "followers_url": "https://api.github.com/users/kalomaze/followers", "following_url": "https://api.github.com/users/kalomaze/following{/other_user}", "gists_url": "https://api.github.com/users/kalomaze/gists{/gist_id}", "starred_url": "https://api.github.com/users/kalomaze/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kalomaze/subscriptions", "organizations_url": "https://api.github.com/users/kalomaze/orgs", "repos_url": "https://api.github.com/users/kalomaze/repos", "events_url": "https://api.github.com/users/kalomaze/events{/privacy}", "received_events_url": "https://api.github.com/users/kalomaze/received_events", "type": "User", "site_admin": false }
[ { "id": 2648621985, "node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request", "name": "Feature request", "color": "FBCA04", "default": false, "description": "Request for a new feature" } ]
open
false
null
[]
[ "fyi @gante ๐Ÿค— ", "Hi @kalomaze ๐Ÿ‘‹ Thank you for opening this issue!\r\n\r\nIn addition to Temperature, Top p, and Top k, which apply distribution-agnostic transformations, we have three other distribution-aware transformations:\r\n1. [Typical P Decoding](https://huggingface.co/docs/transformers/internal/generation_utils#transformers.TypicalLogitsWarper)\r\n2. [Epsilon Sampling](https://huggingface.co/docs/transformers/internal/generation_utils#transformers.EpsilonLogitsWarper)\r\n3. [Eta Sampling](https://huggingface.co/docs/transformers/internal/generation_utils#transformers.EtaLogitsWarper)\r\n\r\nThese techniques do a similar thing to what you mention: they apply a \"Top p\"-like transformation, adjusted by the probability distribution. \r\n\r\nSince we already have similar techniques, backed up by papers with benchmarks, I'm reluctant to add this technique without further benchmarks. Maintenance is a heavy long-term burden in `transformers` that we want to contain ๐Ÿค— ", "> Hi @kalomaze ๐Ÿ‘‹ Thank you for opening this issue!\r\n> \r\n> In addition to Temperature, Top p, and Top k, which apply distribution-agnostic transformations, we have three other distribution-aware transformations:\r\n> \r\n> 1. [Typical P Decoding](https://huggingface.co/docs/transformers/internal/generation_utils#transformers.TypicalLogitsWarper)\r\n> 2. [Epsilon Sampling](https://huggingface.co/docs/transformers/internal/generation_utils#transformers.EpsilonLogitsWarper)\r\n> 3. [Eta Sampling](https://huggingface.co/docs/transformers/internal/generation_utils#transformers.EtaLogitsWarper)\r\n> \r\n> These techniques do a similar thing to what you mention: they apply a \"Top p\"-like transformation, adjusted by the probability distribution.\r\n> \r\n> Since we already have similar techniques, backed up by papers with benchmarks, I'm reluctant to add this technique without further benchmarks. Maintenance is a heavy long-term burden in `transformers` that we want to contain ๐Ÿค—\r\n\r\nThe scaleability of Min P in comparison to Top P seems to objectively be [more consistent beyond just theorycrafting.](https://www.reddit.com/r/LocalLLaMA/comments/187kpr6/how_to_properly_scale_language_model_creativity/)\r\n\r\nMin P also highly interpretable in comparison to Locally Typical sampling which gets into denser, more subjective interpretations of information theory, which begs to question whether or not it's overdesigned. This makes Typical sampling less intuitive to use for the end user.\r\n\r\nIn addition to this, Typical sampling, Epsilon sampling, and Eta sampling as techniques have seen extremely limited real world adoption in terms of open source LLM interfaces, which, at large, have continued to use Top K and Top P in their wake. If not those two, Mirostat has seen mild popularity, but I would argue the latter two samplers (Epsilon sampling, Eta sampling) are perhaps _less_ proven in terms of subjective quality.\r\n\r\nIn conclusion, Min P: \r\n- Is more interpretable to end users _and_ developers compared to the methods you listed. 
This has less risk of unintended behavior in terms of achieving the same goal as Top P / Top K when compared to typical sampling, which is less proven in the 'real world'.\r\n- It has been proven to scale more consistently in comparison to Nucleus sampling in practice, as mentioned earlier\r\n- It has [consistently seen positive reception](https://github.com/SillyTavern/SillyTavern/pull/1417#issuecomment-1831554614) and adoption from the open source language model community at large to the point where most inference backends (vllm, llama.cpp, exllamav2, text-generation-webui's HF loaders, etc) have adopted it:\r\n\r\n<img width=\"632\" alt=\"image\" src=\"https://github.com/huggingface/transformers/assets/66376113/f4a698f8-2d5b-4543-b133-ec7d96be2f7d\">\r\n<img width=\"650\" alt=\"image\" src=\"https://github.com/huggingface/transformers/assets/66376113/77107dac-99ae-4509-bf61-8a41008ab14b\">\r\n\r\nI will also note that a common issue for open source language models is the lack of truly objective metrics for testing beyond manual human analysis; so any apparently 'standard' testing metrics should be given _serious scrutiny_ before they are considered absolute and final measures in which to compare sampler methods.\r\n\r\nIf there are any specific metrics you would like to see on any specific models, I can try to provide them to support my case beyond the subjective results and widespread adoption of the technique (which I figured would stand out on their own, but having numbers would be beneficial... assuming _we can trust the numbers_, which is an assumption I'm hesitant to make without sufficient fundamental evidence for their use beyond \"arxiv papers used it\")", "@kalomaze precisely because in the past we've added techniques that had some results but ended up not having much use (like Eta sampling) I'm asking for additional validation :) For instance, [Eta sampling had a blind human preference test](https://arxiv.org/pdf/2210.15191.pdf), where it was shown as preferred over top p, with a relatively low sample size (N=294). However, the upside (and the marketing) was not large enough, so the community decided to stick with simpler, established techniques like top p.\r\n\r\nJust because other repos have merged your technique, it does not make it inherently good. ML is a data-driven science, so let's collect data -- I have yet to see any data beyond a few examples. Note that this is nothing against your creation, I actually agree with it in principle. `transformers` is a large library with a few maintainers, we have to be conscious of what we add here.\r\n\r\nA good test would be to compare your technique against others with blind human preference ๐Ÿค— There is nothing better than human preference -- I'd be happy to participate in the evaluation.", "> A good test would be to compare your technique against others with blind human preference ๐Ÿค— There is nothing better than human preference -- I'd be happy to participate in the evaluation.\r\n\r\nDo we have enough people who are willing to test / evaluate this to rule out the margin of error, though? 
The main thing we are looking for is to minimize the included outliers when improving the truncation schemes (and those are usually low probability to begin with), and outliers are going to be _hard_ to test for without sufficient data if you sample normally, unless we change the sampler to _only_ pick the least likely token (as a way to measure the truncation consistency directly).\r\n\r\nI've done exactly that before for Top P and Min P and I saw that Min P was an obvious improvement. Would you like me to reproduce that experiment but with Typical sampling? (Llama.cpp, my inference engine of choice, has a broken implementation of Typical sampling at the moment but there is a PR to fix that I can use, and Eta/Epsilon just aren't adopted anywhere else in the LLM world so I'd have to learn how to use Transformers to test those, which seems like it will be necessary for my future LLM tests)\r\n\r\nI'm also aware that an appeal to popularity isn't hard evidence, but I think it's a stronger marker in this case than it would otherwise be given the context of LLM benchmarks and _especially_ certain metrics (e.g perplexity) being dubiously unreliable markers of quality in the ML space.", "> Do we have enough people who are willing to test / evaluate this to rule out the margin of error, though?\r\n\r\nBetween your reddit and my twitter/LI reaches, we will definitely have more than enough people to run a proper study. If you agree to build the interface for the study (e.g. through a HF spaces), I'd be more than happy to promote it! I also have the power to allocate GPUs to a space in order to run the study ๐Ÿ’ช \r\n\r\n> The main thing we are looking for is to minimize the included outliers when improving the truncation schemes (and those are usually low probability to begin with), and outliers are going to be hard to test for without sufficient data if you sample normally, unless we change the sampler to only pick the least likely token (as a way to measure the truncation consistency directly).\r\n\r\nI agree that the biggest difference is in the outliers. However, each output may have tens or hundreds of tokens, so the effect of bad \"1% probability tokens\" is not that hard to observe :) If there is noticeable human preference after >1000 samples, then we can be sure that it makes a difference.\r\n\r\nAlso, if the test turns out to be a success, you'd gain much more power over the distribution of your technique :D There are no questions over human preference.\r\n\r\n> especially certain metrics (e.g perplexity) being dubiously unreliable markers of quality in the ML space.\r\n\r\n100% agreed", "> > Do we have enough people who are willing to test / evaluate this to rule out the margin of error, though?\r\n> \r\n> Between your reddit and my twitter/LI reaches, we will definitely have more than enough people to run a proper study. If you agree to build the interface for the study (e.g. through a HF spaces), I'd be more than happy to promote it! I also have the power to allocate GPUs to a space in order to run the study ๐Ÿ’ช\r\n> \r\n> > The main thing we are looking for is to minimize the included outliers when improving the truncation schemes (and those are usually low probability to begin with), and outliers are going to be hard to test for without sufficient data if you sample normally, unless we change the sampler to only pick the least likely token (as a way to measure the truncation consistency directly).\r\n> \r\n> I agree that the biggest difference is in the outliers. 
However, each output may have tens or hundreds of tokens, so the effect of bad \"1% probability tokens\" is not that hard to observe :) If there is noticeable human preference after >1000 samples, then we can be sure that it makes a difference.\r\n> \r\n> Also, if the test turns out to be a success, you'd gain much more power over the distribution of your technique :D There are no questions over human preference.\r\n> \r\n> > especially certain metrics (e.g perplexity) being dubiously unreliable markers of quality in the ML space.\r\n> \r\n> 100% agreed\r\n\r\nUnderstood; I've never made a HF space, so that'd be new territory for me, though I'll look into it for sure (since having empirical data would be helpful.)\r\n\r\nWhat would be a fair comparison value to Top P? Or would you prefer something where all methods all evaluated (that might be too aggressive, though?) The next problem, I think, is finding an 'equivalent scale' for all methods. The scale of Min P is obvious and understood; but for Epsilon & etc it's difficult for me to determine...", "@kalomaze I'd suggest to start simple, going against top p alone. Less work and straight to the point. If we realize we're gathering enough participants, then we can expand it to multiple models and multiple strategies, for a better overview. \r\n\r\nI can help you with any roadblock or questions you have along the way: the results are very much of my interest! ๐Ÿ’›\r\n\r\n(and I'm crossing my fingers for Min P to be successful!) ", "> @kalomaze I'd suggest to start simple, going against top p alone. Less work and straight to the point. If we realize we're gathering enough participants, then we can expand it to multiple models and multiple strategies, for a better overview.\r\n> \r\n> I can help you with any roadblock or questions you have along the way: the results are very much of my interest! ๐Ÿ’›\r\n> \r\n> (and I'm crossing my fingers for Min P to be successful!)\r\n\r\nI see, that's very doable.\r\n\r\nHow about:\r\n- Top P 0.98 vs Min P 0.02\r\n- Top P 0.95 vs Min P 0.05\r\n- Top P 0.90 vs Min P 0.1\r\n- Top P 0.80 vs Min P 0.2\r\n\r\nAt temperature 1.0?", "@kalomaze sounds good (I'm assuming you're more sensible than me to what a good pairing looks like :) )\r\n\r\nI'd perhaps suggest lowering the temperature a bit, to 0.7-0.8 (which is what most LLMs use by default nowadays)", "> I'd perhaps suggest lowering the temperature a bit, to 0.7-0.8 (which is what most LLMs use by default nowadays)\r\n\r\nThe API docs for OpenAI suggest either lowering temperature or using Top P, but not both, which seems to imply truncation sampling was intended for use with a standard temperature (which makes sense to me); and the default provided is also 1.0 for GPT in the first place.\r\nTemperature 1.0 is also representative of the original logit scores transformed into probabilities, and isn't an arbitrary transformation, so it makes the most sense to me at least, to compare at this value (unless you have other reasons for it).", "@kalomaze temperature can be seen as a post-hoc calibration of the model logits -- an underconfident model should use a temperature below 1.0 and vice-versa. You can also see it as sharpening (<1.0) or flattening (>1.0) the probability distribution. 
It does have some overlap with top p, with the difference that top p acts on the probabilities and temperature on log probabilities -- after top p, you can end with the same possible tokens, but the temperature will have an impact on their relative distribution.\r\n\r\nThe optimal temperature changes across models and tasks, with llama models excelling around ~0.7 for most tasks. For instance, the starcoder model is recommended to be used with temperatures around ~0.3 :) My suggestion for 0.7-0.8 assumed the use of models like llama or mistral" ]
1,700
1,701
null
NONE
null
### Feature request This is a sampler method already present in other LLM inference backends that aims to simplify the truncation process & help accomodate for the flaws/failings of Top P & Top K. **Min P**. ![image](https://github.com/huggingface/transformers/assets/66376113/53113071-20ed-43bf-a8ae-c9f083840d96) What Min P is doing is simple: we are setting a minimum percentage value that a token must reach to be considered during sampling. However, this is not a hard limit. The minimum will 'scale' based on the top token's probability. So, if you have a Min P value of 0.1 (for example), that would mean your base Min P requirement is 10%. So if your top token is 25%, that means it will only consider tokens that have at least 2.5% probability. This method subjectively seems to improve results across the board with no noticeable downside, and has been merged into the following FOSS LLM backends: - [llama.cpp](https://github.com/ggerganov/llama.cpp/pull/3841) - [vllm](https://github.com/vllm-project/vllm/pull/1642) - [text-generation-webui](https://github.com/oobabooga/text-generation-webui/pull/4701) (through both the HF loaders and llama-cpp-python) I would suggest a default of 0.05. ### Motivation I noticed certain 'flaws' in the popular Top P sampling method: - When the model does not have sufficient confidence/concentration on the next token candidate(s), it's possible for the sampler to consider many tokens that are _highly_ unlikely compared to the few choices it has confidence in. - Top K helps limit the amount of 'low confidence' tokens period as a supplement to Top P, but this often comes at a cost of token choice diversity (often arbitrarily). - In addition to this, Top P can sometimes cut reasonable tokens. What if there's a 90.1% probability token, followed by a 9% probability token? A Top P value of 0.90 would completely gloss over the 9% token in this instance. ![image](https://github.com/huggingface/transformers/assets/66376113/42bb3624-2600-41d7-a9b9-e2c4a25cab52) For this reason I made Min P which seems to have positive reception across the board. ### Your contribution I may consider making a PR for this.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27670/reactions", "total_count": 6, "+1": 5, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27670/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/27669
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27669/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27669/comments
https://api.github.com/repos/huggingface/transformers/issues/27669/events
https://github.com/huggingface/transformers/issues/27669
2,007,992,511
I_kwDOCUB6oc53r4i_
27,669
[i18n-jp] Translating `en/model_doc` folder docs to Japanese 🇯🇵
{ "login": "rajveer43", "id": 64583161, "node_id": "MDQ6VXNlcjY0NTgzMTYx", "avatar_url": "https://avatars.githubusercontent.com/u/64583161?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rajveer43", "html_url": "https://github.com/rajveer43", "followers_url": "https://api.github.com/users/rajveer43/followers", "following_url": "https://api.github.com/users/rajveer43/following{/other_user}", "gists_url": "https://api.github.com/users/rajveer43/gists{/gist_id}", "starred_url": "https://api.github.com/users/rajveer43/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rajveer43/subscriptions", "organizations_url": "https://api.github.com/users/rajveer43/orgs", "repos_url": "https://api.github.com/users/rajveer43/repos", "events_url": "https://api.github.com/users/rajveer43/events{/privacy}", "received_events_url": "https://api.github.com/users/rajveer43/received_events", "type": "User", "site_admin": false }
[ { "id": 2796628563, "node_id": "MDU6TGFiZWwyNzk2NjI4NTYz", "url": "https://api.github.com/repos/huggingface/transformers/labels/WIP", "name": "WIP", "color": "234C99", "default": false, "description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress" } ]
closed
false
null
[]
[]
1,700
1,701
1,701
CONTRIBUTOR
null
<!-- Note: Please search to see if an issue already exists for the language you are trying to translate. --> Hi! Let's bring the documentation to all the japanese-speaking community ๐ŸŒ (currently 0 out of 267 complete) Who would want to translate? Please follow the ๐Ÿค— [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list. Some notes: * Please translate using an informal tone (imagine you are talking with a friend about transformers ๐Ÿค—). * Please translate in a gender-neutral way. * Add your translations to the folder called `ja` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source). * Register your translation in `<ja/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml). * Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @stevhliu and @MKhalusova for review. * ๐Ÿ™‹ If you'd like others to help you with the translation, you can also post in the ๐Ÿค— [forums](https://discuss.huggingface.co/). ## Model_doc section - [ ] [blip-2.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_doc/blip-2.md) #27673 - [ ] [bloom.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_doc/bloom.md) #27673 - [ ] [bort.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_doc/bort.md). #27673 - [ ] [bridgetower.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_doc/bridgetower.md). #27673 - [ ] [bros.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_doc/bros.md). #27673 - [ ] [byt5.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_doc/byt5.md). #27673 - [ ] [camembert.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_doc/camembert.md). #27673 - [ ] [canine.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_doc/canine.md). #27673 - [ ] [chinese_clip.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_doc/chinese_clip.md). #27673 - [ ] [clap.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_doc/clap.md). #27673
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27669/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27669/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27668
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27668/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27668/comments
https://api.github.com/repos/huggingface/transformers/issues/27668/events
https://github.com/huggingface/transformers/pull/27668
2,007,905,340
PR_kwDOCUB6oc5gNr5I
27,668
memory efficient attention (Flash V2) initial support - encoder-only and not combined with relative attention
{ "login": "YoelShoshan", "id": 7043815, "node_id": "MDQ6VXNlcjcwNDM4MTU=", "avatar_url": "https://avatars.githubusercontent.com/u/7043815?v=4", "gravatar_id": "", "url": "https://api.github.com/users/YoelShoshan", "html_url": "https://github.com/YoelShoshan", "followers_url": "https://api.github.com/users/YoelShoshan/followers", "following_url": "https://api.github.com/users/YoelShoshan/following{/other_user}", "gists_url": "https://api.github.com/users/YoelShoshan/gists{/gist_id}", "starred_url": "https://api.github.com/users/YoelShoshan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/YoelShoshan/subscriptions", "organizations_url": "https://api.github.com/users/YoelShoshan/orgs", "repos_url": "https://api.github.com/users/YoelShoshan/repos", "events_url": "https://api.github.com/users/YoelShoshan/events{/privacy}", "received_events_url": "https://api.github.com/users/YoelShoshan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,700
1,700
1,700
NONE
null
# What does this PR do? Adds support for memory-efficient attention, based on the xformers library's memory_efficient_attention() op. It only supports the following: 1. Encoder-only (no causal attention) 2. Relative attention (which gets injected into the "scores", i.e. the result of q@k.t()) is NOT supported, as it would cause Flash V2 to not be used.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27668/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27668/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27668", "html_url": "https://github.com/huggingface/transformers/pull/27668", "diff_url": "https://github.com/huggingface/transformers/pull/27668.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27668.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/27667
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27667/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27667/comments
https://api.github.com/repos/huggingface/transformers/issues/27667/events
https://github.com/huggingface/transformers/pull/27667
2,007,663,534
PR_kwDOCUB6oc5gM2sB
27,667
fix: fix gradient accumulate step for learning rate
{ "login": "pphuc25", "id": 81808312, "node_id": "MDQ6VXNlcjgxODA4MzEy", "avatar_url": "https://avatars.githubusercontent.com/u/81808312?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pphuc25", "html_url": "https://github.com/pphuc25", "followers_url": "https://api.github.com/users/pphuc25/followers", "following_url": "https://api.github.com/users/pphuc25/following{/other_user}", "gists_url": "https://api.github.com/users/pphuc25/gists{/gist_id}", "starred_url": "https://api.github.com/users/pphuc25/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pphuc25/subscriptions", "organizations_url": "https://api.github.com/users/pphuc25/orgs", "repos_url": "https://api.github.com/users/pphuc25/repos", "events_url": "https://api.github.com/users/pphuc25/events{/privacy}", "received_events_url": "https://api.github.com/users/pphuc25/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,700
1,701
1,701
CONTRIBUTOR
null
Hi, I think there is a mistake here: the learning rate schedule should take `total_train_steps` for the `num_train_steps` param, instead of `len(vectorized_datasets["train"])`. I have fixed it so that the learning rate decays to zero as intended. I would like to cc @sanchit-gandhi to review my PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27667/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27667/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27667", "html_url": "https://github.com/huggingface/transformers/pull/27667", "diff_url": "https://github.com/huggingface/transformers/pull/27667.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27667.patch", "merged_at": 1701932367000 }
https://api.github.com/repos/huggingface/transformers/issues/27666
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27666/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27666/comments
https://api.github.com/repos/huggingface/transformers/issues/27666/events
https://github.com/huggingface/transformers/issues/27666
2,007,613,877
I_kwDOCUB6oc53qcG1
27,666
How to remove punctuation marks
{ "login": "chanyong-owl", "id": 57178312, "node_id": "MDQ6VXNlcjU3MTc4MzEy", "avatar_url": "https://avatars.githubusercontent.com/u/57178312?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chanyong-owl", "html_url": "https://github.com/chanyong-owl", "followers_url": "https://api.github.com/users/chanyong-owl/followers", "following_url": "https://api.github.com/users/chanyong-owl/following{/other_user}", "gists_url": "https://api.github.com/users/chanyong-owl/gists{/gist_id}", "starred_url": "https://api.github.com/users/chanyong-owl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chanyong-owl/subscriptions", "organizations_url": "https://api.github.com/users/chanyong-owl/orgs", "repos_url": "https://api.github.com/users/chanyong-owl/repos", "events_url": "https://api.github.com/users/chanyong-owl/events{/privacy}", "received_events_url": "https://api.github.com/users/chanyong-owl/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey ๐Ÿค— thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discuss.huggingface.co/) instead? I'm sure the community will be of help!\r\n\r\nThanks!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,700
1,704
1,704
NONE
null
### System Info I trained t5-large for translation. The training results were good, but when I input a sentence, the output looks like "What are you doing now?.??....." [?.??......] <- how do I remove those punctuation marks? I set parameters like max_length, but I could not solve the issue. ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction c ### Expected behavior cfdvf
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27666/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27666/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27665
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27665/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27665/comments
https://api.github.com/repos/huggingface/transformers/issues/27665/events
https://github.com/huggingface/transformers/pull/27665
2,007,502,981
PR_kwDOCUB6oc5gMTvp
27,665
Docs/Add conversion code to the musicgen docs
{ "login": "yoinked-h", "id": 63889420, "node_id": "MDQ6VXNlcjYzODg5NDIw", "avatar_url": "https://avatars.githubusercontent.com/u/63889420?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yoinked-h", "html_url": "https://github.com/yoinked-h", "followers_url": "https://api.github.com/users/yoinked-h/followers", "following_url": "https://api.github.com/users/yoinked-h/following{/other_user}", "gists_url": "https://api.github.com/users/yoinked-h/gists{/gist_id}", "starred_url": "https://api.github.com/users/yoinked-h/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yoinked-h/subscriptions", "organizations_url": "https://api.github.com/users/yoinked-h/orgs", "repos_url": "https://api.github.com/users/yoinked-h/repos", "events_url": "https://api.github.com/users/yoinked-h/events{/privacy}", "received_events_url": "https://api.github.com/users/yoinked-h/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey! Updated the people you pinged, let's minimize it ๐Ÿ˜‰ ", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27665). All of your documentation changes will be reflected on that endpoint." ]
1,700
1,700
1,700
CONTRIBUTOR
null
# What does this PR do? Adds conversion code to the musicgen docs (the conversion code is quite hidden and needs some changes to be run on other model dirs) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sanchit-gandhi
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27665/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27665/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27665", "html_url": "https://github.com/huggingface/transformers/pull/27665", "diff_url": "https://github.com/huggingface/transformers/pull/27665.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27665.patch", "merged_at": 1700825664000 }
https://api.github.com/repos/huggingface/transformers/issues/27663
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27663/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27663/comments
https://api.github.com/repos/huggingface/transformers/issues/27663/events
https://github.com/huggingface/transformers/pull/27663
2,007,090,060
PR_kwDOCUB6oc5gK93b
27,663
Fix yolos resizing
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,700
1,703
1,703
COLLABORATOR
null
# What does this PR do? On main - running the following produces an image outside of the limits set by "longest_edge" in an image processor's config: ```py from transformers import AutoProcessor from PIL import Image import requests processor = AutoProcessor.from_pretrained("Xenova/yolos-small-300") # or hustvl/yolos-small-300 url = 'https://i.imgur.com/qOp3m0N.png' # very thin image image = Image.open(requests.get(url, stream=True).raw).convert('RGB') output = processor(image) # Result # main: (3, 89, 1335) # branch: (3, 80, 1328) print(output['pixel_values'][0].shape) ``` This logic was copied from the DETR [image processing logic](https://github.com/huggingface/transformers/blob/8aca43bdb3cb9a5020f6d57589d85679dc873b1c/src/transformers/models/detr/image_processing_detr.py#L93) which comes from the [original DETR repo](https://github.com/facebookresearch/detr/blob/3af9fa878e73b6894ce3596450a8d9b89d918ca9/datasets/transforms.py#L76). Note: this means the DETR models' image processors won't respect `longest_edge`. Yolos' image processor has been updated to reflect [the original model](https://github.com/hustvl/YOLOS/blob/5717fc29d727dab84ad585c56457b4de1225eddc/datasets/transforms.py#L76). Note - this also differs from the output image size that would be obtained from using `torchvision`'s `resize`. In the above example, it would be `(3, 88, 1333)`. Fixes #27381 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
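To make the size constraint described in the PR body concrete, here is a minimal, self-contained sketch of aspect-ratio preserving resizing that respects both a `shortest_edge` and a `longest_edge` limit. The function name and the example numbers are illustrative assumptions, not the PR's actual implementation (YOLOS follows the original repo's rounding, which can differ from this by a few pixels).

```python
# A hypothetical helper illustrating the constraint the fix enforces: scale so the short
# side reaches `shortest_edge`, but never let the long side exceed `longest_edge`.
def get_output_size(height: int, width: int, shortest_edge: int, longest_edge: int) -> tuple:
    scale = shortest_edge / min(height, width)
    if max(height, width) * scale > longest_edge:
        scale = longest_edge / max(height, width)
    return int(round(height * scale)), int(round(width * scale))


# Very wide input: the long side is capped at `longest_edge` instead of overshooting it.
print(get_output_size(100, 2000, shortest_edge=800, longest_edge=1333))  # (67, 1333)
```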
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27663/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27663/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27663", "html_url": "https://github.com/huggingface/transformers/pull/27663", "diff_url": "https://github.com/huggingface/transformers/pull/27663.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27663.patch", "merged_at": 1703105752000 }
https://api.github.com/repos/huggingface/transformers/issues/27662
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27662/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27662/comments
https://api.github.com/repos/huggingface/transformers/issues/27662/events
https://github.com/huggingface/transformers/pull/27662
2,006,973,324
PR_kwDOCUB6oc5gKkJ5
27,662
[`Llava`]ย Add Llava to transformers
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27662). All of your documentation changes will be reflected on that endpoint.", "Bakllava weights successfully converted here: https://huggingface.co/ybelkada/BakLlava-v1-hf", "All converted checkpoints are now under this organization: https://huggingface.co/llava-hf", "Hello, can you tell me how to use `LlavaForConditionalGeneration` in transformers to train from scratch? I want to use the weights of Vicuna and Clip Vision Transformer for training, just like the original author did." ]
1,700
1,706
1,701
CONTRIBUTOR
null
# What does this PR do? Adds Llava, a multimodal model, to the transformers library. Llava is a multi-modal model that claims competitive performance with GPT-4 on multi-modal tasks. There are currently 3 main variants of this architecture: - Llava - Llama - Llava - MPT - Llava - Mistral (known as Bakllava): https://github.com/SkunkworksAI/BakLLaVA This implementation leverages `AutoModelForCausalLM`, similarly to `Blip2`, to load the correct language model. The goal of this PR is to make it agnostic across all language model architectures. Closes https://github.com/huggingface/transformers/pull/25789 Closes https://github.com/huggingface/transformers/pull/27221 Original llava author: https://github.com/haotian-liu/LLaVA @haotian-liu Original PR author: @shauray8 ```python import requests from PIL import Image import torch from transformers import AutoProcessor, LlavaForVisionText2Text model_id = "llava-hf/llava-1.5-7b-hf" processor = AutoProcessor.from_pretrained(model_id) prompt = "<image>\n" prompt += "USER: What are the things I should be cautious about when I visit this place?\nASSISTANT:" image_file = "https://llava-vl.github.io/static/images/view.jpg" model = LlavaForVisionText2Text.from_pretrained(model_id, torch_dtype=torch.float16, low_cpu_mem_usage=True).to(0) raw_image = Image.open(requests.get(image_file, stream=True).raw) inputs = processor(prompt, raw_image, return_tensors='pt').to(0, torch.float16) output = model.generate(**inputs, max_new_tokens=200) print(processor.decode(output[0][2:], skip_special_tokens=True)) >>> USER: What are the things I should be cautious about when I visit this place? ASSISTANT: When visiting this place, which appears to be a dock or pier extending into a body of water, you should be cautious about several factors. First, be aware of the water depth and any potential hazards, such as submerged rocks or debris, that could pose a risk to your safety. Second, be mindful of the weather conditions, as sudden changes in weather can make the dock or pier unsafe to use. Third, be cautious of the surrounding environment, as there may be wildlife or other natural elements that could pose a threat. Finally, be aware of any local regulations or guidelines for using the dock or pier, as these may include rules about fishing, swimming, or other activities. By being cautious and following any applicable guidelines, you can ensure a safe and enjoyable experience at this location. ``` ![Image](https://llava-vl.github.io/static/images/view.jpg)
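As a rough illustration of how a Llava-style model combines the two modalities described above, the snippet below splices projected image features into the text embedding sequence at the position of an `<image>` placeholder token. All shapes, the token id, and the tensor values are made-up assumptions for demonstration; this is a conceptual sketch, not the PR's actual merging code.

```python
import torch

# Conceptual sketch only: expand a single <image> placeholder token into the projected
# vision-encoder patch embeddings before running the language model.
image_token_id = 32000                                 # hypothetical id of the <image> token
input_ids = torch.tensor([[1, 32000, 9038, 2501]])     # <s> <image> + two text tokens
text_embeds = torch.randn(1, 4, 4096)                  # language-model embeddings of input_ids
image_embeds = torch.randn(1, 576, 4096)               # CLIP patch features after the projector

pieces = []
for pos in range(input_ids.shape[1]):
    if input_ids[0, pos].item() == image_token_id:
        pieces.append(image_embeds[0])                 # placeholder -> 576 patch embeddings
    else:
        pieces.append(text_embeds[0, pos : pos + 1])
inputs_embeds = torch.cat(pieces, dim=0).unsqueeze(0)
print(inputs_embeds.shape)                             # torch.Size([1, 579, 4096])
```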
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27662/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27662/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27662", "html_url": "https://github.com/huggingface/transformers/pull/27662", "diff_url": "https://github.com/huggingface/transformers/pull/27662.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27662.patch", "merged_at": 1701937847000 }
https://api.github.com/repos/huggingface/transformers/issues/27661
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27661/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27661/comments
https://api.github.com/repos/huggingface/transformers/issues/27661/events
https://github.com/huggingface/transformers/pull/27661
2,006,949,847
PR_kwDOCUB6oc5gKe8I
27,661
[`FA-2`] Add Flash Attention to `Phi`
{ "login": "susnato", "id": 56069179, "node_id": "MDQ6VXNlcjU2MDY5MTc5", "avatar_url": "https://avatars.githubusercontent.com/u/56069179?v=4", "gravatar_id": "", "url": "https://api.github.com/users/susnato", "html_url": "https://github.com/susnato", "followers_url": "https://api.github.com/users/susnato/followers", "following_url": "https://api.github.com/users/susnato/following{/other_user}", "gists_url": "https://api.github.com/users/susnato/gists{/gist_id}", "starred_url": "https://api.github.com/users/susnato/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/susnato/subscriptions", "organizations_url": "https://api.github.com/users/susnato/orgs", "repos_url": "https://api.github.com/users/susnato/repos", "events_url": "https://api.github.com/users/susnato/events{/privacy}", "received_events_url": "https://api.github.com/users/susnato/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "All the FA tests pass except the `test_flash_attn_2_generate_padding_right`.\r\n\r\n![Screenshot from 2023-11-23 00-31-04](https://github.com/huggingface/transformers/assets/56069179/46386b35-f7eb-4190-b16e-e3cca96a09ba)\r\n\r\nThis is odd given that the `flash_attn_2_inference_padding_right` test is passing as does the `test_flash_attn_2_generate_left_padding` test.\r\n", "@susnato can you try to run that test multiple times? sometimes it is flaky - apart from that the changes look great on my end !", "Hi @younesbelkada, I ran that test 30 times and every time it failed!\r\n\r\nShouldn't the inference test fail too, if the generation test fails? :sweat_smile:\r\n ", "Hmm yes correct, what I did for llama was to overwrite the test as can be see here: https://github.com/huggingface/transformers/blob/main/tests/models/llama/test_modeling_llama.py#L392 using a real checkpoint. It would be great if you can do the same and test the next 10 tokens are the same (make sure to use `do_sample=False`)", "Hi @younesbelkada, thanks a lot for the advice! All the flash attention tests are passing now. :hugs: \r\n\r\n \r\n![Screenshot from 2023-11-23 17-57-49](https://github.com/huggingface/transformers/assets/56069179/78a6f2dd-77d4-430d-abd8-597d20a9a30d)\r\n", "Hi @younesbelkada, I have pushed the suggestion you asked. ", "BTW when is the next release date for `transformers`?", "Just force-pushed the branch along with the changes. @younesbelkada ", "yep" ]
1,700
1,701
1,701
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This PR adds Flash Attention to `Phi`. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 --> cc: @younesbelkada, @ArthurZucker
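For context, enabling the Flash Attention 2 path added by this PR would typically look like the snippet below. The checkpoint name and the exact flag spelling are assumptions (older `transformers` versions exposed `use_flash_attention_2=True` instead of `attn_implementation`), and running it requires a CUDA GPU with the `flash-attn` package installed.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative usage only: checkpoint name and flag may differ depending on the
# transformers version; FA-2 needs fp16/bf16 weights and a CUDA GPU with flash-attn.
model_id = "microsoft/phi-1_5"  # assumed checkpoint for illustration
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,                # half precision is required for Flash Attention 2
    attn_implementation="flash_attention_2",  # dispatch to the FA-2 attention path
).to("cuda")

inputs = tokenizer("def fibonacci(n):", return_tensors="pt").to("cuda")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```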
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27661/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27661/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27661", "html_url": "https://github.com/huggingface/transformers/pull/27661", "diff_url": "https://github.com/huggingface/transformers/pull/27661.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27661.patch", "merged_at": 1701932269000 }
https://api.github.com/repos/huggingface/transformers/issues/27660
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27660/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27660/comments
https://api.github.com/repos/huggingface/transformers/issues/27660/events
https://github.com/huggingface/transformers/pull/27660
2,006,906,570
PR_kwDOCUB6oc5gKVaC
27,660
Log learning rate
{ "login": "muellerzr", "id": 7831895, "node_id": "MDQ6VXNlcjc4MzE4OTU=", "avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muellerzr", "html_url": "https://github.com/muellerzr", "followers_url": "https://api.github.com/users/muellerzr/followers", "following_url": "https://api.github.com/users/muellerzr/following{/other_user}", "gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}", "starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions", "organizations_url": "https://api.github.com/users/muellerzr/orgs", "repos_url": "https://api.github.com/users/muellerzr/repos", "events_url": "https://api.github.com/users/muellerzr/events{/privacy}", "received_events_url": "https://api.github.com/users/muellerzr/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Is it a draft or should i review? ", "@ArthurZucker still draft, hit one failure when I was trying it so should be good next week hopefully ๐Ÿคž ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "@muellerzr Hello and Happy New Year! Give me a hint on how this issue is going?", "Not much progress yet due to the holidays, it may be another week before I can revisit it if you want to build off what I have @blademoon and try and get it the rest of the way, feel free :) ", "@muellerzr Good afternoon Zach. Happy New Year 2024. Best of luck in the new year. I am also \"rocking out\" after the weekend. Today I continued working on the translation of HuggingFace courses into Russian. So there is not so much time to join the development (unfortunately). I don't work for HugginFace, I translate courses as a volunteer. Thanks again for taking on the implementation of this feature, I've been asked a couple of times on the forum when it will be in the official version. ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27660). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,700
1,707
null
CONTRIBUTOR
null
# What does this PR do? Logs the learning rate on each logging instance when training, and specifically uses scientific notation to save on space Fulfills https://github.com/huggingface/transformers/issues/27631 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @ArthurZucker Example output: ![image](https://github.com/huggingface/transformers/assets/7831895/ea85f274-0390-4d83-8a2b-3db4b9f0bb3f) We only use the scientific notation when printing on screen. Everything else gets the full float
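A minimal sketch of the idea described above: read the current learning rate from the scheduler when building the log dict, keep the full float for trackers, and only format it in scientific notation for the on-screen table. This is an assumption-level illustration, not the Trainer's actual code.

```python
import torch

# Minimal sketch (not the Trainer's implementation): fetch the current learning rate from
# the scheduler, log the full float, and shorten it only for the printed table.
params = [torch.nn.Parameter(torch.zeros(1))]
optimizer = torch.optim.AdamW(params, lr=5e-5)
scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lambda step: 1.0)

logs = {"loss": 2.3456}
logs["learning_rate"] = scheduler.get_last_lr()[0]                 # full float, e.g. 5e-05
print({**logs, "learning_rate": f"{logs['learning_rate']:.3e}"})   # compact display: 5.000e-05
```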
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27660/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27660/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27660", "html_url": "https://github.com/huggingface/transformers/pull/27660", "diff_url": "https://github.com/huggingface/transformers/pull/27660.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27660.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/27659
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27659/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27659/comments
https://api.github.com/repos/huggingface/transformers/issues/27659/events
https://github.com/huggingface/transformers/pull/27659
2,006,855,559
PR_kwDOCUB6oc5gKJ-r
27,659
[i18n-fr] Translate autoclass tutorial to French
{ "login": "NoB0", "id": 28621493, "node_id": "MDQ6VXNlcjI4NjIxNDkz", "avatar_url": "https://avatars.githubusercontent.com/u/28621493?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NoB0", "html_url": "https://github.com/NoB0", "followers_url": "https://api.github.com/users/NoB0/followers", "following_url": "https://api.github.com/users/NoB0/following{/other_user}", "gists_url": "https://api.github.com/users/NoB0/gists{/gist_id}", "starred_url": "https://api.github.com/users/NoB0/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NoB0/subscriptions", "organizations_url": "https://api.github.com/users/NoB0/orgs", "repos_url": "https://api.github.com/users/NoB0/repos", "events_url": "https://api.github.com/users/NoB0/events{/privacy}", "received_events_url": "https://api.github.com/users/NoB0/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27659). All of your documentation changes will be reflected on that endpoint.", "Thanks for the translation! " ]
1,700
1,701
1,701
CONTRIBUTOR
null
# What does this PR do? Translated the autoclass_tutorial.md file of the documentation to French. Part of #21456 Thank you in advance for your review. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? French speaking contributors.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27659/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27659/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27659", "html_url": "https://github.com/huggingface/transformers/pull/27659", "diff_url": "https://github.com/huggingface/transformers/pull/27659.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27659.patch", "merged_at": 1701931454000 }
https://api.github.com/repos/huggingface/transformers/issues/27658
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27658/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27658/comments
https://api.github.com/repos/huggingface/transformers/issues/27658/events
https://github.com/huggingface/transformers/pull/27658
2,006,844,016
PR_kwDOCUB6oc5gKHZp
27,658
[Whisper] Finalize batched SOTA long-form generation
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27658). All of your documentation changes will be reflected on that endpoint.", "Follow-up of: https://github.com/huggingface/transformers/pull/27492", "Failing test is a timeout which should not happen when running again", "> This was a fun PR to review ๐Ÿ”ฅ Really cool feature -- and a lot of impactful work done here ๐Ÿ’ช\r\n> \r\n> Questions:\r\n> \r\n> 1. I was recaping the features from section 4.5 in the paper at the end of the review, and I can't find the part corresponding to \"initial timestamp constraint\". Are these changes in the PR?\r\n> 2. I see that you dynamically remove rows for a given batch as their computation becomes redundant (no more data or no need for fallback), which is something that I have been considering for the main `generate()` method. Did you notice a significant speed increase?\r\n> \r\n> Suggestion for a future PR: move Whisper's logits processors into their own file under `models/whisper/logits_process.py` and, more importantly, add their documentation to the Whisper doc page. That way, all Whisper functionality becomes more self-contained, instead of being mixed up with generalist methods ๐Ÿค—\r\n\r\nThanks for the extensive review and for diving into the complex heuristics here ๐Ÿ˜… \r\n\r\n1.) Yes, so we already make sure of the initial_timestamp_contraint actually [here](https://github.com/huggingface/transformers/blob/6c78bbcb8320d316434262ef003251ca997db0d1/src/transformers/generation/logits_process.py#L1867) and have it (incorrectly) set in our generation configs [here](https://huggingface.co/openai/whisper-large-v2/blob/696465c62215e36a9ab3f9b7672fe7749f1a1df5/generation_config.json#L215) . The correct value is 50 which is what I've hardcoded in my decoding experiments for now (see [here](https://github.com/patrickvonplaten/whisper-long-form/blob/b599d4df1e88c255cb0e59386eb2a778d51f009f/run_whisper_transformers.py#L328)), but I'll update the model cards once this PR is merged.\r\n\r\n2.) Yes, nice observation! I've ran most experiments on a RTX4090 or A100 and tbh didn't really see a huge speed-up as soon as a row is removed (but probably also because these GPUs are quite good at parallelization. Think on a T4 I'd see a bigger speed up). But it should def give a speed-up! The downside here is that it's not great for torch compile (cc @ArthurZucker) but I think this is a bit outside of this PR and could be added in a future PR.\r\n\r\n> Suggestion for a future PR: move Whisper's logits processors into their own file under models/whisper/logits_process.py and, more importantly, add their documentation to the Whisper doc page. That way, all Whisper functionality becomes more self-contained, instead of being mixed up with generalist methods ๐Ÿค—\r\n=> Agree! I'm happy to move the logits processors actually directly into the whisper model directory in this PR if that's cleaner (cc @sanchit-gandhi @ArthurZucker) \r\n", "> Looks great! Thanks for working on this complex batched generation logic. 
The only difference I'm observing is how we handle timestamps > 30s.\r\n> \r\n> In the original Whisper package, we always offset predicted timestamps above 30s such that they correspond to the actual audio timing, e.g.\r\n> \r\n> ```python\r\n> import torch\r\n> from datasets import load_dataset\r\n> from whisper import load_model, transcribe\r\n> from whisper.utils import WriteVTT\r\n> \r\n> model = load_model(\"tiny.en\")\r\n> \r\n> dataset = load_dataset(\"distil-whisper/librispeech_long\", \"clean\", split=\"validation\")\r\n> sample = dataset[0][\"audio\"][\"array\"]\r\n> sample = torch.from_numpy(sample).float()\r\n> \r\n> pred_out = transcribe(model, audio=sample, condition_on_previous_text=False, temperature=(0.0, 0.2, 0.4, 0.6, 0.8, 1.0))\r\n> \r\n> writer = WriteVTT(output_dir=\"./transcription\")\r\n> \r\n> for start, end, text in writer.iterate_result(pred_out):\r\n> print(f\"{start} --> {end} {text}\")\r\n> ```\r\n> \r\n> **Print Output:**\r\n> \r\n> ```\r\n> 00:00.000 --> 00:06.480 Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel.\r\n> 00:06.480 --> 00:11.280 Nor is Mr. Quilter's manner less interesting than his matter.\r\n> 00:11.280 --> 00:16.840 He tells us that at this festive season of the year, with Christmas and roast beef looming\r\n> 00:16.840 --> 00:23.760 before us, similes drawn from eating and its results occur most readily to the mind.\r\n> 00:23.760 --> 00:29.440 He has grave doubts whether Sir Frederick Layton's work is really Greek after all, and\r\n> 00:29.440 --> 00:33.760 can discover in it but little of rocky Ithaca.\r\n> 00:33.760 --> 00:39.800 Linnell's pictures are a sort of up-gards and atom paintings, and Mason's exquisite\r\n> 00:39.800 --> 00:44.720 idles are as national as a jingo poem.\r\n> 00:44.720 --> 00:50.360 Mr. Burkett Foster's landscapes smile at one much in the same way that Mr. Carker used\r\n> 00:50.360 --> 00:52.960 to flash his teeth.\r\n> 00:52.960 --> 00:57.600 Mr. John Collier gives his sitter a cheerful slap in the back.\r\n> 00:57.600 --> 01:01.240 Before he says, like a shampooer and a Turkish bath,\r\n> 01:01.240 --> 01:02.080 next man.\r\n> ```\r\n> \r\n> Whereas currently in Transformers, any timestamps above 30s are reset back to zero:\r\n> \r\n> ```python\r\n> from datasets import load_dataset\r\n> from transformers import WhisperProcessor, WhisperForConditionalGeneration\r\n> \r\n> dataset = load_dataset(\"distil-whisper/librispeech_long\", \"clean\", split=\"validation\")\r\n> sample = dataset[0][\"audio\"][\"array\"]\r\n> \r\n> model = WhisperForConditionalGeneration.from_pretrained(\"openai/whisper-tiny.en\")\r\n> processor = WhisperProcessor.from_pretrained(\"openai/whisper-tiny.en\")\r\n> \r\n> inputs = processor(sample, return_tensors=\"pt\", truncation=False, padding=\"longest\", return_attention_mask=True, sampling_rate=16_000)\r\n> \r\n> pred_ids = model.generate(**inputs, temperature=(0.0, 0.2, 0.4, 0.6, 0.8, 1.0), logprob_threshold=-1.0, no_speech_threshold=0.6, compression_ratio_threshold=1.35)\r\n> \r\n> pred_str = processor.batch_decode(pred_ids, skip_special_tokens=True, decode_with_timestamps=True)[0]\r\n> \r\n> cur_segment = pred_str[0]\r\n> prev_char = pred_str[0]\r\n> for char in pred_str[1:]:\r\n> if prev_char == \">\" and char == \"<\":\r\n> print(cur_segment)\r\n> cur_segment = char\r\n> else:\r\n> cur_segment += char\r\n> prev_char = char\r\n> ```\r\n> \r\n> **Print Output:**\r\n> \r\n> ```\r\n> <|0.00|> Mr. 
Quilter is the apostle of the middle classes, and we are glad to welcome his gospel.<|6.48|>\r\n> <|6.48|> Nor is Mr. Quilter's manner less interesting than his matter.<|11.28|>\r\n> <|11.28|> He tells us that at this festive season of the year, with Christmas and roast beef looming<|16.84|>\r\n> <|16.84|> before us, similes drawn from eating and its results occur most readily to the mind.<|23.76|>\r\n> <|0.00|> He has grave doubts whether Sir Frederick Layton's work is really Greek after all, and<|5.68|>\r\n> <|5.68|> can discover in it but little of rocky Ithaca.<|10.00|>\r\n> <|10.00|> Linnell's pictures are a sort of up-gards and atom paintings, and Mason's exquisite<|16.04|>\r\n> <|16.04|> idles are as national as a jingo poem.<|20.96|>\r\n> <|20.96|> Mr. Burkett Foster's landscapes smile at one much in the same way that Mr. Carker used<|26.60|>\r\n> <|26.60|> to flash his teeth.<|29.20|>\r\n> <|0.00|> Mr. John Collier gives his sitter a cheerful slap in the back.<|4.68|>\r\n> ```\r\n> \r\n> This can be rectified by keeping track of the time offset for each element in the batch: https://github.com/openai/whisper/blob/ba3f3cd54b0e5b8ce1ab3de13e32122d0d5f98ab/whisper/transcribe.py#L271\r\n\r\nNice observation. I've changed a couple lines of code in Whisper's tokenizer and now we're getting the same formatting:\r\n\r\n```\r\n<|0.00|> Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel.<|6.48|>\r\n<|6.48|> Nor is Mr. Quilter's manner less interesting than his matter.<|11.28|>\r\n<|11.28|> He tells us that at this festive season of the year, with Christmas and roast beef looming<|16.84|>\r\n<|16.84|> before us, similes drawn from eating and its results occur most readily to the mind.<|23.76|>\r\n<|23.76|> He has grave doubts whether Sir Frederick Layton's work is really Greek after all, and<|29.44|>\r\n<|29.44|> can discover in it but little of rocky Ithaca.<|33.76|>\r\n<|33.76|> Linnell's pictures are a sort of up-gards and atom paintings, and Mason's exquisite<|39.80|>\r\n<|39.80|> idles are as national as a jingo poem.<|44.72|>\r\n<|44.72|> Mr. Burkett Foster's landscapes smile at one much in the same way that Mr. Carker used<|50.36|>\r\n<|50.36|> to flash his teeth.<|52.96|>\r\n<|52.96|> Mr. John Collier gives his sitter a cheerful slap in the back.<|57.64|>\r\n```", "I don't see how the failing test can be related to changes in Whisper's generate or tokenizer method. I'm also not able to reproduce them locally.\r\n\r\n```\r\nFAILED tests/test_modeling_utils.py::ModelUtilsTest::test_model_from_pretrained_with_different_pretrained_model_name - AssertionError: False is not true\r\nFAILED tests/test_modeling_utils.py::ModelUtilsTest::test_unexpected_keys_warnings - AssertionError: \"were not used when initializing ModelWithHead: ['added_key']\" not found in ''\r\n[Ensure a warning is shown when the input_ids start with a pad_token_id.] SUBFAIL tests/test_modeling_utils.py::ModelUtilsTest::test_warn_if_padding_and_no_attention_mask - AssertionError: 'We strongly recommend passing in an `attention_mask`' not found in ''\r\n[Ensure a warning is shown when the input_ids end with a pad_token_id.] SUBFAIL tests/test_modeling_utils.py::ModelUtilsTest::test_warn_if_padding_and_no_attention_mask - AssertionError: 'We strongly recommend passing in an `attention_mask`' not found in ''\r\n[Ensure that the warning is shown at most once.] 
SUBFAIL tests/test_modeling_utils.py::ModelUtilsTest::test_warn_if_padding_and_no_attention_mask - AssertionError: 0 != 1\r\n[Ensure a different warning is shown when the pad_token_id is equal to the bos_token_id.] SUBFAIL tests/test_modeling_utils.py::ModelUtilsTest::test_warn_if_padding_and_no_attention_mask - AssertionError: 'You may ignore this warning if your `pad_token_id`' not found in ''\r\n```", "@patrickvonplaten They're not related and they're affecting other PRs e.g. [this PR](https://github.com/huggingface/transformers/pull/28214). Internal thread: https://huggingface.slack.com/archives/C01NE71C4F7/p1705082667041509", "@patrickvonplaten A fix has been merged into main which resolves the flaky logging tests: https://github.com/huggingface/transformers/commit/0754217c82e5c640c6269d4d0ddc99203b3fd99b", "All slow tests of Whisper are passing -> merging! ", "@sanchit-gandhi open PRs for the model repos to change generation configs:\r\nPr created at https://huggingface.co/openai/whisper-large-v3/discussions/69\r\nPr created at https://huggingface.co/openai/whisper-large-v2/discussions/95\r\nPr created at https://huggingface.co/openai/whisper-base.en/discussions/17\r\nPr created at https://huggingface.co/openai/whisper-tiny.en/discussions/24\r\nPr created at https://huggingface.co/openai/whisper-small/discussions/37\r\nPr created at https://huggingface.co/openai/whisper-base/discussions/29\r\nPr created at https://huggingface.co/openai/whisper-tiny/discussions/39\r\nPr created at https://huggingface.co/openai/whisper-small.en/discussions/16\r\nPr created at https://huggingface.co/openai/whisper-medium/discussions/32\r\nPr created at https://huggingface.co/openai/whisper-large/discussions/48\r\nPr created at https://huggingface.co/openai/whisper-medium.en/discussions/16\r\nPr created at https://huggingface.co/distil-whisper/distil-medium.en/discussions/11\r\nPr created at https://huggingface.co/distil-whisper/distil-large-v2/discussions/23\r\nPr created at https://huggingface.co/distil-whisper/distil-small.en/discussions/8\r\n\r\n=> Would be great if you could check and merge.", "Great new feature. Is there a way to get word-level timestamps when using the above long-form transcription? I know it can give segment-level timestamps, but word-level timestamps would be very useful. ", "> a way to get word-level timestamps when using the above long-form transcription? I know it can give segment-level timestamps, but word-level timestamps would be very useful.\r\n\r\nThat would be a great feature addition", "thank you very much for the contribution @patrickvonplaten \r\ni am pretty new to coding. 
Since this PR is marked as merged, does it mean if I use\r\n```\r\npipe = pipeline(\r\n \"automatic-speech-recognition\",\r\n model=model,\r\n tokenizer=processor.tokenizer,\r\n feature_extractor=processor.feature_extractor,\r\n max_new_tokens=128,\r\n chunk_length_s=30,\r\n batch_size=16,\r\n return_timestamps=True,\r\n torch_dtype=torch_dtype,\r\n device=device,\r\n)\r\n```\r\n\r\nas given in https://huggingface.co/openai/whisper-large-v3\r\n\r\nI will automatically use the batched sequential method here, or is something else required?\r\n\r\nthe reason I ask is, I followed your code in the first post and successfully processed the data from datasets module\r\nhowever, when I try to read a mp3 file using librosa \r\n\r\n```\r\nraw_audio, sampling_rate = librosa.load(file_path, sr=16000, mono=True)\r\ninputs = processor([raw_audio], return_tensors=\"pt\", truncation=False, padding=\"longest\", return_attention_mask=True, sampling_rate=16_000)\r\nresult = model.generate(**inputs, condition_on_prev_tokens=False, temperature=(0.0, 0.2, 0.4, 0.6, 0.8, 1.0), logprob_threshold=-1.0, compression_ratio_threshold=1.35, return_timestamps=True)\r\ndecoded = processor.batch_decode(result, skip_special_tokens=True)\r\n```\r\nI get an error \r\n```\r\n result = model.generate(**inputs, condition_on_prev_tokens=False, temperature=(0.0, 0.2, 0.4, 0.6, 0.8, 1.0), logprob_threshold=-1.0, compression_ratio_threshold=1.35, return_timestamps=True)\r\n File \"C:\\Users\\r\\AppData\\Roaming\\Python\\Python311\\site-packages\\transformers\\models\\whisper\\generation_whisper.py\", line 614, in generate\r\n decoder_input_ids, kwargs = self._prepare_decoder_input_ids(\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\r\\AppData\\Roaming\\Python\\Python311\\site-packages\\transformers\\models\\whisper\\generation_whisper.py\", line 1322, in _prepare_decoder_input_ids\r\n decoder_input_ids = torch.cat([t * one_tensor for t in init_tokens], dim=-1)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\r\\AppData\\Roaming\\Python\\Python311\\site-packages\\transformers\\models\\whisper\\generation_whisper.py\", line 1322, in <listcomp>\r\n decoder_input_ids = torch.cat([t * one_tensor for t in init_tokens], dim=-1)\r\n\r\nTypeError: unsupported operand type(s) for *: 'NoneType' and 'Tensor'\r\n```\r\nI suspected it had something to do with how librosa read my file but I printed the output for raw_audio and it was an array of float32 which seems to be what is needed", "Hey @revantemp3,\r\n\r\nCould you maybe open a new issue and ping me and @sanchit-gandhi on it? I'm sadly not able to reproduce the error when running the code linked [here](https://github.com/huggingface/transformers/pull/27658#issue-2006844016).", "Hey @patrickvonplaten , can your approach also work for short-form transcription, what is for audio <30 seconds? \r\n\r\nJust wanted to understand if there are any other reasons beyond just sticking to legacy code for not using your method for <30 second audio? ", "I don't understand, if you have less than 30sec you can just pass it to the model and it will predict everything in one go, that is the default mode" ]
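The review comments above discuss offsetting timestamps of windows beyond the first 30 seconds so they refer to the full audio rather than the current window. The snippet below is a toy illustration of that bookkeeping with made-up segment times and a fixed 30 s hop; it is not the tokenizer change that was actually merged.

```python
# Toy illustration of timestamp-offset bookkeeping: local segment times in each 30 s window
# are shifted by a running offset to obtain absolute times for the whole recording.
segments_per_window = [
    [(0.0, 6.48), (6.48, 11.28)],   # first window (local times, made up)
    [(0.0, 5.68), (5.68, 10.00)],   # second window (local times, made up)
]

time_offset = 0.0
absolute_segments = []
for window in segments_per_window:
    absolute_segments.extend((start + time_offset, end + time_offset) for start, end in window)
    time_offset += 30.0             # assume a fixed 30 s hop for this sketch
print(absolute_segments)            # [(0.0, 6.48), (6.48, 11.28), (30.0, 35.68), (35.68, 40.0)]
```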
1,700
1,708
1,705
MEMBER
null
๐Ÿšจ๐Ÿšจ ***Disclaimer***: **All the credit of this PR goes to the original codebase: https://github.com/openai/whisper/blob/main/whisper/transcribe.py (this PR is to 90% copied from there). The reason it's added here (instead of contributing to https://github.com/openai/whisper) is because the original repo fundamentally doesn't work with batched inference and because the original repo doesn't make use of Flash Attention and other PyTorch-specific optimizations** ๐Ÿšจ๐Ÿšจ # Batched Long-Form Generation for Whisper Whisper is the *de-facto* open-source model that is used in production use cases. It's still the most performant open-source model out there and was shipped with a variety of sizes ranging from 30M params to 1.5B params. For English, especially the slightly smaller checkpoints such as `whisper-medium.en` and `whisper-small.en` yield impressive performances. Most benchmarks only evaluate on short-form (input is shorter than 30 seconds), such as [our ASR leaderboard](https://huggingface.co/spaces/hf-audio/open_asr_leaderboard), but for most industry use cases (meeting transcriptions, one actually requires long-form transcription (input is minutes or even hours long) As stated [in section 4.5 of the paper](https://cdn.openai.com/papers/whisper.pdf), long-form generation requires a couple of heuristics to function well. These include: - init time stamp constrain - condition on prev input - voice activity detection - temperature fallback when it is detected that the model yields repetitive outputs or low-prob outputs These features help to make long-form transcription more performant and more robust. So far our offer in transformers mainly focused on chunked transcription and vanilla "sequential" decoding which was added [here](https://github.com/huggingface/transformers/pull/27492). While chunked transcription gives already good results for long-form (see [here](https://github.com/huggingface/transformers/pull/27492)), "sequential" decoding improves the results significantly. This PR adds all features of "sequantial" decoding and allows it to be used for batched generation. Compared to "chunked generation", this PR now improves: `whisper-tiny.en` by 25-30% `whisper-small.en` by 20% `whisper-large-v2` by 10% (not conditioning on prev input) *numbers are retrieved by comparing results below with results of "chunked generation" [here](https://github.com/huggingface/transformers/pull/27492)* => Smaller model profit more from improved decoding strategies! The main reason why this feature is implemented in Transformers is because the original code base doesn't support batch inference (see [here](https://github.com/openai/whisper/discussions/662)) and is quite slow compared as it doesn't make use of batching, nor pure fp16 nor FA2. ## Speed Results: - Transcribing all of [these four long-form](https://huggingface.co/collections/distil-whisper/long-form-test-sets-652ec038902fe76a6d0d65bb) datasets with a high batch size and whisper-large-v2 is **4.5 times** faster with `transformers` compared to the [original codebase](https://github.com/huggingface/transformers/pull/27492) while yielding more or less the same results (measured with Torch 2.1, CUDA 12.1 on RTX4090): - Original Whisper (batch size 1): **9h2min** - Transformers (batch size 1): **5h20min** - Transformers (batch size 16): **2h23min** - Transformers (chunked, batch size = 16): **2h15min**. => Batched sequential generation is therefore only 5% slower than chunked generation, but gives much improved WER results. 
## Usage One activates long-form generation by simply passing `input_features` that are longer than 30 seconds. **Note** when running in `batch_size>1` mode, one should pass an `attention_mask` as well so that the input length of each audio stream can be known: (the following command takes ~5min on a RTX4090 since 120h of audio are decoded) ```py #!/usr/bin/env python3 from transformers import WhisperForConditionalGeneration, AutoProcessor from datasets import load_dataset, Audio import torch import numpy as np processor = AutoProcessor.from_pretrained("openai/whisper-small.en") model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small.en", torch_dtype=torch.float16) model.to("cuda") # rertieve 8 long audio sequences ds = load_dataset("distil-whisper/earnings21", "full")["test"] ds = ds.cast_column("audio", Audio(sampling_rate=16000)) ds = ds[:8] # take batch size of 8 raw_audio = [x["array"].astype(np.float32) for x in ds["audio"]] # process input, make sure to pass `padding='longest'` and `return_attention_mask=True` inputs = processor(raw_audio, return_tensors="pt", truncation=False, padding="longest", return_attention_mask=True, sampling_rate=16_000) inputs = inputs.to("cuda", torch.float16) # activate `temperature_fallback` and repetition detection filters and condition on prev text result = model.generate(**inputs, condition_on_prev_tokens=False, temperature=(0.0, 0.2, 0.4, 0.6, 0.8, 1.0), logprob_threshold=-1.0, compression_ratio_threshold=1.35, return_timestamps=True) decoded = processor.batch_decode(result, skip_special_tokens=True) print(decoded) ``` ## WER Results: First, it is to be noted that results can sometimes strongly vary between runs (1-2% absolute WER) when temperature fallback is activated due to the random nature when sampling and because long-form transcription is strongly dependent on previous inputs. This is much less the case though when NOT conditioning on previous input. In summary, Transformers results match the results of the original code base. They match very closely when not conditioning on previous input, but have a higher variance when conditioning on previous input. It is important to understand that an array of different hyper-parameters is used here for inference: ``` gen_kwargs = { "condition_on_prev_tokens": True/False, "max_length": 448, "return_timestamps": True, "num_beams": data_args.num_beams, "top_k": 0, "compression_ratio_threshold": 1.35, # different compression threshold is used "temperature": (0.0, 0.2, 0.4, 0.6, 0.8, 1.0), "logprob_threshold": -1.0, "no_speech_threshold": 0.6, } ``` whereas especially `compression_ratio_threshold`, `temperature`, `condition_on_prev_tokens`, `max_length, `logprob_threshold` and `no_speech_threshold` quite strongly influence the WER. In addition, different hardware can lead to different results. **Note**: All of the above parameters are **exactly** implement as in the original codebase except fo `"compression_ratio_threshold"`. While the original codebase computes the compression ratio based on the decoded string of a generation, here we compute it directly on the predicted IDS because: - High repetition in actual output generations is a better proxy for repetition compared to high repetition of letters/words - It allows for a cleaner design where we don't have to pass the tokenizer into `generate`. You can unfold the following to have in-detail overview of all results of this PR compared to the original codebase. 
In a nutshell, one can see that when **not** conditioning on the previous input, results more or less match exactly (only very minor differences). However, when conditioning on the previous input, the results because much more brittle (re-running the same experiment will then lead to different results). Here we see that the original codebase gives slightly better results for the bigger models, but slighly worse for smaller models. Re-running the experiment might give different results though. <details> <summary>All results - click me</summary> Results for all models (except v3) on long-form. **[This PR] When not conditioning on previous input:** | Name | rev16/test_wer | meanwhile/test_wer | earnings22/test_wer | earnings21/test_wer | **Avg** |---------------------|----------------|--------------------|---------------------|---------------------|---------------------| | whisper-large-v2 | 11.3 | 4.8 | 13.7 | 10.5 | 10.08 | whisper-medium.en | 11.3 | 5.6 | 13.8 | 10.8 | 10.38 | whisper-medium | 11.3 | 5.6 | 13.8 | 10.6 | 10.33 | whisper-small.en | 11.8 | 6.6 | 14.6 | 11.4 | 11.1 | whisper-small | 12.1 | 7.5 | 15.1 | 11.4 | 11.53 | whisper-base.en | 13.5 | 10.5 | 17.3 | 13.1 | 13.6 | whisper-base | 20.5 | 13.4 | 18.5 | 14.3 | 16.68 | whisper-tiny.en | 15.3 | 13.5 | 21.4 | 15.9 | 16.53 | whisper-tiny | 17.1 | 16.4 | 24.1 | 18.4 | 19.00 **[Original OAI] When not conditioning on previous input:** Name | rev16/test_wer | meanwhile/test_wer | earnings22/test_wer | earnings21/test_wer | **Avg** ----------------------|----------------|--------------------|---------------------|---------------------|--------------------- whisper-large-v2 | 11.4 | 4.8 | 13.8 | 10.5 | 10.13 whisper-medium.en | 11.2 | 5.6 | 13.97 | 10.8 | 10.39 whisper-medium | 11.4 | 5.6 | 13.8 | 10.6 | 10.35 whisper-small.en | 12.0 | 6.6 | 14.7 | 11.4 | 11.18 whisper-small | 12.0 | 7.5 | 15.0 | 11.4 | 11.48 whisper-base.en | 13.5 | 10.5 | 17.4 | 13.2 | 13.65 whisper-base | 14.5 | 13.6 | 18.5 | 14.4 | 15.25 whisper-tiny.en | 15.3 | 13.6 | 21.7 | 15.9 | 16.63 whisper-tiny | 17.1 | 17.4 | 24.3 | 18.3 | 19.28 **[This PR] When conditioning on previous input:** To transform the table in the provided image to the same simplified format, I'll remove the "RTX4090-openai/" and "-Longform-cond-input-test" from the Name column, and round the numerical values to one decimal place. 
Here is the updated table in text format: Name | rev16/test_wer | meanwhile/test_wer | earnings22/test_wer | earnings21/test_wer | **Avg** ------------------|----------------|--------------------|---------------------|---------------------|--------------------- whisper-large-v2 | 12.3 | 4.3 | 15.0 | 11.3 | 10.73 whisper-large | 12.88 | 5.1 | 14.09 | 12.5 | 10.42 whisper-medium.en | 11.4 | 6.0 | 13.7 | 10.6 | 10.95 whisper-medium | 11.5 | 5.6 | 14.1 | 10.6 | 10.45 whisper-small.en | 11.8 | 6.4 | 14.3 | 11.0 | 10.88 whisper-small | 11.9 | 7.2 | 14.5 | 11.1 | 11.18 whisper-base.en | 12.8 | 9.5 | 17.7 | 12.3 | 13.08 whisper-base | 14.0 | 16.6 | 19.0 | 13.8 | 15.85 whisper-tiny.en | 14.7 | 13.0 | 20.6 | 15.2 | 15.88 whisper-tiny | 16.5 | 16.2 | 24.5 | 18.5 | 18.93 **[Original OAI] When conditioning on previous input:** Name | rev16/test_wer | meanwhile/test_wer | earnings22/test_wer | earnings21/test_wer | **Avg** --------------------|----------------|--------------------|---------------------|---------------------|--------------------- whisper-large-v2 | 11.4 | 5.5 | 13.8 | 10.5 | 10.3 whisper-large | 10.5 | 5.3 | 13.2 | 9.7 | 9.68 whisper-medium.en | 11.2 | 5.5 | 13.9 | 10.8 | 10.35 whisper-medium | 11.9 | 5.6 | 14.0 | 11.1 | 10.65 whisper-small.en | 13.5 | 6.4 | 14.8 | 11.1 | 11.45 whisper-small | 12.2 | 7.4 | 14.6 | 11.0 | 11.3 whisper-base.en | 13.5 | 10.0 | 17.0 | 12.5 | 13.25 whisper-base | 14.4 | 13.9 | 18.5 | 14.4 | 15.3 whisper-tiny.en | 15.3 | 13.5 | 21.6 | 15.9 | 16.58 whisper-tiny | 17.2 | 18.1 | 24.1 | 18.2 | 19.4 The code to run Transformers Whisper is based on: ```py raw_audio = [x["array"].astype(np.float32) for x in batch[data_args.audio_column_name]] inputs = processor(raw_audio, truncation=False, padding="longest", return_attention_mask=True, sampling_rate=SAMPLING_RATE, return_tensors="pt") if inputs.input_features.shape[-1] < 3000: inputs = processor(raw_audio, return_tensors="pt") inputs.to("cuda", DTYPE) gen_kwargs = { "condition_on_prev_tokens": data_args.condition_on_prev_tokens, "max_length": 448, "return_timestamps": True, "num_beams": 1, "top_k": 0, "compression_ratio_threshold": 1.35, # different compression threshold is used "temperature": (0.0, 0.2, 0.4, 0.6, 0.8, 1.0), "logprob_threshold": -1.0, "no_speech_threshold": 0.6, } result = model.generate(**inputs, **gen_kwargs) decoded = processor.batch_decode(result, skip_special_tokens=True) ``` <details> <summary>The whole inference code</summary> ```py import transformers import whisper from transformers import HfArgumentParser, is_wandb_available from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq from whisper.normalizers import EnglishTextNormalizer SAMPLING_RATE = 16_000 logger = logging.getLogger(__name__) metric = evaluate.load("wer") @dataclass class DataTrainingArguments: """ Arguments pertaining to what data we are going to input our model for training and eval. """ dataset_name: str = field( default=None, metadata={ "help": "The name of the dataset to use (via the datasets library). Load and combine " "multiple datasets by separating dataset hours by a '+' symbol." }, ) model_name_or_path: str = field( default=None, metadata={ "help": "The name of the model to use (via the transformers library). 
" }, ) condition_on_prev_tokens: bool = field( default=False, metadata={"help": "Whether to condition on previous tokens or not"}, ) num_beams: int = field( default=1, metadata={"help": "The number of beams used for evluation."}, ) batch_size: int = field( default=1, metadata={"help": "Batch size at which the model should be evaluated at."} ) use_fp16: bool = field( default=True, metadata={"help": "Whether to run the model in fp16 or not"} ) dataset_config_name: Optional[str] = field( default=None, metadata={"help": "The configuration name of the dataset to use (via the datasets library)."}, ) dataset_split_name: Optional[str] = field( default=None, metadata={"help": "The split name of the dataset to use (via the datasets library)."}, ) dataset_cache_dir: Optional[str] = field( default=None, metadata={"help": "Path to cache directory for saving and loading datasets"}, ) audio_column_name: str = field( default="audio", metadata={"help": "The name of the dataset column containing the audio data. Defaults to 'audio'"}, ) text_column_name: str = field( default=None, metadata={"help": "The name of the dataset column containing the text data. Defaults to `text`."}, ) wandb_project: str = field( default="distil-whisper-speed-benchmark", metadata={"help": "The name of the wandb project."}, ) wandb_name: str = field( default=None, metadata={"help": "The name of the wandb run."}, ) wandb_job_type: str = field( default="distil-whisper", metadata={"help": "The name of the wandb job type."}, ) wandb_dir: str = field( default=None, metadata={"help": "The absolute path to save the wandb logs."}, ) save_code_to_wandb: bool = default=True, metadata={"help": "Whether to use Datasets' streaming mode to load and the data."}, ) def write_metric(summary_writer, eval_metrics, step, prefix="eval"): for metric_name, value in eval_metrics.items(): summary_writer.scalar(f"{prefix}/{metric_name}", value, step) def write_wandb_metric(wandb_logger, metrics, train_time, prefix): log_metrics = {} for k, v in metrics.items(): log_metrics[f"{prefix}/{k}"] = v log_metrics[f"{prefix}/time"] = train_time wandb_logger.log(log_metrics) # TODO(SG): bug with wandb means we can't log the step count def compute_metrics(pred_str, label_str, normalizer): # normalize everything and re-compute the WER norm_pred_str = [normalizer(pred) for pred in pred_str] norm_label_str = [normalizer(label) for label in label_str] wer = 100 * metric.compute(predictions=norm_pred_str, references=norm_label_str) return wer def convert_dataset_str_to_list( dataset_names, dataset_config_names, splits=None, text_column_names=None, dataset_hours=None, default_split="train" ): if isinstance(dataset_names, str): dataset_names = dataset_names.split("+") # we assume that all the datasets we're using derive from the distil-whisper org on the Hub - prepend the org name if necessary for i in range(len(dataset_names)): ds_name = dataset_names[i] dataset_names[i] = f"distil-whisper/{ds_name}" if "/" not in ds_name else ds_name dataset_config_names = dataset_config_names.split("+") splits = splits.split("+") if splits is not None else None text_column_names = text_column_names.split("+") if text_column_names is not None else None dataset_hours = dataset_hours.split("+") if dataset_hours is not None else None # basic checks to ensure we've got the right number of datasets/configs/splits/columns/probs if len(dataset_names) != len(dataset_config_names): raise ValueError( f"Ensure one config is passed for each dataset, got {len(dataset_names)} datasets and" f" 
{len(dataset_config_names)} configs." ) if splits is not None and len(splits) != len(dataset_names): raise ValueError( f"Ensure one split is passed for each dataset, got {len(dataset_names)} datasets and {len(splits)} splits." ) if text_column_names is not None and len(text_column_names) != len(dataset_names): raise ValueError( f"Ensure one text column name is passed for each dataset, got {len(dataset_names)} datasets and" f" {len(text_column_names)} text column names." ) if dataset_hours is not None: if len(dataset_hours) != len(dataset_names): raise ValueError( f"Ensure one probability is passed for each dataset, got {len(dataset_names)} datasets and " f"{len(dataset_hours)} hours." ) dataset_hours = [float(ds_hours) for ds_hours in dataset_hours] else: dataset_hours = [None] * len(dataset_names) text_column_names = ( text_column_names if text_column_names is not None else ["text" for _ in range(len(dataset_names))] ) splits = splits if splits is not None else [default_split for _ in range(len(dataset_names))] dataset_names_dict = [] for i, ds_name in enumerate(dataset_names): dataset_names_dict.append( { "name": ds_name, "config": dataset_config_names[i], "split": splits[i], "text_column_name": text_column_names[i], "hour def main(): # 1. Parse input arguments # See all possible arguments in src/transformers/training_args.py # or by passing the --help flag to this script. # We now keep distinct sets of args, for a cleaner separation of concerns. parser = HfArgumentParser([DataTrainingArguments]) if len(sys.argv) == 2 and sys.argv[1].endswith(".json"): # If we pass only one argument to the script and it's the path to a json file, # let's parse it to get our arguments. data_args = parser.parse_json_file(json_file=os.path.abspath(sys.argv[1]))[0] else: data_args = parser.parse_args_into_dataclasses()[0] # 2. Setup logging # Make one log on every process with the configuration for debugging. logging.basicConfig( format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", datefmt="%m/%d/%Y %H:%M:%S", handlers=[logging.StreamHandler(sys.stdout)], ) has_wandb = is_wandb_available() if has_wandb: import wandb as wandb_logger import wandb # Set up wandb run wandb_logger.init( project=data_args.wandb_project, name=data_args.wandb_name, job_type=data_args.wandb_job_type, dir=data_args.wandb_dir, save_code=data_args.save_code_to_wandb, ) wandb_logger.log({"torch_version": str(torch.__version__)}) wandb_logger.log({"transformers_version": str(transformers.__version__)}) wandb_logger.log({"batch_size": 1}) else: raise ValueError("Wandb logging requires wandb to be installed. Run `pip install wandb` to enable.") # 3. 
Load dataset raw_datasets = IterableDatasetDict() if data_args.streaming else DatasetDict() # Convert lists of dataset names/configs/splits to a dict # names: "librispeech_asr+gigaspeech", configs: "all+l", splits: "validation.clean+validation" # -> [{"name: "librispeech_asr": "config": "all", "split": "validation.clean"}, {"name: "gigaspeech": "config": "l", "split": "validation"} dataset_names_dict = convert_dataset_str_to_list( data_args.dataset_name, data_args.dataset_config_name, splits=data_args.dataset_split_name, text_column_names=data_args.text_column_name, ) if len(dataset_names_dict) == 1: # load a single eval set dataset_dict = dataset_names_dict[0] raw_datasets["eval"] = load_dataset( dataset_dict["name"], dataset_dict["config"], split=dataset_dict["split"], cache_dir=data_args.dataset_cache_dir, streaming=data_args.streaming, ) if dataset_dict["text_column_name"] not in list(raw_datasets["eval"].features.keys()): raise ValueError( f"--text column name {dataset_dict['text_column_name']} not found in the evaluation " f"dataset {dataset_dict['name']}. Ensure `text_column_name` is set to the correct column " f"for the target text. Should be one of {' '.join(list(raw_datasets['eval'].features.keys()))}" ) if dataset_dict["text_column_name"] != "text": raw_datasets["eval"] = raw_datasets["eval"].rename_column(dataset_dict["text_column_name"], "text") else: # load multiple eval sets for dataset_dict in tqdm(dataset_names_dict, desc="Loading datasets..."): # Clean-up the dataset name for pretty logging # ("distil-whisper/librispeech_asr", "validation.clean") -> "librispeech_asr/validation-clean" pretty_name = f"{dataset_dict['name'].split('/')[-1]}/{dataset_dict['split'].replace('.', '-')}" raw_datasets[pretty_name] = load_dataset( dataset_dict["name"], dataset_dict["config"], split=dataset_dict["split"], cache_dir=data_args.dataset_cach # so we just need to set the correct target sampling rate. raw_datasets = raw_datasets.cast_column( data_args.audio_column_name, datasets.features.Audio(SAMPLING_RATE), ) # 5. Load model & normalizer processor = AutoProcessor.from_pretrained(data_args.model_name_or_path) model = AutoModelForSpeechSeq2Seq.from_pretrained(data_args.model_name_or_path, low_cpu_mem_usage=True, torch_dtype=DTYPE) model.generation_config.max_initial_timestamp_index = 50 model.cuda() normalizer = EnglishTextNormalizer() # 6. 
Run evaluation def evaluate(batch): # batch_size has to be 1 for openai/whisper raw_audio = [x["array"].astype(np.float32) for x in batch[data_args.audio_column_name]] inputs = processor(raw_audio, truncation=False, padding="longest", return_attention_mask=True, sampling_rate=SAMPLING_RATE, return_tensors="pt") if inputs.input_features.shape[-1] < 3000: inputs = processor(raw_audio, return_tensors="pt") inputs.to("cuda", DTYPE) gen_kwargs = { "condition_on_prev_tokens": data_args.condition_on_prev_tokens, "max_length": 448, "return_timestamps": True, "num_beams": data_args.num_beams, "top_k": 0, "compression_ratio_threshold": 1.35, # different compression threshold is used "temperature": (0.0, 0.2, 0.4, 0.6, 0.8, 1.0), "logprob_threshold": -1.0, "no_speech_threshold": 0.6, } result = model.generate(**inputs, **gen_kwargs) decoded = processor.batch_decode(result, skip_special_tokens=True) batch["transcription"] = decoded batch["reference"] = batch["text"] return batch result_datasets = DatasetDict() for split in raw_datasets: map_fn = partial( raw_datasets[split].map, function=evaluate, remove_columns=raw_datasets[split].features.keys(), batch_size=data_args.batch_size, batched=True, ) result_datasets[split] = ( map_fn(num_proc=1, desc="benchmark eval dataset") if not data_args.streaming else map_fn() ) # 7. Compute WER and upload count = 0 for split in result_datasets: transcriptions = [] references = [] all_wers = [] if data_args.streaming: result_iter = iter(result_datasets[split]) for result in result_iter: transcriptions.append(result["transcription"]) references.append(result["reference"]) try: all_wers.append(compute_metrics(transcriptions[-1:], references[-1:], normalizer)) except: all_wers.append(None) count += 1 print(f"Processed {count} samples...") log_stats = { f"{split}_wer": compute_metrics(transcriptions, references, normalizer), f"{split}_all_wer": all_wers, } wandb_logger.log(log_stats) print("Done!") if __name__ == "__main__": main() ``` </details> The code to run the OpenAI model is based on: ```py raw_audio = batch[data_args.audio_column_name][0]["array"] raw_audio = raw_audio.astype(np.float32) out_dict = model.transcribe(raw_audio, condition_on_previous_text=data_args.condition_on_prev_tokens, language="en") batch["transcription"] = [out_dict["text"]] batch["reference"] = batch["text"] ``` <details> <summary>The whole inference code</summary> ```py """ Evaluating a Whisper model on one or more evaluation datasets. """ # You can also adapt this script for your own speech recognition validation. Pointers for this are left as comments. from dataclasses import dataclass import logging import os from typing import Optional import numpy as np import sys from dataclasses import field from functools import partial import datasets import evaluate import torch from datasets import DatasetDict, IterableDatasetDict, load_dataset from tqdm import tqdm import transformers import whisper from transformers import HfArgumentParser, is_wandb_available from whisper.normalizers import EnglishTextNormalizer SAMPLING_RATE = 16_000 logger = logging.getLogger(__name__) metric = evaluate.load("wer") @dataclass class DataTrainingArguments: """ Arguments pertaining to what data we are going to input our model for training and eval. """ dataset_name: str = field( default=None, metadata={ "help": "The name of the dataset to use (via the datasets library). Load and combine " "multiple datasets by separating dataset hours by a '+' symbol." 
}, ) model_name_or_path: str = field( default=None, metadata={ "help": "The name of the model to use (via the transformers library). " }, ) condition_on_prev_tokens: bool = field( default=False, metadata={"help": "Whether to condition on previous tokens or not"}, ) num_beams: int = field( default=1, metadata={"help": "The number of beams used for evluation."}, ) dataset_config_name: Optional[str] = field( default=None, metadata={"help": "The configuration name of the dataset to use (via the datasets library)."}, ) dataset_split_name: Optional[str] = field( default=None, metadata={"help": "The split name of the dataset to use (via the datasets library)."}, ) dataset_cache_dir: Optional[str] = field( default=None, metadata={"help": "Path to cache directory for saving and loading datasets"}, ) audio_column_name: str = field( default="audio", metadata={"help": "The name of the dataset column containing the audio data. Defaults to 'audio'"}, ) text_column_name: str = field( default=None, metadata={"help": "The name of the dataset column containing the text data. Defaults to `text`."}, ) wandb_project: str = field( default="distil-whisper-speed-benchmark", metadata={"help": "The name of the wandb project."}, ) wandb_name: str = field( default=None, metadata={"help": "The name of the wandb run."}, ) wandb_job_type: str = field( default="distil-whisper", metadata={"help": "The name of the wandb job type."}, ) wandb_dir: str = field( default=None, metadata={"help": "The absolute path to save the wandb logs."}, ) save_code_to_wandb: bool = field( default=True, metadata={ "help": ( "Whether to save main script to wandb. This is valuable for improving" " experiment reproducibility and to diff code across experiments in" " the UI." ) }, ) streaming: bool = field( default=True, metadata={"help": "Whether to use Datasets' streaming mode to load and the data."}, ) max_eval_samples: Optional[int] = field( default=None, metadata={"help": "For debugging purposes, truncate the number of eval examples to this value if set."}, ) def write_metric(summary_writer, eval_metrics, step, prefix="eval"): for metric_name, value in eval_metrics.items(): summary_writer.scalar(f"{prefix}/{metric_name}", value, step) def write_wandb_metric(wandb_logger, metrics, train_time, prefix): log_metrics = {} for k, v in metrics.items(): log_metrics[f"{prefix}/{k}"] = v log_metrics[f"{prefix}/time"] = train_time wandb_logger.log(log_metrics) # TODO(SG): bug with wandb means we can't log the step count def compute_metrics(pred_str, label_str, normalizer): # normalize everything and re-compute the WER norm_pred_str = [normalizer(pred) for pred in pred_str] norm_label_str = [normalizer(label) for label in label_str] wer = 100 * metric.compute(predictions=norm_pred_str, references=norm_label_str) return wer def convert_dataset_str_to_list( dataset_names, dataset_config_names, splits=None, text_column_names=None, dataset_hours=None, default_split="train" ): if isinstance(dataset_names, str): dataset_names = dataset_names.split("+") # we assume that all the datasets we're using derive from the distil-whisper org on the Hub - prepend the org name if necessary for i in range(len(dataset_names)): ds_name = dataset_names[i] dataset_names[i] = f"distil-whisper/{ds_name}" if "/" not in ds_name else ds_name dataset_config_names = dataset_config_names.split("+") splits = splits.split("+") if splits is not None else None text_column_names = text_column_names.split("+") if text_column_names is not None else None dataset_hours = 
dataset_hours.split("+") if dataset_hours is not None else None # basic checks to ensure we've got the right number of datasets/configs/splits/columns/probs if len(dataset_names) != len(dataset_config_names): raise ValueError( f"Ensure one config is passed for each dataset, got {len(dataset_names)} datasets and" f" {len(dataset_config_names)} configs." ) if splits is not None and len(splits) != len(dataset_names): raise ValueError( f"Ensure one split is passed for each dataset, got {len(dataset_names)} datasets and {len(splits)} splits." ) if text_column_names is not None and len(text_column_names) != len(dataset_names): raise ValueError( f"Ensure one text column name is passed for each dataset, got {len(dataset_names)} datasets and" f" {len(text_column_names)} text column names." ) if dataset_hours is not None: if len(dataset_hours) != len(dataset_names): raise ValueError( f"Ensure one probability is passed for each dataset, got {len(dataset_names)} datasets and " f"{len(dataset_hours)} hours." ) dataset_hours = [float(ds_hours) for ds_hours in dataset_hours] else: dataset_hours = [None] * len(dataset_names) text_column_names = ( text_column_names if text_column_names is not None else ["text" for _ in range(len(dataset_names))] ) splits = splits if splits is not None else [default_split for _ in range(len(dataset_names))] dataset_names_dict = [] for i, ds_name in enumerate(dataset_names): dataset_names_dict.append( { "name": ds_name, "config": dataset_config_names[i], "split": splits[i], "text_column_name": text_column_names[i], "hours": dataset_hours[i], } ) return dataset_names_dict def main(): # 1. Parse input arguments # See all possible arguments in src/transformers/training_args.py # or by passing the --help flag to this script. # We now keep distinct sets of args, for a cleaner separation of concerns. parser = HfArgumentParser([DataTrainingArguments]) if len(sys.argv) == 2 and sys.argv[1].endswith(".json"): # If we pass only one argument to the script and it's the path to a json file, # let's parse it to get our arguments. data_args = parser.parse_json_file(json_file=os.path.abspath(sys.argv[1]))[0] else: data_args = parser.parse_args_into_dataclasses()[0] # 2. Setup logging # Make one log on every process with the configuration for debugging. logging.basicConfig( format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", datefmt="%m/%d/%Y %H:%M:%S", handlers=[logging.StreamHandler(sys.stdout)], ) has_wandb = is_wandb_available() if has_wandb: import wandb as wandb_logger import wandb # Set up wandb run wandb_logger.init( project=data_args.wandb_project, name=data_args.wandb_name, job_type=data_args.wandb_job_type, dir=data_args.wandb_dir, save_code=data_args.save_code_to_wandb, ) wandb_logger.log({"torch_version": str(torch.__version__)}) wandb_logger.log({"transformers_version": str(transformers.__version__)}) wandb_logger.log({"batch_size": 1}) else: raise ValueError("Wandb logging requires wandb to be installed. Run `pip install wandb` to enable.") # 3. 
Load dataset raw_datasets = IterableDatasetDict() if data_args.streaming else DatasetDict() # Convert lists of dataset names/configs/splits to a dict # names: "librispeech_asr+gigaspeech", configs: "all+l", splits: "validation.clean+validation" # -> [{"name: "librispeech_asr": "config": "all", "split": "validation.clean"}, {"name: "gigaspeech": "config": "l", "split": "validation"} dataset_names_dict = convert_dataset_str_to_list( data_args.dataset_name, data_args.dataset_config_name, splits=data_args.dataset_split_name, text_column_names=data_args.text_column_name, ) if len(dataset_names_dict) == 1: # load a single eval set dataset_dict = dataset_names_dict[0] raw_datasets["eval"] = load_dataset( dataset_dict["name"], dataset_dict["config"], split=dataset_dict["split"], cache_dir=data_args.dataset_cache_dir, streaming=data_args.streaming, ) if dataset_dict["text_column_name"] not in list(raw_datasets["eval"].features.keys()): raise ValueError( f"--text column name {dataset_dict['text_column_name']} not found in the evaluation " f"dataset {dataset_dict['name']}. Ensure `text_column_name` is set to the correct column " f"for the target text. Should be one of {' '.join(list(raw_datasets['eval'].features.keys()))}" ) if dataset_dict["text_column_name"] != "text": raw_datasets["eval"] = raw_datasets["eval"].rename_column(dataset_dict["text_column_name"], "text") else: # load multiple eval sets for dataset_dict in tqdm(dataset_names_dict, desc="Loading datasets..."): # Clean-up the dataset name for pretty logging # ("distil-whisper/librispeech_asr", "validation.clean") -> "librispeech_asr/validation-clean" pretty_name = f"{dataset_dict['name'].split('/')[-1]}/{dataset_dict['split'].replace('.', '-')}" raw_datasets[pretty_name] = load_dataset( dataset_dict["name"], dataset_dict["config"], split=dataset_dict["split"], cache_dir=data_args.dataset_cache_dir, streaming=data_args.streaming, ) if dataset_dict["text_column_name"] not in list(raw_datasets[pretty_name].features.keys()): raise ValueError( f"`--text_column_name` {dataset_dict['text_column_name']} not found in the evaluation " f"dataset {dataset_dict['name']}. Ensure `text_column_name` is set to the correct column " f"for the target text. Should be one of {' '.join(list(raw_datasets[pretty_name].features.keys()))}" ) if dataset_dict["text_column_name"] != "text": raw_datasets[pretty_name] = raw_datasets[pretty_name].rename_column( dataset_dict["text_column_name"], "text" ) # 4. Resample speech dataset: `datasets` takes care of automatically loading and resampling the audio, # so we just need to set the correct target sampling rate. raw_datasets = raw_datasets.cast_column( data_args.audio_column_name, datasets.features.Audio(SAMPLING_RATE), ) # 5. Load model & normalizer model_name = data_args.model_name_or_path.split("/")[-1].split("whisper-")[-1] model = whisper.load_model(model_name) model.cuda() normalizer = EnglishTextNormalizer() # 6. 
Run evaluation def evaluate(batch): # batch_size has to be 1 for openai/whisper raw_audio = batch[data_args.audio_column_name][0]["array"] raw_audio = raw_audio.astype(np.float32) # generate out_dict = model.transcribe(raw_audio, condition_on_previous_text=data_args.condition_on_prev_tokens, language="en") batch["transcription"] = [out_dict["text"]] batch["reference"] = batch["text"] return batch result_datasets = DatasetDict() for split in raw_datasets: map_fn = partial( raw_datasets[split].map, function=evaluate, remove_columns=raw_datasets[split].features.keys(), batch_size=1, batched=True, ) result_datasets[split] = ( map_fn(num_proc=1, desc="benchmark eval dataset") if not data_args.streaming else map_fn() ) # 7. Compute WER and upload count = 0 for split in result_datasets: transcriptions = [] references = [] if data_args.streaming: result_iter = iter(result_datasets[split]) for result in result_iter: transcriptions.append(result["transcription"]) references.append(result["reference"]) count += 1 print(f"Processed {count} samples...") log_stats = {f"{split}_wer": compute_metrics(transcriptions, references, normalizer)} wandb_logger.log(log_stats) print("Done!") if __name__ == "__main__": main() ``` </details> </details> ## Review To make this review a bit easier, let me explain first why we need so much code. Very much simplified we need the following loops for long-form generation with temperature fallback in Whisper: ``` # pseudo-code for segment in audio: for temperature in (0.0, 0.2, 0.4, 0.6, 0.8. 1.0): tokens = model.generate(segment, temperature=temperature) for batch_idx in tokens.shape[0]: # find slices that need temperature fallback needs_fallback = ... if not any(needs_fallback): break (out of temperature loop) ``` Temperature fallback is already quite tricky as it means we have to dynamically re-generate certain segments. This coupled with batched generation is even trickier as it means we often only have to do this for some segments in the batch, but not all. Hence we need a couple of nested for-loops here. Because the generation code has become so complex, I moved all the Whisper-specific generation code int `generation_whisper.py`. I've added new tests making sure that everything works correctly an ran all slow tests to make sure I didn't break anything. Failing slow tests for Whisper are due to unrelated issues such as this: https://github.com/huggingface/transformers/pull/27492 one. To make the whole Whisper generation function easier to read, I've split the big generation function into multiple static private functions to help readability. ## Next steps - [ ] Once approved & before merging, all of the official Whisper model cards should be updated as a couple of generation parameters should be added to the model's generation config - [ ] While SOTA batched long-form generation is now functional, it's far from easy to understand what's going on here. Since Whisper is so highly used & important, I'm planning on writing some in-detail blog post / docs after this PR is merged - [ ] The way we deal with `forced_decoder_input_ids` is quite suboptimal which can also be seen from this issue: https://github.com/huggingface/transformers/issues/28228 . In a follow-up PR this should be refactored so that we only make use of `decoder_input_ids` instead of relying on `decoder_input_ids` - [ ] Contact OAI to see if a joint blog post could make sense here
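For readers who only want the end result rather than the benchmark harness above, here is a minimal usage sketch of batched long-form transcription with the features added in this PR. The generation arguments are taken directly from the snippet above; the checkpoint name and `long_audio_arrays` (a list of 1-D float32 waveforms at 16 kHz, each longer than 30 s) are placeholders for illustration, not part of the original scripts.

```py
import torch
from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq

processor = AutoProcessor.from_pretrained("openai/whisper-large-v2")
model = AutoModelForSpeechSeq2Seq.from_pretrained(
    "openai/whisper-large-v2", torch_dtype=torch.float16, low_cpu_mem_usage=True
).to("cuda")

# `long_audio_arrays`: list of 1-D float32 numpy arrays sampled at 16 kHz (placeholder input).
inputs = processor(
    long_audio_arrays,
    truncation=False,
    padding="longest",
    return_attention_mask=True,
    sampling_rate=16_000,
    return_tensors="pt",
).to("cuda", torch.float16)

# Sequential long-form generation with temperature fallback, as described in the Review section.
generated = model.generate(
    **inputs,
    condition_on_prev_tokens=True,
    return_timestamps=True,
    temperature=(0.0, 0.2, 0.4, 0.6, 0.8, 1.0),
    compression_ratio_threshold=1.35,
    logprob_threshold=-1.0,
    no_speech_threshold=0.6,
)
transcriptions = processor.batch_decode(generated, skip_special_tokens=True)
```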
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27658/reactions", "total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 3, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27658/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27658", "html_url": "https://github.com/huggingface/transformers/pull/27658", "diff_url": "https://github.com/huggingface/transformers/pull/27658.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27658.patch", "merged_at": 1705665857000 }
https://api.github.com/repos/huggingface/transformers/issues/27657
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27657/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27657/comments
https://api.github.com/repos/huggingface/transformers/issues/27657/events
https://github.com/huggingface/transformers/pull/27657
2,006,766,771
PR_kwDOCUB6oc5gJ2Wz
27,657
[i18n-fr] Translate installation to French
{ "login": "NoB0", "id": 28621493, "node_id": "MDQ6VXNlcjI4NjIxNDkz", "avatar_url": "https://avatars.githubusercontent.com/u/28621493?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NoB0", "html_url": "https://github.com/NoB0", "followers_url": "https://api.github.com/users/NoB0/followers", "following_url": "https://api.github.com/users/NoB0/following{/other_user}", "gists_url": "https://api.github.com/users/NoB0/gists{/gist_id}", "starred_url": "https://api.github.com/users/NoB0/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NoB0/subscriptions", "organizations_url": "https://api.github.com/users/NoB0/orgs", "repos_url": "https://api.github.com/users/NoB0/repos", "events_url": "https://api.github.com/users/NoB0/events{/privacy}", "received_events_url": "https://api.github.com/users/NoB0/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Heads up, a few typos I was able to spot:\r\n\r\n- line 31: missing 's' -> environnements\r\n- line 57: en seule ligne -> en une seule ligne\r\n- line 74 : suivant -> suivants\r\n- line 110: utilie -> utile", "> Heads up, a few typos I was able to spot:\r\n> \r\n> * line 31: missing 's' -> environnements\r\n> * line 57: en seule ligne -> en une seule ligne\r\n> * line 74 : suivant -> suivants\r\n> * line 110: utilie -> utile\r\n\r\nThanks for pointing this out! I corrected them and checked for others, I believe it is now ok.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27657). All of your documentation changes will be reflected on that endpoint.", "Thank you for working on this! Please make sure the CI checks are green. You can find more information here https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md" ]
1,700
1,701
1,701
CONTRIBUTOR
null
# What does this PR do? Translated the quicktour.mdx file of the documentation to French. Part of #21456 Thank you in advance for your review. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? French speaking contributors. Documentation: @stevhliu and @MKhalusova
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27657/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27657/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27657", "html_url": "https://github.com/huggingface/transformers/pull/27657", "diff_url": "https://github.com/huggingface/transformers/pull/27657.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27657.patch", "merged_at": 1701435608000 }
https://api.github.com/repos/huggingface/transformers/issues/27656
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27656/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27656/comments
https://api.github.com/repos/huggingface/transformers/issues/27656/events
https://github.com/huggingface/transformers/pull/27656
2,006,508,658
PR_kwDOCUB6oc5gI9ek
27,656
[Auto Safetensors] Websocket -> SSE
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Let's merge and iterate on the other PR" ]
1,700
1,700
1,700
MEMBER
null
cc @Narsil
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27656/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27656/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27656", "html_url": "https://github.com/huggingface/transformers/pull/27656", "diff_url": "https://github.com/huggingface/transformers/pull/27656.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27656.patch", "merged_at": 1700748731000 }
https://api.github.com/repos/huggingface/transformers/issues/27655
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27655/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27655/comments
https://api.github.com/repos/huggingface/transformers/issues/27655/events
https://github.com/huggingface/transformers/pull/27655
2,006,369,439
PR_kwDOCUB6oc5gIeqV
27,655
[DPT, Dinov2] Add resources
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@NielsRogge Shall I merge?", "Yes this can be merged :)" ]
1,700
1,700
1,700
CONTRIBUTOR
null
# What does this PR do? To make more people aware that DPT is now compatible with `AutoBackbone` (https://github.com/huggingface/transformers/pull/26092), I've added some usage tips and resources to the docs.
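As a quick illustration of the `AutoBackbone` compatibility mentioned above, here is a minimal sketch; the DINOv2 checkpoint and the chosen `out_indices` are examples only, not taken from the PR.

```py
import torch
from transformers import AutoBackbone

# Load a DINOv2 backbone and request feature maps from a few intermediate stages.
backbone = AutoBackbone.from_pretrained("facebook/dinov2-base", out_indices=[3, 6, 9, 12])

pixel_values = torch.randn(1, 3, 224, 224)
outputs = backbone(pixel_values)

feature_maps = outputs.feature_maps  # one feature map per requested stage
print([fm.shape for fm in feature_maps])
```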
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27655/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27655/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27655", "html_url": "https://github.com/huggingface/transformers/pull/27655", "diff_url": "https://github.com/huggingface/transformers/pull/27655.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27655.patch", "merged_at": 1700761448000 }
https://api.github.com/repos/huggingface/transformers/issues/27653
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27653/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27653/comments
https://api.github.com/repos/huggingface/transformers/issues/27653/events
https://github.com/huggingface/transformers/issues/27653
2,006,313,091
I_kwDOCUB6oc53leiD
27,653
Silent failure when using max_length value that is too low
{ "login": "Kroshtan", "id": 31923442, "node_id": "MDQ6VXNlcjMxOTIzNDQy", "avatar_url": "https://avatars.githubusercontent.com/u/31923442?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Kroshtan", "html_url": "https://github.com/Kroshtan", "followers_url": "https://api.github.com/users/Kroshtan/followers", "following_url": "https://api.github.com/users/Kroshtan/following{/other_user}", "gists_url": "https://api.github.com/users/Kroshtan/gists{/gist_id}", "starred_url": "https://api.github.com/users/Kroshtan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Kroshtan/subscriptions", "organizations_url": "https://api.github.com/users/Kroshtan/orgs", "repos_url": "https://api.github.com/users/Kroshtan/repos", "events_url": "https://api.github.com/users/Kroshtan/events{/privacy}", "received_events_url": "https://api.github.com/users/Kroshtan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey! Feel free to open a PR for a fix, have seen this a few times but don't think it's very important and fast tokenizers also don't have the same outputs as slow for this.\r\nThough not sure if there are a lot of relavant usages for this? ๐Ÿ˜… ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,700
1,704
1,704
NONE
null
### System Info - `transformers` version: 4.34.1 - Platform: Linux-5.15.0-89-generic-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.17.3 - Safetensors version: 0.4.0 - Accelerate version: not installed - Accelerate config: not found - PyTorch version (GPU?): 2.0.0+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: tried both - Using distributed or parallel set-up in script?: no ### Who can help? @ArthurZucker ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Tokenizers have a minimum required setting for max_length, and if this minimum value is not met will ignore the max_length value and simply perform tokenization without max_length. This is done without either a warning or error message. The cut-off value for this silent failure is different for different tokenizers. See below for examples: **Bert Tokenizer** ``` x = AutoTokenizer.from_pretrained("bert-base-cased") >>> x("This is the example text that is sufficiently long to check max length.", max_length=1, padding=True, truncation=True) {'input_ids': [101, 1188, 1110, 1103, 1859, 3087, 1115, 1110, 13230, 1263, 1106, 4031, 12477, 1775, 2251, 119, 102], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]} >>> x("This is the example text that is sufficiently long to check max length.", max_length=2, padding=True, truncation=True) {'input_ids': [101, 102], 'token_type_ids': [0, 0], 'attention_mask': [1, 1]} >>> x(text="This is the example text that is sufficiently long to check max length.", text_pair="this is another sample to check if this makes the problem appear.", max_length=3, padding=True, truncation=True) {'input_ids': [101, 102, 102], 'token_type_ids': [0, 0, 1], 'attention_mask': [1, 1, 1]} >>> x(text="This is the example text that is sufficiently long to check max length.", text_pair="this is another sample to check if this makes the problem appear.", max_length=2, padding=True, truncation=True) {'input_ids': [101, 1188, 1110, 1103, 1859, 3087, 1115, 1110, 13230, 1263, 1106, 4031, 12477, 1775, 2251, 119, 102, 1142, 1110, 1330, 6876, 1106, 4031, 1191, 1142, 2228, 1103, 2463, 2845, 119, 102], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]} x = AutoTokenizer.from_pretrained("roberta-base") x("This is the example text that is sufficiently long to check max length.", max_length=1, padding=True, truncation=True) {'input_ids': [0, 713, 16, 5, 1246, 2788, 14, 16, 21547, 251, 7, 1649, 19220, 5933, 4, 2], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]} x("This is the example text that is sufficiently long to check max length.", max_length=2, padding=True, truncation=True) {'input_ids': [0, 2], 'attention_mask': [1, 1]} x(text="This is the example text that is sufficiently long to check max length.", text_pair="this is another sample to check if this makes the problem appear.", max_length=4, padding=True, truncation=True) {'input_ids': [0, 2, 2, 2], 'attention_mask': [1, 1, 1, 1]} x(text="This 
is the example text that is sufficiently long to check max length.", text_pair="this is another sample to check if this makes the problem appear.", max_length=3, padding=True, truncation=True) {'input_ids': [0, 713, 16, 5, 1246, 2788, 14, 16, 21547, 251, 7, 1649, 19220, 5933, 4, 2, 2, 9226, 16, 277, 7728, 7, 1649, 114, 42, 817, 5, 936, 2082, 4, 2], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]} x(text="This is the example text that is sufficiently long to check max length.", text_pair="this is another sample to check if this makes the problem appear.", max_length=2, padding=True, truncation=True) {'input_ids': [0, 713, 16, 5, 1246, 2788, 14, 16, 21547, 251, 7, 1649, 19220, 5933, 4, 2, 2, 9226, 16, 277, 7728, 7, 1649, 114, 42, 817, 5, 936, 2082, 4, 2], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]} ``` In summary: Both tokenizers require a minimum amount of tokens to properly identify the sentences and will ignore `max_length` if the value is too low. The minimum value is inconsistent between tokenizers (3 is sufficient for paired sentences for the Bert tokenizer, and 4 is required for the roberta tokenizer). ### Expected behavior Warnings at runtime and/or clearer documentation on huggingface.co
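Until the library emits a warning itself, a minimal user-side check makes the failure mode above easy to spot. This sketch uses the same checkpoint and sentence as the examples above and assumes the behaviour reported for the versions listed in this issue.

```py
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

max_length = 1
encoded = tokenizer(
    "This is the example text that is sufficiently long to check max length.",
    max_length=max_length,
    padding=True,
    truncation=True,
)

# If max_length is below the minimum the tokenizer needs (e.g. room for special
# tokens), truncation is silently skipped, so verify the output length explicitly.
if len(encoded["input_ids"]) > max_length:
    print(
        f"max_length={max_length} was ignored: got {len(encoded['input_ids'])} tokens instead."
    )
```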
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27653/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27653/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27652
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27652/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27652/comments
https://api.github.com/repos/huggingface/transformers/issues/27652/events
https://github.com/huggingface/transformers/pull/27652
2,006,193,793
PR_kwDOCUB6oc5gH30w
27,652
Refactoring Trainer, adds `save_only_model` arg and simplifying FSDP integration
{ "login": "pacman100", "id": 13534540, "node_id": "MDQ6VXNlcjEzNTM0NTQw", "avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pacman100", "html_url": "https://github.com/pacman100", "followers_url": "https://api.github.com/users/pacman100/followers", "following_url": "https://api.github.com/users/pacman100/following{/other_user}", "gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}", "starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pacman100/subscriptions", "organizations_url": "https://api.github.com/users/pacman100/orgs", "repos_url": "https://api.github.com/users/pacman100/repos", "events_url": "https://api.github.com/users/pacman100/events{/privacy}", "received_events_url": "https://api.github.com/users/pacman100/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hi, \r\n\r\nThe (Circle)CI did run all the tests: for example, in `torch_job`, we can see\r\n\r\n```\r\npython -m pytest --junitxml=test-results/junit.xml -n 6 --max-worker-restart=0 --dist=loadfile --make-reports=tests_torch tests/benchmark tests/bettertransformer tests/deepspeed tests/extended tests/fixtures tests/fsdp tests/generation tests/models tests/optimization tests/peft_integration tests/quantization tests/sagemaker tests/test_backbone_common.py tests/test_configuration_common.py tests/test_configuration_utils.py tests/test_feature_extraction_common.py tests/test_feature_extraction_utils.py tests/test_image_processing_common.py tests/test_image_processing_utils.py tests/test_image_transforms.py tests/test_modeling_common.py tests/test_modeling_flax_common.py tests/test_modeling_flax_utils.py tests/test_modeling_tf_common.py tests/test_modeling_tf_utils.py tests/test_modeling_utils.py tests/test_pipeline_mixin.py tests/test_sequence_feature_extraction_common.py tests/test_tokenization_common.py tests/test_tokenization_utils.py tests/tokenization tests/tools tests/trainer tests/utils || true\r\n```\r\n\r\nThe CI uses the latest `main` branch of accelerate (so newer than the latest released version)\r\n```\r\n\"pip install -U --upgrade-strategy eager -e git+https://github.com/huggingface/accelerate@main#egg=accelerate\"\r\n````\r\n\r\nNothing extra to check.", "save_only_model is a nice feature indeed, but it does not work together with load_best_model_at_end (at least with deepspeed enabled), since the final model cannot be loaded from the checkpoint. " ]
1,700
1,701
1,700
CONTRIBUTOR
null
# What does this PR do? 1. Bumps up the minimum Accelerate version to 0.21.0 2. Add `save_only_model` arg - This enables the feature request https://github.com/huggingface/transformers/issues/26706 3. Simplifies a lot of logic in FSDP: a. Currently, FSDP-XLA logic is custom in Trainer and normal FSDP is using the Accelerate's integration. There were many zombie code snippets related to normal FSDP. Cleaned those. b. Made it easier to train with FSDP. When using `FULL_STATE_DICT` setting, it should now save the model in transformers format using the default safetensors sharded format. This reduces the burden on users to later load, shard and save in safetensors format. c. Should fix https://github.com/huggingface/transformers/issues/27432 but don't have access to TPUs to test this. d. Fixes https://github.com/huggingface/transformers/issues/27166 e. This is built upon the PR in Accelerate to simplify FSDP integration https://github.com/huggingface/accelerate/pull/2177. It should be merged first.
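A minimal sketch of how the new flag is meant to be used; all argument values other than `save_only_model` are placeholders.

```py
from transformers import TrainingArguments

# With save_only_model=True, checkpoints contain only the model weights (no
# optimizer, scheduler or RNG state), so they are much smaller but cannot be
# used to resume training.
args = TrainingArguments(
    output_dir="./output",  # placeholder
    save_strategy="steps",
    save_steps=500,
    save_only_model=True,
)
```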
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27652/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27652/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27652", "html_url": "https://github.com/huggingface/transformers/pull/27652", "diff_url": "https://github.com/huggingface/transformers/pull/27652.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27652.patch", "merged_at": 1700806252000 }
https://api.github.com/repos/huggingface/transformers/issues/27651
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27651/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27651/comments
https://api.github.com/repos/huggingface/transformers/issues/27651/events
https://github.com/huggingface/transformers/issues/27651
2,006,176,100
I_kwDOCUB6oc53k9Fk
27,651
In which function it is best way to use the temperature parameter .from_pretrained() or .generate()
{ "login": "pradeepdev-1995", "id": 41164884, "node_id": "MDQ6VXNlcjQxMTY0ODg0", "avatar_url": "https://avatars.githubusercontent.com/u/41164884?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pradeepdev-1995", "html_url": "https://github.com/pradeepdev-1995", "followers_url": "https://api.github.com/users/pradeepdev-1995/followers", "following_url": "https://api.github.com/users/pradeepdev-1995/following{/other_user}", "gists_url": "https://api.github.com/users/pradeepdev-1995/gists{/gist_id}", "starred_url": "https://api.github.com/users/pradeepdev-1995/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pradeepdev-1995/subscriptions", "organizations_url": "https://api.github.com/users/pradeepdev-1995/orgs", "repos_url": "https://api.github.com/users/pradeepdev-1995/repos", "events_url": "https://api.github.com/users/pradeepdev-1995/events{/privacy}", "received_events_url": "https://api.github.com/users/pradeepdev-1995/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey ๐Ÿค— thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discuss.huggingface.co/) instead? \r\nI think the answer depends on the use case ๐Ÿ˜‰ \r\nThanks!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,700
1,704
1,704
NONE
null
### Feature request In which function is it best to use the temperature parameter: .from_pretrained() or .generate()? It seems that the temperature parameter can be passed to both functions. Which is the better way to do this? ``` Model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", temperature=0.1, do_sample=True, torch_dtype=torch.bfloat16) ``` or ``` Model.generate(**model_input, max_length=1000, temperature=0.1, do_sample=True) ``` ### Motivation In which function is it best to use the temperature parameter: .from_pretrained() or .generate()? ### Your contribution In which function is it best to use the temperature parameter: .from_pretrained() or .generate()?
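For illustration, here is a minimal runnable sketch of the second option, passing sampling parameters at generation time so that model loading stays independent of how each call samples. The checkpoint and prompt are placeholders, not taken from the issue.

```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-chat-hf"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, device_map="auto", torch_dtype=torch.bfloat16
)

model_input = tokenizer("What does temperature control?", return_tensors="pt").to(model.device)

# Sampling parameters are applied per generate() call.
output = model.generate(**model_input, max_new_tokens=200, do_sample=True, temperature=0.1)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```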
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27651/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27651/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27650
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27650/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27650/comments
https://api.github.com/repos/huggingface/transformers/issues/27650/events
https://github.com/huggingface/transformers/issues/27650
2,006,116,599
I_kwDOCUB6oc53kuj3
27,650
Mask2Former slowdown starting from version 4.32.0
{ "login": "matteot11", "id": 15927868, "node_id": "MDQ6VXNlcjE1OTI3ODY4", "avatar_url": "https://avatars.githubusercontent.com/u/15927868?v=4", "gravatar_id": "", "url": "https://api.github.com/users/matteot11", "html_url": "https://github.com/matteot11", "followers_url": "https://api.github.com/users/matteot11/followers", "following_url": "https://api.github.com/users/matteot11/following{/other_user}", "gists_url": "https://api.github.com/users/matteot11/gists{/gist_id}", "starred_url": "https://api.github.com/users/matteot11/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/matteot11/subscriptions", "organizations_url": "https://api.github.com/users/matteot11/orgs", "repos_url": "https://api.github.com/users/matteot11/repos", "events_url": "https://api.github.com/users/matteot11/events{/privacy}", "received_events_url": "https://api.github.com/users/matteot11/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "cc @amyeroberts ", "Hi @matteot11 - thanks for raising! Looking into it ๐Ÿ•ต๏ธ ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Hi @matteot11 - I'm still looking into this. You're completely correct in your diagnosis of the offending PRs. The difficulty is that these were fixing other issues e.g. enabling tracing for the model. At the moment I'm looking into how to improve this e.g. profiling for other places to improve, rewriting the current logic or reenabling einsum (some tracing issues have been fixed in torch) " ]
1,700
1,707
null
NONE
null
### System Info - System information: x86_64 GNU/Linux (with Titan RTX GPU) - Ubuntu version: 18.04 - Python version: 3.8.12 - CUDA version: 11.1 - PyTorch version: 2.0.1 - transformers version: 4.32.0 ### Who can help? @amyeroberts ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` import torch from transformers import Mask2FormerForUniversalSegmentation import time device = torch.device("cuda:0") model = Mask2FormerForUniversalSegmentation.from_pretrained( "facebook/mask2former-swin-tiny-coco-instance", ).eval().to(device) dummy_input = torch.randn((2,3,640,640)).to(device) times = [] with torch.no_grad(): for i in range(100): t1 = time.time() out = model(dummy_input) t2 = time.time() times.append(t2-t1) print(sum(times)/len(times)) ``` The code above computes average forward time for Mask2Former through 100 iterations. The following average forward time is obtained with different ```transformers``` versions: - ```transformers==4.31.0```-> ~0.133 s - ```transformers==4.32.0``` -> ~0.405 s - ```transformers==4.33.1``` -> 0.507 s Versions 4.32.0 and 4.33.1 introduced, respectively: - https://github.com/huggingface/transformers/pull/25297 - https://github.com/huggingface/transformers/pull/25741 I also report here my original torchscript export issue, form which the above PRs originated: https://github.com/huggingface/transformers/issues/25261 ### Expected behavior I would expect similar (or slightly higher) inference times for Mask2Former after einsum removal and memory load reduction.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27650/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27650/timeline
reopened
null
null
https://api.github.com/repos/huggingface/transformers/issues/27649
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27649/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27649/comments
https://api.github.com/repos/huggingface/transformers/issues/27649/events
https://github.com/huggingface/transformers/issues/27649
2,006,086,108
I_kwDOCUB6oc53knHc
27,649
Adding support for lookahead decoding for autoregressive (decoder + encoder-decoder) models
{ "login": "shermansiu", "id": 12627125, "node_id": "MDQ6VXNlcjEyNjI3MTI1", "avatar_url": "https://avatars.githubusercontent.com/u/12627125?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shermansiu", "html_url": "https://github.com/shermansiu", "followers_url": "https://api.github.com/users/shermansiu/followers", "following_url": "https://api.github.com/users/shermansiu/following{/other_user}", "gists_url": "https://api.github.com/users/shermansiu/gists{/gist_id}", "starred_url": "https://api.github.com/users/shermansiu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shermansiu/subscriptions", "organizations_url": "https://api.github.com/users/shermansiu/orgs", "repos_url": "https://api.github.com/users/shermansiu/repos", "events_url": "https://api.github.com/users/shermansiu/events{/privacy}", "received_events_url": "https://api.github.com/users/shermansiu/received_events", "type": "User", "site_admin": false }
[ { "id": 2648621985, "node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request", "name": "Feature request", "color": "FBCA04", "default": false, "description": "Request for a new feature" } ]
open
false
null
[]
[ "FYI @gante so we keep track of this. Shared offline but might be good after the cache refactoring. ", "The current reference implementation builds directly on top of Huggingface transformers, but the authors have mentioned that they plan to release a custom CUDA kernel to speed up the method.\r\n\r\nShould we wait for this kernel? (My opinion: No, we shouldn't wait. Plus, I'm skeptical about whether such a kernel would be compatible with Flash Attention's own CUDA kernel, but we'll see.)", "Cache refactoring PR: #26681", "While we're waiting for the KV cache refactor to be completed, I think it might be worth considering how exactly to manage the Lookahead Decoding configuration, especially since there are a few associated parameters with it (e.g. the lookahead window size, the N-gram size).\r\n\r\nI suppose it would be better to introduce a LookaheadDecoderConfig dataclass for this?", "No I think these can just be passed in the generation config.", "Hi @shermansiu ๐Ÿ‘‹ \r\n\r\nBefore commenting here, I've spent some time playing with [lookahead decoding](https://github.com/hao-ai-lab/LookaheadDecoding/tree/main). In particular, using a modified version of their `minimal.py`, so I could benchmark against datasets. I'm pasting an example in the collapsible below:\r\n\r\n<details>\r\n <summary>LADE test script</summary>\r\n\r\n ```py\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\r\nfrom datasets import load_dataset\r\nimport time\r\nimport torch\r\nimport os\r\nif int(os.environ.get(\"LOAD_LADE\", 0)):\r\n import lade\r\n lade.augment_all()\r\n # lade.config_lade(LEVEL=7, WINDOW_SIZE=20, GUESS_SET_SIZE=20, DEBUG=1)\r\n lade.config_lade(LEVEL=4, WINDOW_SIZE=8, GUESS_SET_SIZE=8, DEBUG=1)\r\n\r\nassert torch.cuda.is_available()\r\n\r\nnum_samples = 20\r\ndevice = \"cuda:0\"\r\nmodel_name = \"meta-llama/Llama-2-7b-chat-hf\"\r\n# model_name = \"TheBloke/Llama-2-7B-Chat-AWQ\"\r\ntokenizer = AutoTokenizer.from_pretrained(model_name)\r\nmodel = AutoModelForCausalLM.from_pretrained(model_name, device_map=device)\r\n# model = AutoModelForCausalLM.from_pretrained(model_name, device_map=device, use_flash_attention_2=True)\r\nmodel.tokenizer = tokenizer\r\n\r\nds = load_dataset(\"cnn_dailymail\", \"3.0.0\", split=\"validation\", streaming=True)\r\nds_iterator = iter(ds.take(num_samples))\r\n\r\ntorch.cuda.reset_peak_memory_stats(\"cuda\")\r\ntorch.cuda.empty_cache()\r\ntorch.cuda.synchronize()\r\n\r\n#warm up\r\ngreedy_output = model.generate(torch.ones((1, 10), dtype=torch.long, device=device), max_new_tokens=1)\r\n#end warm up\r\n\r\nellapsed_time = 0\r\ngenerated_tokens = 0\r\nfor _ in range(num_samples):\r\n chat = [\r\n {\"role\": \"system\", \"content\": \"You are a helpful model that summarizes a given article.\"},\r\n {\"role\": \"user\", \"content\": next(ds_iterator)[\"article\"]}\r\n ]\r\n\r\n input_ids = tokenizer.apply_chat_template(chat, return_tensors='pt').to(device)\r\n start = time.time()\r\n greedy_output = model.generate(input_ids, max_new_tokens=2048, do_sample=False)\r\n end = time.time()\r\n\r\n generated_tokens += greedy_output.numel() - input_ids.numel()\r\n ellapsed_time += end - start\r\n\r\nmax_memory = torch.cuda.max_memory_allocated(\"cuda\")\r\nprint(\"\\nMax memory (MB): \", max_memory * 1e-6)\r\nprint(\"AVG Generated Tokens: \", (generated_tokens / num_samples))\r\nprint(\"AVG Generation Speed: \", (generated_tokens / ellapsed_time), \" tokens/s\")\r\n\r\n ```\r\n</details>\r\n\r\nHere are some findings:\r\n๐Ÿ‘‰ As mentioned in the blog post, you are 
increasing FLOPS to get additional LLM throughput. All is good if the model is small for your device, but it's hard to achieve speedups using modest models on consumer GPUs (e.g. 7B models in a 3090)\r\n๐Ÿ‘‰ After some fiddling with the LADE parameters, I was able to get a 25% speedup on a 7B model in a 3090, compared to the model without FA2. Running with their default parameterization actually slows the model down by 33%, despite achieving a high compression ratio (= FLOPS is the bottleneck)\r\n๐Ÿ‘‰ Doesn't work correctly with FA2: the output is significantly different\r\n๐Ÿ‘‰ Works with BNB, but I didn't manage to get a speedup on my setup, only slowdowns\r\n๐Ÿ‘‰ Works with AWQ, same findings as in the case without quantization\r\n\r\nOn top of that, from the blog post we know that:\r\n๐Ÿ‘‰ It requires changes in the modeling code of each model, so it will require a lot of work to add and to maintain\r\n๐Ÿ‘‰ It is limited to greedy decoding, meaning that it doesn't support the most common use case (`do_sample=True`)\r\n๐Ÿ‘‰ Batching with this technique is much trickier -- just like in speculative decoding/assisted generation, we may have more than one accepted token per forward pass\r\n\r\n___________________________________________________________________________________________________\r\n\r\n\r\nThe idea does look very promising -- it would be amazing to be able to speed up a model without relying on external models. However, the current benefits are limited to GPU-rich users using a GPU oversized for the task at hand, and the addition costs are heavy, especially with model-level changes. The original code is also open-source and `transformers`-compatible, despite being limited to `llama`.\r\n\r\nIf a model-independent solution can be achieved, more positive benchmarks are found, or upgrades to the technique are released, I'd be happy to reconsider this decision! \r\n\r\nLet's keep this issue open for discussion ๐Ÿค— ", "^ Some of the acronyms in the above response:\r\nLADE = Lookahead decoding\r\nFA2 = Flash Attention 2\r\nBNB: Bitsandbytes\r\nAWQ: Activation-aware Weight Quantization.", "The authors mentioned that they are working on an FA2-compatible CUDA kernel, so hopefully we'll see better results soon!", "BTW, here's a PR where we are looking at adding sampling support.\r\n\r\nhttps://github.com/hao-ai-lab/LookaheadDecoding/pull/6\r\n" ]
1,700
1,701
null
CONTRIBUTOR
null
### Feature request Fu et al. propose a novel decoding technique that accelerates greedy decoding on Llama 2 and Code-Llama by 1.5-2x across various parameter sizes, without a draft model. This method can be extended to work on beam search decoding. Blog post: https://lmsys.org/blog/2023-11-21-lookahead-decoding/ Code: https://github.com/hao-ai-lab/LookaheadDecoding ### Motivation Lookahead decoding provides a massive speedup at a worthwhile tradeoff (namely, a windowed n-gram cache and a custom attention mask). There have been other proposals to integrate lookahead decoding in other libraries like TGI or vLLM, but it seems that for this specific feature, it would be best integrated into the core `transformers` library the same way that Flash Attention has been. ### Your contribution I'm busy with thesis work, but I can submit a PR based on the original implementation here if I have time.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27649/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27649/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/27648
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27648/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27648/comments
https://api.github.com/repos/huggingface/transformers/issues/27648/events
https://github.com/huggingface/transformers/issues/27648
2,005,914,513
I_kwDOCUB6oc53j9OR
27,648
kosmos_processor `sorted_length =sorted([(idx, len(x)) for idx, x in enumerate(text_encoding.input_ids)])` wrong?
{ "login": "YamingZhang", "id": 50822118, "node_id": "MDQ6VXNlcjUwODIyMTE4", "avatar_url": "https://avatars.githubusercontent.com/u/50822118?v=4", "gravatar_id": "", "url": "https://api.github.com/users/YamingZhang", "html_url": "https://github.com/YamingZhang", "followers_url": "https://api.github.com/users/YamingZhang/followers", "following_url": "https://api.github.com/users/YamingZhang/following{/other_user}", "gists_url": "https://api.github.com/users/YamingZhang/gists{/gist_id}", "starred_url": "https://api.github.com/users/YamingZhang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/YamingZhang/subscriptions", "organizations_url": "https://api.github.com/users/YamingZhang/orgs", "repos_url": "https://api.github.com/users/YamingZhang/repos", "events_url": "https://api.github.com/users/YamingZhang/events{/privacy}", "received_events_url": "https://api.github.com/users/YamingZhang/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Mmmm yeah seems like it. Would you like to open a PR? \r\nfyi @ydshieh ", "Sure, I'd be happy to help out", "Hi @YamingZhang \r\n\r\nthe latest version is\r\n\r\n```\r\n sorted_length = sorted(\r\n [(idx, len(x)) for idx, x in enumerate(text_encoding.input_ids)], key=lambda x: x[-1]\r\n )\r\n```\r\nsee #27323.\r\n", "\r\n\r\n> Hi @YamingZhang\r\n> \r\n> the latest version is\r\n> \r\n> ```\r\n> sorted_length = sorted(\r\n> [(idx, len(x)) for idx, x in enumerate(text_encoding.input_ids)], key=lambda x: x[-1]\r\n> )\r\n> ```\r\n> \r\n> see #27323.\r\n\r\nThank you very much.\r\n" ]
1,700
1,700
1,700
NONE
null
https://github.com/huggingface/transformers/blob/514de24abfd4416aeba6a6455ad5920f57f3567d/src/transformers/models/kosmos2/processing_kosmos2.py#L214C17-L214C17 The `sorted` call lacks a key function and should be changed to `sorted_length = sorted([(idx, len(x)) for idx, x in enumerate(text_encoding.input_ids)], key=lambda x: x[1])`, is that correct?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27648/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27648/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27647
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27647/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27647/comments
https://api.github.com/repos/huggingface/transformers/issues/27647/events
https://github.com/huggingface/transformers/pull/27647
2,005,859,302
PR_kwDOCUB6oc5gGuT_
27,647
Add push_to_hub also for image_processor
{ "login": "correll", "id": 399192, "node_id": "MDQ6VXNlcjM5OTE5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/399192?v=4", "gravatar_id": "", "url": "https://api.github.com/users/correll", "html_url": "https://github.com/correll", "followers_url": "https://api.github.com/users/correll/followers", "following_url": "https://api.github.com/users/correll/following{/other_user}", "gists_url": "https://api.github.com/users/correll/gists{/gist_id}", "starred_url": "https://api.github.com/users/correll/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/correll/subscriptions", "organizations_url": "https://api.github.com/users/correll/orgs", "repos_url": "https://api.github.com/users/correll/repos", "events_url": "https://api.github.com/users/correll/events{/privacy}", "received_events_url": "https://api.github.com/users/correll/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,700
1,704
1,704
NONE
null
# What does this PR do? Without this the image-segmentation pipeline will fail with a user's own model. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27647/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27647/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27647", "html_url": "https://github.com/huggingface/transformers/pull/27647", "diff_url": "https://github.com/huggingface/transformers/pull/27647.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27647.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/27646
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27646/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27646/comments
https://api.github.com/repos/huggingface/transformers/issues/27646/events
https://github.com/huggingface/transformers/pull/27646
2,005,853,813
PR_kwDOCUB6oc5gGtHa
27,646
Add "accelerate" to the list of required pacakges
{ "login": "correll", "id": 399192, "node_id": "MDQ6VXNlcjM5OTE5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/399192?v=4", "gravatar_id": "", "url": "https://api.github.com/users/correll", "html_url": "https://github.com/correll", "followers_url": "https://api.github.com/users/correll/followers", "following_url": "https://api.github.com/users/correll/following{/other_user}", "gists_url": "https://api.github.com/users/correll/gists{/gist_id}", "starred_url": "https://api.github.com/users/correll/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/correll/subscriptions", "organizations_url": "https://api.github.com/users/correll/orgs", "repos_url": "https://api.github.com/users/correll/repos", "events_url": "https://api.github.com/users/correll/events{/privacy}", "received_events_url": "https://api.github.com/users/correll/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I include the error message when running \"train()\". Apparently it is only needed when using torch. \r\n\r\n---------------------------------------------------------------------------\r\nImportError Traceback (most recent call last)\r\n[<ipython-input-11-6385862da3e9>](https://localhost:8080/#) in <cell line: 1>()\r\n----> 1 training_args = TrainingArguments(\r\n 2 output_dir=\"segformer-b0-scene-parse-150\",\r\n 3 learning_rate=6e-5,\r\n 4 num_train_epochs=5,\r\n 5 per_device_train_batch_size=2,\r\n\r\n4 frames\r\n[/usr/local/lib/python3.10/dist-packages/transformers/training_args.py](https://localhost:8080/#) in _setup_devices(self)\r\n 1785 if not is_sagemaker_mp_enabled():\r\n 1786 if not is_accelerate_available(min_version=\"0.20.1\"):\r\n-> 1787 raise ImportError(\r\n 1788 \"Using the `Trainer` with `PyTorch` requires `accelerate>=0.20.1`: Please run `pip install transformers[torch]` or `pip install accelerate -U`\"\r\n 1789 )\r\n\r\nImportError: Using the `Trainer` with `PyTorch` requires `accelerate>=0.20.1`: Please run `pip install transformers[torch]` or `pip install accelerate -U`", "Correct, which is why it's part of `transformers[torch]` (as it's only applicable to the torch backend). We should never always install accelerate with base transformers", "pip install transformers[torch] did not work on my machine, but indeed does work on colab. " ]
1,700
1,700
1,700
NONE
null
# What does this PR do? Added "accelerate" to the list of required packages <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27646/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27646/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27646", "html_url": "https://github.com/huggingface/transformers/pull/27646", "diff_url": "https://github.com/huggingface/transformers/pull/27646.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27646.patch", "merged_at": null }