url (string, 62-66) | repository_url (string, 1 class) | labels_url (string, 76-80) | comments_url (string, 71-75) | events_url (string, 69-73) | html_url (string, 50-56) | id (int64, 377M-2.15B) | node_id (string, 18-32) | number (int64, 1-29.2k) | title (string, 1-487) | user (dict) | labels (list) | state (string, 2 classes) | locked (bool, 2 classes) | assignee (dict) | assignees (list) | comments (sequence) | created_at (int64, 1.54k-1.71k) | updated_at (int64, 1.54k-1.71k) | closed_at (int64, 1.54k-1.71k, nullable) | author_association (string, 4 classes) | active_lock_reason (string, 2 classes) | body (string, 0-234k, nullable) | reactions (dict) | timeline_url (string, 71-75) | state_reason (string, 3 classes) | draft (bool, 2 classes) | pull_request (dict) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/26931 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26931/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26931/comments | https://api.github.com/repos/huggingface/transformers/issues/26931/events | https://github.com/huggingface/transformers/pull/26931 | 1,952,174,442 | PR_kwDOCUB6oc5dRTop | 26,931 | Fix license | {
"login": "MedAymenF",
"id": 33981698,
"node_id": "MDQ6VXNlcjMzOTgxNjk4",
"avatar_url": "https://avatars.githubusercontent.com/u/33981698?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MedAymenF",
"html_url": "https://github.com/MedAymenF",
"followers_url": "https://api.github.com/users/MedAymenF/followers",
"following_url": "https://api.github.com/users/MedAymenF/following{/other_user}",
"gists_url": "https://api.github.com/users/MedAymenF/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MedAymenF/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MedAymenF/subscriptions",
"organizations_url": "https://api.github.com/users/MedAymenF/orgs",
"repos_url": "https://api.github.com/users/MedAymenF/repos",
"events_url": "https://api.github.com/users/MedAymenF/events{/privacy}",
"received_events_url": "https://api.github.com/users/MedAymenF/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26931). All of your documentation changes will be reflected on that endpoint."
] | 1,697 | 1,697 | 1,697 | CONTRIBUTOR | null | # What does this PR do?
Fixes the license for Fuyu. See [blog](https://www.adept.ai/blog/fuyu-8b) and [Model Card](https://huggingface.co/adept/fuyu-8b#model-description).
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26931/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26931/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26931",
"html_url": "https://github.com/huggingface/transformers/pull/26931",
"diff_url": "https://github.com/huggingface/transformers/pull/26931.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26931.patch",
"merged_at": 1697722601000
} |
https://api.github.com/repos/huggingface/transformers/issues/26930 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26930/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26930/comments | https://api.github.com/repos/huggingface/transformers/issues/26930/events | https://github.com/huggingface/transformers/pull/26930 | 1,952,156,083 | PR_kwDOCUB6oc5dRPiw | 26,930 | testing doc-builder | {
"login": "mishig25",
"id": 11827707,
"node_id": "MDQ6VXNlcjExODI3NzA3",
"avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mishig25",
"html_url": "https://github.com/mishig25",
"followers_url": "https://api.github.com/users/mishig25/followers",
"following_url": "https://api.github.com/users/mishig25/following{/other_user}",
"gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mishig25/subscriptions",
"organizations_url": "https://api.github.com/users/mishig25/orgs",
"repos_url": "https://api.github.com/users/mishig25/repos",
"events_url": "https://api.github.com/users/mishig25/events{/privacy}",
"received_events_url": "https://api.github.com/users/mishig25/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,697 | 1,697 | 1,697 | CONTRIBUTOR | null | testing https://github.com/huggingface/doc-builder/pull/429
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26930/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26930/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26930",
"html_url": "https://github.com/huggingface/transformers/pull/26930",
"diff_url": "https://github.com/huggingface/transformers/pull/26930.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26930.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26929 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26929/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26929/comments | https://api.github.com/repos/huggingface/transformers/issues/26929/events | https://github.com/huggingface/transformers/pull/26929 | 1,951,981,174 | PR_kwDOCUB6oc5dQpdn | 26,929 | Fix RoPE config validation for FalconConfig + various config typos | {
"login": "tomaarsen",
"id": 37621491,
"node_id": "MDQ6VXNlcjM3NjIxNDkx",
"avatar_url": "https://avatars.githubusercontent.com/u/37621491?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tomaarsen",
"html_url": "https://github.com/tomaarsen",
"followers_url": "https://api.github.com/users/tomaarsen/followers",
"following_url": "https://api.github.com/users/tomaarsen/following{/other_user}",
"gists_url": "https://api.github.com/users/tomaarsen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tomaarsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tomaarsen/subscriptions",
"organizations_url": "https://api.github.com/users/tomaarsen/orgs",
"repos_url": "https://api.github.com/users/tomaarsen/repos",
"events_url": "https://api.github.com/users/tomaarsen/events{/privacy}",
"received_events_url": "https://api.github.com/users/tomaarsen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26929). All of your documentation changes will be reflected on that endpoint."
] | 1,697 | 1,698 | 1,698 | MEMBER | null | # What does this PR do?
One change that fixes a bug:
* Fix RoPE config validation: `config = FalconConfig(alibi=False, rope_scaling={"type": "linear", "factor": 2.0})` would fail. (1fbd6993b8ac0c5dee1b26656648ee8360c06aa3)
And three changes that fix typos/improve maintainability etc.
* Fix code block tag (b9e4f0a826792ba7bd4ed20d810a0311ac76890f)
* Fix typos: an float -> a float (69214ce56aeeb450c6deb49c3365048b99225436)
* Implement copy functionality for Fuyu and Persimmon (4ee278a6a5c904feee02df7eaaddb91d11839d00)
I can split this up into two smaller PRs if preferred - just let me know.
## Details
### Fix RoPE config validation
```python
from transformers import FalconConfig
config = FalconConfig(alibi=False, rope_scaling={"type": "linear", "factor": 2.0})
```
Running this on main produces:
```
ValueError: `rope_scaling` is not supported when `alibi` is `True`.
```
Which is obviously nonsensical. This is caused by `if self.rotary: raise ValueError` where `self.rotary` is just `not self.alibi`. So, this ValueError is thrown when alibi is `False`, not when it is `True`.
The other 3 changes speak for themselves.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
No new tests - I can add one for the first bug fix if preferred.
## Who can review?
@gante
- Tom Aarsen | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26929/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26929/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26929",
"html_url": "https://github.com/huggingface/transformers/pull/26929",
"diff_url": "https://github.com/huggingface/transformers/pull/26929.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26929.patch",
"merged_at": 1698169029000
} |
https://api.github.com/repos/huggingface/transformers/issues/26928 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26928/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26928/comments | https://api.github.com/repos/huggingface/transformers/issues/26928/events | https://github.com/huggingface/transformers/pull/26928 | 1,951,945,448 | PR_kwDOCUB6oc5dQhcJ | 26,928 | [Table Transformer] Add Transformers-native checkpoints | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,697 | 1,700 | 1,700 | CONTRIBUTOR | null | # What does this PR do?
This PR adds Table Transformer checkpoints that leverages the `AutoBackbone` API (instead of the timm library) for the ResNet backbones. This enables people to use the `TableTransformer` models without relying on timm as dependency. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26928/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26928/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26928",
"html_url": "https://github.com/huggingface/transformers/pull/26928",
"diff_url": "https://github.com/huggingface/transformers/pull/26928.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26928.patch",
"merged_at": 1700037354000
} |
https://api.github.com/repos/huggingface/transformers/issues/26927 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26927/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26927/comments | https://api.github.com/repos/huggingface/transformers/issues/26927/events | https://github.com/huggingface/transformers/pull/26927 | 1,951,851,788 | PR_kwDOCUB6oc5dQNt3 | 26,927 | [Docs] Make sure important decode and generate method are nicely displayed in Whisper docs | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"cc @ArthurZucker @sanchit-gandhi for viz"
] | 1,697 | 1,698 | 1,697 | MEMBER | null | All in the title | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26927/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26927/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26927",
"html_url": "https://github.com/huggingface/transformers/pull/26927",
"diff_url": "https://github.com/huggingface/transformers/pull/26927.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26927.patch",
"merged_at": 1697713307000
} |
https://api.github.com/repos/huggingface/transformers/issues/26926 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26926/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26926/comments | https://api.github.com/repos/huggingface/transformers/issues/26926/events | https://github.com/huggingface/transformers/issues/26926 | 1,951,845,758 | I_kwDOCUB6oc50Vs1- | 26,926 | mms.service.PredictionException: Tokenizer class LlamaTokenizer does not exist or is not currently imported. | {
"login": "ashwincv0112",
"id": 129140539,
"node_id": "U_kgDOB7KHOw",
"avatar_url": "https://avatars.githubusercontent.com/u/129140539?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ashwincv0112",
"html_url": "https://github.com/ashwincv0112",
"followers_url": "https://api.github.com/users/ashwincv0112/followers",
"following_url": "https://api.github.com/users/ashwincv0112/following{/other_user}",
"gists_url": "https://api.github.com/users/ashwincv0112/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ashwincv0112/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ashwincv0112/subscriptions",
"organizations_url": "https://api.github.com/users/ashwincv0112/orgs",
"repos_url": "https://api.github.com/users/ashwincv0112/repos",
"events_url": "https://api.github.com/users/ashwincv0112/events{/privacy}",
"received_events_url": "https://api.github.com/users/ashwincv0112/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey! I think the transformers version you are using is wrong. Llama was part of the `4.32` release. Make sure to use this transformers version if you wanna use Llama. Codellama is `4.33.1`",
"Thank you for the response.... \r\nAlso, could you also let me know the corresponding version for \r\npytorch_version=\"1.13\" and \r\npy_version=\"py39\"",
"Mmmm torch should be alright, python as well",
"ok. thanks. I will try it and let you know...\r\nAppreciate your help!",
"Hi Arthur,\r\n\r\nI tried deploying the endpoint with the above mentioned transformer version (4.33.1 for CodeLlama)\r\n\r\nhuggingface_model = HuggingFaceModel(\r\n model_data=model_output_path, # path to your trained SageMaker model\r\n role=role, # IAM role with permissions to create an endpoint\r\n env=hub,\r\n transformers_version=\"4.33.1\", # Transformers version\r\n pytorch_version=\"1.13\", # PyTorch version\r\n py_version=\"py39\", # Python version used\r\n)\r\n\r\n\r\nI am getting the below error (I did upgrade the Sagemaker as mentioned):\r\n\r\n`ValueError: Unsupported huggingface version: 4.33.1. You may need to upgrade your SDK version (pip install -U sagemaker) for newer huggingface versions. Supported huggingface version(s): 4.6.1, 4.10.2, 4.11.0, 4.12.3, 4.17.0, 4.26.0, 4.28.1, 4.6, 4.10, 4.11, 4.12, 4.17, 4.26, 4.28.`\r\n",
"Ouch, that's outside my knowledge so I'll ping @philschmid ",
"@ashwincv0112 we recommend using TGI to deploy LLMs, see here for an example: https://www.philschmid.de/sagemaker-llama-llm",
"@philschmid thank you for the information.\r\n\r\nWe are actually trying to deploy a trained model saved in S3 to Sagemaker endpoint.\r\n\r\nJust wanted to confirm, would the above approach work for that as well.\r\n",
"Yes see here for an example: https://www.philschmid.de/sagemaker-mistral#4-deploy-fine-tuned-mistral-7b-on-amazon-sagemaker",
"Thank you too much. I will try the same and let you know. Appreciate it!!!",
"> Yes see here for an example: https://www.philschmid.de/sagemaker-mistral#4-deploy-fine-tuned-mistral-7b-on-amazon-sagemaker\r\n\r\nThank you @philschmid , I am able to deploy the endpoint and inference on the same.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,697 | 1,701 | 1,701 | NONE | null | ### System Info
I am trying to run a prediction (inference) by accessing a deployed AWS SageMaker endpoint for the CodeLlama-Python-7B model.
While I was able to fine-tune and deploy the endpoint, I am getting the above error while running a prediction.
Could someone help with this? I changed the tokenizer_class in the model repo to LlamaTokenizer while running this.
I also tried CodeLlamaTokenizer and got the same error there as well.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
hub = {
    'HF_TASK': 'text-generation'  # NLP task you want to use for predictions
}
# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
    model_data=model_output_path,  # path to your trained SageMaker model
    role=role,  # IAM role with permissions to create an endpoint
    env=hub,
    transformers_version="4.26",  # Transformers version
    pytorch_version="1.13",  # PyTorch version
    py_version="py39",  # Python version used
)
# deploy model to SageMaker Inference
predictor = huggingface_model.deploy(
    initial_instance_count=1,
    instance_type="<GPU Instance>"
)
predictor.predict(sentiment_input)
### Expected behavior
Able to get the Model output for the provided Prompt (by accessing the Sagemaker endpoint) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26926/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26926/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26925 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26925/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26925/comments | https://api.github.com/repos/huggingface/transformers/issues/26925/events | https://github.com/huggingface/transformers/pull/26925 | 1,951,767,438 | PR_kwDOCUB6oc5dP6_a | 26,925 | add hybrid fsdp ShardingStrategy | {
"login": "pkumc",
"id": 9345057,
"node_id": "MDQ6VXNlcjkzNDUwNTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/9345057?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pkumc",
"html_url": "https://github.com/pkumc",
"followers_url": "https://api.github.com/users/pkumc/followers",
"following_url": "https://api.github.com/users/pkumc/following{/other_user}",
"gists_url": "https://api.github.com/users/pkumc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pkumc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pkumc/subscriptions",
"organizations_url": "https://api.github.com/users/pkumc/orgs",
"repos_url": "https://api.github.com/users/pkumc/repos",
"events_url": "https://api.github.com/users/pkumc/events{/privacy}",
"received_events_url": "https://api.github.com/users/pkumc/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@pacman100 Do you have time to review this PR?",
"> @pacman100 Do you have time to review this PR?\r\n\r\n@muellerzr Could you review this PR?",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26925). All of your documentation changes will be reflected on that endpoint."
] | 1,697 | 1,700 | 1,700 | CONTRIBUTOR | null | # What does this PR do?
Add [hybrid fsdp ShardingStrategy](https://pytorch.org/docs/stable/fsdp.html#torch.distributed.fsdp.ShardingStrategy) to Trainer.
## Who can review?
- trainer: @muellerzr and @pacman100
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26925/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26925/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26925",
"html_url": "https://github.com/huggingface/transformers/pull/26925",
"diff_url": "https://github.com/huggingface/transformers/pull/26925.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26925.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26924 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26924/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26924/comments | https://api.github.com/repos/huggingface/transformers/issues/26924/events | https://github.com/huggingface/transformers/pull/26924 | 1,951,728,911 | PR_kwDOCUB6oc5dPye5 | 26,924 | [ examples] fix loading jsonl with load dataset in run translation example | {
"login": "mathiasesn",
"id": 27091759,
"node_id": "MDQ6VXNlcjI3MDkxNzU5",
"avatar_url": "https://avatars.githubusercontent.com/u/27091759?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mathiasesn",
"html_url": "https://github.com/mathiasesn",
"followers_url": "https://api.github.com/users/mathiasesn/followers",
"following_url": "https://api.github.com/users/mathiasesn/following{/other_user}",
"gists_url": "https://api.github.com/users/mathiasesn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mathiasesn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mathiasesn/subscriptions",
"organizations_url": "https://api.github.com/users/mathiasesn/orgs",
"repos_url": "https://api.github.com/users/mathiasesn/repos",
"events_url": "https://api.github.com/users/mathiasesn/events{/privacy}",
"received_events_url": "https://api.github.com/users/mathiasesn/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26924). All of your documentation changes will be reflected on that endpoint.",
"> LGTM, this might need to be propagated to other examples? Also can you update the pr description to say what this fixes and the name of the PR with [ `examples`] fix blablalbla\r\n\r\nUpdate name of the PR to: [ examples] fix loading jsonl with load dataset in run translation example.",
"Sorry for the delay! "
] | 1,697 | 1,700 | 1,700 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #26868
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26924/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26924/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26924",
"html_url": "https://github.com/huggingface/transformers/pull/26924",
"diff_url": "https://github.com/huggingface/transformers/pull/26924.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26924.patch",
"merged_at": 1700491543000
} |
https://api.github.com/repos/huggingface/transformers/issues/26923 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26923/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26923/comments | https://api.github.com/repos/huggingface/transformers/issues/26923/events | https://github.com/huggingface/transformers/issues/26923 | 1,951,637,965 | I_kwDOCUB6oc50U6HN | 26,923 | Falcon tokenizer doesn't use eos_token. | {
"login": "zhaowei-wang-nlp",
"id": 22047467,
"node_id": "MDQ6VXNlcjIyMDQ3NDY3",
"avatar_url": "https://avatars.githubusercontent.com/u/22047467?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhaowei-wang-nlp",
"html_url": "https://github.com/zhaowei-wang-nlp",
"followers_url": "https://api.github.com/users/zhaowei-wang-nlp/followers",
"following_url": "https://api.github.com/users/zhaowei-wang-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zhaowei-wang-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhaowei-wang-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhaowei-wang-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zhaowei-wang-nlp/orgs",
"repos_url": "https://api.github.com/users/zhaowei-wang-nlp/repos",
"events_url": "https://api.github.com/users/zhaowei-wang-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhaowei-wang-nlp/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,697 | 1,697 | 1,697 | NONE | null | ### System Info
transformers==4.34.0
python==3.9.17
### Who can help?
@ArthurZucker @youne
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
# Load model directly
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")
token_ids = tokenizer("I eat an apple")["input_ids"]
print(tokenizer.convert_ids_to_tokens(token_ids))
token_ids = tokenizer("I eat an apple", "I eat an banana")["input_ids"]
print(tokenizer.convert_ids_to_tokens(token_ids))
```
### Expected behavior
Current output of "tokenizer("I eat an apple", "I eat an banana")":
<img width="843" alt="image" src="https://github.com/huggingface/transformers/assets/22047467/616a49f0-f0db-4432-8ed6-31447530e1e5">
There is no eos_token between the two sentences.
The expected behavior should be an eos_token/[SEP] token between these two sentences.
bert-base-uncased's behavior:
<img width="1297" alt="image" src="https://github.com/huggingface/transformers/assets/22047467/cefd450c-b03e-46b4-82a8-c1a0b03853c1">
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26923/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26923/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26922 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26922/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26922/comments | https://api.github.com/repos/huggingface/transformers/issues/26922/events | https://github.com/huggingface/transformers/issues/26922 | 1,951,487,858 | I_kwDOCUB6oc50UVdy | 26,922 | Logo | {
"login": "IfeanyiEkperi",
"id": 6451526,
"node_id": "MDQ6VXNlcjY0NTE1MjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/6451526?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/IfeanyiEkperi",
"html_url": "https://github.com/IfeanyiEkperi",
"followers_url": "https://api.github.com/users/IfeanyiEkperi/followers",
"following_url": "https://api.github.com/users/IfeanyiEkperi/following{/other_user}",
"gists_url": "https://api.github.com/users/IfeanyiEkperi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/IfeanyiEkperi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/IfeanyiEkperi/subscriptions",
"organizations_url": "https://api.github.com/users/IfeanyiEkperi/orgs",
"repos_url": "https://api.github.com/users/IfeanyiEkperi/repos",
"events_url": "https://api.github.com/users/IfeanyiEkperi/events{/privacy}",
"received_events_url": "https://api.github.com/users/IfeanyiEkperi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Sounds good but we already have a logo: 🤗 ",
"Okay. I was thinking we could give it a facelift and make it more responsive.\r\n\r\nBuy, No problem if it's not needed at this time.\r\n\r\nThank you.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,697 | 1,700 | 1,700 | NONE | null | ### Feature request
Hugging face needs a facelift... A logo that is versatile and can accommodate all of their beaches with minimal change....
I'm proposing a fun logo just as the names. If I'm given a go ahead, I can create one asap
### Motivation
I want to make tech logos around the world look awesome
### Your contribution
To design a versatile and suitable LOGO for HUGGING FACE and incorporate all its sub-brands like Transformers as well...
Which means, I'll be creating one central logo and several logos for the sub-brands | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26922/reactions",
"total_count": 8,
"+1": 0,
"-1": 0,
"laugh": 4,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 4
} | https://api.github.com/repos/huggingface/transformers/issues/26922/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26921 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26921/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26921/comments | https://api.github.com/repos/huggingface/transformers/issues/26921/events | https://github.com/huggingface/transformers/issues/26921 | 1,951,413,364 | I_kwDOCUB6oc50UDR0 | 26,921 | BarkModel: support multiple history prompt for batch api | {
"login": "Selectorrr",
"id": 973663,
"node_id": "MDQ6VXNlcjk3MzY2Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/973663?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Selectorrr",
"html_url": "https://github.com/Selectorrr",
"followers_url": "https://api.github.com/users/Selectorrr/followers",
"following_url": "https://api.github.com/users/Selectorrr/following{/other_user}",
"gists_url": "https://api.github.com/users/Selectorrr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Selectorrr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Selectorrr/subscriptions",
"organizations_url": "https://api.github.com/users/Selectorrr/orgs",
"repos_url": "https://api.github.com/users/Selectorrr/repos",
"events_url": "https://api.github.com/users/Selectorrr/events{/privacy}",
"received_events_url": "https://api.github.com/users/Selectorrr/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2392046359,
"node_id": "MDU6TGFiZWwyMzkyMDQ2MzU5",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Second%20Issue",
"name": "Good Second Issue",
"color": "dd935a",
"default": false,
"description": "Issues that are more difficult to do than \"Good First\" issues - give it a try if you want!"
},
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | [
"cc @ylacombe ",
"Hi @Selectorrr, thanks for opening this issue!\r\n\r\nit would actually be a great feature, in line with recent #26673 and #26672. \r\nI believe that it would need two main changes:\r\n1. Allowing BarkProcessor to get a [list of history prompts](https://github.com/huggingface/transformers/blob/ad08137e473e00702fc3088a119da7026e1cb025/src/transformers/models/bark/processing_bark.py#L218-L228) and adapt the output in consequence\r\n2. Adapt Bark modeling code -> making sure that every usage of history prompt is adapted and of the right shape. It might be more difficult\r\n\r\nBasically, we would need to make sure that the number of history prompts is either 1 or the same as the batch size and throw errors otherwise. In the modeling code, we probably would need to adapt lines like this one to not repeat_interleave if the history prompt is of the right shape: https://github.com/huggingface/transformers/blob/ad08137e473e00702fc3088a119da7026e1cb025/src/transformers/models/bark/modeling_bark.py#L782 \r\n\r\nWould you like to start working on this ?",
"@ylacombe Unfortunately my python development experience is not enough to say with certainty that I will do this. If I can do it, I will write it, but please consider it a free task.",
"@ylacombe and @Selectorrr Can I work on this? I understand history_prompt here represents speaker voices and what we want is more than one speaker for batch. Let me know if I need to know more.",
"@ylacombe and @Selectorrr Can I work on this? ",
"> @ylacombe and @Selectorrr Can I work on this? \n\nI don't do that, I'd be glad if you did.",
"Hey @shankarsharma8089 and @hackpk , feel free to work on this! Let me know if I can help you",
"@ylacombe I took an approach to loop the generation of semantic, coarse, and fine for every speaker it worked but it's not an optimized solution as we can have as many speakers as we want. At this moment I'm stuck to get an optimized solution. I was thinking maybe converting the list of speakers semantic, coarse, and fine to a list. I'm new to ML, Transformer, and Tensors which I'm learning on the go. Please guide me in the right direction. Thank you",
"Hey @hackpk, I believe that [my previous comment](https://github.com/huggingface/transformers/issues/26921#issuecomment-1797875170) should have everything you need to address this issue. It's okay to loop to load voice presets, but I don't think you should do that to generate. \r\nThis PR is ideal for people having a little experience in the field. If this is out of reach for you, don't hesitate to let @shankarsharma8089 handle this! There are a bunch of issues that might be more suitable for you under the [Good First Issue](https://github.com/huggingface/transformers/issues?q=is%3Aissue+is%3Aopen+label%3A%22Good+First+Issue%22) tag!\r\n",
"Thanks @ylacombe I will work on some other suitable issue. @shankarsharma8089 you can take this",
"Hi @ylacombe @Selectorrr if it s ok with you, I'll have a stab at it. Not necessarily my area of expertise but happy to learn and contribute."
] | 1,697 | 1,703 | null | NONE | null | ### Feature request
Add multiple history prompt support for batch api
### Motivation
Batch API speeds up the audio generation process a lot. Unfortunately we can't use it to generate long sequences.
### Your contribution
I may be able to look at it soon | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26921/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26921/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26920 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26920/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26920/comments | https://api.github.com/repos/huggingface/transformers/issues/26920/events | https://github.com/huggingface/transformers/issues/26920 | 1,951,294,486 | I_kwDOCUB6oc50TmQW | 26,920 | Owlv2 seems to have a really poor performance in image guided detection and the example of owlv2 on Huggingface indicated that. | {
"login": "c1ekrt",
"id": 40287606,
"node_id": "MDQ6VXNlcjQwMjg3NjA2",
"avatar_url": "https://avatars.githubusercontent.com/u/40287606?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/c1ekrt",
"html_url": "https://github.com/c1ekrt",
"followers_url": "https://api.github.com/users/c1ekrt/followers",
"following_url": "https://api.github.com/users/c1ekrt/following{/other_user}",
"gists_url": "https://api.github.com/users/c1ekrt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/c1ekrt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/c1ekrt/subscriptions",
"organizations_url": "https://api.github.com/users/c1ekrt/orgs",
"repos_url": "https://api.github.com/users/c1ekrt/repos",
"events_url": "https://api.github.com/users/c1ekrt/events{/privacy}",
"received_events_url": "https://api.github.com/users/c1ekrt/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi,\r\n\r\nYes that documentation example requires an update. I've noticed that running the image guided detection with OWLv2 is pretty sensitive to the threshold values. Using `threshold=0.9`, `nms_threshold=0.3` in the `post_process_image_guided_detection` method gives better results. See the bottom of my [demo notebook](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/OWLv2/Zero_and_one_shot_object_detection_with_OWLv2.ipynb).\r\n\r\nCould you open a PR to update the threshold values, as well as the expected values [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/owlv2/modeling_owlv2.py#L1604-L1623)?",
"@NielsRogge Thank you for your fast reply. In fact, parameter between Huggingface document and your demo notebook are the same, and the only difference is the query image you use. After changing the query image, the outcome Huggingface document produced is the same as in demo notebook. It seems that the image guided detection is quite sensitive.\r\nFurthermore, tweaking the parameter in documentation example can't yield satisfying result if the original image is input as reference. ",
"Hi @c1ekrt,\r\n\r\nI've opened a PR to address this, and I've also updated my [demo notebook](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/OWLv2/Zero_and_one_shot_object_detection_with_OWLv2.ipynb) to illustrate the image guided detection in more detail. Hope that helps! "
] | 1,697 | 1,698 | 1,698 | NONE | null | ### System Info
- `transformers` version: 4.35.0.dev0
- Platform: Windows-10-10.0.22621-SP0
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.2
- Accelerate version: 0.22.0
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: NO
- mixed_precision: no
- use_cpu: False
- debug: False
- num_processes: 1
- machine_rank: 0
- num_machines: 1
- gpu_ids: all
- rdzv_backend: static
- same_network: True
- main_training_function: main
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- PyTorch version (GPU?): 2.0.1+cpu (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@amyeroberts if not busy
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Copy paste the example of class transformers.Owlv2ForObjectDetection image_guided_detection and run it
https://huggingface.co/docs/transformers/main/en/model_doc/owlv2#transformers.Owlv2ForObjectDetection
### Expected behavior
In the OwlViT model, image-guided detection works fine: it successfully uses the reference image and identifies the two cats in the picture. However, the new Owlv2 seems to fail at the same task and returns multiple boxes that don't make sense.
Detected similar object with confidence 0.938 at location [327.31, 54.94, 547.39, 268.06]
Detected similar object with confidence 0.959 at location [5.78, 360.65, 619.12, 366.39]
Detected similar object with confidence 0.902 at location [2.85, 360.01, 627.63, 380.79]
Detected similar object with confidence 0.985 at location [176.97, -29.45, 672.69, 182.83]
Detected similar object with confidence 1.0 at location [6.53, 14.35, 624.87, 470.82]
Detected similar object with confidence 0.998 at location [579.98, 29.14, 615.49, 489.05]
Detected similar object with confidence 0.985 at location [206.15, 10.53, 247.74, 466.01]
Detected similar object with confidence 0.947 at location [18.62, 429.72, 646.5, 457.72]
Detected similar object with confidence 0.996 at location [523.88, 20.69, 586.84, 483.18]
Detected similar object with confidence 0.998 at location [3.39, 360.59, 617.29, 499.21]
Detected similar object with confidence 0.969 at location [4.47, 449.05, 614.5, 474.76]
Detected similar object with confidence 0.966 at location [31.44, 463.65, 654.66, 471.07]
Detected similar object with confidence 0.924 at location [30.93, 468.07, 635.35, 475.39]

| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26920/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26920/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26919 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26919/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26919/comments | https://api.github.com/repos/huggingface/transformers/issues/26919/events | https://github.com/huggingface/transformers/issues/26919 | 1,951,249,530 | I_kwDOCUB6oc50TbR6 | 26,919 | [i18n-<languageCode>] Translating docs to <languageName> | {
"login": "Jiuri05",
"id": 148340879,
"node_id": "U_kgDOCNeAjw",
"avatar_url": "https://avatars.githubusercontent.com/u/148340879?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Jiuri05",
"html_url": "https://github.com/Jiuri05",
"followers_url": "https://api.github.com/users/Jiuri05/followers",
"following_url": "https://api.github.com/users/Jiuri05/following{/other_user}",
"gists_url": "https://api.github.com/users/Jiuri05/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Jiuri05/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jiuri05/subscriptions",
"organizations_url": "https://api.github.com/users/Jiuri05/orgs",
"repos_url": "https://api.github.com/users/Jiuri05/repos",
"events_url": "https://api.github.com/users/Jiuri05/events{/privacy}",
"received_events_url": "https://api.github.com/users/Jiuri05/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,697 | 1,697 | 1,697 | NONE | null | <!--
Note: Please search to see if an issue already exists for the language you are trying to translate.
-->
Hi!
Let's bring the documentation to all the <languageName>-speaking community 🌐 (currently 0 out of 267 complete)
Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list.
Some notes:
* Please translate using an informal tone (imagine you are talking with a friend about transformers 🤗).
* Please translate in a gender-neutral way.
* Add your translations to the folder called `<languageCode>` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source).
* Register your translation in `<languageCode>/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml).
* Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @stevhliu and @MKhalusova for review.
* 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/).
## Get Started section
- [ ] [index.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.md) https://github.com/huggingface/transformers/pull/20180
- [ ] [quicktour.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.md) (waiting for initial PR to go through)
- [ ] [installation.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.md).
## Tutorial section
- [ ] [pipeline_tutorial.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/pipeline_tutorial.md)
- [ ] [autoclass_tutorial.md](https://github.com/huggingface/transformers/blob/master/docs/source/autoclass_tutorial.md)
- [ ] [preprocessing.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/preprocessing.md)
- [ ] [training.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/training.md)
- [ ] [accelerate.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/accelerate.md)
- [ ] [model_sharing.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_sharing.md)
- [ ] [multilingual.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/multilingual.md)
<!--
Keep on adding more as you go 🔥
--> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26919/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26919/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27664 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27664/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27664/comments | https://api.github.com/repos/huggingface/transformers/issues/27664/events | https://github.com/huggingface/transformers/issues/27664 | 2,007,102,089 | I_kwDOCUB6oc53ofKJ | 27,664 | XLA Integration for TensorFlow Models | {
"login": "akote123",
"id": 133775732,
"node_id": "U_kgDOB_lBdA",
"avatar_url": "https://avatars.githubusercontent.com/u/133775732?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/akote123",
"html_url": "https://github.com/akote123",
"followers_url": "https://api.github.com/users/akote123/followers",
"following_url": "https://api.github.com/users/akote123/following{/other_user}",
"gists_url": "https://api.github.com/users/akote123/gists{/gist_id}",
"starred_url": "https://api.github.com/users/akote123/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/akote123/subscriptions",
"organizations_url": "https://api.github.com/users/akote123/orgs",
"repos_url": "https://api.github.com/users/akote123/repos",
"events_url": "https://api.github.com/users/akote123/events{/privacy}",
"received_events_url": "https://api.github.com/users/akote123/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey! Here is what you gotta know about using xla:\r\n\r\n> When you are executing an XLA-enabled function (like `xla_generate()` above) for the first time, it will internally try to infer the computation graph, which is time-consuming. This process is known as [“tracing”](https://www.tensorflow.org/guide/intro_to_graphs#when_is_a_function_tracing). \r\n\r\n> You might notice that the generation time is not fast. Successive calls of `xla_generate()` (or any other XLA-enabled function) won’t have to infer the computation graph, given the inputs to the function follow the same shape with which the computation graph was initially built. While this is not a problem for modalities with fixed input shapes (e.g., images), you must pay attention if you are working with variable input shape modalities (e.g., text).\r\n\r\nSee [the documentation](https://huggingface.co/docs/transformers/tf_xla#gotchas-to-be-aware-of)\r\n\r\n\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,697 | 1,703 | 1,703 | NONE | null | Hi,
I wanted to try XLA for Inference.
```python
import os
#environment configs
import tensorflow as tf
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = TFBertModel.from_pretrained("bert-base-uncased")
xla_generate = tf.function(model, jit_compile=True)
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
from time import time
start_time = time()
for i in range(1):
    output = xla_generate(encoded_input)
end_time = time()
time_taken = end_time - start_time
print("average time (seconds) for bert Inference: ", time_taken)
```
For this code I am getting a longer time than without XLA, and if I look into the perf report for this run I am not seeing any XLA op; I am only seeing oneDNN. So I wanted to understand how this XLA works with oneDNN.
"url": "https://api.github.com/repos/huggingface/transformers/issues/27664/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27664/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26918 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26918/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26918/comments | https://api.github.com/repos/huggingface/transformers/issues/26918/events | https://github.com/huggingface/transformers/pull/26918 | 1,950,925,959 | PR_kwDOCUB6oc5dNEMh | 26,918 | Fix Fuyu image scaling bug | {
"login": "pcuenca",
"id": 1177582,
"node_id": "MDQ6VXNlcjExNzc1ODI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1177582?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pcuenca",
"html_url": "https://github.com/pcuenca",
"followers_url": "https://api.github.com/users/pcuenca/followers",
"following_url": "https://api.github.com/users/pcuenca/following{/other_user}",
"gists_url": "https://api.github.com/users/pcuenca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pcuenca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pcuenca/subscriptions",
"organizations_url": "https://api.github.com/users/pcuenca/orgs",
"repos_url": "https://api.github.com/users/pcuenca/repos",
"events_url": "https://api.github.com/users/pcuenca/events{/privacy}",
"received_events_url": "https://api.github.com/users/pcuenca/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"~~Thanks for the quick review! Feel free to merge it when you see fit, as I can't do it :)~~ Edit: saw that you requested a review from Pablo 👍 ",
"He might have missed it let's merge this 😉 \r\n"
] | 1,697 | 1,697 | 1,697 | MEMBER | null | The coordinates are reversed, see https://github.com/huggingface/transformers/blob/574a5384557b1aaf98ddb13ea9eb0a0ee8ff2cb2/src/transformers/image_transforms.py#L328
It could produce negative padding and hence inference errors for certain image sizes, as [identified in the Fuyu demo](https://huggingface.co/spaces/adept/fuyu-8b-demo/discussions/1).
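To make the failure concrete, here is a small numeric sketch (hypothetical sizes, not the actual Fuyu preprocessing code), treating sizes as (height, width):

```python
# Hypothetical numbers only: padding is target size minus image size per axis.
target_h, target_w = 1080, 1920
image_h, image_w = 800, 1280

pad_correct = (target_h - image_h, target_w - image_w)   # (280, 640): fine
pad_reversed = (target_h - image_w, target_w - image_h)  # (-200, 1120): negative padding
```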
Happy to write a small test in the morning.
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@molbap @ArthurZucker @LysandreJik
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26918/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26918/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26918",
"html_url": "https://github.com/huggingface/transformers/pull/26918",
"diff_url": "https://github.com/huggingface/transformers/pull/26918.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26918.patch",
"merged_at": 1697802366000
} |
https://api.github.com/repos/huggingface/transformers/issues/26917 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26917/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26917/comments | https://api.github.com/repos/huggingface/transformers/issues/26917/events | https://github.com/huggingface/transformers/pull/26917 | 1,950,873,001 | PR_kwDOCUB6oc5dM5oF | 26,917 | [`core`] Fix gradient checkpointing `use_reentrant` issues | {
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,697 | 1,700 | 1,700 | CONTRIBUTOR | null | # What does this PR do?
Partially fixes: https://github.com/huggingface/trl/issues/835 (more details coming soon) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26917/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26917/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26917",
"html_url": "https://github.com/huggingface/transformers/pull/26917",
"diff_url": "https://github.com/huggingface/transformers/pull/26917.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26917.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26916 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26916/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26916/comments | https://api.github.com/repos/huggingface/transformers/issues/26916/events | https://github.com/huggingface/transformers/pull/26916 | 1,950,795,645 | PR_kwDOCUB6oc5dMohT | 26,916 | [`FA-2`] Revert suggestion that broke FA2 fine-tuning with quantized models | {
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,697 | 1,697 | 1,697 | CONTRIBUTOR | null | # What does this PR do?
Reverts: https://github.com/huggingface/transformers/pull/26846#discussion_r1361616963
This leads to an error because `getattr` will always evaluate its third argument. For quantized models the `weight` attribute does not exist, which raises an error and makes fine-tuning impossible.
To reproduce on a smaller scale:
```python
>>> import torch
>>> t = torch.randn(1)
>>> getattr(t, 'device', t.non_existent_attribute)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'Tensor' object has no attribute 'non_existent_attribute'
```
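A minimal sketch of the safer pattern (using the same toy tensor, with a placeholder fallback) is to branch explicitly, so the fallback expression is never evaluated when the attribute exists:

```python
import torch

t = torch.randn(1)
# Branching explicitly avoids eagerly evaluating a fallback that may not exist,
# unlike getattr(t, "device", t.non_existent_attribute).
device = t.device if hasattr(t, "device") else "cpu"  # "cpu" is just a placeholder fallback
```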
cc @ArthurZucker
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26916/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26916/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26916",
"html_url": "https://github.com/huggingface/transformers/pull/26916",
"diff_url": "https://github.com/huggingface/transformers/pull/26916.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26916.patch",
"merged_at": 1697668585000
} |
https://api.github.com/repos/huggingface/transformers/issues/26915 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26915/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26915/comments | https://api.github.com/repos/huggingface/transformers/issues/26915/events | https://github.com/huggingface/transformers/pull/26915 | 1,950,724,897 | PR_kwDOCUB6oc5dMZrf | 26,915 | testing doc-builder | {
"login": "mishig25",
"id": 11827707,
"node_id": "MDQ6VXNlcjExODI3NzA3",
"avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mishig25",
"html_url": "https://github.com/mishig25",
"followers_url": "https://api.github.com/users/mishig25/followers",
"following_url": "https://api.github.com/users/mishig25/following{/other_user}",
"gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mishig25/subscriptions",
"organizations_url": "https://api.github.com/users/mishig25/orgs",
"repos_url": "https://api.github.com/users/mishig25/repos",
"events_url": "https://api.github.com/users/mishig25/events{/privacy}",
"received_events_url": "https://api.github.com/users/mishig25/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,697 | 1,697 | 1,697 | CONTRIBUTOR | null | testing https://github.com/huggingface/doc-builder/pull/425
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26915/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26915/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26915",
"html_url": "https://github.com/huggingface/transformers/pull/26915",
"diff_url": "https://github.com/huggingface/transformers/pull/26915.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26915.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26914 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26914/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26914/comments | https://api.github.com/repos/huggingface/transformers/issues/26914/events | https://github.com/huggingface/transformers/pull/26914 | 1,950,706,612 | PR_kwDOCUB6oc5dMVs6 | 26,914 | [`FA-2`] Add fa2 support for `from_config` | {
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26914). All of your documentation changes will be reflected on that endpoint."
] | 1,697 | 1,700 | 1,700 | CONTRIBUTOR | null | # What does this PR do?
Fixes: https://github.com/huggingface/transformers/issues/26878
As per the title, and as requested by the community, this PR extends the FA2 API so that users can load FA2 models through the `from_config` interface.
Tests have also been added!
```python
import torch
from transformers import AutoConfig, AutoModelForCausalLM
model_id = "trl-internal-testing/tiny-random-LlamaForCausalLM"
config = AutoConfig.from_pretrained(model_id)
model = AutoModelForCausalLM.from_config(config, use_flash_attention_2=True, torch_dtype=torch.float16)
print(model)
```
cc @LysandreJik @ArthurZucker | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26914/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26914/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26914",
"html_url": "https://github.com/huggingface/transformers/pull/26914",
"diff_url": "https://github.com/huggingface/transformers/pull/26914.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26914.patch",
"merged_at": 1700495156000
} |
https://api.github.com/repos/huggingface/transformers/issues/26913 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26913/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26913/comments | https://api.github.com/repos/huggingface/transformers/issues/26913/events | https://github.com/huggingface/transformers/pull/26913 | 1,950,633,890 | PR_kwDOCUB6oc5dMGMy | 26,913 | Corrected modalities description in README_ru.md | {
"login": "letohx",
"id": 99612194,
"node_id": "U_kgDOBe_2Ig",
"avatar_url": "https://avatars.githubusercontent.com/u/99612194?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/letohx",
"html_url": "https://github.com/letohx",
"followers_url": "https://api.github.com/users/letohx/followers",
"following_url": "https://api.github.com/users/letohx/following{/other_user}",
"gists_url": "https://api.github.com/users/letohx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/letohx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/letohx/subscriptions",
"organizations_url": "https://api.github.com/users/letohx/orgs",
"repos_url": "https://api.github.com/users/letohx/repos",
"events_url": "https://api.github.com/users/letohx/events{/privacy}",
"received_events_url": "https://api.github.com/users/letohx/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26913). All of your documentation changes will be reflected on that endpoint."
] | 1,697 | 1,697 | 1,697 | CONTRIBUTOR | null | Description:
- [x] This PR fixes a typo in README_ru.md.
This pull request addresses an issue with the modalities description in the README_ru.md. The modalities' descriptions have been corrected to provide a more accurate and grammatically correct representation of the supported tasks for text, images, and audio. This change improves the clarity of the information provided in the README_ru.md. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26913/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26913/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26913",
"html_url": "https://github.com/huggingface/transformers/pull/26913",
"diff_url": "https://github.com/huggingface/transformers/pull/26913.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26913.patch",
"merged_at": 1697733027000
} |
https://api.github.com/repos/huggingface/transformers/issues/26912 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26912/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26912/comments | https://api.github.com/repos/huggingface/transformers/issues/26912/events | https://github.com/huggingface/transformers/pull/26912 | 1,950,557,849 | PR_kwDOCUB6oc5dL1Ze | 26,912 | [`FA-2` / `Mistral`] Supprot fa-2 + right padding + forward | {
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @younesbelkada @ArthurZucker,\r\n\r\nThanks for your contribution !\r\n\r\nThis patch was removed in this [RP](https://github.com/huggingface/transformers/pull/27086). After tracking the commit history, I think it should be added back ?\r\n\r\nBest,\r\nZhihong",
"Thansk for noticing @zhjohnchan ! Just made: https://github.com/huggingface/transformers/pull/27125 that should revert the behaviour of this PR"
] | 1,697 | 1,698 | 1,697 | CONTRIBUTOR | null | # What does this PR do?
Fixes: https://github.com/huggingface/transformers/issues/26877
There is no reason not to support right padding with a pure forward pass: the reason we do not support `generate` + right padding + FA-2 is the cache mechanism, which in that case slices the cache from the left.
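For illustration, a minimal sketch of what this enables (assuming a local Mistral checkpoint and flash-attn installed; the model id below is just an example):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-v0.1"  # example checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"  # right padding, now fine for a plain forward pass

model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, use_flash_attention_2=True, device_map="auto"
)

batch = tokenizer(
    ["short prompt", "a somewhat longer prompt"], padding=True, return_tensors="pt"
).to(model.device)
with torch.no_grad():
    outputs = model(**batch)  # forward only; generation still expects left padding
```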
cc @ArthurZucker
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26912/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26912/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26912",
"html_url": "https://github.com/huggingface/transformers/pull/26912",
"diff_url": "https://github.com/huggingface/transformers/pull/26912.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26912.patch",
"merged_at": 1697723329000
} |
https://api.github.com/repos/huggingface/transformers/issues/26911 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26911/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26911/comments | https://api.github.com/repos/huggingface/transformers/issues/26911/events | https://github.com/huggingface/transformers/pull/26911 | 1,950,217,141 | PR_kwDOCUB6oc5dKqCy | 26,911 | Add fuyu model | {
"login": "molbap",
"id": 39954772,
"node_id": "MDQ6VXNlcjM5OTU0Nzcy",
"avatar_url": "https://avatars.githubusercontent.com/u/39954772?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/molbap",
"html_url": "https://github.com/molbap",
"followers_url": "https://api.github.com/users/molbap/followers",
"following_url": "https://api.github.com/users/molbap/following{/other_user}",
"gists_url": "https://api.github.com/users/molbap/gists{/gist_id}",
"starred_url": "https://api.github.com/users/molbap/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/molbap/subscriptions",
"organizations_url": "https://api.github.com/users/molbap/orgs",
"repos_url": "https://api.github.com/users/molbap/repos",
"events_url": "https://api.github.com/users/molbap/events{/privacy}",
"received_events_url": "https://api.github.com/users/molbap/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"edit: a previous version of this comment referred to a bug in loading the model--I uninstalled and reinstalled `transformers` and it appears to have resolved itself.\r\n\r\nHowever, the processor does not seem to support batch_size > 1 inputs correctly, the code appears to construct the tensors correctly, but then slices to take their first element only (except the patch indices) [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/fuyu/processing_fuyu.py#L545-L546). Is this intentional?",
"How much VRAM does Fuyu-8B need to run? \r\n\r\nIs it possible to have quantized versions running in C like the models TheBloke provides?",
"1. You need around 20GB for the bfloat16\r\n2. You can already use the quantisation config with the integration in transformers see [the doc here](https://huggingface.co/docs/transformers/main_classes/quantization). \r\n3. This kind of question would be best asked directly on the model [on the hub](https://huggingface.co/adept/fuyu-8b/discussions)! "
] | 1,697 | 1,697 | 1,697 | CONTRIBUTOR | null | # What does this PR do?
This PR adds the model Fuyu from Adept, a LMM processing image and text inputs.
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26911/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 2,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26911/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26911",
"html_url": "https://github.com/huggingface/transformers/pull/26911",
"diff_url": "https://github.com/huggingface/transformers/pull/26911.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26911.patch",
"merged_at": 1697667851000
} |
https://api.github.com/repos/huggingface/transformers/issues/26910 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26910/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26910/comments | https://api.github.com/repos/huggingface/transformers/issues/26910/events | https://github.com/huggingface/transformers/pull/26910 | 1,950,185,584 | PR_kwDOCUB6oc5dKi_e | 26,910 | [i18n-ZH] Translated fast_tokenizers.md to Chinese | {
"login": "yyLeaves",
"id": 76979429,
"node_id": "MDQ6VXNlcjc2OTc5NDI5",
"avatar_url": "https://avatars.githubusercontent.com/u/76979429?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yyLeaves",
"html_url": "https://github.com/yyLeaves",
"followers_url": "https://api.github.com/users/yyLeaves/followers",
"following_url": "https://api.github.com/users/yyLeaves/following{/other_user}",
"gists_url": "https://api.github.com/users/yyLeaves/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yyLeaves/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yyLeaves/subscriptions",
"organizations_url": "https://api.github.com/users/yyLeaves/orgs",
"repos_url": "https://api.github.com/users/yyLeaves/repos",
"events_url": "https://api.github.com/users/yyLeaves/events{/privacy}",
"received_events_url": "https://api.github.com/users/yyLeaves/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,697 | 1,697 | 1,697 | CONTRIBUTOR | null | # What does this PR do?
Add zh (Chinese) translation for fast_tokenizers.md. #20095
## Who can review?
Documentation: @stevhliu
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26910/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26910/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26910",
"html_url": "https://github.com/huggingface/transformers/pull/26910",
"diff_url": "https://github.com/huggingface/transformers/pull/26910.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26910.patch",
"merged_at": 1697651141000
} |
https://api.github.com/repos/huggingface/transformers/issues/26909 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26909/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26909/comments | https://api.github.com/repos/huggingface/transformers/issues/26909/events | https://github.com/huggingface/transformers/pull/26909 | 1,950,122,573 | PR_kwDOCUB6oc5dKUuw | 26,909 | 🌐 [i18n-ZH] Translated fast_tokenizers.md to Chinese | {
"login": "yyLeaves",
"id": 76979429,
"node_id": "MDQ6VXNlcjc2OTc5NDI5",
"avatar_url": "https://avatars.githubusercontent.com/u/76979429?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yyLeaves",
"html_url": "https://github.com/yyLeaves",
"followers_url": "https://api.github.com/users/yyLeaves/followers",
"following_url": "https://api.github.com/users/yyLeaves/following{/other_user}",
"gists_url": "https://api.github.com/users/yyLeaves/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yyLeaves/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yyLeaves/subscriptions",
"organizations_url": "https://api.github.com/users/yyLeaves/orgs",
"repos_url": "https://api.github.com/users/yyLeaves/repos",
"events_url": "https://api.github.com/users/yyLeaves/events{/privacy}",
"received_events_url": "https://api.github.com/users/yyLeaves/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,697 | 1,697 | 1,697 | CONTRIBUTOR | null | # What does this PR do?
Add zh (Chinese) translation for fast_tokenizers.md. #20095
## Who can review?
Documentation: @stevhliu
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26909/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26909/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26909",
"html_url": "https://github.com/huggingface/transformers/pull/26909",
"diff_url": "https://github.com/huggingface/transformers/pull/26909.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26909.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26908 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26908/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26908/comments | https://api.github.com/repos/huggingface/transformers/issues/26908/events | https://github.com/huggingface/transformers/pull/26908 | 1,950,036,971 | PR_kwDOCUB6oc5dKDGC | 26,908 | timm to pytorch conversion for vit model fix | {
"login": "staghado",
"id": 84044788,
"node_id": "MDQ6VXNlcjg0MDQ0Nzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/84044788?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/staghado",
"html_url": "https://github.com/staghado",
"followers_url": "https://api.github.com/users/staghado/followers",
"following_url": "https://api.github.com/users/staghado/following{/other_user}",
"gists_url": "https://api.github.com/users/staghado/gists{/gist_id}",
"starred_url": "https://api.github.com/users/staghado/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/staghado/subscriptions",
"organizations_url": "https://api.github.com/users/staghado/orgs",
"repos_url": "https://api.github.com/users/staghado/repos",
"events_url": "https://api.github.com/users/staghado/events{/privacy}",
"received_events_url": "https://api.github.com/users/staghado/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26908). All of your documentation changes will be reflected on that endpoint.",
"Overall looking much better, should be more robust now.\r\n\r\nA few things, there are a number of vit configurations supported in timm that are not, to my knowledge, supported in transformers. Should there be an attempt to detect? Thinking of some examples\r\n* if `fc_norm` is present (norm after pooling)\r\n* use of global average pooling in combination (or without) class token\r\n* non-overlapping position and class token embedding\r\n* CLIP style vit with `norm_pre` layer present\r\n* SigLIP style vit with `attn_pool` layer present\r\n* and soon, use of 'registers' via `reg_token` param\r\n* use of layer scale in ViT model blocks",
"Hey! Make sure to rebase to only have your changes! 😉 ",
"> Hey! Make sure to rebase to only have your changes! 😉\r\n\r\nI have reset the branch's history and left only my changes which fix the issue here.",
"> Overall looking much better, should be more robust now.\r\n> \r\n> A few things, there are a number of vit configurations supported in timm that are not, to my knowledge, supported in transformers. Should there be an attempt to detect? Thinking of some examples\r\n> \r\n> * if `fc_norm` is present (norm after pooling)\r\n> * use of global average pooling in combination (or without) class token\r\n> * non-overlapping position and class token embedding\r\n> * CLIP style vit with `norm_pre` layer present\r\n> * SigLIP style vit with `attn_pool` layer present\r\n> * and soon, use of 'registers' via `reg_token` param\r\n> * use of layer scale in ViT model blocks\r\n\r\nI have tried to add some checks before trying to convert the model from timm to huggingface.\r\n Checks to be added :\r\n 1. non-overlapping position and class token embedding \r\n 2. use of 'registers' via `reg_token` param\r\n 3. check when a model has a convolution feature extractor like ResNet50\r\n\r\nI have tested the script on the pre-trained ViTs and only the following give errors:\r\n\r\n- vit_base_r50_s16_224.orig_in21k (contains a resnet block)\r\n- vit_base_r50_s16_384.orig_in21k_ft_in1k (contains a resnet block)\r\n- vit_small_r26_s32_224.augreg_in21k\r\n- vit_small_r26_s32_224.augreg_in21k_ft_in1k\r\n- vit_small_r26_s32_384.augreg_in21k_ft_in1k\r\n- vit_tiny_r_s16_p8_224.augreg_in21k\r\n- vit_tiny_r_s16_p8_224.augreg_in21k_ft_in1k\r\n- vit_tiny_r_s16_p8_384.augreg_in21k_ft_in1k\r\n",
"@staghado looking good, those hybrid resnet-vit models should be possible to catch (see if below) with a meaningful error .. other than looks ready to go \r\n\r\n`if not isinstance(model.patch_embed, timm.layers.PatchEmbed)` ...",
"@ArthurZucker"
] | 1,697 | 1,700 | 1,700 | CONTRIBUTOR | null | This PR fixes issue #26219 with the timm-to-PyTorch conversion. It removes the need for hard-coded values for model dims by using the attributes of the timm model directly, without needing the model name.
It does the following:
- [x] Extracts model dims from the timm model directly, with no need for hard-coded `if` branches
- [x] Decides whether the converted model will be a classification model or only a feature extractor, using the `num_classes` attribute of the timm model.
- [x] In the case of a feature-extractor-only model: removes the pooling layers from the PyTorch model and compares the output to the last hidden state instead.
This works for a large number of models in the ViT family.
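As a rough sketch of the idea (attribute names below reflect timm's `VisionTransformer` and are assumptions for other architectures):

```python
import timm

model = timm.create_model("vit_base_patch16_224", pretrained=False)

# Read the ViT dimensions from the model itself instead of hard-coding them per name.
hidden_size = model.embed_dim
num_layers = len(model.blocks)
num_heads = model.blocks[0].attn.num_heads
patch_size = model.patch_embed.patch_size[0]
num_classes = model.num_classes  # 0 means the checkpoint is a feature extractor only
```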
@ArthurZucker, @amyeroberts, @rwightman | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26908/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26908/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26908",
"html_url": "https://github.com/huggingface/transformers/pull/26908",
"diff_url": "https://github.com/huggingface/transformers/pull/26908.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26908.patch",
"merged_at": 1700496030000
} |
https://api.github.com/repos/huggingface/transformers/issues/26907 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26907/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26907/comments | https://api.github.com/repos/huggingface/transformers/issues/26907/events | https://github.com/huggingface/transformers/pull/26907 | 1,949,960,405 | PR_kwDOCUB6oc5dJyJx | 26,907 | Fix and re-enable ConversationalPipeline tests | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Looks good - ready to merge after review!",
"@ArthurZucker I tried that, but sometimes the tests get very slow when `max_position_embeddings` is large! Using a small number like 10 or 20 keeps the test quick."
] | 1,697 | 1,697 | 1,697 | MEMBER | null | The bug didn't turn out to be too bad - some models just had very short `max_position_embeddings` in their test configs, which meant the conversation tests generated outputs that were too long. Limiting `max_new_tokens` seems to have fixed it, but I'm running other tests to be sure! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26907/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26907/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26907",
"html_url": "https://github.com/huggingface/transformers/pull/26907",
"diff_url": "https://github.com/huggingface/transformers/pull/26907.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26907.patch",
"merged_at": 1697713465000
} |
https://api.github.com/repos/huggingface/transformers/issues/26906 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26906/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26906/comments | https://api.github.com/repos/huggingface/transformers/issues/26906/events | https://github.com/huggingface/transformers/pull/26906 | 1,949,881,514 | PR_kwDOCUB6oc5dJhSR | 26,906 | Emergency PR to skip conversational tests to fix CI | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I'm seeing some issues with BlenderBot as well, so it's possible that not all tests are triggering in every CI run! I think to avoid blocking lots of other PRs, we should just disable the test until I can test everything and re-enable it."
] | 1,697 | 1,697 | 1,697 | MEMBER | null | The CI is currently red because of issues with the conversational pipeline tests on several models. This is caused by my recent PR #26795. I'm not sure why the issue didn't appear in that PR - maybe the tests weren't triggered for some reason.
This emergency temporary PR skips the tests for now while I work on a proper solution! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26906/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26906/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26906",
"html_url": "https://github.com/huggingface/transformers/pull/26906",
"diff_url": "https://github.com/huggingface/transformers/pull/26906.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26906.patch",
"merged_at": 1697639623000
} |
https://api.github.com/repos/huggingface/transformers/issues/26905 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26905/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26905/comments | https://api.github.com/repos/huggingface/transformers/issues/26905/events | https://github.com/huggingface/transformers/issues/26905 | 1,949,766,226 | I_kwDOCUB6oc50NxJS | 26,905 | TypeError: Object of type BitsAndBytesConfig is not JSON serializable when using gradient_checkpointing=True in TrainingArguments | {
"login": "andreducfer",
"id": 2621846,
"node_id": "MDQ6VXNlcjI2MjE4NDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/2621846?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andreducfer",
"html_url": "https://github.com/andreducfer",
"followers_url": "https://api.github.com/users/andreducfer/followers",
"following_url": "https://api.github.com/users/andreducfer/following{/other_user}",
"gists_url": "https://api.github.com/users/andreducfer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/andreducfer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andreducfer/subscriptions",
"organizations_url": "https://api.github.com/users/andreducfer/orgs",
"repos_url": "https://api.github.com/users/andreducfer/repos",
"events_url": "https://api.github.com/users/andreducfer/events{/privacy}",
"received_events_url": "https://api.github.com/users/andreducfer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"hi @andreducfer \r\ndo you still face this issue with the latest transformers version? `pip install -U transformers`",
"Hi @younesbelkada \r\nI did a Test with the Transformers version 4.36.0.dev0 and I am still facing the same problems. The log is attached.\r\n[slurm.log](https://github.com/huggingface/transformers/files/13528135/slurm.log)\r\n",
"The issue is that you are passing the quantization config to the tokenizer? \r\n\r\n```python \r\n tokenizer = AutoTokenizer.from_pretrained(\r\n MODEL,\r\n cache_dir=MODEL,\r\n token=TOKEN_HF,\r\n device_map=\"auto\",\r\n quantization_config=bnb_config\r\n )\r\n```\r\nthe error tracebacks to the serialization of the tokenizer by saying:\r\n```python \r\nFile \"/usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils_base.py\", line 2445, in save_pretrained\r\nout_str = json.dumps(tokenizer_config, indent=2, sort_keys=True, ensure_ascii=False) + \"\\n\"\r\n```\r\nand \r\n```\r\nTypeError: Object of type BitsAndBytesConfig is not JSON serializable\r\n```\r\n\r\njust don't pass it to the tokenizers. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,697 | 1,704 | 1,704 | NONE | null | ### System Info
I am running this via a script inside a Docker container in a Linux environment.
### Who can help?
@younesbelkada this issue is similar but not equal to #24137.
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Below is the script used to finetune Llama-2-7b-chat:
```python
import os
from datasets import load_dataset, concatenate_datasets
from transformers import TrainingArguments, AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig
import torch
from environment import TOKEN_HF, TOKEN_WANDB
from trl import SFTTrainer
import wandb
MODEL = "/scratch/LLM/LLAMA2/Llama-2-7b-chat-hf"
DATASET_NAME = "andreducfer/thesession-abc-prompts"
OUTPUT_DIR = "/scratch/andre/llama2_finetuned/"
REFINED_MODEL = "llama2-7b-finetuned-with-thesession-abc-prompts"
SEED = 5
os.environ["WANDB_SILENT"] = "true"
os.environ["WANDB_API_KEY"] = TOKEN_WANDB
wandb.init(project="llama2-music")
def download_dataset(dataset_name):
dataset_train = load_dataset(dataset_name, split="train", token=TOKEN_HF)
dataset_validation = load_dataset(dataset_name, split="validation", token=TOKEN_HF)
dataset_test = load_dataset(dataset_name, split="test", token=TOKEN_HF)
dataset = concatenate_datasets([dataset_train, dataset_validation, dataset_test])
return dataset
def create_model_tokenizer():
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_use_double_quant=True,
bnb_4bit_compute_dtype=torch.bfloat16,
bnb_4bit_quant_type="nf4"
)
model = AutoModelForCausalLM.from_pretrained(
MODEL,
trust_remote_code=True,
quantization_config=bnb_config,
device_map="auto",
cache_dir=MODEL,
token=TOKEN_HF
)
tokenizer = AutoTokenizer.from_pretrained(
MODEL,
cache_dir=MODEL,
token=TOKEN_HF,
device_map="auto",
quantization_config=bnb_config
)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"
model.config.use_cache = False
model.config.pretraining_tp = 1
model_config_json = model.config.to_json_string()
print(model_config_json)
return model, tokenizer
def create_lora_configuration():
peft_config = LoraConfig(
lora_alpha=16,
lora_dropout=0.1,
r=64,
bias="none",
task_type="CAUSAL_LM",
)
return peft_config
def create_training_configuration():
training_args = TrainingArguments(
output_dir=OUTPUT_DIR,
num_train_epochs=3,
per_device_train_batch_size=1,
gradient_accumulation_steps=4,
gradient_checkpointing=True,
optim="paged_adamw_32bit",
save_steps=25,
logging_steps=25,
learning_rate=2e-4,
weight_decay=0.001,
fp16=False,
bf16=True,
max_grad_norm=0.3,
max_steps=-1,
warmup_ratio=0.03,
group_by_length=True,
lr_scheduler_type="constant"
)
return training_args
def create_trainer(model, training_args, small_train_dataset, lora_configuration, tokenizer):
trainer = SFTTrainer(
model=model,
train_dataset=small_train_dataset,
peft_config=lora_configuration,
dataset_text_field="text",
tokenizer=tokenizer,
args=training_args,
max_seq_length=4096
)
return trainer
if __name__ == '__main__':
configured_model, configured_tokenizer = create_model_tokenizer()
peft_configuration = create_lora_configuration()
training_configuration = create_training_configuration()
loaded_train_dataset = download_dataset(DATASET_NAME)
configured_trainer = create_trainer(model=configured_model, training_args=training_configuration,
small_train_dataset=loaded_train_dataset, lora_configuration=peft_configuration,
tokenizer=configured_tokenizer)
configured_trainer.train()
configured_trainer.model.save_pretrained(REFINED_MODEL)
```
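For reference, a minimal sketch of the tokenizer setup that avoids putting the `BitsAndBytesConfig` into the tokenizer config (as the maintainer comment in this thread points out, quantization applies to the model, not the tokenizer):

```python
# Sketch: pass the BitsAndBytesConfig only to the model; the tokenizer does not
# need (and cannot JSON-serialize) a quantization config.
tokenizer = AutoTokenizer.from_pretrained(
    MODEL,
    cache_dir=MODEL,
    token=TOKEN_HF,
)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"
```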
Below you can see 4 lines of the dataset that I am using:
```json
{"instruction": "Write a melody in ABC Notation in a specific style.", "input": "Folk", "output": "Here it is the melody in ABC Notation in Folk style:\nX:0\nT: Cluck Old Hen\nM: 4/4\nL: 1/8\nK: Ador\n|: e2ae g2ag|e2ae d2dd| e2ae g2ag| edc2 A2AA :| |: A2A2 [c4e4] |A2A2 [G4B4]| A2AA cAcd| edc2 A2AA :|", "prompt": "Below is an instruction that describes a task, paired with an input that provides further context.\nWrite a response that appropriately completes the request.\n\n### Instruction:\nWrite a melody in ABC Notation in a specific style.\n\n### Input:\nFolk\n\n### Response:\n", "text": "Below is an instruction that describes a task, paired with an input that provides further context.\nWrite a response that appropriately completes the request.\n\n### Instruction:\nWrite a melody in ABC Notation in a specific style.\n\n### Input:\nFolk\n\n### Response:\nHere it is the melody in ABC Notation in Folk style:\nX:0\nT: Cluck Old Hen\nM: 4/4\nL: 1/8\nK: Ador\n|: e2ae g2ag|e2ae d2dd| e2ae g2ag| edc2 A2AA :| |: A2A2 [c4e4] |A2A2 [G4B4]| A2AA cAcd| edc2 A2AA :|"}
{"instruction": "Write a melody in ABC Notation in a specific style.", "input": "Folk", "output": "Here it is the melody in ABC Notation in Folk style:\nX:0\nT: Flop-Eared Mule, The\nM: 4/4\nL: 1/8\nK: Dmaj\n|: e2 | \"D\" f2 ff d2 dd | \"D\" A2 AA F2 FF | \"A\" E2 EF GF E2 | \"D\" DEFG A2 A2 |\n\"D\" f2 ff d2 dd | \"D\" A2 AA F2 FF | \"A\" E2 EF GF E2 | \"D\" D2 F2 D2 :|\nK:A\n|: cd | e2 ec e2 ec | efed c2 cc | B2 Bc dc B2 | ABcd e2 cd | e2 ec e2 ec | efed c2 c2 | B2 Bc d2 cB | A2 c2 A2 :|", "prompt": "Below is an instruction that describes a task, paired with an input that provides further context.\nWrite a response that appropriately completes the request.\n\n### Instruction:\nWrite a melody in ABC Notation in a specific style.\n\n### Input:\nFolk\n\n### Response:\n", "text": "Below is an instruction that describes a task, paired with an input that provides further context.\nWrite a response that appropriately completes the request.\n\n### Instruction:\nWrite a melody in ABC Notation in a specific style.\n\n### Input:\nFolk\n\n### Response:\nHere it is the melody in ABC Notation in Folk style:\nX:0\nT: Flop-Eared Mule, The\nM: 4/4\nL: 1/8\nK: Dmaj\n|: e2 | \"D\" f2 ff d2 dd | \"D\" A2 AA F2 FF | \"A\" E2 EF GF E2 | \"D\" DEFG A2 A2 |\n\"D\" f2 ff d2 dd | \"D\" A2 AA F2 FF | \"A\" E2 EF GF E2 | \"D\" D2 F2 D2 :|\nK:A\n|: cd | e2 ec e2 ec | efed c2 cc | B2 Bc dc B2 | ABcd e2 cd | e2 ec e2 ec | efed c2 c2 | B2 Bc d2 cB | A2 c2 A2 :|"}
{"instruction": "Write a melody in ABC Notation in a specific style.", "input": "Folk", "output": "Here it is the melody in ABC Notation in Folk style:\nX:0\nT: Tha Mi Sgith\nM: 4/4\nL: 1/8\nK: Amin\n|:B| A<A a2 g<e g2| e>dB>A G>AB<G| A<A a2 g<e g2| e>dB>e A/A/A A:| |:g| e>dB>A G>AB<G| e>dB>A B2 B>g| e>dB>A G>AB<d| e>dB>e A/A/A A:| </html>", "prompt": "Below is an instruction that describes a task, paired with an input that provides further context.\nWrite a response that appropriately completes the request.\n\n### Instruction:\nWrite a melody in ABC Notation in a specific style.\n\n### Input:\nFolk\n\n### Response:\n", "text": "Below is an instruction that describes a task, paired with an input that provides further context.\nWrite a response that appropriately completes the request.\n\n### Instruction:\nWrite a melody in ABC Notation in a specific style.\n\n### Input:\nFolk\n\n### Response:\nHere it is the melody in ABC Notation in Folk style:\nX:0\nT: Tha Mi Sgith\nM: 4/4\nL: 1/8\nK: Amin\n|:B| A<A a2 g<e g2| e>dB>A G>AB<G| A<A a2 g<e g2| e>dB>e A/A/A A:| |:g| e>dB>A G>AB<G| e>dB>A B2 B>g| e>dB>A G>AB<d| e>dB>e A/A/A A:| </html>"}
{"instruction": "Write a melody in ABC Notation in a specific style.", "input": "Folk", "output": "Here it is the melody in ABC Notation in Folk style:\nX:0\nT: Cleveland Park\nM: 6/8\nL: 1/8\nK: Dmin\n A,DE FED | B,DE FED | =B,DE FED | (E/2F/2G)E CDE | A,DE FED | B,DE FED | (E/2F/2G)E CEG | GFE D z2 :| DFA dAF | GAB AFD | B,DF A,DF | FED ^CDE | DFA dAF | GAB AFD | B,DF A,DF | ED^C D3 :| A,DE FED | B,DE FED | =B,DE FED | (E/2F/2G)E CDE | A,DE FED | B,DE FED | (E/2F/2G)E CEG | GFE D z2 :| Adf afa | bag afd | Bdf Adf | fed ^cde | Adf a2a | bag afd | Bdf Adf | ed^c d2d | Adf afa | bag afd | Bdf Adf | fed ^cde | Ade fed | Bde fed | ege ceg | gfe d3 :|]", "prompt": "Below is an instruction that describes a task, paired with an input that provides further context.\nWrite a response that appropriately completes the request.\n\n### Instruction:\nWrite a melody in ABC Notation in a specific style.\n\n### Input:\nFolk\n\n### Response:\n", "text": "Below is an instruction that describes a task, paired with an input that provides further context.\nWrite a response that appropriately completes the request.\n\n### Instruction:\nWrite a melody in ABC Notation in a specific style.\n\n### Input:\nFolk\n\n### Response:\nHere it is the melody in ABC Notation in Folk style:\nX:0\nT: Cleveland Park\nM: 6/8\nL: 1/8\nK: Dmin\n A,DE FED | B,DE FED | =B,DE FED | (E/2F/2G)E CDE | A,DE FED | B,DE FED | (E/2F/2G)E CEG | GFE D z2 :| DFA dAF | GAB AFD | B,DF A,DF | FED ^CDE | DFA dAF | GAB AFD | B,DF A,DF | ED^C D3 :| A,DE FED | B,DE FED | =B,DE FED | (E/2F/2G)E CDE | A,DE FED | B,DE FED | (E/2F/2G)E CEG | GFE D z2 :| Adf afa | bag afd | Bdf Adf | fed ^cde | Adf a2a | bag afd | Bdf Adf | ed^c d2d | Adf afa | bag afd | Bdf Adf | fed ^cde | Ade fed | Bde fed | ege ceg | gfe d3 :|]"}
```
### Expected behavior
I'm trying to use QLoRA for fine-tuning llama2-7b-chat-hf for CAUSAL_LM.
I am getting the following error:
```python
INFO: fuse: warning: library too old, some operations may not work
==========
== CUDA ==
CUDA Version 11.8.0
Container image Copyright (c) 2016-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license
A copy of this license is made available in this container at /NGC-DL-CONTAINER-LICENSE for your convenience.
Loading checkpoint shards: 0%| | 0/2 [00:00<?, ?it/s]
Loading checkpoint shards: 50%|█████ | 1/2 [00:29<00:29, 29.73s/it]
Loading checkpoint shards: 100%|██████████| 2/2 [00:39<00:00, 17.96s/it]
Loading checkpoint shards: 100%|██████████| 2/2 [00:39<00:00, 19.73s/it]
{
"_name_or_path": "/scratch/LLM/LLAMA2/Llama-2-7b-chat-hf",
"architectures": [
"LlamaForCausalLM"
],
"attention_bias": false,
"bos_token_id": 1,
"eos_token_id": 2,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 11008,
"max_position_embeddings": 4096,
"model_type": "llama",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"num_key_value_heads": 32,
"pretraining_tp": 1,
"quantization_config": {
"bnb_4bit_compute_dtype": "bfloat16",
"bnb_4bit_quant_type": "nf4",
"bnb_4bit_use_double_quant": true,
"llm_int8_enable_fp32_cpu_offload": false,
"llm_int8_has_fp16_weight": false,
"llm_int8_skip_modules": null,
"llm_int8_threshold": 6.0,
"load_in_4bit": true,
"load_in_8bit": false,
"quant_method": "bitsandbytes"
},
"rms_norm_eps": 1e-06,
"rope_scaling": null,
"rope_theta": 10000.0,
"tie_word_embeddings": false,
"torch_dtype": "float16",
"transformers_version": "4.35.0.dev0",
"use_cache": false,
"vocab_size": 32000
}
0%| | 0/750 [00:00<?, ?it/s]You're using a LlamaTokenizerFast tokenizer. Please note that with a fast tokenizer, using the __call__ method is faster than using a method to encode the text followed by a call to the pad method to get a padded encoding.
/usr/local/lib/python3.10/dist-packages/torch/utils/checkpoint.py:429: UserWarning: torch.utils.checkpoint: please pass in use_reentrant=True or use_reentrant=False explicitly. The default value of use_reentrant will be updated to be False in the future. To maintain current behavior, pass use_reentrant=True. It is recommended that you use use_reentrant=False. Refer to docs for more details on the differences between the two variants.
warnings.warn(
0%| | 1/750 [00:07<1:33:01, 7.45s/it]
0%| | 2/750 [00:10<1:03:41, 5.11s/it]
0%| | 3/750 [00:14<53:38, 4.31s/it]
1%| | 4/750 [00:17<48:40, 3.92s/it]
1%| | 5/750 [00:20<44:54, 3.62s/it]
1%| | 6/750 [00:23<42:28, 3.43s/it]
1%| | 7/750 [00:26<40:52, 3.30s/it]
1%| | 8/750 [00:29<39:40, 3.21s/it]
1%| | 9/750 [00:32<38:46, 3.14s/it]
1%|▏ | 10/750 [00:35<38:06, 3.09s/it]
1%|▏ | 11/750 [00:38<37:39, 3.06s/it]
2%|▏ | 12/750 [00:41<37:18, 3.03s/it]
2%|▏ | 13/750 [00:44<37:00, 3.01s/it]
2%|▏ | 14/750 [00:47<36:44, 2.99s/it]
2%|▏ | 15/750 [00:50<36:28, 2.98s/it]
2%|▏ | 16/750 [00:53<36:15, 2.96s/it]
2%|▏ | 17/750 [00:56<36:03, 2.95s/it]
2%|▏ | 18/750 [00:59<35:52, 2.94s/it]
3%|▎ | 19/750 [01:02<35:37, 2.92s/it]
3%|▎ | 20/750 [01:05<35:03, 2.88s/it]
3%|▎ | 21/750 [01:07<34:07, 2.81s/it]
3%|▎ | 22/750 [01:10<32:55, 2.71s/it]
3%|▎ | 23/750 [01:12<32:05, 2.65s/it]
3%|▎ | 24/750 [01:15<31:28, 2.60s/it]
3%|▎ | 25/750 [01:17<30:59, 2.57s/it]
{'loss': 1.7484, 'learning_rate': 0.0002, 'epoch': 0.1}
3%|▎ | 25/750 [01:17<30:59, 2.57s/it]Traceback (most recent call last):
File "/home/andre/ondemand/data/sys/myjobs/projects/default/4/finetuning.py", line 92, in
configured_trainer.train()
File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 1506, in train
return inner_training_loop(
File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 1869, in _inner_training_loop
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 2224, in _maybe_log_save_evaluate
self._save_checkpoint(model, trial, metrics=metrics)
File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 2281, in _save_checkpoint
self.save_model(output_dir, _internal_call=True)
File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 2768, in save_model
self._save(output_dir)
File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 2831, in _save
self.tokenizer.save_pretrained(output_dir)
File "/usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils_base.py", line 2445, in save_pretrained
out_str = json.dumps(tokenizer_config, indent=2, sort_keys=True, ensure_ascii=False) + "\n"
File "/usr/lib/python3.10/json/init.py", line 238, in dumps
**kw).encode(obj)
File "/usr/lib/python3.10/json/encoder.py", line 201, in encode
chunks = list(chunks)
File "/usr/lib/python3.10/json/encoder.py", line 431, in _iterencode
yield from _iterencode_dict(o, _current_indent_level)
File "/usr/lib/python3.10/json/encoder.py", line 405, in _iterencode_dict
yield from chunks
File "/usr/lib/python3.10/json/encoder.py", line 438, in _iterencode
o = _default(o)
File "/usr/lib/python3.10/json/encoder.py", line 179, in default
raise TypeError(f'Object of type {o.class.name} '
TypeError: Object of type BitsAndBytesConfig is not JSON serializable
```
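For reference, the call that fails in the traceback above is `json.dumps(tokenizer_config, ...)` inside `tokenizer.save_pretrained()`: a `BitsAndBytesConfig` object appears to have ended up in the tokenizer config, and the object itself is not JSON serializable (only its dict form is). Below is a minimal illustration plus a hedged workaround sketch; the `tokenizer`/`output_dir` names refer to my training script, and the workaround is an assumption, not an official fix:
```python
import json

from transformers import BitsAndBytesConfig

cfg = BitsAndBytesConfig(load_in_4bit=True)
# json.dumps({"quantization_config": cfg})                   # raises the TypeError above
print(json.dumps({"quantization_config": cfg.to_dict()}))    # the dict form serializes fine

# Hedged workaround sketch: drop (or pre-serialize) the offending entry from the
# tokenizer's init kwargs before the Trainer saves a checkpoint.
# tokenizer.init_kwargs.pop("quantization_config", None)
# tokenizer.save_pretrained(output_dir)
```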
This error started appearing once I set **gradient_checkpointing=True** in **TrainingArguments()**. At the checkpoint-saving step (step 25 in this example, because **save_steps=25**), training fails with:
**TypeError: Object of type BitsAndBytesConfig is not JSON serializable** | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26905/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26905/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26904 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26904/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26904/comments | https://api.github.com/repos/huggingface/transformers/issues/26904/events | https://github.com/huggingface/transformers/pull/26904 | 1,949,765,011 | PR_kwDOCUB6oc5dJH5Q | 26,904 | Pin Keras for now | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"There are a few `pip install -U --upgrade-strategy eager tensorflow_probability` in `.circleci/create_circleci_config.py`. I guess we will need to pin them too. But not sure if we already know the corresponding version.",
"@ydshieh yeah, `tensorflow_probability` versions don't seem to be linked to TF versions. It also doesn't seem to use Keras much - hopefully we can leave it unpinned for now, and do a PR to pin it if it becomes a problem later!",
"(remark: I did see some failing before when `tensorflow_probability` complained about the TF version is not compatible with it - or vice versa 😅 ) "
] | 1,697 | 1,697 | 1,697 | MEMBER | null | With Keras 3 on the horizon, let's pin Keras versions to ensure things don't break until we can transition properly | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26904/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26904/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26904",
"html_url": "https://github.com/huggingface/transformers/pull/26904",
"diff_url": "https://github.com/huggingface/transformers/pull/26904.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26904.patch",
"merged_at": 1697722772000
} |
https://api.github.com/repos/huggingface/transformers/issues/26903 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26903/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26903/comments | https://api.github.com/repos/huggingface/transformers/issues/26903/events | https://github.com/huggingface/transformers/pull/26903 | 1,949,649,950 | PR_kwDOCUB6oc5dIuwg | 26,903 | Mark test_add_special_tokens as slow for Whisper | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Closing this because it seems to be superseded by #26570"
] | 1,697 | 1,697 | 1,697 | MEMBER | null | test_add_special_tokens has been causing some CI timeouts, and the test seems to be very slow for Whisper when I test it locally. I'm marking the test as `@slow` so it stops causing CI issues, but still runs nightly.
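For context, a sketch of what the change presumably looks like; the test class name here is illustrative only, while `slow` is the existing marker from `transformers.testing_utils`:
```python
import unittest

from transformers.testing_utils import slow


class WhisperTokenizerTest(unittest.TestCase):  # illustrative stand-in for the real test class
    @slow  # keeps the test out of the per-commit CI run; it still executes in the nightly (slow) job
    def test_add_special_tokens(self):
        ...
```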
cc @ydshieh | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26903/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26903/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26903",
"html_url": "https://github.com/huggingface/transformers/pull/26903",
"diff_url": "https://github.com/huggingface/transformers/pull/26903.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26903.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26902 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26902/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26902/comments | https://api.github.com/repos/huggingface/transformers/issues/26902/events | https://github.com/huggingface/transformers/issues/26902 | 1,949,598,924 | I_kwDOCUB6oc50NITM | 26,902 | Token based (or sequence of token based) repetition penalty exclusion | {
"login": "teknium1",
"id": 127238744,
"node_id": "U_kgDOB5WCWA",
"avatar_url": "https://avatars.githubusercontent.com/u/127238744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/teknium1",
"html_url": "https://github.com/teknium1",
"followers_url": "https://api.github.com/users/teknium1/followers",
"following_url": "https://api.github.com/users/teknium1/following{/other_user}",
"gists_url": "https://api.github.com/users/teknium1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/teknium1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/teknium1/subscriptions",
"organizations_url": "https://api.github.com/users/teknium1/orgs",
"repos_url": "https://api.github.com/users/teknium1/repos",
"events_url": "https://api.github.com/users/teknium1/events{/privacy}",
"received_events_url": "https://api.github.com/users/teknium1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"+1",
"an addendum to this for at least transformers, maybe TGI also, is if this system works, perhaps can add it as a tokenizer config setting similar to the chat templates so users dont have to implement this for whatever inference they do",
"cc @gante @Rocketknight1 ",
"for reference hf transformers' code for this is even simpler:\r\n\r\n```py\r\nclass RepetitionPenaltyLogitsProcessor(LogitsProcessor):\r\n def __init__(self, penalty: float):\r\n if not isinstance(penalty, float) or not (penalty > 0):\r\n raise ValueError(f\"`penalty` has to be a strictly positive float, but is {penalty}\")\r\n\r\n self.penalty = penalty\r\n\r\n @add_start_docstrings(LOGITS_PROCESSOR_INPUTS_DOCSTRING)\r\n def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:\r\n score = torch.gather(scores, 1, input_ids)\r\n\r\n # if score < 0 then repetition penalty has to be multiplied to reduce the token probabilities\r\n score = torch.where(score < 0, score * self.penalty, score / self.penalty)\r\n\r\n scores.scatter_(1, input_ids, score)\r\n return scores\r\n```\r\n\r\nhttps://github.com/huggingface/transformers/blob/main/src/transformers/generation/logits_process.py#L270C1-L325C22",
"This is a really good point that I hadn't thought of! This code is mostly @gante's area but I think I can see how to implement this, so if you're busy let me know and I can take it!",
"Hey also I should add, it would be really beneficial if we can select whole sections of a prompt format to omit as well. Thinking more on this - for lets say, RAG or Memories, other things where you have context that the model **should** be drawing on as closely as possible, it seems rep penalty would interfere with its ability to properly recite from source material in context. This seems doable but probably harder to implement.\r\n\r\nThere are also much harder to deal with problems like the following:\r\n\r\n```\r\nUser: What's your favorite color?\r\nAssistant: Blue!\r\nUser: What did you say your favorite color was?\r\nAssistant: hmm, Red!\r\n```\r\n\r\nWhere this may be the model being dumb, or may be because of rep penalty. For solving this one, I really have no idea, since its so dynamic and situational..",
"Yes, we generally need a better solution than simple (stupid) repetition penalty. There are many use cases where the LLM needs to repeat information verbatim, especially when generating code. ChatGPT/GPT-4 does that extremely well, where you talk back and forth to iterate over the code. Local LLMs wouldn't be able to handle that with regular repetition penalty settings.\r\n\r\nOr imagine a situation like this (disregarding that LLMs aren't calculators ;)):\r\n\r\n0+2=2, 2+0=2, 3-1=2, 4-2=2, 1+1=\r\n\r\nWith all those 2's already in the prompt, the answer is likely not what we'd expect.",
"Hey folks! 👋 \r\n\r\nBefore further discussion, a recap of the repetition penalty:\r\n1. It was designed for greedy methods to avoid low-quality repetitions. This problem was not present in sampling, which is what most LLMs rely on;\r\n2. In my experience, and from what I've heard from users over time, reasonable values of repetition penalty (e.g. `1.2` or `1.3`) usually prevent low-quality repetitions and not critical pieces of the generated text (like `<|im_end|>`);\r\n3. It is only applied at most once per token. The sequences `the the the` and `the` will have the same penalty for the token corresponding to `the`;\r\n4. It may negatively impact when used in problems where repetition is expected, like multi-turn chats or summarization. Playing with other parameters that are more closely related to the model's probability distribution of the next token (like `top_k`, `temperature`, and `top_p`) is often preferred -- e.g. increasing temperature makes repetition less likely.\r\n\r\n________________________________________________________________\r\n\r\nBased on what I know, I would need stronger evidence, well beyond a few examples (that might be false positives since LLMs typically sample), to support your claims. I also agree with these two comments that are skeptical about this being a problem that the \"improved\" repetition penalty would solve ([1](https://github.com/ggerganov/llama.cpp/pull/3538#issuecomment-1770501514) [2](https://github.com/huggingface/text-generation-inference/issues/1170#issuecomment-1770501934)).\r\n\r\nHowever, this does not prevent you from using it with `transformers`! You can pass custom logit processor instances through the [`logits_processor`](https://huggingface.co/docs/transformers/v4.34.1/en/main_classes/text_generation#transformers.GenerationMixin.generate.logits_processor) argument in `generate` 🤗 \r\n\r\nIn conclusion, I'm skeptical but open to being convinced otherwise with evidence :)",
"Thanks for the recap!\r\n\r\nSince I'm doing a lot of model comparisons and tests with multi-turn chats, I use deterministic settings (do_sample=false with oobabooga's text-generation-webui or with llama.cpp/koboldcpp temperature=0, top_k=1, top_p=0, top_a=0) to eliminate as many random factors as possible. I'm using repetition penalty 1.18, range 0, no slope.\r\n\r\nMy intention is to test what the model considers the most likely generation, which isn't perfect of course, but outside of running an infinite number of gens and picking the average, it's the best I could come up with. Always open for better ideas, though.\r\n\r\nJust so you know my setup and with which settings I observed the issues I consider as problems caused by repetition penalty for many months. If you think no repetition penalty would be better (now that llama.cpp's tokenizer bug that messes up EOS and other special tokens is fixed - https://github.com/ggerganov/llama.cpp/pull/3538 - which could have contributed to the excessive repetition issues so many Llama 2 models exhibited), I'd happily test going without repetition penalty.",
"@StefanDanielSchwarz thank you for your reply :)\r\n\r\nIn a deterministic setup you should see improvements with a moderate repetition penalty like yours, as it is common for the model to repeat concepts (or even get into never-ending loops). The best would be a blind test with sampling, like it is done in [lmsys' chatbot arena](https://chat.lmsys.org/). After a few hundred evals, it should be clear whether it makes a difference or not to exclude special tokens from the repetition penalty or not. Keep in mind that, if the difference is minimal, less code is better!",
"@gante But how would you handle the use case of e. g. code generation? Imagine a pair programmer/co-pilot scenario which I use a lot with ChatGPT/GPT-4: Describe what program you want, LLM gives you the code, you tell it what to change, and after a lot of back-and-forth, it's usable. The slightest repetition penalty could ruin that, so we'd probably need a way to exempt code blocks from repetition penalty. Same for RAG/retrieved memories as @teknium1 mentioned.",
"@StefanDanielSchwarz It may indeed, but this is the first time I'm reading this issue (keeping in mind that I'm tagged in everything `generate`-related) and, from a numerical point of view, I don't think the repetition penalty is strong enough to block high-confidence cases like those. Nevertheless, I recommend experimenting with it, as I might be wrong :)\r\n\r\nRegardless of my opinion, we need to see clear results before we add new code to `transformers` 🤗 Otherwise, the codebase will grow beyond our means to maintain it. This is why we have provided a code path for custom operations like the one this issue is proposing!",
"@gante Thanks for your feedback again. I agree with you about clear results being required. Hopefully this discussion has raised awareness of this (potential) issue.\r\n\r\nSo far I've been quite happy with repetition penalty 1.18 and my deterministic settings, and the problems I noticed might be attributed to other factors like improper tokenization, quantization, or model-specific quirks. So I'll keep my eyes open and hope others do the same, so that *if* there is an actual issue, it will eventually be proven and fixed.\r\n\r\nThank you all for your attention and please do keep up the great work! 😎👍",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,697 | 1,700 | 1,700 | NONE | null | ### Feature request
I started this issue in TGI, but it applies to all inference code that has some form of repetition penalty. I'll paste my feature-request notes from there below; the original is here: https://github.com/huggingface/text-generation-inference/issues/1170
Hello, I would like to propose a feature that allows you to set a list of tokens, or even token sequences, that can be excluded from repetition penalty calculations.
The reasoning for this being that, given a prompt format for multiturn, such as:
```
user: abc
assistant: def
user: ghi
assistant: jkl
```
Or, even worse, a format like ChatML, which now as standard uses <|im_end|> as a stopping token included in every turn. Since these tokens appear in every turn, it seems only logical that repetition penalty will degrade the validity of these prompt formats, especially when the turns are only a few tokens long.
While I haven't noticed this with Hermes 2, that may simply be because it produces long responses; if the average turn length is only a few tokens, the problem may become more prominent.
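As a concrete illustration of what would need to be exempted, the ids of the prompt-format tokens could be collected roughly like this (a sketch; the model name and the list of special strings are assumptions for a ChatML-style setup):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("teknium/OpenHermes-2-Mistral-7B")  # example model, assumption
special_strings = ["<|im_start|>", "<|im_end|>", "user", "assistant"]
excluded_token_ids = {
    token_id
    for s in special_strings
    for token_id in tokenizer.encode(s, add_special_tokens=False)
}
```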
### Motivation

### Your contribution
The following is TGI's code, I haven't looked at transformer's code, but I assume the principles are the same:
```py
class HeterogeneousRepetitionPenaltyLogitsProcessor(LogitsProcessor):
r"""
[`LogitsProcessor`] enforcing an exponential penalty on repeated sequences.
This version allows for a separate value for each sample and runs inplace when possible.
It doesn't validate inputs.
Args:
repetition_penalty (`List[float]`):
The parameter for repetition penalty. 1.0 means no penalty. See [this
paper](https://arxiv.org/pdf/1909.05858.pdf) for more details.
"""
def __init__(self, penalty: List[float], dtype: torch.dtype, device: torch.device):
self.penalty = penalty
self.penalty_tensor = torch.tensor(
penalty, dtype=dtype, device=device
).unsqueeze(1)
def __call__(self, input_ids: torch.Tensor, scores: torch.Tensor) -> torch.Tensor:
score = torch.gather(scores, 1, input_ids)
# if score < 0 then repetition penalty has to be multiplied to reduce the previous token probability
score = torch.where(
score < 0, score * self.penalty_tensor, score / self.penalty_tensor
)
scores.scatter_(1, input_ids, score)
return scores
def filter(self, indices):
self.penalty = [self.penalty[i] for i in indices]
if any([x != 1.0 for x in self.penalty]):
self.penalty_tensor = self.penalty_tensor[indices]
return self
return None
```
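For illustration, here is one hedged sketch of how the exclusion could look on top of the transformers-style processor; this is not an existing API, and both the class and the `excluded_token_ids` set are assumptions:
```python
import torch

from transformers import LogitsProcessor


class RepetitionPenaltyWithExclusionsLogitsProcessor(LogitsProcessor):
    """Sketch only: stock repetition penalty, but leaves an excluded set of token ids
    (e.g. <|im_end|> or role markers) untouched."""

    def __init__(self, penalty: float, excluded_token_ids):
        self.penalty = penalty
        self.excluded_token_ids = torch.tensor(sorted(excluded_token_ids))

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
        original = torch.gather(scores, 1, input_ids)
        penalized = torch.where(original < 0, original * self.penalty, original / self.penalty)
        # Restore the original logits wherever the gathered id is in the exclusion set.
        exclude_mask = torch.isin(input_ids, self.excluded_token_ids.to(input_ids.device))
        penalized = torch.where(exclude_mask, original, penalized)
        scores.scatter_(1, input_ids, penalized)
        return scores
```
A processor like this can already be passed via `model.generate(..., logits_processor=LogitsProcessorList([...]))` without any library change, which is also what is suggested in the maintainer comments quoted above.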
I'm thinking we take the input_ids before they are scored and simply replace them with input_ids that have the excluded ids removed, where the exclusions come from a configurable list of token ids (or token strings mapped to ids); the sketch above is one way this could look. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26902/reactions",
"total_count": 10,
"+1": 10,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26902/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26901 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26901/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26901/comments | https://api.github.com/repos/huggingface/transformers/issues/26901/events | https://github.com/huggingface/transformers/issues/26901 | 1,949,588,386 | I_kwDOCUB6oc50NFui | 26,901 | Add quantization_config in AutoModelForCausalLM.from_config() | {
"login": "ishaansharma",
"id": 8963395,
"node_id": "MDQ6VXNlcjg5NjMzOTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8963395?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ishaansharma",
"html_url": "https://github.com/ishaansharma",
"followers_url": "https://api.github.com/users/ishaansharma/followers",
"following_url": "https://api.github.com/users/ishaansharma/following{/other_user}",
"gists_url": "https://api.github.com/users/ishaansharma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ishaansharma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ishaansharma/subscriptions",
"organizations_url": "https://api.github.com/users/ishaansharma/orgs",
"repos_url": "https://api.github.com/users/ishaansharma/repos",
"events_url": "https://api.github.com/users/ishaansharma/events{/privacy}",
"received_events_url": "https://api.github.com/users/ishaansharma/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | [
"WDYT @younesbelkada ",
"Hi @ishaansharma \r\nThanks a lot for the proposal! \r\nI personally would not advocate to go for that route, the quantization schemes we support right now consists on post-trainign quantization, meaning the usecase is always\r\n\r\n1- load pre-trained weights from the hub or locally\r\n2- quantize the pre-trained weights\r\n\r\nThe API you propose is cool, but I am afraid will not be used in practice as `from_config` will load random weights to the model. Let me know if I misunderstood anything!",
"> Hi @ishaansharma Thanks a lot for the proposal! I personally would not advocate to go for that route, the quantization schemes we support right now consists on post-trainign quantization, meaning the usecase is always\r\n> \r\n> 1- load pre-trained weights from the hub or locally 2- quantize the pre-trained weights\r\n> \r\n> The API you propose is cool, but I am afraid will not be used in practice as `from_config` will load random weights to the model. Let me know if I misunderstood anything!\r\n\r\n1. I wanted this feature because it will be very useful for pre-training from scratch from any large language model with huge parameters that usually cannot be done on small machines will very less computation cost . \r\n\r\n2. To pre-train any model from scratch and to build a language model on a totally new language , I don't think the loaded random weights from the config will cause any harm. as eventually weights will get updated with the training . \r\n\r\n@younesbelkada , I just want that even the pre-training a model of any language from scratch using any LLM architecture can be done on any machine . \r\n\r\nLet me know if this approach help . \r\n\r\nWarm Regard.",
"Thanks for getting back to me @ishaansharma !\r\n\r\n> I wanted this feature because it will be very useful for pre-training from scratch from any large language model with huge parameters that usually cannot be done on small machines will very less computation cost .\r\n\r\nSince you cannot perform full fine-tuning when the model is quantized I think that this is technically not possible :/ This comment can also be applied on your thoughts here:\r\n\r\n> To pre-train any model from scratch and to build a language model on a totally new language , I don't think the loaded random weights from the config will cause any harm. as eventually weights will get updated with the training . "
] | 1,697 | 1,698 | null | NONE | null | ### Feature request
Add a quantization_config argument to AutoModelForCausalLM.from_config().
I am trying to pretrain a model from scratch and use bitsandbytes so that it can be trained on less computationally expensive machines.
Below is my quantization config:
```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```
When I took the config of a model via AutoConfig.from_pretrained() and passed the quantization arguments to from_config(), it failed and raised the TypeError shown below.
```
from transformers import AutoConfig, AutoModelForCausalLM
config = AutoConfig.from_pretrained("mistralai/Mistral-7B-v0.1")
model = AutoModelForCausalLM.from_config(config,quantization_config=bnb_config, device_map={"":0})
```
The Error:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[23], line 7
3 # Download configuration from huggingface.co and cache.
5 configy = AutoConfig.from_pretrained("mistralai/Mistral-7B-v0.1")
----> 7 modely = AutoModelForCausalLM.from_config(configy,quantization_config=bnb_config, device_map={"":0})
File ~/miniconda3/envs/ai/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:441, in _BaseAutoModelClass.from_config(cls, config, **kwargs)
439 elif type(config) in cls._model_mapping.keys():
440 model_class = _get_model_class(config, cls._model_mapping)
--> 441 return model_class._from_config(config, **kwargs)
443 raise ValueError(
444 f"Unrecognized configuration class {config.__class__} for this kind of AutoModel: {cls.__name__}.\n"
445 f"Model type should be one of {', '.join(c.__name__ for c in cls._model_mapping.keys())}."
446 )
File ~/miniconda3/envs/ai/lib/python3.10/site-packages/transformers/modeling_utils.py:1192, in PreTrainedModel._from_config(cls, config, **kwargs)
1190 model = cls(config, **kwargs)
1191 else:
-> 1192 model = cls(config, **kwargs)
1194 # restore default dtype if it was modified
1195 if dtype_orig is not None:
TypeError: MistralForCausalLM.__init__() got an unexpected keyword argument 'quantization_config'
```
### Motivation
I have tried a workaround: save the model instantiated from the config to disk, and then load that saved model again with a quantization config.
I believe this could be streamlined so that quantization can be enabled while loading the model from the config itself.
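In the meantime, a hedged variant of the workaround below that at least avoids materializing the random weights in full fp32 before the save/reload round trip (whether `torch_dtype` is honored by `from_config` may depend on the transformers version, so treat this as an assumption):
```python
import torch
from transformers import AutoConfig, AutoModelForCausalLM

config = AutoConfig.from_pretrained("mistralai/Mistral-7B-v0.1")
model = AutoModelForCausalLM.from_config(config, torch_dtype=torch.bfloat16)
model.save_pretrained(MODEL_NAME_PATH)  # MODEL_NAME_PATH as in the workaround below
```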
### Your contribution
```
from transformers import AutoConfig, AutoModelForCausalLM
config = AutoConfig.from_pretrained("mistralai/Mistral-7B-v0.1")
model = AutoModelForCausalLM.from_config(config)
model.save_pretrained(MODEL_NAME_PATH)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME_PATH, quantization_config=bnb_config, device_map={"":0})
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26901/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26901/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26900 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26900/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26900/comments | https://api.github.com/repos/huggingface/transformers/issues/26900/events | https://github.com/huggingface/transformers/pull/26900 | 1,949,568,409 | PR_kwDOCUB6oc5dIcvH | 26,900 | Refactor code part in documentation translated to japanese | {
"login": "rajveer43",
"id": 64583161,
"node_id": "MDQ6VXNlcjY0NTgzMTYx",
"avatar_url": "https://avatars.githubusercontent.com/u/64583161?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rajveer43",
"html_url": "https://github.com/rajveer43",
"followers_url": "https://api.github.com/users/rajveer43/followers",
"following_url": "https://api.github.com/users/rajveer43/following{/other_user}",
"gists_url": "https://api.github.com/users/rajveer43/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rajveer43/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rajveer43/subscriptions",
"organizations_url": "https://api.github.com/users/rajveer43/orgs",
"repos_url": "https://api.github.com/users/rajveer43/repos",
"events_url": "https://api.github.com/users/rajveer43/events{/privacy}",
"received_events_url": "https://api.github.com/users/rajveer43/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,697 | 1,697 | 1,697 | CONTRIBUTOR | null | # What does this PR do?
This PR fixes some code examples that were mistakenly translated into Japanese along with the surrounding documentation text, and restores them to their original form.
Fixes #26895
@stevhliu
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26900/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26900/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26900",
"html_url": "https://github.com/huggingface/transformers/pull/26900",
"diff_url": "https://github.com/huggingface/transformers/pull/26900.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26900.patch",
"merged_at": 1697650559000
} |
https://api.github.com/repos/huggingface/transformers/issues/26899 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26899/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26899/comments | https://api.github.com/repos/huggingface/transformers/issues/26899/events | https://github.com/huggingface/transformers/issues/26899 | 1,949,531,087 | I_kwDOCUB6oc50M3vP | 26,899 | NEFTune Support pls | {
"login": "nivibilla",
"id": 26687662,
"node_id": "MDQ6VXNlcjI2Njg3NjYy",
"avatar_url": "https://avatars.githubusercontent.com/u/26687662?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nivibilla",
"html_url": "https://github.com/nivibilla",
"followers_url": "https://api.github.com/users/nivibilla/followers",
"following_url": "https://api.github.com/users/nivibilla/following{/other_user}",
"gists_url": "https://api.github.com/users/nivibilla/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nivibilla/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nivibilla/subscriptions",
"organizations_url": "https://api.github.com/users/nivibilla/orgs",
"repos_url": "https://api.github.com/users/nivibilla/repos",
"events_url": "https://api.github.com/users/nivibilla/events{/privacy}",
"received_events_url": "https://api.github.com/users/nivibilla/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | closed | false | {
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hey @nivibilla, we integrate well with TRL, which leverages a hefty number of components from the `transformers` library. What is your issue with the current TRL implementation/what would you like to enable by moving the support from TRL directly to `transformers` ?",
"a lot of my scripts and pipelines for finetuning use native transformers. ideally I don't want to have to update all of them to use trl if possible. ",
"@younesbelkada could you share a bit about how one could go about leveraging NEFTune with `transformers` models?\r\n\r\nI think it's important that the two frameworks complete each other rather than compete for the same features with differing implementations, so let's take a look at leveraging TRL simply for `transformers` models rather than importing the implementation within `transformers`.",
"currently in TRL we monkey-patch the forward pass of the emebdding layer with a new one, but with https://github.com/huggingface/trl/pull/889 we'll have a cleaner implementation that leverage forward post hooks which is cleaner. I will open a PR to add that feature in Trainer and leverage that feature in TRL"
] | 1,697 | 1,698 | 1,698 | NONE | null | ### Feature request
The TRL library introduced support for NEFTune in this [pr](https://github.com/huggingface/trl/pull/871)
### Motivation
Improves Supervised Fine-tuning performance
See paper : https://arxiv.org/abs/2310.05914
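For reference, the core of NEFTune is small. Here is a hedged sketch of how it could be wired in via a forward hook on the input embeddings; the hook-based approach mirrors what is discussed in the comments above, and everything in it is an illustration rather than the TRL or a future Trainer API:
```python
import torch


def attach_neftune(model, neftune_noise_alpha: float = 5.0):
    """Add uniform noise of magnitude alpha / sqrt(seq_len * hidden_dim) to the input
    embeddings during training only (sketch, not the official implementation)."""
    embeddings = model.get_input_embeddings()

    def hook(module, inputs, output):
        if module.training:
            dims = output.size(1) * output.size(2)  # seq_len * hidden_dim
            magnitude = neftune_noise_alpha / dims ** 0.5
            output = output + torch.zeros_like(output).uniform_(-magnitude, magnitude)
        return output

    return embeddings.register_forward_hook(hook)
```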
### Your contribution
I can try to port code over from TRL. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26899/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26899/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26898 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26898/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26898/comments | https://api.github.com/repos/huggingface/transformers/issues/26898/events | https://github.com/huggingface/transformers/issues/26898 | 1,949,430,039 | I_kwDOCUB6oc50MfEX | 26,898 | can't forward 4bit nllb-moe-54b (RuntimeError: result type Float can't be cast to the desired output type Byte) | {
"login": "CAH9487",
"id": 61491454,
"node_id": "MDQ6VXNlcjYxNDkxNDU0",
"avatar_url": "https://avatars.githubusercontent.com/u/61491454?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CAH9487",
"html_url": "https://github.com/CAH9487",
"followers_url": "https://api.github.com/users/CAH9487/followers",
"following_url": "https://api.github.com/users/CAH9487/following{/other_user}",
"gists_url": "https://api.github.com/users/CAH9487/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CAH9487/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CAH9487/subscriptions",
"organizations_url": "https://api.github.com/users/CAH9487/orgs",
"repos_url": "https://api.github.com/users/CAH9487/repos",
"events_url": "https://api.github.com/users/CAH9487/events{/privacy}",
"received_events_url": "https://api.github.com/users/CAH9487/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey! I think I would set the `moe_token_dropout` to `0` as a quick fix. Otherwise not sure why but the dtype is wrong cc @younesbelkada if you know of a quick fix on the modelling code?"
] | 1,697 | 1,698 | 1,698 | NONE | null | ### System Info
GPU: NVIDIA RTX A6000 (VRAM 48G)
transformers version: 4.34.0
Platform: Linux 5.15.0-69-generic
Python version: 3.8.10
Huggingface_hub version: 0.18.0
Safetensors version: 0.4.0
Accelerate version: 0.23.0
PyTorch version: 2.1.0+cu118
bitsandbytes version: 0.41.1
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
lang_map = {
"ja": "jpn_Jpan",
"zh": "zho_Hans",
}
model_path = 'facebook/nllb-moe-54b'
tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False)
tokenizer.src_lang = lang_map["ja"]
tokenizer.tgt_lang = lang_map["zh"]
model = AutoModelForSeq2SeqLM.from_pretrained(
model_path,
load_in_4bit=True,
torch_dtype=torch.float16,
device_map="auto",
)
forced_bos_token_id = tokenizer.lang_code_to_id[lang_map["zh"]]
model.config.forced_bos_token_id = forced_bos_token_id
generation_config = dict(
repetition_penalty=1.2,
do_sample=False,
num_beams=5,
num_return_sequences=1,
max_new_tokens=512,
pad_token_id=tokenizer.eos_token_id,
)
input_text = '米財務省は12日、連邦政府債務上限の到達後も支払い履行など資金をやりくりしてきた特別措置について、今月10日時点であと880億ドル(約11兆9400億円)しか残されていないことを明らかにした。'
encodings = tokenizer(input_text, truncation=True, max_length=512, return_tensors="pt").to('cuda')
with torch.no_grad():
outputs = model.generate(input_ids=encodings["input_ids"], **generation_config)
preds = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(preds)
```
error message:
```bash
Traceback (most recent call last):
File "t.py", line 39, in <module>
outputs = self._model.generate(input_ids=encodings["input_ids"], **generation_config)
File "/usr/local/lib/python3.8/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/transformers/generation/utils.py", line 1496, in generate
model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation(
File "/usr/local/lib/python3.8/dist-packages/transformers/generation/utils.py", line 661, in _prepare_encoder_decoder_kwargs_for_generation
model_kwargs["encoder_outputs"]: ModelOutput = encoder(**encoder_kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/accelerate/hooks.py", line 165, in new_forward
output = old_forward(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/transformers/models/nllb_moe/modeling_nllb_moe.py", line 1170, in forward
layer_outputs = encoder_layer(
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/accelerate/hooks.py", line 165, in new_forward
output = old_forward(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/transformers/models/nllb_moe/modeling_nllb_moe.py", line 702, in forward
hidden_states, router_states = self.ffn(hidden_states, attention_mask)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/accelerate/hooks.py", line 165, in new_forward
output = old_forward(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/transformers/models/nllb_moe/modeling_nllb_moe.py", line 484, in forward
expert_output *= 1 - self.moe_token_dropout
RuntimeError: result type Float can't be cast to the desired output type Byte
```
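For completeness, a hedged sketch of the quick fix suggested in the comments above (disabling the MoE token dropout through the config so the in-place `expert_output *= 1 - moe_token_dropout` scaling is skipped); this is untested here, and the kwargs simply mirror the reproduction snippet:
```python
import torch
from transformers import AutoConfig, AutoModelForSeq2SeqLM

model_path = "facebook/nllb-moe-54b"
config = AutoConfig.from_pretrained(model_path)
config.moe_token_dropout = 0.0  # quick fix from the comments; avoids the dtype-cast error path

model = AutoModelForSeq2SeqLM.from_pretrained(
    model_path,
    config=config,
    load_in_4bit=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
```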
### Expected behavior
translated text. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26898/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26898/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26897 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26897/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26897/comments | https://api.github.com/repos/huggingface/transformers/issues/26897/events | https://github.com/huggingface/transformers/issues/26897 | 1,949,367,926 | I_kwDOCUB6oc50MP52 | 26,897 | ValueError: Couldn't instantiate the backend tokenizer from one of | {
"login": "brifin",
"id": 89294527,
"node_id": "MDQ6VXNlcjg5Mjk0NTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/89294527?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/brifin",
"html_url": "https://github.com/brifin",
"followers_url": "https://api.github.com/users/brifin/followers",
"following_url": "https://api.github.com/users/brifin/following{/other_user}",
"gists_url": "https://api.github.com/users/brifin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/brifin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/brifin/subscriptions",
"organizations_url": "https://api.github.com/users/brifin/orgs",
"repos_url": "https://api.github.com/users/brifin/repos",
"events_url": "https://api.github.com/users/brifin/events{/privacy}",
"received_events_url": "https://api.github.com/users/brifin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey! When you say `it does not work for me` what do you mean ? ",
"> Hey! When you say `it does not work for me` what do you mean ?\r\n\r\nwhich means the result remained the same😢",
"I solved it. There was some network problem when downloading the model files, and it made the model broken. I redownloaded it then succeeded. 😂\r\nBut the error and advice reported actually don't match the situation I met, hope some beneficial changes to be made."
] | 1,697 | 1,697 | 1,697 | NONE | null | ### System Info
System: Linux x86_64 GNU/Linux
Python=3.9.18
sentence-transformers 2.2.2
sentencepiece 0.1.99
tokenizers 0.14.1
torch 2.0.0
torchvision 0.15.1
transformers 4.34.0
"tokenizer_class": "GPTNeoXTokenizer"
"_name_or_path": "EleutherAI/pythia-1.4b-deduped",
"architectures": [
"GPTNeoXForCausalLM"
],
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
**Relevant code in example_submission.py:**
```python
args = parse_args()
subtrack = args.subtrack
phase = args.phase
baseline = args.baseline
mode = args.mode
results_path = args.results_path
device = args.device
verbose = args.verbose
# ========== load the tokenizer and the model ========== #
tokenizer_path = model_path = f"./data/{phase}/{subtrack}/model"
tokenizer = AutoTokenizer.from_pretrained(tokenizer_path, padding_side='left')
tokenizer.add_special_tokens({'pad_token': '<|endoftext|>'})
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.float16).to(device).eval()
```
**Execute:**
python3 example_submission.py --subtrack base --phase dev --baseline PEZ --mode val
**Result:**
Traceback (most recent call last):
File "/xxx/example_submission.py", line 205, in <module>
main()
File "/xxx/example_submission.py", line 116, in main
tokenizer = AutoTokenizer.from_pretrained(tokenizer_path, padding_side='left')
File "/xxx/anaconda3/envs/trogan/lib/python3.9/site-packages/transformers/models/auto/tokenization_auto.py", line 751, in from_pretrained
return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
File "/xxx/anaconda3/envs/trogan/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 2045, in from_pretrained
return cls._from_pretrained(
File "/xxx/anaconda3/envs/trogan/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 2256, in _from_pretrained
tokenizer = cls(*init_inputs, **init_kwargs)
File "/xxx/anaconda3/envs/trogan/lib/python3.9/site-packages/transformers/models/gpt_neox/tokenization_gpt_neox_fast.py", line 109, in __init__
super().__init__(
File "/xxx/anaconda3/envs/trogan/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py", line 120, in __init__
raise ValueError(
ValueError: Couldn't instantiate the backend tokenizer from one of:
(1) a `tokenizers` library serialization file,
(2) a slow tokenizer instance to convert or
(3) an equivalent slow tokenizer class to instantiate and convert.
You need to have sentencepiece installed to convert a slow tokenizer to a fast one.
**Failed struggle:**
I have tried ‘pip install sentencepiece’, but it doesn't work for me.
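One thing worth checking: `GPTNeoXTokenizerFast` is backed by a `tokenizer.json` file rather than a sentencepiece model, so installing sentencepiece is unlikely to be the real fix. A hedged sanity check for a broken or partial download (the file names are the usual ones and are an assumption here):
```python
import os

tokenizer_path = "./data/dev/base/model"  # same path as in the script above
for fname in ("tokenizer.json", "tokenizer_config.json", "special_tokens_map.json"):
    path = os.path.join(tokenizer_path, fname)
    size = os.path.getsize(path) if os.path.exists(path) else 0
    print(f"{fname}: exists={os.path.exists(path)}, size={size} bytes")
```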
### Expected behavior
Run successfully | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26897/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26897/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26896 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26896/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26896/comments | https://api.github.com/repos/huggingface/transformers/issues/26896/events | https://github.com/huggingface/transformers/issues/26896 | 1,949,200,764 | I_kwDOCUB6oc50LnF8 | 26,896 | Inconsistent dropout implementation across models | {
"login": "rvashurin",
"id": 6410329,
"node_id": "MDQ6VXNlcjY0MTAzMjk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6410329?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rvashurin",
"html_url": "https://github.com/rvashurin",
"followers_url": "https://api.github.com/users/rvashurin/followers",
"following_url": "https://api.github.com/users/rvashurin/following{/other_user}",
"gists_url": "https://api.github.com/users/rvashurin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rvashurin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rvashurin/subscriptions",
"organizations_url": "https://api.github.com/users/rvashurin/orgs",
"repos_url": "https://api.github.com/users/rvashurin/repos",
"events_url": "https://api.github.com/users/rvashurin/events{/privacy}",
"received_events_url": "https://api.github.com/users/rvashurin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I don't really see a reason why not to standardize the dropout to improve consistency ad readability, but that would probably require us to change quite a lot of models (380~ lines use `nn.functional.dropout` and 758 use `self.dropout = nn.Dropout` ) \r\nWondering rather if there are any specific known reason to use one rather than the other in the first place. \r\n\r\nFYI @LysandreJik ",
"Not an issue for me. We did something similar for activation functions ~3 years ago. I'd just make sure it's backwards compatible and that it's the most versatile (in this case, the one that can be patched)",
"I will try to assess what the best approach is and conjure up some draft PR soon-ish.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,697 | 1,700 | 1,700 | NONE | null | ### Feature request
Dropout implementations vary across models - for example, T5 implements dropout as a layer https://github.com/huggingface/transformers/blob/e893b1efbbce80b7eaaf24f9e0134450820782b5/src/transformers/models/t5/modeling_t5.py#L285 which is visible in the model printout, while Bart uses the functional form of dropout https://github.com/huggingface/transformers/blob/e893b1efbbce80b7eaaf24f9e0134450820782b5/src/transformers/models/bart/modeling_bart.py#L434 which is essentially hidden from the user. It would be useful if either all models had the same (preferably layer-like) implementation of dropout, or models had a flag indicating how dropout is implemented.
### Motivation
I am currently working with some Monte-Carlo sampling from the parameter distributions of models, which requires a re-implementation of dropout that essentially repeats masks across the batch and beam dimensions instead of sampling a fully random mask over all tensor dimensions. This is a trivial task if the model has a separate layer for the dropout operation, but not if it uses the functional form. The latter case requires one to patch `torch.nn.functional.dropout` at runtime, which is as dirty a hack as they come. I understand that this is a very niche problem, yet the change would be beneficial from an implementation-consistency perspective if nothing else. (A sketch of what this looks like in practice follows below.)
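Here is a hedged sketch of the kind of drop-in replacement that is easy when dropout is a submodule, and that needs runtime patching when the model calls the functional form; the shapes, names and the repeat-across-beams logic are assumptions about my use case:
```python
import torch
import torch.nn as nn


class SharedMaskDropout(nn.Module):
    """Sketch: samples one dropout mask per example and repeats it across the beam
    dimension (illustrative only, not a proposed transformers API)."""

    def __init__(self, p: float, num_beams: int):
        super().__init__()
        self.p, self.num_beams = p, num_beams

    def forward(self, x):
        if not self.training or self.p == 0.0:
            return x
        batch = x.size(0) // self.num_beams
        keep = torch.bernoulli(torch.full((batch, *x.shape[1:]), 1 - self.p, device=x.device))
        keep = keep.repeat_interleave(self.num_beams, dim=0).to(x.dtype) / (1 - self.p)
        return x * keep


# Swapping this in is a one-liner when dropout is a submodule (as in T5), e.g.:
# model.encoder.block[0].layer[0].dropout = SharedMaskDropout(0.1, num_beams=4)
# whereas models that call nn.functional.dropout directly (as in Bart) can only be
# changed by monkey-patching torch.nn.functional.dropout at runtime.
```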
### Your contribution
I could work on a solution if necessary and if HF staff signal that this could potentially be merged. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26896/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26896/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26895 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26895/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26895/comments | https://api.github.com/repos/huggingface/transformers/issues/26895/events | https://github.com/huggingface/transformers/issues/26895 | 1,949,169,087 | I_kwDOCUB6oc50LfW_ | 26,895 | some code part in documentation was translated to japanese | {
"login": "rajveer43",
"id": 64583161,
"node_id": "MDQ6VXNlcjY0NTgzMTYx",
"avatar_url": "https://avatars.githubusercontent.com/u/64583161?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rajveer43",
"html_url": "https://github.com/rajveer43",
"followers_url": "https://api.github.com/users/rajveer43/followers",
"following_url": "https://api.github.com/users/rajveer43/following{/other_user}",
"gists_url": "https://api.github.com/users/rajveer43/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rajveer43/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rajveer43/subscriptions",
"organizations_url": "https://api.github.com/users/rajveer43/orgs",
"repos_url": "https://api.github.com/users/rajveer43/repos",
"events_url": "https://api.github.com/users/rajveer43/events{/privacy}",
"received_events_url": "https://api.github.com/users/rajveer43/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @stevhliu "
] | 1,697 | 1,697 | 1,697 | CONTRIBUTOR | null | ### System Info
https://moon-ci-docs.huggingface.co/docs/transformers/pr_1/ja/preprocessing#pad
Some code examples in the documentation were mistakenly translated to Japanese.
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
-
### Expected behavior
```python
batch_sentences = [
"でもセカンドブレックファーストはどうなるの?",
"セカンドブレックファーストについては知らないと思う、ピップ。",
"イレブンジーズはどうなの?",
]
encoded_input = tokenizer(batch_sentences, padding=True)
print(encoded_input)
```
should be
```python
batch_sentences = [
"でもセカンドブレックファーストはどうなるの?", #english text
"セカンドブレックファーストについては知らないと思う、ピップ。", #english text
"イレブンジーズはどうなの?", #english text
]
encoded_input = tokenizer(batch_sentences, padding=True)
print(encoded_input)
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26895/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26895/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26894 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26894/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26894/comments | https://api.github.com/repos/huggingface/transformers/issues/26894/events | https://github.com/huggingface/transformers/pull/26894 | 1,948,969,999 | PR_kwDOCUB6oc5dGaRE | 26,894 | Translating `en/main_classes` folder docs to Japanese 🇯🇵 | {
"login": "rajveer43",
"id": 64583161,
"node_id": "MDQ6VXNlcjY0NTgzMTYx",
"avatar_url": "https://avatars.githubusercontent.com/u/64583161?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rajveer43",
"html_url": "https://github.com/rajveer43",
"followers_url": "https://api.github.com/users/rajveer43/followers",
"following_url": "https://api.github.com/users/rajveer43/following{/other_user}",
"gists_url": "https://api.github.com/users/rajveer43/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rajveer43/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rajveer43/subscriptions",
"organizations_url": "https://api.github.com/users/rajveer43/orgs",
"repos_url": "https://api.github.com/users/rajveer43/repos",
"events_url": "https://api.github.com/users/rajveer43/events{/privacy}",
"received_events_url": "https://api.github.com/users/rajveer43/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26894). All of your documentation changes will be reflected on that endpoint.",
"see @stevhliu all checks passed",
"I restructured toctree.yml again by following english tochtree have alook ",
"@stevhliu Other PR is ready for `en/tasks` files and currently working on `model_doc` as well. as `model_doc` has too many files in it, what should be the best way to approach `model_doc` files?",
"I would approach it by breaking the contents of `model_doc` into smaller chunks which makes it easier to translate and review. Maybe translate ~10 models in each PR?",
"> I would approach it by breaking the contents of `model_doc` into smaller chunks which makes it easier to translate and review. Maybe translate ~10 models in each PR?\r\n\r\n@stevhliu Sounds good! I will work on 10-12 files in each PR review them and add.."
] | 1,697 | 1,698 | 1,698 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #26893
@stevhliu
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26894/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26894/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26894",
"html_url": "https://github.com/huggingface/transformers/pull/26894",
"diff_url": "https://github.com/huggingface/transformers/pull/26894.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26894.patch",
"merged_at": 1698683955000
} |
https://api.github.com/repos/huggingface/transformers/issues/26893 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26893/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26893/comments | https://api.github.com/repos/huggingface/transformers/issues/26893/events | https://github.com/huggingface/transformers/issues/26893 | 1,948,967,870 | I_kwDOCUB6oc50KuO- | 26,893 | [i18n-jpTranslating `en/main_classes` folder docs to Japanese 🇯🇵 | {
"login": "rajveer43",
"id": 64583161,
"node_id": "MDQ6VXNlcjY0NTgzMTYx",
"avatar_url": "https://avatars.githubusercontent.com/u/64583161?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rajveer43",
"html_url": "https://github.com/rajveer43",
"followers_url": "https://api.github.com/users/rajveer43/followers",
"following_url": "https://api.github.com/users/rajveer43/following{/other_user}",
"gists_url": "https://api.github.com/users/rajveer43/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rajveer43/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rajveer43/subscriptions",
"organizations_url": "https://api.github.com/users/rajveer43/orgs",
"repos_url": "https://api.github.com/users/rajveer43/repos",
"events_url": "https://api.github.com/users/rajveer43/events{/privacy}",
"received_events_url": "https://api.github.com/users/rajveer43/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | closed | false | null | [] | [] | 1,697 | 1,698 | 1,698 | CONTRIBUTOR | null | <!--
Note: Please search to see if an issue already exists for the language you are trying to translate.
-->
Hi!
Let's bring the documentation to all the Japanese-speaking community!
Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list.
Some notes:
* Please translate using an informal tone (imagine you are talking with a friend about transformers 🤗).
* Please translate in a gender-neutral way.
* Add your translations to the folder called `ja` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source).
* Register your translation in `ja/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml).
* Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @stevhliu and @MKhalusova for review.
* 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/).
<!--
Keep on adding more as you go 🔥
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26893/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26893/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26892 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26892/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26892/comments | https://api.github.com/repos/huggingface/transformers/issues/26892/events | https://github.com/huggingface/transformers/pull/26892 | 1,948,951,556 | PR_kwDOCUB6oc5dGWRu | 26,892 | add attention_mask and position_ids in assisted model | {
"login": "jiqing-feng",
"id": 107918818,
"node_id": "U_kgDOBm614g",
"avatar_url": "https://avatars.githubusercontent.com/u/107918818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiqing-feng",
"html_url": "https://github.com/jiqing-feng",
"followers_url": "https://api.github.com/users/jiqing-feng/followers",
"following_url": "https://api.github.com/users/jiqing-feng/following{/other_user}",
"gists_url": "https://api.github.com/users/jiqing-feng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiqing-feng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiqing-feng/subscriptions",
"organizations_url": "https://api.github.com/users/jiqing-feng/orgs",
"repos_url": "https://api.github.com/users/jiqing-feng/repos",
"events_url": "https://api.github.com/users/jiqing-feng/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiqing-feng/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @jiqing-feng 👋 \r\n\r\nI agree in principle with the changes that you are proposing, but you probably need to do a few changes to make our CI go green :)",
"Hi @gante . I use `assistant_model.prepare_inputs_for_generation` to get the inputs of the assistant model. The CI all goes green and I also tested on my several examples to make sure the outputs is correct. Would you please help me review it? Thx!",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26892). All of your documentation changes will be reflected on that endpoint.",
"Hi @gante . Would you please review it again? Thx!",
"@jiqing-feng Ah, actually I have two requests before asking for the green light of a core maintainer:\r\n1. There is a merge conflict, due to recent changes for a new model. If you're not able to sort it out, let me know :)\r\n2. Let's confirm that we haven't lost throughput with the changes (e.g. the assertion might be producing slowdowns). To test it, feel free to clone [this folder](https://github.com/gante/huggingface-demos/tree/main/experiments/faster_generation), move there, and then run `python benchmark_decoder_open.py facebook/opt-6.7b --aux-model facebook/opt-125m --dtype fp16 --num-samples 20` on `main` and on your branch. The execution times should be nearly identical! 🤗 If you have your own test script, feel free to use it instead -- just let us know of the numbers :)",
"Hi @gante . I tested it on my CPU device since the GPU is unavailable to me. The new branch is a little faster (around 3%) than the main branch. The test script is as follows, feel free to test it on both GPU and CPU.\r\n```python\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\r\nimport torch\r\nimport time\r\n\r\nprompt = \"Speculative decoding is\"\r\ncheckpoint = \"bigscience/bloom-7b1\"\r\nassistant_checkpoint = \"bigscience/bloom-560m\"\r\ndevice = \"cpu\"\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(checkpoint)\r\ninputs = tokenizer(prompt, return_tensors=\"pt\").to(device)\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(checkpoint, torch_dtype=torch.bfloat16, low_cpu_mem_usage=True).to(device)\r\n\r\ngeneration_kwargs = {\"do_sample\": False, \"max_new_tokens\": 64, \"temperature\": 1.0, \"top_p\": 1.0, \"num_beams\": 1}\r\n\r\nassistant_model = AutoModelForCausalLM.from_pretrained(assistant_checkpoint, torch_dtype=torch.bfloat16, low_cpu_mem_usage=True).to(device)\r\n\r\nfor i in range(5):\r\n start = time.time()\r\n outputs = model.generate(**inputs, assistant_model=assistant_model, **generation_kwargs)\r\n end = time.time()\r\n new_tokens = outputs.shape[-1] - inputs[\"input_ids\"].shape[-1]\r\n print(f\"Assistant decoding latency per token is {(end-start)/new_tokens * 1000} ms\")\r\nprint(tokenizer.batch_decode(outputs, skip_special_tokens=True))\r\n```",
"> Hi @gante . I tested it on my CPU device since the GPU is unavailable to me. The new branch is a little faster (around 3%) than the main branch. The test script is as follows, feel free to test it on both GPU and CPU.\r\n> \r\n> ```python\r\n> from transformers import AutoModelForCausalLM, AutoTokenizer\r\n> import torch\r\n> import time\r\n> \r\n> prompt = \"Speculative decoding is\"\r\n> checkpoint = \"bigscience/bloom-7b1\"\r\n> assistant_checkpoint = \"bigscience/bloom-560m\"\r\n> device = \"cpu\"\r\n> \r\n> tokenizer = AutoTokenizer.from_pretrained(checkpoint)\r\n> inputs = tokenizer(prompt, return_tensors=\"pt\").to(device)\r\n> \r\n> model = AutoModelForCausalLM.from_pretrained(checkpoint, torch_dtype=torch.bfloat16, low_cpu_mem_usage=True).to(device)\r\n> \r\n> generation_kwargs = {\"do_sample\": False, \"max_new_tokens\": 64, \"temperature\": 1.0, \"top_p\": 1.0, \"num_beams\": 1}\r\n> \r\n> assistant_model = AutoModelForCausalLM.from_pretrained(assistant_checkpoint, torch_dtype=torch.bfloat16, low_cpu_mem_usage=True).to(device)\r\n> \r\n> for i in range(5):\r\n> start = time.time()\r\n> outputs = model.generate(**inputs, assistant_model=assistant_model, **generation_kwargs)\r\n> end = time.time()\r\n> new_tokens = outputs.shape[-1] - inputs[\"input_ids\"].shape[-1]\r\n> print(f\"Assistant decoding latency per token is {(end-start)/new_tokens * 1000} ms\")\r\n> print(tokenizer.batch_decode(outputs, skip_special_tokens=True))\r\n> ```\r\n\r\nHi @gante . Could you have a look at this? Thx!",
"Hi @jiqing-feng \r\n\r\nRunning on my end (`python benchmark_decoder_open.py facebook/opt-6.7b --aux-model facebook/opt-125m --dtype fp16` from [this folder](https://github.com/gante/huggingface-demos/tree/main/experiments/faster_generation)), I got\r\n```\r\nTraceback (most recent call last):\r\n File \"/usr/lib/python3.10/multiprocessing/process.py\", line 314, in _bootstrap\r\n self.run()\r\n File \"/usr/lib/python3.10/multiprocessing/process.py\", line 108, in run\r\n self._target(*self._args, **self._kwargs)\r\n File \"/home/joao/huggingface-demos/experiments/faster_generation/utils.py\", line 99, in run_new_model\r\n new_outputs = run_prediction_loop(model, tokenizer, args.num_samples, args.temperature, aux_model)\r\n File \"/home/joao/huggingface-demos/experiments/faster_generation/benchmark_decoder_open.py\", line 35, in run_prediction_loop\r\n gen_out = model.generate(\r\n File \"/home/joao/venvs/hf/lib/python3.10/site-packages/torch/utils/_contextlib.py\", line 115, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"/home/joao/transformers/src/transformers/generation/utils.py\", line 1736, in generate\r\n return self.assisted_decoding(\r\n File \"/home/joao/transformers/src/transformers/generation/utils.py\", line 4594, in assisted_decoding\r\n assistant_attention_mask = torch.cat(\r\nRuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument tensors in method wrapper_CUDA_cat)\r\n```\r\n\r\ni.e. the newly generated masks that are appended must be created in the same device as the existing mask :)",
"Hi @gante . Would you please try it again? It should be fixed and I also tested it on A100, the results and performance are exactly the same. BTW, the failed test seems not related to my changes.",
"@jiqing-feng perfect, all works well on my end.\r\n\r\nTwo related notes:\r\n1. The CI is indeed red for external reasons, waiting for [this PR](https://github.com/huggingface/transformers/pull/27353) to get merged\r\n2. The diff shows that `assistant_accepts_encoder_outputs` (a recent addition in assisted generation for distil-whisper, to support assistants with shared encoders) is removed, which means your changes are not built on top of the latest version. \r\n\r\n👉 you will need to rebase your changes to fix both issues, but only after the PR linked above gets merged. You may get minor rebase issues due to 2., but they should be trivial to fix\r\n\r\nAfter that is done, I'll tag a core maintainer for a final quick check :)",
"> 2\\. assistant_accepts_encoder_outputs\r\n\r\nHi @gante . I removed `assistant_accepts_encoder_outputs` because it is useless in my new changes, all inputs should be generated by `assistant_model.prepare_inputs_for_generation`.",
"> Hi @gante . I removed assistant_accepts_encoder_outputs because it is useless in my new changes, all inputs should be generated by assistant_model.prepare_inputs_for_generation.\r\n\r\n🤦 my apologies, you're absolutely right.\r\n\r\nIn that case, rebasing to get the CI green is all you need to do. Tagging a core maintainer for a quick final check :)",
"> Thanks for adding this!\r\n> \r\n> Beautiful PR :) Nice to see a feature addition which results in improving the code's simplicity.\r\n> \r\n> Only request before merging is a test to make sure the inputs can be correctly passed in\r\n\r\nHi @amyeroberts . [This script](https://github.com/huggingface/transformers/pull/26892#issuecomment-1797132296) could test the outputs before and after my changes. I guess you mean adding a test in the tests file to run the CI, if so, would you please tell me which file should be modified to add this test? Thx",
"> @jiqing-feng perfect, all works well on my end.\r\n> \r\n> Two related notes:\r\n> \r\n> 1. The CI is indeed red for external reasons, waiting for [this PR](https://github.com/huggingface/transformers/pull/27353) to get merged\r\n> 2. The diff shows that `assistant_accepts_encoder_outputs` (a recent addition in assisted generation for distil-whisper, to support assistants with shared encoders) is removed, which means your changes are not built on top of the latest version.\r\n> \r\n> 👉 you will need to rebase your changes to fix both issues, but only after the PR linked above gets merged. You may get minor rebase issues due to 2., but they should be trivial to fix\r\n> \r\n> After that is done, I'll tag a core maintainer for a final quick check :)\r\n\r\nHi @gante . I see [this PR](https://github.com/huggingface/transformers/pull/27353) you mentioned has been merged and my PR is already up to date, but some of the CI are still red.",
"@jiqing-feng There were some unexpected failures because of new package releases - thankfully not related to this PR! They should now be resolved on main - rebasing should fix them here. \r\n\r\n> Hi @amyeroberts . https://github.com/huggingface/transformers/pull/26892#issuecomment-1797132296 could test the outputs before and after my changes. I guess you mean adding a test in the tests file to run the CI, if so, would you please tell me which file should be modified to add this test? Thx\r\n\r\nYes, I meant to add a test to the CI runs. It looks like it should be tested in [tests/generation/test_utils.py](https://github.com/huggingface/transformers/blob/c5d7754b1140aaad95ad36c5d9ddfda5abda19f6/tests/generation/test_utils.py#L3024) - but I'll let @gante confirm",
"(woops, wrong button)",
"@amyeroberts not sure if we can test this feature reliably: there are no output differences, since assisted generation always outputs what the main model dictates and this PR only modifies the assistant model's inputs to be more aligned with the main model's.\r\n\r\nWhat we should see on average is a higher speedup with masked inputs, as the assistant model will receive the same inputs and thus has a higher chance of matching the main model, but that is far guaranteed for all calls. A speed test would be very flaky 🤔 ",
"@gante I understand - I wasn't clear enough before. Really all I was looking for it to make sure that this can be safely used for different assistant models i.e. can I pass in a decoder-only model? How about encoder-decoder. So not speed or values, just API",
"@amyeroberts we do have Mixin tests ([e.g.](https://github.com/huggingface/transformers/blob/c5037b459e117b9286c611092f38663f6cb763b0/tests/generation/test_utils.py#L1579)), so any issue regarding API should have been caught there :)",
"@gante Sweet - in that case it's all good 👍 Re the failing tests - there's some PRs due to be merge which should (hopefully, this time) resolve the issues we've been having ",
"Hi, @gante @amyeroberts . All CI are green. I think it is time to merge : )",
"@jiqing-feng thank you for iterating with us and making `transformers` better 💛 ",
"@amyeroberts @jiqing-feng There are currently some unexpected CI failures caused by `test_assisted_decoding_sample` (see #27351 and #27450 ). Are they related to this recently merged PR? I can see from the testing log that this PR did not run those tests involving `test_assisted_decoding_sample` during CI checking. Thanks!",
"> @amyeroberts @jiqing-feng There are currently some unexpected CI failures caused by `test_assisted_decoding_sample` (see #27351 and #27450 ). Are they related to this recently merged PR? I can see from the testing log that this PR did not run those tests involving `test_assisted_decoding_sample` during CI checking. Thanks!\r\n\r\nHi, @VsonicV . Sorry for the failed CI. It is weird that I can successfully run pytest in my local repo (which has updated to origin/main). I see that your CI failed at `blenderbot` and `pegasus`, but I can pass the local test by running pytest locally. Would you please update your repo and rerun the CI? Thx! \r\nIf it still doesn't solve your problem, just revert my changes to see if this is my PR's error.",
"Hi, @jiqing-feng, thanks for the quick check. This happened exactly the same for me: I can run `pytest tests/models/blenderbot/test_modeling_blenderbot.py`, etc., successfully in my local up-to-date repo, but it failed in the CI checks. Moreoever, these CI failures not only happen at `blenderbot` and `pegasus`, it also happened for 'umt5' (in one of my previous CI tests), and `switch_transformers` and `t5` in another recent PR (see #27450 ). I asked here because this is the only recent PR that seems related to `test_assisted_decoding_sample`, but maybe the problem is somewhere else. Thanks for the help anyways!",
"> Hi, @jiqing-feng, thanks for the quick check. This happened exactly the same for me: I can run `pytest tests/models/blenderbot/test_modeling_blenderbot.py`, etc., successfully in my local up-to-date repo, but it failed in the CI checks. Moreoever, these CI failures not only happen at `blenderbot` and `pegasus`, it also happened for 'umt5' (in one of my previous CI tests), and `switch_transformers` and `t5` in another recent PR (see #27450 ). I asked here because this is the only recent PR that seems related to `test_assisted_decoding_sample`, but maybe the problem is somewhere else. Thanks for the help anyways!\r\n\r\nI submitted a new [PR](https://github.com/huggingface/transformers/pull/27503), and all CI passed. Would you apply my PR and see if the CI is ok?\r\n\r\nFurthermore, it is worth a try that update your repo by merging the origin/main and pushing these updates to rerun the CI.",
"@jiqing-feng Hi, thanks for this prompt fix! I will rebase my PR and re-do the CI checks after your new PR is merged. Fingers crossed!",
"This PR broke speculative decoding for Whisper, can we maybe revert it for now? ",
"Issue reported here: https://huggingface.co/openai/whisper-large-v3/discussions/20"
] | 1,697 | 1,700 | 1,699 | CONTRIBUTOR | null | Hi @gante
Do you think that we should also add `assistant_attention_mask` and `assistant_position_ids` in `assisted_decoding`? I see that the original model has `attention_mask` and `position_ids` (in most models) in its model inputs, but the assistant model has none of these inputs.
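A rough sketch of the direction I have in mind (variable names are placeholders, not the actual code in `assisted_decoding`) is to build the assistant inputs with the assistant's own `prepare_inputs_for_generation`, so it receives the same attention mask and position ids as the main model:

```python
# Hypothetical names for the candidate sequence and its mask inside the assisted-decoding loop.
assistant_inputs = assistant_model.prepare_inputs_for_generation(
    candidate_input_ids,
    attention_mask=candidate_attention_mask,
    past_key_values=assistant_past_key_values,
)
assistant_outputs = assistant_model(**assistant_inputs)
```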
If you think it is okay to align the inputs of the original model and the assistant model, maybe we can find a more elegant way to integrate it. Thx! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26892/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26892/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26892",
"html_url": "https://github.com/huggingface/transformers/pull/26892",
"diff_url": "https://github.com/huggingface/transformers/pull/26892.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26892.patch",
"merged_at": 1699614315000
} |
https://api.github.com/repos/huggingface/transformers/issues/26891 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26891/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26891/comments | https://api.github.com/repos/huggingface/transformers/issues/26891/events | https://github.com/huggingface/transformers/issues/26891 | 1,948,874,546 | I_kwDOCUB6oc50KXcy | 26,891 | apply_chat_template result of Mistral is not restrictly align to the template on its website | {
"login": "annahung31",
"id": 39179888,
"node_id": "MDQ6VXNlcjM5MTc5ODg4",
"avatar_url": "https://avatars.githubusercontent.com/u/39179888?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/annahung31",
"html_url": "https://github.com/annahung31",
"followers_url": "https://api.github.com/users/annahung31/followers",
"following_url": "https://api.github.com/users/annahung31/following{/other_user}",
"gists_url": "https://api.github.com/users/annahung31/gists{/gist_id}",
"starred_url": "https://api.github.com/users/annahung31/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/annahung31/subscriptions",
"organizations_url": "https://api.github.com/users/annahung31/orgs",
"repos_url": "https://api.github.com/users/annahung31/repos",
"events_url": "https://api.github.com/users/annahung31/events{/privacy}",
"received_events_url": "https://api.github.com/users/annahung31/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @Rocketknight1 if you can have a look! ",
"Thanks for the warning @annahung31! I wrote that chat template based on the layout in the model card, but I didn't realize there was a slightly different template on their website. I pinged some people from Mistral in the repo issue you opened, so hopefully we can get them to confirm the actual template that was used.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,697 | 1,700 | 1,700 | NONE | null | ### System Info
- `transformers` version: 4.22.1
- Platform: Linux-5.4.0-150-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.6
- Huggingface_hub version: 0.15.1
- PyTorch version (GPU?): 1.12.1+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: N
- Using distributed or parallel set-up in script?: N
(I don't know why `transformers-cli env` reports the wrong transformers version.
If I run
```
import transformers
transformers.__version__
```
I'll get '4.34.0'.)
### Who can help?
@ArthurZucker and @younesbelkada
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoTokenizer

# Note: the issue does not name the exact checkpoint; a Mistral instruct checkpoint is
# assumed here so the snippet runs end-to-end.
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
chat = [
{"role": "user", "content": "USER_INSTRUCTION_1"},
{"role": "assistant", "content": "RESPONSE_1"},
{"role": "user", "content": "USER_INSTRUCTION_2"},
{"role": "assistant", "content": "RESPONSE_2"},
]
res_apply_chat_template = tokenizer.apply_chat_template(chat, tokenize=False)
res_mistral_website = '<s>[INST] USER_INSTRUCTION_1 [/INST] RESPONSE_1</s>[INST] USER_INSTRUCTION_2 [/INST] RESPONSE_2</s>'
print(res_apply_chat_template)
print(res_mistral_website)
```
The result is:
```
'<s>[INST] USER_INSTRUCTION_1 [/INST]RESPONSE_1</s> [INST] USER_INSTRUCTION_2 [/INST]RESPONSE_2</s> '
'<s>[INST] USER_INSTRUCTION_1 [/INST] RESPONSE_1</s>[INST] USER_INSTRUCTION_2 [/INST] RESPONSE_2</s>'
```
There are two main differences:
1. According to the Mistral 7B website: https://docs.mistral.ai/usage/guardrailing/#appendix
There is always a blank after `[INST]` and `[/INST]`, but the result of `apply_chat_template` does not seem to follow this.
2. In `res_apply_chat_template`, there is an additional blank at the end of each turn.
I also encoded the two sentences and decoded them back. The results show that the words are tokenized into different tokens because of the blank after `[/INST]`:
```python
# ids_apply_chat_template / ids_mistral_website are assumed to be the token ids of the two
# strings above, e.g. obtained with tokenizer(...)["input_ids"].
ids_apply_chat_template = tokenizer(res_apply_chat_template, add_special_tokens=False)["input_ids"]
ids_mistral_website = tokenizer(res_mistral_website, add_special_tokens=False)["input_ids"]

decoded_apply_chat_template = []
for a in ids_apply_chat_template:
decoded_apply_chat_template.append(tokenizer.decode(a))
decoded_mistral_website = []
for b in ids_mistral_website:
decoded_mistral_website.append(tokenizer.decode(b))
#decoded_apply_chat_template
['<s>', '[', 'INST', ']', 'US', 'ER', '_', 'IN', 'STRU', 'CTION', '_', '1', '[', '/', 'INST', ']', 'RE', 'SP', 'ON', 'SE', '_', '1', '</s>', '', '[', 'INST', ']', 'US', 'ER', '_', 'IN', 'STRU', 'CTION', '_', '2', '[', '/', 'INST', ']', 'RE', 'SP', 'ON', 'SE', '_', '2', '</s>', ' ']
#decoded_mistral_website
['<s>', '[', 'INST', ']', 'US', 'ER', '_', 'IN', 'STRU', 'CTION', '_', '1', '[', '/', 'INST', ']', 'RES', 'P', 'ON', 'SE', '_', '1', '</s>', '[', 'INST', ']', 'US', 'ER', '_', 'IN', 'STRU', 'CTION', '_', '2', '[', '/', 'INST', ']', 'RES', 'P', 'ON', 'SE', '_', '2', '</s>']
```
I guess it's okay to do it either way, but wouldn't it be better to align with how it was done during finetuning?
(I think the problem is the chat_template provided in the model's config.json, so I also posted this issue on their Hub repo.)
### Expected behavior
The behaviors are aligned with each other. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26891/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26891/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26890 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26890/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26890/comments | https://api.github.com/repos/huggingface/transformers/issues/26890/events | https://github.com/huggingface/transformers/pull/26890 | 1,948,563,906 | PR_kwDOCUB6oc5dFA9I | 26,890 | Bump urllib3 from 1.26.17 to 1.26.18 in /examples/research_projects/visual_bert | {
"login": "dependabot[bot]",
"id": 49699333,
"node_id": "MDM6Qm90NDk2OTkzMzM=",
"avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dependabot%5Bbot%5D",
"html_url": "https://github.com/apps/dependabot",
"followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events",
"type": "Bot",
"site_admin": false
} | [
{
"id": 1905493434,
"node_id": "MDU6TGFiZWwxOTA1NDkzNDM0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies",
"name": "dependencies",
"color": "0366d6",
"default": false,
"description": "Pull requests that update a dependency file"
}
] | closed | false | null | [] | [] | 1,697 | 1,697 | 1,697 | CONTRIBUTOR | null | Bumps [urllib3](https://github.com/urllib3/urllib3) from 1.26.17 to 1.26.18.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/urllib3/urllib3/releases">urllib3's releases</a>.</em></p>
<blockquote>
<h2>1.26.18</h2>
<ul>
<li>Made body stripped from HTTP requests changing the request method to GET after HTTP 303 "See Other" redirect responses. (GHSA-g4mx-q9vg-27p4)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/urllib3/urllib3/blob/main/CHANGES.rst">urllib3's changelog</a>.</em></p>
<blockquote>
<h1>1.26.18 (2023-10-17)</h1>
<ul>
<li>Made body stripped from HTTP requests changing the request method to GET after HTTP 303 "See Other" redirect responses.</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/urllib3/urllib3/commit/9c2c2307dd1d6af504e09aac0326d86ee3597a0b"><code>9c2c230</code></a> Release 1.26.18 (<a href="https://redirect.github.com/urllib3/urllib3/issues/3159">#3159</a>)</li>
<li><a href="https://github.com/urllib3/urllib3/commit/b594c5ceaca38e1ac215f916538fb128e3526a36"><code>b594c5c</code></a> Merge pull request from GHSA-g4mx-q9vg-27p4</li>
<li><a href="https://github.com/urllib3/urllib3/commit/944f0eb134485f41bc531be52de12ba5a37bca73"><code>944f0eb</code></a> [1.26] Use vendored six in urllib3.contrib.securetransport</li>
<li>See full diff in <a href="https://github.com/urllib3/urllib3/compare/1.26.17...1.26.18">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26890/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26890/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26890",
"html_url": "https://github.com/huggingface/transformers/pull/26890",
"diff_url": "https://github.com/huggingface/transformers/pull/26890.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26890.patch",
"merged_at": 1697628650000
} |
https://api.github.com/repos/huggingface/transformers/issues/26889 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26889/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26889/comments | https://api.github.com/repos/huggingface/transformers/issues/26889/events | https://github.com/huggingface/transformers/pull/26889 | 1,948,563,452 | PR_kwDOCUB6oc5dFA2q | 26,889 | Bump urllib3 from 1.26.17 to 1.26.18 in /examples/research_projects/decision_transformer | {
"login": "dependabot[bot]",
"id": 49699333,
"node_id": "MDM6Qm90NDk2OTkzMzM=",
"avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dependabot%5Bbot%5D",
"html_url": "https://github.com/apps/dependabot",
"followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events",
"type": "Bot",
"site_admin": false
} | [
{
"id": 1905493434,
"node_id": "MDU6TGFiZWwxOTA1NDkzNDM0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies",
"name": "dependencies",
"color": "0366d6",
"default": false,
"description": "Pull requests that update a dependency file"
}
] | closed | false | null | [] | [] | 1,697 | 1,697 | 1,697 | CONTRIBUTOR | null | Bumps [urllib3](https://github.com/urllib3/urllib3) from 1.26.17 to 1.26.18.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/urllib3/urllib3/releases">urllib3's releases</a>.</em></p>
<blockquote>
<h2>1.26.18</h2>
<ul>
<li>Made body stripped from HTTP requests changing the request method to GET after HTTP 303 "See Other" redirect responses. (GHSA-g4mx-q9vg-27p4)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/urllib3/urllib3/blob/main/CHANGES.rst">urllib3's changelog</a>.</em></p>
<blockquote>
<h1>1.26.18 (2023-10-17)</h1>
<ul>
<li>Made body stripped from HTTP requests changing the request method to GET after HTTP 303 "See Other" redirect responses.</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/urllib3/urllib3/commit/9c2c2307dd1d6af504e09aac0326d86ee3597a0b"><code>9c2c230</code></a> Release 1.26.18 (<a href="https://redirect.github.com/urllib3/urllib3/issues/3159">#3159</a>)</li>
<li><a href="https://github.com/urllib3/urllib3/commit/b594c5ceaca38e1ac215f916538fb128e3526a36"><code>b594c5c</code></a> Merge pull request from GHSA-g4mx-q9vg-27p4</li>
<li><a href="https://github.com/urllib3/urllib3/commit/944f0eb134485f41bc531be52de12ba5a37bca73"><code>944f0eb</code></a> [1.26] Use vendored six in urllib3.contrib.securetransport</li>
<li>See full diff in <a href="https://github.com/urllib3/urllib3/compare/1.26.17...1.26.18">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26889/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26889/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26889",
"html_url": "https://github.com/huggingface/transformers/pull/26889",
"diff_url": "https://github.com/huggingface/transformers/pull/26889.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26889.patch",
"merged_at": 1697628666000
} |
https://api.github.com/repos/huggingface/transformers/issues/26888 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26888/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26888/comments | https://api.github.com/repos/huggingface/transformers/issues/26888/events | https://github.com/huggingface/transformers/pull/26888 | 1,948,563,147 | PR_kwDOCUB6oc5dFAyR | 26,888 | Bump urllib3 from 1.26.17 to 1.26.18 in /examples/research_projects/lxmert | {
"login": "dependabot[bot]",
"id": 49699333,
"node_id": "MDM6Qm90NDk2OTkzMzM=",
"avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dependabot%5Bbot%5D",
"html_url": "https://github.com/apps/dependabot",
"followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events",
"type": "Bot",
"site_admin": false
} | [
{
"id": 1905493434,
"node_id": "MDU6TGFiZWwxOTA1NDkzNDM0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies",
"name": "dependencies",
"color": "0366d6",
"default": false,
"description": "Pull requests that update a dependency file"
}
] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26888). All of your documentation changes will be reflected on that endpoint."
] | 1,697 | 1,698 | 1,698 | CONTRIBUTOR | null | Bumps [urllib3](https://github.com/urllib3/urllib3) from 1.26.17 to 1.26.18.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/urllib3/urllib3/releases">urllib3's releases</a>.</em></p>
<blockquote>
<h2>1.26.18</h2>
<ul>
<li>Made body stripped from HTTP requests changing the request method to GET after HTTP 303 "See Other" redirect responses. (GHSA-g4mx-q9vg-27p4)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/urllib3/urllib3/blob/main/CHANGES.rst">urllib3's changelog</a>.</em></p>
<blockquote>
<h1>1.26.18 (2023-10-17)</h1>
<ul>
<li>Made body stripped from HTTP requests changing the request method to GET after HTTP 303 "See Other" redirect responses.</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/urllib3/urllib3/commit/9c2c2307dd1d6af504e09aac0326d86ee3597a0b"><code>9c2c230</code></a> Release 1.26.18 (<a href="https://redirect.github.com/urllib3/urllib3/issues/3159">#3159</a>)</li>
<li><a href="https://github.com/urllib3/urllib3/commit/b594c5ceaca38e1ac215f916538fb128e3526a36"><code>b594c5c</code></a> Merge pull request from GHSA-g4mx-q9vg-27p4</li>
<li><a href="https://github.com/urllib3/urllib3/commit/944f0eb134485f41bc531be52de12ba5a37bca73"><code>944f0eb</code></a> [1.26] Use vendored six in urllib3.contrib.securetransport</li>
<li>See full diff in <a href="https://github.com/urllib3/urllib3/compare/1.26.17...1.26.18">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26888/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26888/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26888",
"html_url": "https://github.com/huggingface/transformers/pull/26888",
"diff_url": "https://github.com/huggingface/transformers/pull/26888.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26888.patch",
"merged_at": 1698304229000
} |
https://api.github.com/repos/huggingface/transformers/issues/26887 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26887/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26887/comments | https://api.github.com/repos/huggingface/transformers/issues/26887/events | https://github.com/huggingface/transformers/issues/26887 | 1,948,506,277 | I_kwDOCUB6oc50I9il | 26,887 | Inconsistency between CLIPTokenizer and CLIPTokenizerFast | {
"login": "sayanshaw24",
"id": 52221015,
"node_id": "MDQ6VXNlcjUyMjIxMDE1",
"avatar_url": "https://avatars.githubusercontent.com/u/52221015?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sayanshaw24",
"html_url": "https://github.com/sayanshaw24",
"followers_url": "https://api.github.com/users/sayanshaw24/followers",
"following_url": "https://api.github.com/users/sayanshaw24/following{/other_user}",
"gists_url": "https://api.github.com/users/sayanshaw24/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sayanshaw24/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sayanshaw24/subscriptions",
"organizations_url": "https://api.github.com/users/sayanshaw24/orgs",
"repos_url": "https://api.github.com/users/sayanshaw24/repos",
"events_url": "https://api.github.com/users/sayanshaw24/events{/privacy}",
"received_events_url": "https://api.github.com/users/sayanshaw24/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey! This is a duplicate of #22166 I believe! "
] | 1,697 | 1,697 | 1,697 | NONE | null | ### System Info
- `transformers` version: 4.30.2
- Platform: Windows-10-10.0.22621-SP0
- Python version: 3.10.11
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 1.12.1+cpu (False)
- Tensorflow version (GPU?): 2.9.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@ArthurZucker and @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. Import CLIPTokenizer and CLIPTokenizerFast
2. Create the "openai/clip-vit-base-patch32" pretrained tokenizers
3. Compare input ids of tokenizers with some example text using words with apostrophes (e.g., you're, don't, her's, etc.)
```python
import numpy as np
from transformers import CLIPTokenizer, CLIPTokenizerFast
slow_tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")
fast_tokenizer = CLIPTokenizerFast.from_pretrained("openai/clip-vit-base-patch32")
test_str = "you're, don't, her's"
clip_slow_out = slow_tokenizer(test_str)
clip_fast_out = fast_tokenizer(test_str)
slow_input_ids = clip_slow_out['input_ids']
fast_input_ids = clip_fast_out['input_ids']
np.testing.assert_array_equal(slow_input_ids, fast_input_ids)
```
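To make the difference easier to see, comparing the raw token splits directly also shows it (a small sketch; the exact pieces depend on the tokenizer version):

```python
print(slow_tokenizer.tokenize("you're"))  # apostrophe reported as split off by the slow tokenizer
print(fast_tokenizer.tokenize("you're"))  # "'re" reported as kept together by the fast tokenizer
```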
### Expected behavior
The input ids should be the same for both the regular and fast tokenizers; however, CLIPTokenizer tokenizes the apostrophe separately (`['you', "'", 're']`) while CLIPTokenizerFast keeps it attached (`['you', "'re"]`).
Which of these results are the correct expected values? They should be consistent, should they not? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26887/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26887/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26886 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26886/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26886/comments | https://api.github.com/repos/huggingface/transformers/issues/26886/events | https://github.com/huggingface/transformers/issues/26886 | 1,948,481,103 | I_kwDOCUB6oc50I3ZP | 26,886 | metric_for_best_model not found | {
"login": "asaluja",
"id": 2447063,
"node_id": "MDQ6VXNlcjI0NDcwNjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/2447063?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/asaluja",
"html_url": "https://github.com/asaluja",
"followers_url": "https://api.github.com/users/asaluja/followers",
"following_url": "https://api.github.com/users/asaluja/following{/other_user}",
"gists_url": "https://api.github.com/users/asaluja/gists{/gist_id}",
"starred_url": "https://api.github.com/users/asaluja/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/asaluja/subscriptions",
"organizations_url": "https://api.github.com/users/asaluja/orgs",
"repos_url": "https://api.github.com/users/asaluja/repos",
"events_url": "https://api.github.com/users/asaluja/events{/privacy}",
"received_events_url": "https://api.github.com/users/asaluja/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @pacman100 ",
"@ArthurZucker @pacman100 following up on this, am I doing something wrong here or is there a bug?",
"cc @muellerzr if you can have a look since @pacman100 is off! 😉 ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"@muellerz can you please take a look?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Gentle ping @muellerzr @pacman100 ",
"Gently pinging @muellerzr @pacman100, if you can have a look as this has been stale for quite some time. ",
"Hello @asaluja,\r\n\r\nI have gone through the implementation of DPOTrainer at https://github.com/huggingface/trl/blob/main/trl/trainer/dpo_trainer.py. \r\n\r\nit overrides the `compute_loss` and `prediction_step` methods to specifically compute and store DPO related metrics such as `rewards/chosen`, `rewards/accuracies` etc which is then store in private dict attribute specific to DPOTrainer `_stored_metrics` and it is only used in `log` method to log to tensorboard/wandb ... \r\n\r\nhttps://github.com/huggingface/trl/blob/f12a1da74baef7aff6656bb48247ac109ce72744/trl/trainer/dpo_trainer.py#L1222-L1236\r\n\r\nOn the other hand, for you to use `metric_for_best_model`, you need to pass `compute_metrics` function when creating Trainer instance which should compute the metric you are interested in by taking as input `EvalPrediction`. Because you are not doing that, `eval_rewards/accuracies` isn't computed by the parent Trainer's evaluation loop thereby such metric being unavailable leading to key error. To resolve this, you need to pass the `compute_metrics` which mimics what DPOTrainer is doing internally for each prediction step and adapt it. \r\n\r\nThis is not an issue with Trainer and it should be handled by the calling script/user.",
"thanks for clarifying @pacman100 ! It looks like I'll have to re-implement `get_batch_loss_metrics` [here](https://github.com/huggingface/trl/blob/f12a1da74baef7aff6656bb48247ac109ce72744/trl/trainer/dpo_trainer.py#L1000) in `compute_metrics`. I will close this issue now."
] | 1,697 | 1,707 | 1,707 | NONE | null | ### System Info
Transformers version: 4.31.0
Python version: 3.10
Accelerate version: 0.21.0
TRL version: 0.7.2
I am not sure if this issue should be here or in the `trl` repo, but the stack trace showed an error in `transformers/trainer.py`.
I am using `trl.DPOTrainer`. I define `metric_for_best_model="eval_rewards/accuracies"` (a metric from `trl.DPOTrainer`), but then I get the following error:
```
KeyError: 'eval_rewards/accuracies'
return inner_training_loop(
File "/mnt/miniconda3/envs/synlm/lib/python3.10/site-packages/transformers/trainer.py", line 1901, in _inner_training_loop
self._save_checkpoint(model, trial, metrics=metrics)
File "/mnt/miniconda3/envs/synlm/lib/python3.10/site-packages/transformers/trainer.py", line 2356, in _save_checkpoint
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "/mnt/miniconda3/envs/synlm/lib/python3.10/site-packages/transformers/trainer.py", line 2237, in _maybe_log_save_evaluate
print(metrics['eval_rewards/accuracies'])
KeyError: 'eval_rewards/accuracies'
self._save_checkpoint(model, trial, metrics=metrics)
File "/mnt/miniconda3/envs/synlm/lib/python3.10/site-packages/transformers/trainer.py", line 2356, in _save_checkpoint
print(metrics['eval_rewards/accuracies'])
KeyError: 'eval_rewards/accuracies'
```
I noticed that the metric I defined is only available on `cuda:0` (I am running on 8 40GB A100s). Is there something special I need to do to get this to work? If this is an issue for the `trl` repo let me know accordingly please.
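For context, here is a rough, untested sketch of the kind of `compute_metrics` hook that would make such a metric visible to the `Trainer` (the layout of `eval_pred.predictions` coming out of `DPOTrainer` is an assumption on my part and needs checking):
```python
import numpy as np
from transformers import EvalPrediction


def compute_metrics(eval_pred: EvalPrediction) -> dict:
    # Assumption: the first two columns of `predictions` hold per-example chosen/rejected rewards;
    # verify what DPOTrainer.prediction_step actually returns before relying on this.
    chosen_rewards = eval_pred.predictions[:, 0]
    rejected_rewards = eval_pred.predictions[:, 1]
    return {"rewards/accuracies": float(np.mean(chosen_rewards > rejected_rewards))}
```
Passing this as `compute_metrics=compute_metrics` to `DPOTrainer` would make the evaluation loop emit `eval_rewards/accuracies`, which `metric_for_best_model` could then resolve.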
### Who can help?
@muellerzr @pacman100
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from peft import LoraConfig
from transformers import TrainingArguments
from trl import DPOTrainer

# NB: `config`, `synlm_config`, `args`, `accelerator`, `model`, `model_ref`, `tokenizer`,
# `train_dataset`, `eval_dataset` and `val_set_size` are defined earlier in my script.
training_args = TrainingArguments(
remove_unused_columns=False,
per_device_train_batch_size=config.BATCH_SIZE_TRAIN,
gradient_accumulation_steps=config.GRADIENT_ACC_STEPS,
lr_scheduler_type="linear",
warmup_steps=150,
report_to="tensorboard",
max_steps=args.max_steps,
num_train_epochs=config.NUM_EPOCHS,
learning_rate=config.LEARNING_RATE,
bf16=True,
logging_steps=config.SAVE_STEPS,
optim="paged_adamw_8bit",
max_grad_norm=0.3,
evaluation_strategy="steps" if val_set_size > 0 else "no",
save_strategy="steps",
eval_steps=config.SAVE_STEPS if val_set_size > 0 else None,
per_device_eval_batch_size=config.BATCH_SIZE_EVAL,
eval_accumulation_steps=128,
save_steps=config.SAVE_STEPS,
output_dir=args.output_dir,
load_best_model_at_end=True,
metric_for_best_model="eval_rewards/accuracies", # if I remove this line and the following line it works
greater_is_better=True,
ddp_find_unused_parameters=False,
group_by_length=False,
dataloader_num_workers=accelerator.num_processes,
save_total_limit=1,
)
lora_config = LoraConfig(
r=config.LORA_R,
lora_alpha=config.LORA_ALPHA,
target_modules=config.LORA_TARGET_MODULES,
lora_dropout=config.LORA_DROPOUT,
bias="none",
task_type="CAUSAL_LM",
)
trainer = DPOTrainer(
model,
model_ref,
train_dataset=train_dataset,
eval_dataset=eval_dataset,
args=training_args,
tokenizer=tokenizer,
beta=config.DPO_BETA,
max_length=config.EXAMPLE_MAX_LENGTH,
max_prompt_length=config.EXAMPLE_MAX_LENGTH - synlm_config.BUFFER_TOKENS,
truncation_mode="keep_start",
peft_config=lora_config,
)
trainer.train()
```
### Expected behavior
Trainer will find best model as per `eval_rewards/accuracies` and save that to disk. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26886/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26886/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26885 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26885/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26885/comments | https://api.github.com/repos/huggingface/transformers/issues/26885/events | https://github.com/huggingface/transformers/pull/26885 | 1,948,405,608 | PR_kwDOCUB6oc5dEepT | 26,885 | Add InstructBlip to VQA pipeline | {
"login": "stevenmanton",
"id": 3666725,
"node_id": "MDQ6VXNlcjM2NjY3MjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/3666725?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevenmanton",
"html_url": "https://github.com/stevenmanton",
"followers_url": "https://api.github.com/users/stevenmanton/followers",
"following_url": "https://api.github.com/users/stevenmanton/following{/other_user}",
"gists_url": "https://api.github.com/users/stevenmanton/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevenmanton/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevenmanton/subscriptions",
"organizations_url": "https://api.github.com/users/stevenmanton/orgs",
"repos_url": "https://api.github.com/users/stevenmanton/repos",
"events_url": "https://api.github.com/users/stevenmanton/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevenmanton/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Hey! Feel free to ping @rafaelpadilla as well once this is ready! ",
"Ok, I've added some simple logic to the the `VisualQuestionAnsweringPipeline` class that will load the InstructBlipProcessor if required. It still seems a little clunky to me, but I don't know of a better way. I also added a slow test for InstructBlip and all the tests in `tests/pipelines/test_pipelines_visual_question_answering.py` pass.\r\n\r\n@rafaelpadilla Can you take a look as suggested by @ArthurZucker ? Thanks, team!",
"@NielsRogge , it's all a question of *invariant*.\r\n\r\nIn order for some piece of software/code to be easy to use from the external world, it needs to uphold invariants, declare it's usage and outcome very clearly and fail clearly when used outside of that.\r\nTypically so users really can forget about the internals, and focus on the inside.\r\n\r\nGood case : \r\n```\r\ndef lowercase(name: str) -> str:\r\n # internals\r\n```\r\nI don't have to read the doc, or anything, I'll send it a `str`, and receive a `str` back, whatever the insides are. (and the name happens to be quite self explaining)\r\n\r\nBad case\r\n```\r\ndef prepare(*args, *kwargs):\r\n # internals\r\n```\r\nI have no idea what I'm supposed to send, no idea what I'm going to get (am I even going to get anything ?). Do I even get always the same things depending on what I send.\r\n\r\n\r\nAll models should accept several tensors. We have a relatively standardized name for the ones which are commonly shared: `input_ids` is the text sequence in tensor form `pixel_values` are the images etc..\r\nNot all models accept the same set of tensors, but all (to the best of my knowledge) are relatively clear on what's the minimal set of sendable tensors (Big asterisk here because we have lots of contrived pathways like `input_embeds` but I'm putting them aside for now, as I think it's a relatively marginal use case).\r\n\r\nFor multimodal, the number and names of tensors has expanded quite a bit, and I really feel like we should have 1 *model class* per standard signature, and not play around with the args like crazy (`input_embeds` kind of set a bad example here IMHO which is why I guess we've allowed ourselves to go that way).\r\n\r\nSince multimodal kind of requires different kinds of models with potentially explicitly different signatures, we need to find a way to make the `pipeline` 's signature standard still.\r\nWith that regard, currently the upheld *invariant* is the entire chain (from the pipeline's point of view): \r\n\r\n```\r\ninputs = tokenizer(string) # Omitting the return types \r\nlogits = model(**inputs)\r\n```\r\nNot that `inputs` is a dict, which can contain arbitrary keys, but we *know* it's `Dict[string, Tensor]`. and `string : str`.\r\n\r\nThe problem with current `Processor` is that it does not uphold *any* for of signature invariant. It might accept anything `string`, `pixel_values`, and any combination thereof. It's a *GREAT* tool for short snippet but quite impossible to use in practice.\r\n\r\nThis is the sort of mess we should strive to avoid : https://github.com/search?q=repo%3Ahuggingface%2Ftransformers%20inspect.signature&type=code\r\nInspecting signature is slow, and most importantly can create super nasty silent bug. 
By modifying stuff on the fly without the caller's realization which might be removing/adding some important information which may not have been desired.\r\n\r\nSo for `Processor` to be more usage I think it should follow the pipeline's signature calls and output something like the tokenizer output `Dict[string, Tensor]`.\r\n\r\n-> `TextProcessor ~= Tokenizer` (For instance the marian tokenizer which contains 2 actual tokenizers would be a good contender, just like this particular PR which does pretty much the same but using 2 actual different tokenizer).\r\n-> `ImageProcessor` exactly like the current one \r\n-> `TextImageProcessor` (For something like visual qa, `text` ~= `question` + `image`-> `pixel_values`)\r\n...etc..\r\n\r\nYes there would be many but each would uphold a standard (and different from each other) signature.\r\nI think this could be done in a non breaking way but is probably a good way to go IMO.\r\nIt's also probably A LOT of work because of all the very weird tiny detail differences in multimodal models (which we might be able to make slightly more uniform).\r\nDefinitely not something I would have the time to dive into.\r\n\r\nIf something like that could exist we could have:\r\n-> Each `AutoModel` knowing the `XXXXProcessor` that it needs so we could keep the classic way with `AutoProcessor`.\r\n-> The pipeline knowing already the `XXXProcessor` it needs (because they have the same signature than the actual pipeline). And it could actually check that the model and the `XXXProcessor` do match.\r\n\r\nThen the pipeline's code could probably simplify away a lot of those if/else.\r\nThe main issue I see which might be coming up, is in the `parameters`, which are the non standard way of modifying output.\r\nFor instance, things like `return_timestamps` within a pipeline. Should only the pipeline take care of it ? Should all the `XXProcessor` align with it (which would mean basically the pipeline would become `def preprocess(...): self.processor(\r\n...)` (Not declared as kwargs of course with a real signature) ?\r\n\r\nDoes this make any sense to you ?",
"Hi @Narsil those are all really great points. I agree with the need to manage interfaces more carefully. What do you suggest in the context of this PR? Are your suggestions that we should implement before merging this or is this a much broader scope that would be added or refactored later? Or is there a way that we can get there incrementally (e.g., new code adopts formal interfaces, but old code still works)? Thanks for your guidance!",
"Reading @Narsil , I have some questions:\r\n\r\n- Regarding the invariants, it seems to me you are looking for:\r\n - the (main) **inputs** to a pipeline should match the (main) **input**(s) to the processor (if we ever want to use it). For example, for `ImageToTextPipeline`, we want to make sure, if its `__call__` accepts `images` (possibly with `text` as an prompt), the `processor` used in that pipele should match this signature. We are sure this holds for text only and image only processor, but this is not the case for the general processor. Do I understand this correctly? \r\n - for the **output**, I think the processors output `Dict[string, Tensor]` (well, if return_tensors=... is specified). So it's not clear to me what's the concern here. And just like the tokenizers, the dict's can contain arbitrary kesy, but this is not really an issue (as you mentioned in your comment), and that is not pipeline's responsiblility to assure, but the binding of a processor to a model should match (during model addition, inside our auto mapping, etc.). So could you elaborate a bit more regarindg this `processor output` part if this is really an concern?",
"> Do I understand this correctly?\r\n\r\nYes Processor in general don't have a fixed signature.\r\n\r\n> So could you elaborate a bit more regarindg this processor output part if this is really an concern?\r\n\r\nProcessor output being undeterministic, just underlines the fact that we are lacking clear signatures to `AutoModel`.\r\nThis is acceptable since we're using this property already for `model(**tokenizer(string, return_tensors=\"xx\"))` but not *ideal* in my book.\r\nIdeal is as the name suggests, an ideal, something we should be something we should aim for, but not an absolute set of rules, pragmatic solutions will always beat ideal code, and ideal code never actually exists.",
"In spirit of that, I'm adding common processor tests, starting at #27720. This to make sure that multimodal processors follow the same API (or at least, to the greatest extent possible).\r\n\r\nSimilar PRs are done for testing the forward signature of models, at #27681 and #27729. "
] | 1,697 | 1,707 | null | CONTRIBUTOR | null | # What does this PR do?
This PR attempts to support InstructBlip as an acceptable model in the VQA pipeline. Currently, only Blip2 is supported.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@NielsRogge @amyeroberts @Narsil
# Description
This PR is not complete, but I'd like some guidance so that I can finalize the contribution. One issue is that InstructBlip handles tokenization a bit differently from Blip2, so I have a somewhat ugly check for it; maybe this can be improved.
The bigger issue is that the pipeline loads a `BlipImageProcessor` instead of `InstructBlipProcessor`. I think this comes from this [config](https://huggingface.co/Salesforce/instructblip-flan-t5-xl/blob/main/preprocessor_config.json). The wrinkle here is that the Q-former (the `InstructBlipProcessor` class) is a layer above the image processor (the `BlipImageProcessor` class). We need a handle to an instantiated `InstructBlipProcessor`, but the `pipeline` function doesn't seem to load it. I don't see a hook for subclasses to load additional processors either. Maybe add the `InstructBlipProcessor` as a kwarg to `__init__` in the `VisualQuestionAnsweringPipeline` class? Would love feedback from the core devs.
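To make the kwarg idea concrete, here is a rough, untested sketch (the class and argument names are mine, not a final API):
```python
from transformers import VisualQuestionAnsweringPipeline
from transformers.image_utils import load_image


class InstructBlipVqaPipeline(VisualQuestionAnsweringPipeline):
    def __init__(self, *args, multimodal_processor=None, **kwargs):
        super().__init__(*args, **kwargs)
        # Optional Q-Former-aware processor; when absent, keep the default image-processor path.
        self.multimodal_processor = multimodal_processor

    def preprocess(self, inputs, **preprocess_kwargs):
        if self.multimodal_processor is None:
            return super().preprocess(inputs, **preprocess_kwargs)
        image = load_image(inputs["image"])
        return self.multimodal_processor(
            images=image, text=inputs["question"], return_tensors=self.framework
        )
```
An alternative would be for the pipeline to resolve the full `AutoProcessor` itself whenever the model type requires one, so users wouldn't have to pass anything extra.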
# Testing
The following code works with my changes:
```python
from transformers import InstructBlipProcessor, pipeline
checkpoint = "Salesforce/instructblip-flan-t5-xl"
pipe = pipeline("vqa", model=checkpoint)
# This shouldn't be necessary, but I'm not sure why AutoImageProcessor loads the wrong processor:
image_processor = InstructBlipProcessor.from_pretrained(checkpoint)
pipe.image_processor = image_processor
image_url = "https://huggingface.co/datasets/Narsil/image_dummy/raw/main/lena.png"
pipe(question="What is she wearing ?", image=image_url)
# [{'answer': 'hat'}]
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26885/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26885/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26885",
"html_url": "https://github.com/huggingface/transformers/pull/26885",
"diff_url": "https://github.com/huggingface/transformers/pull/26885.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26885.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26884 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26884/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26884/comments | https://api.github.com/repos/huggingface/transformers/issues/26884/events | https://github.com/huggingface/transformers/issues/26884 | 1,948,355,610 | I_kwDOCUB6oc50IYwa | 26,884 | Issue loading models saved with older version | {
"login": "valbarriere",
"id": 11158292,
"node_id": "MDQ6VXNlcjExMTU4Mjky",
"avatar_url": "https://avatars.githubusercontent.com/u/11158292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/valbarriere",
"html_url": "https://github.com/valbarriere",
"followers_url": "https://api.github.com/users/valbarriere/followers",
"following_url": "https://api.github.com/users/valbarriere/following{/other_user}",
"gists_url": "https://api.github.com/users/valbarriere/gists{/gist_id}",
"starred_url": "https://api.github.com/users/valbarriere/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/valbarriere/subscriptions",
"organizations_url": "https://api.github.com/users/valbarriere/orgs",
"repos_url": "https://api.github.com/users/valbarriere/repos",
"events_url": "https://api.github.com/users/valbarriere/events{/privacy}",
"received_events_url": "https://api.github.com/users/valbarriere/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey! You are jumping between two majors releases where backward compatibility is usually broken. So this is expected",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,697 | 1,700 | 1,700 | NONE | null | ### System Info
- `transformers` version: 4.28.1
- Platform: Linux-3.10.0-1160.80.1.el7.x86_64-x86_64-with-debian-bullseye-sid
- Python version: 3.7.10
- Huggingface_hub version: 0.15.1
- Safetensors version: not installed
- PyTorch version (GPU?): 1.10.0+cu102 (True)
- Tensorflow version (GPU?): 2.4.1 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@ArthurZucker and @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. Save a model with transformers 3.5.1
2. Load it with transformers 4.28.1
Here's my code to create the model, before loading it and obtaining the error. Do not take into account all the input variables of the function:
```
import numpy as np
import tensorflow as tf
from transformers import (TFAutoModel, AutoTokenizer, TFBertModel, BertTokenizer,
                          TFCamembertModel, CamembertTokenizer, TFElectraModel, ElectraTokenizer,
                          TFRobertaModel, RobertaTokenizer, TFT5Model, T5Tokenizer,
                          TFXLMRobertaModel, XLMRobertaTokenizer)

# NB: `optimizer`, `CACHE_DIR` and `return_model_tokenize_to_load` come from elsewhere in my script.
def compile_and_return_model(model_name, nb_classes, learning_rate,
proxies=None, MAX_SEQ_LEN=128, use_token_type_ids_roberta=True,
verbose=False):
"""
Create the deep learning model, compile it and return it
TODO: input_img_dim is hard-coded, but this value should come from the loaded dataset !!
"""
### The models from the transformers library
MODELS_TF_dict = {'bert-transformers' : (TFBertModel, BertTokenizer),
'xlm-roberta-transformers' : (TFXLMRobertaModel, XLMRobertaTokenizer),
'roberta-transformers' : (TFRobertaModel, RobertaTokenizer),
# TODO: t5 does not work right now, even with pytorch....
't5-transformers' : (TFT5Model, T5Tokenizer), # 't5-11b' is supposed the best model at the end of 04/2020
'electra-transformers' : (TFElectraModel, ElectraTokenizer), # TODO: do the tests with Electra model
'camembert-transformers' : (TFCamembertModel, CamembertTokenizer),
'automodel' : (TFAutoModel, AutoTokenizer)
# 'longformer-transformers' : (TFLongformerModel, LongformerTokenizer), # TODO: do the tests with Longformer model
}
### Model compiling
model_gen, tok_gen = (TFXLMRobertaModel, XLMRobertaTokenizer)
str_model_to_load, str_tokenizer_to_load = return_model_tokenize_to_load(model_name)
model_to_load = 'jplu/tf-xlm-roberta-base'
str_tokenizer_to_load = 'jplu/tf-xlm-roberta-base'
tokenizer = tok_gen.from_pretrained(str_tokenizer_to_load, proxies=proxies, cache_dir = CACHE_DIR)
from_pt = False
model_tf_transformer = model_gen.from_pretrained(str_model_to_load, proxies=proxies, cache_dir = CACHE_DIR, from_pt=from_pt)
if ('roberta' in model_name) and use_token_type_ids_roberta:
nb_sentences = use_token_type_ids_roberta + 1
print('Adding a special token_type_embeddings layer to RoBERTa, of dimension {}'.format(nb_sentences))
model_tf_transformer.config.type_vocab_size = nb_sentences
from transformers import __version__ as tv
if int(tv.split('.')[0]) > 3:
model_tf_transformer.roberta.embeddings.token_type_embeddings = tf.Variable(np.random.normal(0.0,model_tf_transformer.config.initializer_range, (model_tf_transformer.config.type_vocab_size, model_tf_transformer.config.hidden_size)).astype('float32'))
else:
model_tf_transformer.roberta.embeddings.token_type_embeddings = tf.keras.layers.Embedding(2, model_tf_transformer.config.hidden_size, embeddings_initializer = tf.keras.initializers.RandomNormal(mean=0.0, stddev=model_tf_transformer.config.initializer_range))
tokenizer.create_token_type_ids = True
#input layers
# MAX_SEQ_LEN=128
input_word_ids = tf.keras.layers.Input(shape=(MAX_SEQ_LEN,), dtype=tf.int32, name="input_word_ids")
input_mask = tf.keras.layers.Input(shape=(MAX_SEQ_LEN,), dtype=tf.int32, name="input_mask")
segment_ids = tf.keras.layers.Input(shape=(MAX_SEQ_LEN,), dtype=tf.int32, name="segment_ids")
sequence_output = model_tf_transformer({'input_ids' : input_word_ids, 'attention_mask' : input_mask, 'token_type_ids' : segment_ids})[0]
### SAME config as in the class RobertaClassificationHead of `transformers` --> take `sequence_output[:, 0, :]`
# dropout value from https://s3.amazonaws.com/models.huggingface.co/bert/roberta-base-config.json
hidden_size_output = 768
x = sequence_output[:, 0, :] # take <s> token (equiv. to [CLS])
x = tf.keras.layers.Dropout(0.1)(x)
x = tf.keras.layers.Dense(hidden_size_output, activation="tanh", name="hidden-layer1")(x)
x = tf.keras.layers.Dropout(0.1)(x)
out = tf.keras.layers.Dense(nb_classes, name="dense_output")(x)
from_logits = True
model = tf.keras.models.Model(
inputs=[input_word_ids, input_mask, segment_ids],
outputs=out
)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=from_logits)
metric = tf.keras.metrics.SparseCategoricalAccuracy('acc')
model.compile(optimizer=optimizer, loss=loss, metrics=[metric])
if verbose:
model.summary(line_length=150)
return model, tokenizer
```
### Expected behavior
Hi,
I am using transformers 4.28.1 and I have an issue loading a model I saved with transformers 3.5.1
It seems that some layers are not loaded in the same order and it is messing with the general loading process...
```
WARNING:tensorflow:Skipping loading of weights for layer tfxlm_roberta_model due to mismatch in shape ((2, 768) vs (768,)).
WARNING:tensorflow:Skipping loading of weights for layer tfxlm_roberta_model due to mismatch in shape ((768,) vs (2, 768)).
Loading already trained model...
Traceback (most recent call last):
File "../Transformers_MM_class.py", line 1257, in <module>
dict_lab=dict_lab, path_corpus=path_corpus_test, list_countries=['France', 'Ukraine'], n_duplicates=100)
File "/home/barriva/Valentin_code/biases_calculation_huggingfacehub.py", line 216, in calculate_sentiment_bias
model, X_text, y = prepare_data_and_model(modelFilePath, model, input_data_File, dict_lab, path_corpus)
File "/home/barriva/Valentin_code/biases_calculation_huggingfacehub.py", line 123, in prepare_data_and_model
model.load_weights(modelFilePath)
File "/jeodpp_envs/bdap-gpu-txt-tf2.4.1/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py", line 2234, in load_weights
hdf5_format.load_weights_from_hdf5_group(f, self.layers)
File "/jeodpp_envs/bdap-gpu-txt-tf2.4.1/lib/python3.7/site-packages/tensorflow/python/keras/saving/hdf5_format.py", line 710, in load_weights_from_hdf5_group
K.batch_set_value(weight_value_tuples)
File "/jeodpp_envs/bdap-gpu-txt-tf2.4.1/lib/python3.7/site-packages/tensorflow/python/util/dispatch.py", line 201, in wrapper
return target(*args, **kwargs)
File "/jeodpp_envs/bdap-gpu-txt-tf2.4.1/lib/python3.7/site-packages/tensorflow/python/keras/backend.py", line 3706, in batch_set_value
x.assign(np.asarray(value, dtype=dtype(x)))
File "/jeodpp_envs/bdap-gpu-txt-tf2.4.1/lib/python3.7/site-packages/tensorflow/python/ops/resource_variable_ops.py", line 891, in assign
(tensor_name, self._shape, value_tensor.shape))
ValueError: Cannot assign to variable Variable:0 due to variable shape (2, 768) and value shape (768,) are incompatible
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26884/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26884/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26883 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26883/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26883/comments | https://api.github.com/repos/huggingface/transformers/issues/26883/events | https://github.com/huggingface/transformers/pull/26883 | 1,948,304,967 | PR_kwDOCUB6oc5dEIPI | 26,883 | [docstring] Fix docstring for speech-to-text config | {
"login": "R055A",
"id": 14985050,
"node_id": "MDQ6VXNlcjE0OTg1MDUw",
"avatar_url": "https://avatars.githubusercontent.com/u/14985050?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/R055A",
"html_url": "https://github.com/R055A",
"followers_url": "https://api.github.com/users/R055A/followers",
"following_url": "https://api.github.com/users/R055A/following{/other_user}",
"gists_url": "https://api.github.com/users/R055A/gists{/gist_id}",
"starred_url": "https://api.github.com/users/R055A/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/R055A/subscriptions",
"organizations_url": "https://api.github.com/users/R055A/orgs",
"repos_url": "https://api.github.com/users/R055A/repos",
"events_url": "https://api.github.com/users/R055A/events{/privacy}",
"received_events_url": "https://api.github.com/users/R055A/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@ydshieh did the PR fail checks because any line(s) being beyond a 119-char limit?",
"Yes, you should run `make style` to apply our styling to the code ! this will need the `pip install -e \".[quality]\"` flags to make sure you have the required packages ",
"> Yes, you should run `make style` to apply our styling to the code ! this will need the `pip install -e \".[quality]\"` flags to make sure you have the required packages\r\n\r\nThank you @ArthurZucker for the help. I have run `make style` as well as `black .\\configuration_speech_to_text.py` and `ruff .\\configuration_speech_to_text.py`. In each case, the file in this PR passes. I have shortened lines in the doc to ensure they do not exceed the 119-char limit but this has not resolved the code quality issue in the PR checks. It is difficult to determine the issue from this [error](https://app.circleci.com/pipelines/github/huggingface/transformers/75710/workflows/96a915d6-07d5-46c2-ace0-94c66d1de672/jobs/960847/parallel-runs/0/steps/0-114)",
"Commit `13b0ad2` is the result from running: `python utils/check_docstrings.py --fix_and_overwrite` following removing `Speech2TextConfig` from `OBJECTS_TO_IGNORE`",
"> ach case, the file in this PR passes. I have shortened lines in t\r\n\r\nHi @R055A\r\n\r\nCould you run `doc-builder style src/transformers docs/source --max_len 119 --path_to_docs docs/source`\r\n\r\nMake sure you have installed `pip install hf-doc-builder`",
"> > ach case, the file in this PR passes. I have shortened lines in t\r\n> \r\n> Hi @R055A\r\n> \r\n> Could you run `doc-builder style src/transformers docs/source --max_len 119 --path_to_docs docs/source`\r\n> \r\n> Make sure you have installed `pip install hf-doc-builder`\r\n\r\nDone. Thank you @ydshieh ",
"> Thank you. Just a few (very) tiny suggestion and we are ready to merge!\r\n\r\nDone, Thank you @ydshieh ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26883). All of your documentation changes will be reflected on that endpoint.",
"Thank you @R055A for the contribution 🤗 ~"
] | 1,697 | 1,697 | 1,697 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #26638
Fix doc string for speech-to-text configuration.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26883/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26883/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26883",
"html_url": "https://github.com/huggingface/transformers/pull/26883",
"diff_url": "https://github.com/huggingface/transformers/pull/26883.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26883.patch",
"merged_at": 1697788195000
} |
https://api.github.com/repos/huggingface/transformers/issues/26882 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26882/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26882/comments | https://api.github.com/repos/huggingface/transformers/issues/26882/events | https://github.com/huggingface/transformers/issues/26882 | 1,948,209,408 | I_kwDOCUB6oc50H1EA | 26,882 | Baichuan model is not working | {
"login": "Fangzhou-Ai",
"id": 31551580,
"node_id": "MDQ6VXNlcjMxNTUxNTgw",
"avatar_url": "https://avatars.githubusercontent.com/u/31551580?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Fangzhou-Ai",
"html_url": "https://github.com/Fangzhou-Ai",
"followers_url": "https://api.github.com/users/Fangzhou-Ai/followers",
"following_url": "https://api.github.com/users/Fangzhou-Ai/following{/other_user}",
"gists_url": "https://api.github.com/users/Fangzhou-Ai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Fangzhou-Ai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Fangzhou-Ai/subscriptions",
"organizations_url": "https://api.github.com/users/Fangzhou-Ai/orgs",
"repos_url": "https://api.github.com/users/Fangzhou-Ai/repos",
"events_url": "https://api.github.com/users/Fangzhou-Ai/events{/privacy}",
"received_events_url": "https://api.github.com/users/Fangzhou-Ai/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey! This is because of a recent update to `transformers`, the model on the hub should be updated to move the call of `super()__init__` at the end of the `__init__()` of the model"
] | 1,697 | 1,697 | 1,697 | NONE | null | ### System Info
Using the latest vllm, installed directly via ```pip install vllm```.
GPU: A100 @ 80 GB
System: Debian 10
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Simple test using current example
```bash
python benchmarks/benchmark_latency.py --trust-remote-code \
--model baichuan-inc/Baichuan-13B-Base \
--input-len 256 --output-len 256 \
--batch-size 32 \
--tensor-parallel 2
```
### Expected behavior
Error info : ```AttributeError: 'BaichuanTokenizer' object has no attribute 'sp_model'```
The error is caused by transformers 4.34.0, which is required for Mistral but is incompatible with the current vllm Baichuan model.
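For reference, the direction of the fix on the remote tokenizer code would look roughly like this (an illustrative sketch, not the actual Baichuan source):
```python
import sentencepiece as spm
from transformers import PreTrainedTokenizer


class BaichuanTokenizer(PreTrainedTokenizer):
    def __init__(self, vocab_file, **kwargs):
        self.vocab_file = vocab_file
        self.sp_model = spm.SentencePieceProcessor()
        self.sp_model.Load(vocab_file)
        # transformers >= 4.34 calls vocab helpers from inside the parent __init__,
        # so `self.sp_model` must already exist when it runs.
        super().__init__(**kwargs)

    @property
    def vocab_size(self):
        return self.sp_model.get_piece_size()

    def get_vocab(self):
        return {self.sp_model.id_to_piece(i): i for i in range(self.vocab_size)}
```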
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26882/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26882/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26881 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26881/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26881/comments | https://api.github.com/repos/huggingface/transformers/issues/26881/events | https://github.com/huggingface/transformers/pull/26881 | 1,948,183,229 | PR_kwDOCUB6oc5dDs8Y | 26,881 | [docs] Update CPU/GPU inference docs | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@lvwerra did you participate in writing the original document as well? Do you have feedback on this refactor?",
"Nice overhaul! I think it's also an opportunity to mention Optimum as this falls right into what we are doing:\r\n- I think the GPU inference page should briefly mention inference with ORT and link to: https://huggingface.co/docs/optimum/onnxruntime/usage_guides/gpu\r\n- Same for CPU with a link to: https://huggingface.co/docs/optimum/onnxruntime/usage_guides/models\r\n- And for Intel CPUs I would also refer to Optimum Intel: https://huggingface.co/docs/optimum/intel/index\r\n\r\nWDYT?\r\n\r\nCc @echarlaix @mfuntowicz ",
"> @lvwerra did you participate in writing the original document as well? Do you have feedback on this refactor?\r\n\r\nOnly did the general structure of that part so leave it to others to comment on content :) ",
"> Nice overhaul! I think it's also an opportunity to mention Optimum as this falls right into what we are doing:\r\n> \r\n> * I think the GPU inference page should briefly mention inference with ORT and link to: https://huggingface.co/docs/optimum/onnxruntime/usage_guides/gpu\r\n> * Same for CPU with a link to: https://huggingface.co/docs/optimum/onnxruntime/usage_guides/models\r\n> * And for Intel CPUs I would also refer to Optimum Intel: https://huggingface.co/docs/optimum/intel/index\r\n> \r\n> WDYT?\r\n> \r\n> Cc @echarlaix @mfuntowicz\r\n\r\nTotally aligned with your suggestion @regisss, let us know if we can help @stevhliu ",
"Thanks for the feedback y'all! \r\n\r\nLet me know what you think of the new Optimum sections @regisss @echarlaix :)"
] | 1,697 | 1,698 | 1,698 | MEMBER | null | Implements the proposal suggested in #26723, namely:
- consolidating the inference on one/many GPUs docs into one page
- removing the inference on specialized hardware page
- cleanups and updates to the inference docs to provide more context about the how/what of the optimization techniques, code examples so users don't have to skip around to other pages, etc. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26881/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26881/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26881",
"html_url": "https://github.com/huggingface/transformers/pull/26881",
"diff_url": "https://github.com/huggingface/transformers/pull/26881.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26881.patch",
"merged_at": 1698770692000
} |
https://api.github.com/repos/huggingface/transformers/issues/26880 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26880/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26880/comments | https://api.github.com/repos/huggingface/transformers/issues/26880/events | https://github.com/huggingface/transformers/pull/26880 | 1,948,110,491 | PR_kwDOCUB6oc5dDdMw | 26,880 | [docstring] Fix docstring for ChineseCLIPImageProcessor, ChineseCLIPTextConfig, ChineseCLIPVisionConfig | {
"login": "Sparty",
"id": 3923604,
"node_id": "MDQ6VXNlcjM5MjM2MDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/3923604?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Sparty",
"html_url": "https://github.com/Sparty",
"followers_url": "https://api.github.com/users/Sparty/followers",
"following_url": "https://api.github.com/users/Sparty/following{/other_user}",
"gists_url": "https://api.github.com/users/Sparty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Sparty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sparty/subscriptions",
"organizations_url": "https://api.github.com/users/Sparty/orgs",
"repos_url": "https://api.github.com/users/Sparty/repos",
"events_url": "https://api.github.com/users/Sparty/events{/privacy}",
"received_events_url": "https://api.github.com/users/Sparty/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@ydshieh Could you please review this?",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26880). All of your documentation changes will be reflected on that endpoint."
] | 1,697 | 1,697 | 1,697 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes https://github.com/huggingface/transformers/issues/26638
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
--> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26880/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26880/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26880",
"html_url": "https://github.com/huggingface/transformers/pull/26880",
"diff_url": "https://github.com/huggingface/transformers/pull/26880.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26880.patch",
"merged_at": 1697705534000
} |
https://api.github.com/repos/huggingface/transformers/issues/26879 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26879/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26879/comments | https://api.github.com/repos/huggingface/transformers/issues/26879/events | https://github.com/huggingface/transformers/pull/26879 | 1,948,059,579 | PR_kwDOCUB6oc5dDSGh | 26,879 | Fixed a typo in mistral.md | {
"login": "DTennant",
"id": 15844949,
"node_id": "MDQ6VXNlcjE1ODQ0OTQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/15844949?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DTennant",
"html_url": "https://github.com/DTennant",
"followers_url": "https://api.github.com/users/DTennant/followers",
"following_url": "https://api.github.com/users/DTennant/following{/other_user}",
"gists_url": "https://api.github.com/users/DTennant/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DTennant/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DTennant/subscriptions",
"organizations_url": "https://api.github.com/users/DTennant/orgs",
"repos_url": "https://api.github.com/users/DTennant/repos",
"events_url": "https://api.github.com/users/DTennant/events{/privacy}",
"received_events_url": "https://api.github.com/users/DTennant/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,697 | 1,697 | 1,697 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
Documentation: @stevhliu and @MKhalusova
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26879/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26879/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26879",
"html_url": "https://github.com/huggingface/transformers/pull/26879",
"diff_url": "https://github.com/huggingface/transformers/pull/26879.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26879.patch",
"merged_at": 1697576797000
} |
https://api.github.com/repos/huggingface/transformers/issues/26878 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26878/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26878/comments | https://api.github.com/repos/huggingface/transformers/issues/26878/events | https://github.com/huggingface/transformers/issues/26878 | 1,948,015,660 | I_kwDOCUB6oc50HFws | 26,878 | use_flash_attention_2 not supported for AutoConfig and/or from_config | {
"login": "dakinggg",
"id": 43149077,
"node_id": "MDQ6VXNlcjQzMTQ5MDc3",
"avatar_url": "https://avatars.githubusercontent.com/u/43149077?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dakinggg",
"html_url": "https://github.com/dakinggg",
"followers_url": "https://api.github.com/users/dakinggg/followers",
"following_url": "https://api.github.com/users/dakinggg/following{/other_user}",
"gists_url": "https://api.github.com/users/dakinggg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dakinggg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dakinggg/subscriptions",
"organizations_url": "https://api.github.com/users/dakinggg/orgs",
"repos_url": "https://api.github.com/users/dakinggg/repos",
"events_url": "https://api.github.com/users/dakinggg/events{/privacy}",
"received_events_url": "https://api.github.com/users/dakinggg/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
] | [
"cc @younesbelkada and @LysandreJik I remember we had a debate on whether or not this should be saved 😃 ",
"Rather than being saved, I think @dakinggg is asking to be able to pass it when initializing the model with a config, so probably something like this:\r\n```\r\nmodel = BertModel(BertConfig(), use_flash_attention_2=True)\r\n# or\r\nmodel = BertModel(BertConfig(use_flash_attention_2=True))\r\n```\r\n\r\nThis seems reasonable to me, is that what you had in mind @dakinggg ?",
"Yeah, but then we would need to pop it to make sure it's not saved as we wanted to prevent this use case",
"Yes, that is correct @LysandreJik! With one minor addition to also support it via `AutoModel.from_config(cfg)` (maybe this ends up with the same code paths you mentioned, not sure of the exact internal implementation)"
] | 1,697 | 1,700 | 1,700 | CONTRIBUTOR | null | ### Feature request
Let me know if I've missed something, but I think `use_flash_attention_2` is only supported via the `from_pretrained` API. So I think I have to do something like `config._flash_attn_2_enabled = use_flash_attention_2` outside of the normal transformers API in order to initialize a model with flash attention 2 from a config. It would be great if the config based APIs supported this argument too.
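A short sketch of what this looks like today versus what is being requested (the model id is just an example):
```python
from transformers import AutoConfig, AutoModelForCausalLM

config = AutoConfig.from_pretrained("mistralai/Mistral-7B-v0.1")

# Current workaround: set the private flag before building the model from the config.
config._flash_attn_2_enabled = True
model = AutoModelForCausalLM.from_config(config)

# Requested (not currently supported), mirroring the from_pretrained argument:
# model = AutoModelForCausalLM.from_config(config, use_flash_attention_2=True)
```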
### Motivation
Parity across the ways of initializing a transformers model.
### Your contribution
N/A | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26878/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26878/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26877 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26877/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26877/comments | https://api.github.com/repos/huggingface/transformers/issues/26877/events | https://github.com/huggingface/transformers/issues/26877 | 1,947,994,950 | I_kwDOCUB6oc50HAtG | 26,877 | Mistral with flash attention 2 and right padding | {
"login": "dakinggg",
"id": 43149077,
"node_id": "MDQ6VXNlcjQzMTQ5MDc3",
"avatar_url": "https://avatars.githubusercontent.com/u/43149077?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dakinggg",
"html_url": "https://github.com/dakinggg",
"followers_url": "https://api.github.com/users/dakinggg/followers",
"following_url": "https://api.github.com/users/dakinggg/following{/other_user}",
"gists_url": "https://api.github.com/users/dakinggg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dakinggg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dakinggg/subscriptions",
"organizations_url": "https://api.github.com/users/dakinggg/orgs",
"repos_url": "https://api.github.com/users/dakinggg/repos",
"events_url": "https://api.github.com/users/dakinggg/events{/privacy}",
"received_events_url": "https://api.github.com/users/dakinggg/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Having the same issues\r\nValueError: You are attempting to perform batched generation with padding_side='right' this may lead to unexpected behaviour for Flash Attention version of Mistral. Make sure to call `tokenizer.padding_side = 'left'` before tokenizing the input. ",
"cc @younesbelkada ",
"Indeed forward should be supported but not generation, will raise a patch soon for this",
"Hey @younesbelkada, I am still seeing this error after set `tokenizer.padding_side = 'left'`\r\nThis is my demo notebook: https://colab.research.google.com/drive/1sVqbYEOqjJYl7CzNzXzviEBB6A984cMq?usp=sharing\r\n\r\nTokenizer already set with left padding\r\n\r\n\r\nStill have: `ValueError: You are attempting to perform batched generation with padding_side='right' this may lead to unexpected behaviour for Flash Attention version of Mistral. Make sure to call `tokenizer.padding_side = 'left'` before tokenizing the input.`\r\n\r\n\r\nNot sure is this because of trl or something wrong within transformer?\r\n\r\ntransformers 4.36.2\r\ntrl 0.7.7\r\npeft 0.6.0\r\n",
"I am having the same issue. Even after I set the tokenizer padding side = left, this error still occurs during training.",
"Thanks everyone for reporting, might be an issue with TRL I think, let me have a deeper look and get back ASAP",
"> Thanks everyone for reporting, might be an issue with TRL I think, let me have a deeper look and get back ASAP\r\n\r\nI opened a issue in trl as well: https://github.com/huggingface/trl/issues/1217#issuecomment-1888582975",
"You need to set `use_cache = False` for both the main and reference model. See my comment here: https://github.com/huggingface/trl/issues/1217#issuecomment-1889282654"
] | 1,697 | 1,705 | 1,697 | CONTRIBUTOR | null | ### System Info
- `transformers` version: 4.34.0
- Platform: Linux-5.4.0-148-generic-x86_64-with-glibc2.31
- Python version: 3.10.13
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.4.0
- Accelerate version: 0.20.3
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
### Who can help?
@younesbelkada
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
If you run a batch through mistral with flash attention 2 with right padding, you get
```
ValueError: You are attempting to perform batched generation with padding_side='right' this may lead to unexpected behaviour for Flash Attention version of Mistral. Make sure to call `tokenizer.padding_side = 'left'` before tokenizing the
input.
```
I am not doing generation, just calling forward. Is the error message incorrect and you actually meant to prevent _all_ usage of right padding here? Or is the implementation wrong and this was meant to only prevent _generate_ usage of right padding? Or perhaps I am missing something else. Thanks!
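For reference, a minimal sketch of the forward-only call that hits this check (assuming a flash-attention-2 build of Mistral on GPU; the checkpoint name and prompts are placeholders):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-v0.1"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"  # right padding, as typically used for training batches

model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, use_flash_attention_2=True, device_map="auto"
)

batch = tokenizer(
    ["short prompt", "a somewhat longer prompt"], padding=True, return_tensors="pt"
).to(model.device)

# Plain forward pass (no generate); this is where the padding_side='left' ValueError is reported.
outputs = model(**batch)
```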
### Expected behavior
Either right padding is ok for calling forward, or the error message correctly states the problem. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26877/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26877/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26876 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26876/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26876/comments | https://api.github.com/repos/huggingface/transformers/issues/26876/events | https://github.com/huggingface/transformers/pull/26876 | 1,947,975,166 | PR_kwDOCUB6oc5dDADH | 26,876 | [Docs] Model_doc structure/clarity improvements | {
"login": "MKhalusova",
"id": 1065417,
"node_id": "MDQ6VXNlcjEwNjU0MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1065417?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MKhalusova",
"html_url": "https://github.com/MKhalusova",
"followers_url": "https://api.github.com/users/MKhalusova/followers",
"following_url": "https://api.github.com/users/MKhalusova/following{/other_user}",
"gists_url": "https://api.github.com/users/MKhalusova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MKhalusova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MKhalusova/subscriptions",
"organizations_url": "https://api.github.com/users/MKhalusova/orgs",
"repos_url": "https://api.github.com/users/MKhalusova/repos",
"events_url": "https://api.github.com/users/MKhalusova/events{/privacy}",
"received_events_url": "https://api.github.com/users/MKhalusova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"If @ArthurZucker is busy, maybe @amyeroberts could take a look?"
] | 1,697 | 1,699 | 1,699 | CONTRIBUTOR | null | This PR streamlines pages in model_doc, so that a user who has seen at least one of these pages, should find it easier to navigate throughout any other model_doc page. These are the changes:
1. Individual pages now have (with a few exceptions) the same structure: overview, followed by author acknowledgement, usage tips and/or examples, resources (other docs, notebooks, blog posts, etc.), API reference (autodoc). Previously, some pages did not follow the same order, which could make searching through them confusing.
2. Usage tips now have their own section with a header. Previously, they had often been buried within the overview section; now they can be found faster.
3. In cases where a model has the same architecture as another model, a visible `<Tip>` has been added with a link to the API reference of the original model.
4. “Documentation resources” and “resources” were combined into “Resources”
5. For models that support TF and JAX, a `<framework>` tag was added so that a user could collapse sections irrelevant to them.
6. Removed disclaimers like “This model has been recently added. If you see something strange, file a GitHub Issue” for models that have been in the library longer than 1 year.
The PR aims to:
- Make it easier to find usage tips and examples
- Make it easier to find API reference for models that are based on another model’s architecture
- Possibly reduce the bloat by letting the users collapse framework-specific sections
- Streamline the model_doc pages structure
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26876/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26876/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26876",
"html_url": "https://github.com/huggingface/transformers/pull/26876",
"diff_url": "https://github.com/huggingface/transformers/pull/26876.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26876.patch",
"merged_at": 1699023424000
} |
https://api.github.com/repos/huggingface/transformers/issues/26875 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26875/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26875/comments | https://api.github.com/repos/huggingface/transformers/issues/26875/events | https://github.com/huggingface/transformers/pull/26875 | 1,947,837,609 | PR_kwDOCUB6oc5dCh9z | 26,875 | [WIP][Speculative Decoding] Enable batch size > 1 | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26875). All of your documentation changes will be reflected on that endpoint.",
"Hi, good work! I am curious about how to deal with the wrong kv cache token rolling back for draft model, or you just mask them?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,697 | 1,705 | 1,705 | MEMBER | null | # What does this PR do?
This PR enables speculative decoding when batch size is > 1.
This PR has been tested to work correctly for Whisper. You can try it as follows:
```py
from transformers import pipeline, AutoModelForCausalLM, AutoModelForSpeechSeq2Seq, AutoProcessor
import torch
from datasets import load_dataset
device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32
assistant_model_id = "distil-whisper/distil-large-v2"
assistant_model = AutoModelForCausalLM.from_pretrained(
assistant_model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
assistant_model.to(device)
model_id = "openai/whisper-large-v2"
model = AutoModelForSpeechSeq2Seq.from_pretrained(
model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
model.to(device)
processor = AutoProcessor.from_pretrained(model_id)
pipe = pipeline(
"automatic-speech-recognition",
model=model,
tokenizer=processor.tokenizer,
feature_extractor=processor.feature_extractor,
max_new_tokens=128,
generate_kwargs={"assistant_model": assistant_model},
torch_dtype=torch_dtype,
chunk_length_s=15,
batch_size=4,
device=device,
)
dataset = load_dataset("distil-whisper/librispeech_long", "default", split="validation")
sample = dataset[0]["audio"]
result = pipe(sample)
print(result["text"])
```
## TODOS
- [x] Test that it works correctly for Whisper
- [ ] Test for Llama
- [ ] Write nice tests
- [ ] Clean code
- [ ] Write docs | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26875/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26875/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26875",
"html_url": "https://github.com/huggingface/transformers/pull/26875",
"diff_url": "https://github.com/huggingface/transformers/pull/26875.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26875.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26874 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26874/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26874/comments | https://api.github.com/repos/huggingface/transformers/issues/26874/events | https://github.com/huggingface/transformers/issues/26874 | 1,947,615,349 | I_kwDOCUB6oc50FkB1 | 26,874 | deepspeed multi-gpu inference | {
"login": "liuchengyuan123",
"id": 34617968,
"node_id": "MDQ6VXNlcjM0NjE3OTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/34617968?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/liuchengyuan123",
"html_url": "https://github.com/liuchengyuan123",
"followers_url": "https://api.github.com/users/liuchengyuan123/followers",
"following_url": "https://api.github.com/users/liuchengyuan123/following{/other_user}",
"gists_url": "https://api.github.com/users/liuchengyuan123/gists{/gist_id}",
"starred_url": "https://api.github.com/users/liuchengyuan123/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/liuchengyuan123/subscriptions",
"organizations_url": "https://api.github.com/users/liuchengyuan123/orgs",
"repos_url": "https://api.github.com/users/liuchengyuan123/repos",
"events_url": "https://api.github.com/users/liuchengyuan123/events{/privacy}",
"received_events_url": "https://api.github.com/users/liuchengyuan123/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @pacman100 ",
"Hello, I don't understand what the issue is here. As the error suggests, DeepSpeed ZeRO only makes sense for prediction if it is stage 3 because the parameters would get sharded. DeepSpeed Stage 2 would not be leading to any memory savings and in that case it would be better to not use DeepSeed at all as it would be same as DDP inference.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,697 | 1,705 | 1,705 | NONE | null | ### System Info
I'm using transformers.Trainer with DeepSpeed. During training, ZeRO stage 2 is used. For evaluation, I just want to accelerate with multi-GPU inference as in normal DDP, but DeepSpeed raises ValueError: "ZeRO inference only makes sense with ZeRO Stage 3 - please adjust your config".
I find that evaluation during training works, but directly calling `trainer.predict` after initializing the trainer causes the above error.
I understand the error, i.e. that the ZeRO 2 optimizer and scheduler are not used when only predicting, but is there an approach that allows multi-GPU inference given a ZeRO 2 config?
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
deepspeed test.py ... --deepspeed deepspeed.json
```
where deepspeed.json is a configure file of ZeRO 2, and `test.py` is the code to call `trainer.predict` after initialize trainer.
### Expected behavior
ZeRO inference only makes sense with ZeRO Stage 3 - please adjust your config | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26874/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26874/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26873 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26873/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26873/comments | https://api.github.com/repos/huggingface/transformers/issues/26873/events | https://github.com/huggingface/transformers/pull/26873 | 1,947,594,765 | PR_kwDOCUB6oc5dBth2 | 26,873 | Fix wrong index for sequence length in Falcon | {
"login": "fxmarty",
"id": 9808326,
"node_id": "MDQ6VXNlcjk4MDgzMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxmarty",
"html_url": "https://github.com/fxmarty",
"followers_url": "https://api.github.com/users/fxmarty/followers",
"following_url": "https://api.github.com/users/fxmarty/following{/other_user}",
"gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions",
"organizations_url": "https://api.github.com/users/fxmarty/orgs",
"repos_url": "https://api.github.com/users/fxmarty/repos",
"events_url": "https://api.github.com/users/fxmarty/events{/privacy}",
"received_events_url": "https://api.github.com/users/fxmarty/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thank you @Rocketknight1 ",
"Nevermind, I forgot about the reshaping happening in the modeling code itself. So the current index in `prepare_inputs_for_generation` is correct."
] | 1,697 | 1,697 | 1,697 | COLLABORATOR | null | As per title, please refer to https://github.com/huggingface/transformers/blob/db611aabee863cc5b1fdc22dcec5ce8e6c3e3b36/src/transformers/models/falcon/modeling_falcon.py#L1110 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26873/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26873/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26873",
"html_url": "https://github.com/huggingface/transformers/pull/26873",
"diff_url": "https://github.com/huggingface/transformers/pull/26873.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26873.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26872 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26872/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26872/comments | https://api.github.com/repos/huggingface/transformers/issues/26872/events | https://github.com/huggingface/transformers/pull/26872 | 1,947,572,541 | PR_kwDOCUB6oc5dBorf | 26,872 | Update README_hd.md | {
"login": "biswabaibhab007",
"id": 94667039,
"node_id": "U_kgDOBaSBHw",
"avatar_url": "https://avatars.githubusercontent.com/u/94667039?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/biswabaibhab007",
"html_url": "https://github.com/biswabaibhab007",
"followers_url": "https://api.github.com/users/biswabaibhab007/followers",
"following_url": "https://api.github.com/users/biswabaibhab007/following{/other_user}",
"gists_url": "https://api.github.com/users/biswabaibhab007/gists{/gist_id}",
"starred_url": "https://api.github.com/users/biswabaibhab007/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/biswabaibhab007/subscriptions",
"organizations_url": "https://api.github.com/users/biswabaibhab007/orgs",
"repos_url": "https://api.github.com/users/biswabaibhab007/repos",
"events_url": "https://api.github.com/users/biswabaibhab007/events{/privacy}",
"received_events_url": "https://api.github.com/users/biswabaibhab007/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I've committed the suggestion you can merge it now.",
"@stevhliu btw on line 90 there's jax installation if that's what you meant you ."
] | 1,697 | 1,697 | 1,697 | CONTRIBUTOR | null | - Fixed broken links.
I hope this small contribution adds value to this project. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26872/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26872/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26872",
"html_url": "https://github.com/huggingface/transformers/pull/26872",
"diff_url": "https://github.com/huggingface/transformers/pull/26872.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26872.patch",
"merged_at": 1697837021000
} |
https://api.github.com/repos/huggingface/transformers/issues/26871 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26871/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26871/comments | https://api.github.com/repos/huggingface/transformers/issues/26871/events | https://github.com/huggingface/transformers/issues/26871 | 1,947,436,183 | I_kwDOCUB6oc50E4SX | 26,871 | Added tokens incorrectly splitted when they immediately follow a subword without a leading whitespace | {
"login": "bilelomrani1",
"id": 16692099,
"node_id": "MDQ6VXNlcjE2NjkyMDk5",
"avatar_url": "https://avatars.githubusercontent.com/u/16692099?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bilelomrani1",
"html_url": "https://github.com/bilelomrani1",
"followers_url": "https://api.github.com/users/bilelomrani1/followers",
"following_url": "https://api.github.com/users/bilelomrani1/following{/other_user}",
"gists_url": "https://api.github.com/users/bilelomrani1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bilelomrani1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bilelomrani1/subscriptions",
"organizations_url": "https://api.github.com/users/bilelomrani1/orgs",
"repos_url": "https://api.github.com/users/bilelomrani1/repos",
"events_url": "https://api.github.com/users/bilelomrani1/events{/privacy}",
"received_events_url": "https://api.github.com/users/bilelomrani1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey, if you add the tokens using the following it will work as expected:\r\n```python \r\n>>> from transformers import AutoTokenizer, AddedToken\r\n>>> tokenizer = AutoTokenizer.from_pretrained(\"meta-llama/Llama-2-7b-hf\")\r\n>>> tokenizer.add_tokens([AddedToken(\"<|im_end|>\", normalized = False, special = False), AddedToken(\"<|im_start|>\", normalized = False, special = False)])\r\n>>> tokenizer.tokenize(\"Hey<|im_end|>how<|im_start|>\")\r\n['▁Hey', '<|im_end|>', '▁how', '<|im_start|>']\r\n```\r\nif you do not want the space after the token, you should set `legacy=False`:\r\n```python \r\n>>> from transformers import AutoTokenizer, AddedToken\r\n>>> tokenizer = AutoTokenizer.from_pretrained(\"meta-llama/Llama-2-7b-hf\", use_fast = False, legacy = False)\r\n>>> tokenizer.add_tokens([AddedToken(\"<|im_end|>\", normalized = False, special = False), AddedToken(\"<|im_start|>\", normalized = False, special = False)])\r\n>>> tokenizer.tokenize(\"Hey<|im_end|>how<|im_start|>\")\r\n['▁Hey', '<|im_end|>', 'how', '<|im_start|>']\r\n```",
"Hi @ArthurZucker I was not aware of this feature, thank you it now works as expected!"
] | 1,697 | 1,697 | 1,697 | NONE | null | ### System Info
- `transformers` version: 4.34.0.dev0
- Platform: macOS-14.0-x86_64-i386-64bit
- Python version: 3.11.4
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.3
- Accelerate version: 0.23.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I am experimenting with the new Chat Template feature; my use case involves using ChatML with the base Llama models for further finetuning. I added the `<|im_start|>` and `<|im_end|>` tokens:
```python
from transformers import AutoTokenizer
chat = [
{"role": "user", "content": "Hi there!"},
{"role": "assistant", "content": "Nice to meet you!"}
]
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
tokenizer.chat_template = "{% for message in messages %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}"
tokenizer.add_tokens(["<|im_start|>", "<|im_end|>"])
tokenizer.use_default_system_prompt = False
encodings = tokenizer.apply_chat_template(chat, tokenize=True)
print([tokenizer.decode(i) for i in encodings])
```
The output is
```python
['<|im_start|>',
'user',
'\n',
'Hi',
'there',
'!',
'<',
'|',
'im',
'_',
'end',
'|',
'>',
'\n',
'<',
'|',
'im',
'_',
'start',
'|',
'>',
'ass',
'istant',
'\n',
'N',
'ice',
'to',
'meet',
'you',
'!',
'<',
'|',
'im',
'_',
'end',
'|',
'>',
'\n']
```
As you can see, the string is not properly tokenized: the tokens `<|im_start|>` and `<|im_end|>` are split into subwords. This is not specific to chat templates, but it is how I encountered the issue. When the ChatML tokens are prefixed with a whitespace, the tokenizer tokenizes the string properly:
```python
tokenizer.chat_template = "{% for message in messages %}{{' <|im_start|>' + message['role'] + '\n' + message['content'] + ' <|im_end|>' + '\n'}}{% endfor %}"
tokenizer.add_tokens(["<|im_start|>", "<|im_end|>"])
tokenizer.use_default_system_prompt = False
encodings = tokenizer.apply_chat_template(chat, tokenize=True)
print([tokenizer.decode(i) for i in encodings])
>> ['',
'<|im_start|>',
'user',
'\n',
'Hi',
'there',
'!',
'<|im_end|>',
'\n',
'<|im_start|>',
'ass',
'istant',
'\n',
'N',
'ice',
'to',
'meet',
'you',
'!',
'<|im_end|>',
'\n']
```
### Expected behavior
The added tokens should be properly tokenized, even if there is no leading whitespace in front of them. The expected output is:
```python
['<|im_start|>',
'user',
'\n',
'Hi',
'there',
'!',
'<|im_end|>',
'\n',
'<|im_start|>',
'ass',
'istant',
'\n',
'N',
'ice',
'to',
'meet',
'you',
'!',
'<|im_end|>',
'\n']
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26871/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26871/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26870 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26870/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26870/comments | https://api.github.com/repos/huggingface/transformers/issues/26870/events | https://github.com/huggingface/transformers/pull/26870 | 1,947,408,799 | PR_kwDOCUB6oc5dBEkZ | 26,870 | Port IDEFICS to tensorflow | {
"login": "a8nova",
"id": 11152705,
"node_id": "MDQ6VXNlcjExMTUyNzA1",
"avatar_url": "https://avatars.githubusercontent.com/u/11152705?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/a8nova",
"html_url": "https://github.com/a8nova",
"followers_url": "https://api.github.com/users/a8nova/followers",
"following_url": "https://api.github.com/users/a8nova/following{/other_user}",
"gists_url": "https://api.github.com/users/a8nova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/a8nova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/a8nova/subscriptions",
"organizations_url": "https://api.github.com/users/a8nova/orgs",
"repos_url": "https://api.github.com/users/a8nova/repos",
"events_url": "https://api.github.com/users/a8nova/events{/privacy}",
"received_events_url": "https://api.github.com/users/a8nova/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"@VictorSanh just an fyi - I am hoping this model is TF portable.. ",
"also ccing @Rocketknight1 ",
"🤗👀",
"Hey @a8nova! I'm the TF maintainer around here, and this sounds like a great idea. Feel free to ping me if you encounter any difficulties. In general when porting code to TF you will first want to do the 'boilerplate'. Take a look at #22970 to see how this works! All of these steps are fairly quick:\r\n\r\n1) Copy the `modeling_idefics.py` file to `modeling_tf_idefics.py`\r\n2) Rename all the classes e.g. `IdeficsForXXX -> TFIdeficsForXXX`\r\n3) Add the relevant imports in `src/transformers/models/idefics/__init__.py` and `/src/transformers/__init__.py`\r\n4) Add the TF classes to `modeling_tf_auto.py` \r\n5) Add the TF classes in `docs/source/en/model_doc/`\r\n\r\nAfter this, you can start actually porting the code in `modeling_tf_idefics.py`. In general, you can just replace `torch.xxx()` ops with `tf.xxx()`, and layers from `torch.nn` with layers from `tf.keras.layers`. When creating layers, subclass from `tf.keras.layers.Layer` instead of `nn.Module`. Many of the ops are exactly identical, or only have minor name changes. There are a few key changes to watch out for, though:\r\n\r\n6) The TF `forward()` method is called `call()`. NOT `__call__()`!\r\n7) TF layers usually don't have an `input_dim` argument. This value is inferred when they are built with their first input.\r\n8) When creating a layer (any class that is subclassed from `tf.keras.layers.Layer`, which includes TF built-in layers like `Dense`), pass the attribute name as the `name` argument. This ensures the TF weights layout lines up with the PyTorch state dict, like so:\r\n```python\r\nself.layer_norm1 = tf.keras.layers.LayerNormalization(name=\"layer_norm1\")\r\n```\r\n9) In-place modifications of tensors are prohibited in TF (and JAX!). Most neural net code doesn't do this anyway, because it creates problems in backprop. If you need to do it, you can use something like `tf.where()` or `tf.scatter_nd_update()` to create a new tensor with the updates instead. This can be tricky, let me know if you need help!\r\n10) For... reasons... the base stem of the model is contained in `TFIdeficsBaseLayer` and `TFIdeficsModel` is just a wrapper around this. Derived classes all use `TFIdeficsBaseLayer` and don't create any `TFIdeficsModel` layers. This is different from Torch models, where the model stem is contained in `IdeficsModel` and the derived classes use it as a layer.\r\n\r\nWe've actually had some success using GPT-4 to draft a port of the modeling code, so let me know if you'd like me to do that and add the GPT port of `modeling_tf_idefics.py` to this PR as a starting point!\r\n\r\nAfter all this, the last step is to make changes to any processing code and tests needed to also support the TF model. It's a long list, but it's doable!",
"Thank you @Rocketknight1 for the detailed guide. I have 1,2 & 3 done already, i just updated the PR. I will continue to work on the rest.\r\n\r\nRegarding the GPT-4 generated draft, I already started doing some of the work, if you think the generated draft is easier to port to TF, please add it here and I can continue working from that ( I saw a comment about \"auto-translation\" in modeling_tf_sam.py and I was wondering about the details.. :)\r\n\r\nA few questions:\r\n1. What about all the `torch.nn` code inside `perceiver.py` and `vision.py`, do they also need TF porting? (my goal is to port inference code first, if this isn't needed for inference, then maybe i can come back to it)\r\n2. For model_tf_auto.py, what is [this code](https://github.com/huggingface/transformers/blob/c030fc891395d11249046e36b9e0219685b33399/src/transformers/models/auto/modeling_tf_auto.py#L437) doing? It is not clear to me what to add the TF idefics versions, since i don't understand that file\r\n\r\nThanks for all the help!",
"Hi @a8nova, to answer the questions:\r\n\r\n1) You'll probably need to convert those files too - the `IdeficsVisionTransformer` in `vision.py` seems to be a core part of the model. You might be able to skip `perceiver.py` initially, as it's only used in some model configs, but we should probably do it somewhere as part of this PR!\r\n2) That code is there because SAM was the first `ForMaskGeneration` model we added in TF. For Idefics it's easier, because you're not adding a whole new \"category\" of model like that. If you look in `modeling_auto.py` you'll see there are two lines for `IdeficsModel` and `IdeficsForVisionText2Text` - all you need to do is make copies of those lines in `modeling_tf_auto.py` for `TFIdeficsModel` and `TFIdeficsForVisionText2Text`.\r\n\r\nI'll work on generating GPT-4 translations for you, and post them here when they're available! Since you've already started on `modeling_tf_idefics.py` I won't overwrite it. You can just copy pieces from it when you need to.",
"@a8nova I added the three files with `_autotranslate.py` endings! Note that there are likely to be issues (e.g. forgetting the `name` kwarg when initializing layers even though I told it to)",
"Thank you @Rocketknight1, yes that makes sense, taking a closer look, idefics is 2 pre-trained models combined together, so vision.py is a core part.\r\n\r\nI will take a look at the auto-translated files!",
"Hello @Rocketknight1 - Sorry I went MIA, I have been occupied with my 9-5. I just made some progress. I have the tests running, I am running into a weird error, I will attach a file below, any ideas?\r\n[error.txt](https://github.com/huggingface/transformers/files/13373847/error.txt)\r\n\r\n\r\nAlso, regarding:\r\n`\"For... reasons... the base stem of the model is contained in TFIdeficsBaseLayer and TFIdeficsModel is just a wrapper around this. Derived classes all use TFIdeficsBaseLayer and don't create any TFIdeficsModel layers. This is different from Torch models, where the model stem is contained in IdeficsModel and the derived classes use it as a layer.\"\r\n`\r\nhow come I don't see similar stuff for other TF implementations? Is this specific to IDEFICS? ",
"Hi @a8nova!\r\n\r\nFirstly the error: The problem is that TF models don't let you assign to `self.layers`, because TensorFlow reserves that as a special keyword. What you should do is replace `self.layers` with something else, maybe `self.modules` or `self.decoder_layers` or something. However, you should **keep the name kwarg as** `layers_{i}`. We match TF layers to PT layers when doing weight cross-loading using the name attributes of layers, not the actual name like `self.layers`, so as long as you keep the argument the same then weight cross-loading should work.\r\n\r\nSecondly, regarding `TFIdeficsBaseLayer`, that was actually just a typo on my part - it's actually called `TFIdeficsMainLayer`! If you check any of our other TF models, you'll see they have a `MainLayer` class like `TFBertMainLayer` that contains the model stem.",
"Thank you @Rocketknight1! ",
"Hi @Rocketknight1, I have a few questions:\r\n\r\n1. For processing_idefics.py, how do you suggest I handle both pytorch and tf? right now I just have it hacked in my view to only have TF stuff (just to unblock me). \r\n2. I am getting this weird error from [freeze_model](https://github.com/huggingface/transformers/blob/b42fe29af93c70a1b3555af1b87aab32ed822f41/src/transformers/models/idefics/modeling_tf_idefics.py#L255) . I am doing something wrong but not sure, the error is `AttributeError: 'TFIdeficsRMSNorm' object has no attribute 'layers'` (full stacktrace attached). Any ideas?\r\n3. There is also a \"ALL_LAYERNORM_LAYERS\" in the pytorch code, I added it in this commit https://github.com/huggingface/transformers/pull/26870/commits/7e0a35119b4d7a6284d04d8c543fba1b29e573c9, does this look right to you?\r\n\r\nThanks in advance!\r\n\r\n[error.txt](https://github.com/huggingface/transformers/files/13431615/error.txt)\r\n",
"Hi @a8nova, let's see...\r\n\r\nFor 1, we usually add an argument like `return_tensors` which can take values like `tf`, `pt`, etc. You can take a look at e.g. `models/sam/processing_sam.py` for an example - the main thing is that you should guard any imports of `tf` or `torch` behind something like `is_torch_available()` to make sure that the code doesn't crash for people who only have one of them installed!\r\n\r\nFor 2, I think the problem there is that `freeze_text_layers` iterates over multiple layers that include the normalization layer `self.norm`:\r\n```python\r\ndef freeze_text_layers(self, module_exceptions=[]):\r\n for module in [self.decoder_layers, self.norm]:\r\n freeze_model(module, module_exceptions=module_exceptions)\r\n```\r\nAs a result, it tries to iterate over `self.norm.layers`, which doesn't exist because `self.norm` is just a `LayerNormalization` layer, not a model with additional sub-layers. I'll suggest a change in `freeze_model` that should help.\r\n\r\nFor 3, `ALL_LAYERNORM_LAYERS` is a value mainly used by the HuggingFace `Trainer`, which is only used for PyTorch. When training with TF, we just use Keras instead to get the same functionality. You can just skip/remove it!",
"Thank you @Rocketknight1 for the detailed explanation and help! Moving on to other errors..",
"Sorry to bother you again @Rocketknight1, Do i need to worry about gradient checkpointing in the TF training tests? \r\n\r\nI am asking because for TFIdeficsForVisionText2Text there is `test_training` and `test_training_gradient_checkpointing` and they call `model.train()` and `model.gradient_checkpointing_enable()` which fail with `AtributeError`.\r\n\r\nI found gradient_checkpointing_enable() inside modeling_utils.py, i don't see one inside modeling_tf_utils.py, can you guide me if my observation is correct and if I need to add all the gradient_checkpointing_* routines in modeling_tf_utils.py? \r\n\r\nThank you!\r\n",
"@a8nova No, you can skip gradient checkpointing in TF ports!",
"Hi @Rocketknight1, how do i run the integration tests? For example `IdeficsModelIntegrationTest`.\r\n\r\n`test_training` for tf is failing due to `model.train()`. i see it is defined in Trainer class in files trainer.py and trainer_tf.py. I don't think trainer_tf.py is used anywhere or is it? how do you suggest I resolve this? Thanks!",
"Hi @a8nova, you're right - we used to have a `TFTrainer` class but it's now deprecated. We recommend just training our TF models using the Keras API like `model.fit()`, and that means methods like `model.train()` do not really exist for them. I wrote a blogpost about this [here](https://huggingface.co/blog/tensorflow-philosophy) if you're curious, but it's mostly not that relevant to this PR!\r\n\r\nAnyway, as a result of this the integration tests for TF models can be quite different from the integration tests for PT models - I'd recommend copying tests from another TF model instead, rather than trying to exactly copy the PT tests. You may also have to drop some tests entirely - the full IDEFICS tests use 4-bit quantization in PT, which isn't supported for TF, and as a result our CI machines may not be able to run it in TF at all!",
"Thanks @Rocketknight1. I have a few follow up questions. There are some test failures I don't understand.\r\n\r\n1. For `image_processing_idefics.py`, I made changes in [this commit ](https://github.com/huggingface/transformers/pull/26870/commits/0a9fc014d431be78a5c4c2ab11e96e2ff41c47bf) to pass `return_tensors` to preprocess. I am testing it via the integration tests, I will share diff I needed to get it to run below. \r\n\r\nDoes the integration test make sense?\r\n\r\n```/content/transformers# git diff tests/models/idefics/test_modeling_tf_idefics.py\r\ndiff --git a/tests/models/idefics/test_modeling_tf_idefics.py b/tests/models/idefics/test_modeling_tf_idefics.py\r\nindex f9bcec579..50eee25d6 100644\r\n--- a/tests/models/idefics/test_modeling_tf_idefics.py\r\n+++ b/tests/models/idefics/test_modeling_tf_idefics.py\r\n@@ -454,7 +454,7 @@ class TFIdeficsForVisionText2TextTest(TFIdeficsModelTest, unittest.TestCase):\r\n \r\n @require_tf\r\n @require_vision\r\n-class IdeficsModelIntegrationTest(TestCasePlus):\r\n+class TFIdeficsModelIntegrationTest(TestCasePlus):\r\n @cached_property\r\n def default_processor(self):\r\n return (\r\n@@ -463,8 +463,6 @@ class IdeficsModelIntegrationTest(TestCasePlus):\r\n else None\r\n )\r\n \r\n- @require_bitsandbytes\r\n- @slow\r\n def test_inference_natural_language_visual_reasoning(self):\r\n cat_image_path = self.tests_dir / \"fixtures/tests_samples/COCO/000000039769.png\"\r\n cats_image_obj = Image.open(cat_image_path) # 2 cats\r\n@@ -494,9 +492,7 @@ class IdeficsModelIntegrationTest(TestCasePlus):\r\n load_in_4bit=True,\r\n bnb_4bit_compute_dtype=\"float16\",\r\n )\r\n- model = IdeficsForVisionText2Text.from_pretrained(\r\n- \"HuggingFaceM4/idefics-9b\", quantization_config=quantization_config, device_map=\"auto\"\r\n- )\r\n+ model = TFIdeficsForVisionText2Text.from_pretrained(\"HuggingFaceM4/idefics-9b\")\r\n processor = self.default_processor\r\n inputs = processor(prompts, return_tensors=\"tf\")\r\n generated_ids = model.generate(**inputs, max_length=100)\r\n```\r\n\r\nHow do i pass `return_tensors`? I adopted other code where `return_tensors` for image processing code is passed to the preprocess code only and not sure how to pass it here.\r\n\r\n\r\n2. I have attached an error log for another failure I get. This is coming from TFIdeficsDecoupledLinear:\r\n`E assert <transformers.models.idefics.modeling_tf_idefics.TFIdeficsDecoupledLinear object at 0x7d4a9ac41540> is None`, \r\n\r\nfull error in file attached\r\n\r\n\r\nThere are a few other errors I don't understand but some look related to this. Thanks for the help @Rocketknight1 !",
"Hi @a8nova, sorry for the delay!\r\n\r\nFirstly, for `return_tensors`, generally our processors handle it like this:\r\n\r\n```python\r\ndef __init__(self, return_tensors=None):\r\n self.return_tensors = return_tensors\r\n\r\ndef __call__(self, return_tensors=None):\r\n if return_tensors = None:\r\n return_tensors = self.return_tensors\r\n```\r\n\r\nIn other words, you can supply it as an argument to `__call__`, but if you don't supply that argument, then it uses the default value that's set in the `__init__`.\r\n\r\nAlso, I took a look in the attached file but I can't see the error - can you double-check the contents?",
"whoops I attached the wrong one, i have attached the correct one below. thank you @Rocketknight1! \r\nEdit: i can't seem to repro that error now, I will look into it over the weekend again",
"No probs - let me know if it recurs!",
"Thank you @Rocketknight1!. There is another test failure I could use your help on. For `test_resize_token_embeddings`, the input `pixel_values` is `CHW` without the batches N, which code is responsible for resizing this kind of input? Right now it crashes when it tries to build the dummy weights, I have attached full error. The vision_tf.py code is still a little bugy. In the `TFIdeficsVisionEmbeddings` forward pass, the first thing I do is change the `pixel_values` to channels last format so it runs on CPU. what do I do when I get this kind of input? Thank you!\r\n\r\n[test_resize_tokens_embeddings_error.txt](https://github.com/huggingface/transformers/files/13624772/test_resize_tokens_embeddings_error.txt)\r\n",
"Hi @a8nova, the models aren't intended to run with `CHW` input - they should always receive `NCHW`! The `test_resize_token_embeddings` test is mostly designed to work with text models - you might need to override for IDEFICS, but it's also not a critical test, so you can just skip it instead!\r\n\r\nAlso, the thing you did where you transpose to `NHWC` / `channels_last` is normal for TF models, because TF convolutions don't work on CPU in `channels_first` mode. Be careful that you transpose back after convolutional layers/blocks, though, or else the dimensions in the rest of the code will be wrong!",
"Hi @Rocketknight1, Happy holidays! I have a few more questions. I have less test failures now but I can't wait to get this model working end-to-end. I am using the integration test as a start for fully testing the tensorflow implmentation.\r\n\r\n**EDIT**: Update on 1 (Jan 7th): I have figured out the issue, it was due to a bad reshape, I am able to run the model end-to-end now using tiny-random-idefics. I will run it with idefics-9b next! (I still have some follow up questions coming up as there is some weirdness i don't understand)\r\n\r\n~~1. For the integration test, I have( This is from `TFIdeficsModelIntegrationTest` slightly modified as below, I understand this test will fail with `tiny-random-idefics` but I am using it to help me flush other bugs)~~\r\n```\r\n model = TFIdeficsForVisionText2Text.from_pretrained(\"HuggingFaceM4/tiny-random-idefics\", from_pt=True)\r\n processor = self.default_processor\r\n inputs = processor(prompts, return_tensors=\"tf\")\r\n generated_ids = model.generate(**inputs, max_length=100)\r\n generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)\r\n```\r\n~~Right now this fails when calling from_pretrained, it goes into the forward pass and has an invalid size for pixel_values, this goes back to what I asked last time where I said the input is missing batches for some test but it looks like the problem is from my code actually. Right now what I am observing is that `processing_idefics.py::__call__` returns the correct size for `pixel_values` from the call to BatchFeatures() where `pixel_values` is a 5d tensor of [2,2,3,30,30] but the first forward pass inside modeling_idefics and vision_tf.py have `pixel_values` of [3,30,30].\r\nDo you have suggestions how this might be?~~\r\n\r\n2. I tried converting the pytorch weights to tf but failed with some error, do i need to get the tensorflow implementation working before I can convert the weights?\r\n\r\n3. Is `dummy_inputs` still needed for tf implementations? Like [modeling_tf_bert.py#L916](https://github.com/huggingface/transformers/blob/4fd32a1f499e45f009c2c0dea4d81c321cba7e02/src/transformers/models/bert/modeling_tf_bert.py#L916)\r\n\r\n",
"Hi @a8nova, sorry for the Christmas-related delay. Huge congratulations on getting the tiny-random model working though, that indicates you're probably quite close to getting the whole thing working!\r\n\r\nFirstly, I'd suggest that **you probably do not need to convert the weights**. The reason for this is that our `idefics` checkpoints all have `safetensors` weights already, and those weights can be loaded by any framework - TF, PyTorch or JAX. Once the TF code is working, you should just be able to load those repos with `from_pretrained` with no additional changes to the repo.\r\n\r\nSecondly, dummy inputs are much less important than they used to be, since we're moving to adding explicit `build()` methods on all of our TF models. That's something we'll have to add to this PR as well, but let's leave it to the end - I can help with it, and it shouldn't be too painful.\r\n\r\nFinally, I notice a lot of the test failures are caused by the TF test runner trying to `import torch` - the reason for this is that you're importing `ModelOutput` from `modeling_outputs.py`, but this is a torch file - try importing it from `modeling_tf_outputs` instead!\r\n\r\nStill, I think this PR is getting close now - I should be able to reply much more quickly now the holidays are over, so please let me know if you encounter any other difficulties!",
"Also @alshenkute it seems like the issue there is that the model is stored in `bfloat16`, which is supported in PyTorch on GPU and CPU, but maybe not on M1. I'm not too familiar with developing on M1 though - maybe load with `torch_dtype=torch.float32`?",
"Got it, thank you @Rocketknight1! let me try that. ( i accidentally replied from another email, sorry for confusion, i deleted it)\r\n",
"Hi @Rocketknight1 - I am still having issues loading the idefics-9b model, Main issue right now is in the call to `TFIdeficsForVisionText2Text.from_pretrained`, `image_attention_mask` is None in TFIdeficsMainLayer. It looks like `image_attention_mask` is populated by `prepare_inputs_for_generation` which isn't called for the `from_pretrained`. I have a hack but it is ugly and doesn't feel correct.\r\n\r\nFor pytorch, the `from_pretrained` doesn't seem to invoke the inference code while loading the model weights, so i don't see same issue..\r\n\r\nWhat am i missing here? If none of the stuff I am saying above makes sense then there is a chance I have a bug elsewhere or I am misunderstanding things\r\n\r\nBy the way, it looks like the OOM is killing my process and I am looking into why that is but I need to understand how to handle the issue above with the call to `from_pretrained` \r\n\r\nThanks in advance!",
"Hi @a8nova - I think I have a suspicion of what's going on now, let me explain!\r\n\r\nI made a PR just before the holidays at #27794. This PR replaces our old approach to building TF models, where we used dummy inputs and ran an inference step to build weights, with a new approach where we build weights with proper `build()` methods. We did this because there were several issues with the dummy inputs, including poor performance, and I suspect the issue you're encountering is related to this.\r\n\r\nI think the solution here is twofold:\r\n\r\n1) Rebase this PR on the most recent `transformers` version (so you get the new `build()` PR)\r\n2) Add `build()` methods to IDEFICS.\r\n\r\nSince this is a very new PR that was added while your PR was open, I realize that it's a bit awkward. Let me know if you want me to handle one or both of those steps!"
] | 1,697 | 1,708 | null | NONE | null | # What does this PR do?
This PR ports IDEFICS to tensorflow
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26870/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26870/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26870",
"html_url": "https://github.com/huggingface/transformers/pull/26870",
"diff_url": "https://github.com/huggingface/transformers/pull/26870.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26870.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26869 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26869/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26869/comments | https://api.github.com/repos/huggingface/transformers/issues/26869/events | https://github.com/huggingface/transformers/issues/26869 | 1,947,379,401 | I_kwDOCUB6oc50EqbJ | 26,869 | bfloat16/float16 llama model logits are significantly different when the same input is in a batch | {
"login": "soheeyang",
"id": 28291528,
"node_id": "MDQ6VXNlcjI4MjkxNTI4",
"avatar_url": "https://avatars.githubusercontent.com/u/28291528?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/soheeyang",
"html_url": "https://github.com/soheeyang",
"followers_url": "https://api.github.com/users/soheeyang/followers",
"following_url": "https://api.github.com/users/soheeyang/following{/other_user}",
"gists_url": "https://api.github.com/users/soheeyang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/soheeyang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/soheeyang/subscriptions",
"organizations_url": "https://api.github.com/users/soheeyang/orgs",
"repos_url": "https://api.github.com/users/soheeyang/repos",
"events_url": "https://api.github.com/users/soheeyang/events{/privacy}",
"received_events_url": "https://api.github.com/users/soheeyang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I just checked that the same issue exists for float16 (fails at atol=1e-2), not only bfloat16 (fails at atol=1e-1) and also for llama v1 7b.\r\nAlso, the issue occurs not only when 'torch_dtype' is passed during initialization, but also when 'model.to(dtype)' is called after initialization.\r\n\r\n```\r\nFalse # at atol=1e-2\r\ntensor([-13.9688, -14.9375, 2.3242, ..., -8.2578, -9.5156, -8.1719],\r\n grad_fn=<SelectBackward0>)\r\ntensor([-13.9766, -14.9375, 2.3242, ..., -8.2578, -9.5078, -8.1719],\r\n grad_fn=<SelectBackward0>)\r\n```",
"Hey! Yes this is know and related to #25681 and #25284 as you are probably using left padding as well. I'll have a look as well thanks for reporting! \r\nThe generation will also be different because of this",
"> Hey! Yes this is know and related to #25681 and #25284 as you are probably using left padding as well. I'll have a look as well thanks for reporting! The generation will also be different because of this\r\n\r\nHi Arthur, thank you so much for looking into this! I just checked that the same issue occurs even when I use right padding.",
"cc @gante if you are working on this! ",
"Hey @soheeyang 👋 \r\n\r\nI've written a large answer about the impact of KV caching and padding [here](https://github.com/huggingface/transformers/issues/25420#issuecomment-1775317535), which is related to this issue and explains most of what you see :) \r\n\r\nNonetheless, we still suspect there is a Llama-specific padding bug, which may be amplifying the problems I describe in the link above :)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,697 | 1,700 | 1,700 | CONTRIBUTOR | null | ### System Info
### system
- `transformers` version: 4.34.0
- Platform: Linux-5.15.0-1027-gcp-x86_64-with-glibc2.31
- Python version: 3.9.17
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- Accelerate version: 0.22.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### bitsandbytes
Name: bitsandbytes
Version: 0.41.1
Summary: k-bit optimizers and matrix multiplication routines.
Home-page: https://github.com/TimDettmers/bitsandbytes
Author: Tim Dettmers
Author-email: [email protected]
License: MIT
Location: /home/soheeyang/.conda/envs/ripple/lib/python3.9/site-packages
Requires:
Required-by:
### nvcc
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Tue_May__3_18:49:52_PDT_2022
Cuda compilation tools, release 11.7, V11.7.64
Build cuda_11.7.r11.7/compiler.31294372_0
### nvidia-smi
NVIDIA-SMI 525.85.12 Driver Version: 525.85.12
### Who can help?
@SunMarc @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
### Code
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# HF_TOKEN is assumed to be a valid Hugging Face access token defined elsewhere
model_name_or_path = 'meta-llama/Llama-2-7b-hf'
prompts = [
'The actor of the main character of the movie The Light of Hope',
'The actor of the main character of the movie Roxanne',
]
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, token=HF_TOKEN)
tokenizer.pad_token = tokenizer.eos_token
b16_model = AutoModelForCausalLM.from_pretrained(model_name_or_path, torch_dtype=torch.bfloat16)
batched_inputs = tokenizer(prompts, return_tensors='pt', padding=True)
print(batched_inputs)
single_inputs = tokenizer(prompts[0], return_tensors='pt', padding=True)
print(single_inputs)
# This assertion passes, so the outputs for the 0th item in the batch should be the same
assert (batched_inputs['input_ids'][0] == single_inputs['input_ids']).all()
assert (batched_inputs['attention_mask'][0] == single_inputs['attention_mask']).all()
batched_logit = b16_model(**batched_inputs).logits[0, -1]
single_logit = b16_model(**single_inputs).logits[0, -1]
# The result of this line of code is False
print(torch.allclose(batched_logit, single_logit, atol=1e-1))
print(batched_logit)
print(single_logit)
```
### Outputs
```
{'input_ids': tensor([[ 1, 450, 11339, 310, 278, 1667, 2931, 310, 278, 14064,
450, 12790, 310, 7963],
[ 1, 450, 11339, 310, 278, 1667, 2931, 310, 278, 14064,
1528, 29916, 11276, 2]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0]])}
{'input_ids': tensor([[ 1, 450, 11339, 310, 278, 1667, 2931, 310, 278, 14064,
450, 12790, 310, 7963]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]])}
False
tensor([-14.0000, -14.9375, 2.2969, ..., -8.3125, -9.5625, -8.1875],
grad_fn=<SelectBackward0>)
tensor([-13.9375, -14.9375, 2.3125, ..., -8.2500, -9.5000, -8.1875],
grad_fn=<SelectBackward0>)
```
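For reference, a small hedged addition to the snippet above that quantifies the mismatch (it assumes `batched_logit` and `single_logit` from the reproduction code are still in scope):
```python
# how large is the batched-vs-single discrepancy for the bfloat16 model?
diff = (batched_logit - single_logit).abs()
print(diff.max().item(), diff.mean().item())
```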
### Expected behavior
**The output logits should pass at least torch.allclose(batched_logit, single_logit, atol=1e-4). However, they are failing at torch.allclose(batched_logit, single_logit, atol=1e-1).**
This issue does not occur for float32 model.
### Code for float32 model
```python
# assumes the model was reloaded in float32 (the default dtype) and the logits recomputed as above
print(torch.allclose(batched_logit, single_logit, atol=1e-5))
print(batched_logit)
print(single_logit)
```
### Outputs of float32 model
```
True
tensor([-13.9725, -14.9407, 2.3230, ..., -8.2574, -9.5058, -8.1696],
grad_fn=<SelectBackward0>)
tensor([-13.9725, -14.9407, 2.3230, ..., -8.2574, -9.5058, -8.1696],
grad_fn=<SelectBackward0>)
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26869/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/26869/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26868 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26868/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26868/comments | https://api.github.com/repos/huggingface/transformers/issues/26868/events | https://github.com/huggingface/transformers/issues/26868 | 1,947,307,436 | I_kwDOCUB6oc50EY2s | 26,868 | Dataset 'jsonl' doesn't exist on the Hub in examples/pytorch/translation/run_translation.py | {
"login": "mathiasesn",
"id": 27091759,
"node_id": "MDQ6VXNlcjI3MDkxNzU5",
"avatar_url": "https://avatars.githubusercontent.com/u/27091759?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mathiasesn",
"html_url": "https://github.com/mathiasesn",
"followers_url": "https://api.github.com/users/mathiasesn/followers",
"following_url": "https://api.github.com/users/mathiasesn/following{/other_user}",
"gists_url": "https://api.github.com/users/mathiasesn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mathiasesn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mathiasesn/subscriptions",
"organizations_url": "https://api.github.com/users/mathiasesn/orgs",
"repos_url": "https://api.github.com/users/mathiasesn/repos",
"events_url": "https://api.github.com/users/mathiasesn/events{/privacy}",
"received_events_url": "https://api.github.com/users/mathiasesn/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"A simple fix would be to add:\r\n\r\n```python\r\n if extension == \"jsonl\":\r\n extension = \"json\"\r\n```\r\n\r\nafter [line 373 in examples/pytorch/translation/run_translation.py](https://github.com/huggingface/transformers/blob/4b423e607455a7aca1edc4beaa713da58e78ef0b/examples/pytorch/translation/run_translation.py#L373).",
"cc @lhoestq if you think the script should be udated or if it's a dataset issue ?",
"`datasets` has a number of generic dataset builders: \"text\", \"parquet\", \"json\", \"imagefolder\" etc.\r\nAnd the \"json\" builder supports both json and jsonl.\r\n\r\nSo yes the script should be updated. The fix proposed by @mathiasesn does the job !\r\n\r\nAlso if you want you can improve the variable names to do\r\n\r\n```python\r\nload_dataset(builder_name)\r\n```\r\n\r\ninstead of \r\n\r\n```python\r\nload_dataset(extension)\r\n```\r\n\r\nwhich is incorrect.\r\n",
"Nice, @mathiasesn would you like to open a pr? ",
"> Nice, @mathiasesn would you like to open a pr?\r\n\r\nYes.",
"PR: https://github.com/huggingface/transformers/pull/26924, @ArthurZucker and @lhoestq.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,697 | 1,700 | 1,700 | CONTRIBUTOR | null | ### System Info
- `transformers` version: 4.35.0.dev0
- Platform: Linux-6.2.0-34-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.3
- Accelerate version: 0.24.0.dev0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0+cu121 (True)
- Tensorflow version (GPU?): 2.10.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@patil-suraj
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
To reproduce the issue, follow the steps outlined below:
1. Environment Setup:
- Python >= 3.10
- pip install datasets
2. Dataset Files:
Have train.jsonl and validation.jsonl files present at the paths specified.
3. Code to Reproduce the Issue:
```python
import datasets
# Setting data file paths
data_files = {}
data_files["train"] = "/home/mathias/translation_scripts/data/opus/huggingface/train_opus_en-da.jsonl"
data_files["validation"] = "/home/mathias/translation_scripts/data/opus/huggingface/validation_opus_en-da.jsonl"
print(data_files)
# Loading the dataset
from datasets import load_dataset
load_dataset("jsonl", data_files=data_files)
```
4. Error Message:
When running the code above, the following error message is generated:
```bash
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/mathias/.local/lib/python3.10/site-packages/datasets/load.py", line 2129, in load_dataset
builder_instance = load_dataset_builder(
File "/home/mathias/.local/lib/python3.10/site-packages/datasets/load.py", line 1815, in load_dataset_builder
dataset_module = dataset_module_factory(
File "/home/mathias/.local/lib/python3.10/site-packages/datasets/load.py", line 1508, in dataset_module_factory
raise FileNotFoundError(
FileNotFoundError: Couldn't find a dataset script at /home/mathias/translation_scripts/jsonl/jsonl.py or any data file in the same directory. Couldn't find 'jsonl' on the Hugging Face Hub either: FileNotFoundError: Dataset 'jsonl' doesn't exist on the Hub. If the repo is private or gated, make sure to log in with `huggingface-cli login`.
```
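As a side note (and as the comments above point out), the generic `json` builder in `datasets` also reads `.jsonl` files, so the following workaround appears to load the data; the paths are the same illustrative ones as above:
```python
from datasets import load_dataset

data_files = {
    "train": "/home/mathias/translation_scripts/data/opus/huggingface/train_opus_en-da.jsonl",
    "validation": "/home/mathias/translation_scripts/data/opus/huggingface/validation_opus_en-da.jsonl",
}
# "json" (not "jsonl") is the builder name; it handles both .json and .jsonl files
dataset = load_dataset("json", data_files=data_files)
```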
### Expected behavior
With the following python code:
```python
data_files = {}
data_files["train"] = "path/to/train.jsonl"
data_files["validation"] = "path/to/validation.jsonl"
load_dataset("jsonl", data_files=data_files)
```
The following output should be expected:
```bash
Downloading data files: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 29746.84it/s]
Extracting data files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 1740.74it/s]
Generating train split: 1000000 examples [00:00, 3119077.05 examples/s]
Generating validation split: 2000 examples [00:00, 1100578.33 examples/s]
DatasetDict({
train: Dataset({
features: ['translation'],
num_rows: 1000000
})
validation: Dataset({
features: ['translation'],
num_rows: 2000
})
})
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26868/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26868/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26867 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26867/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26867/comments | https://api.github.com/repos/huggingface/transformers/issues/26867/events | https://github.com/huggingface/transformers/issues/26867 | 1,947,220,481 | I_kwDOCUB6oc50EDoB | 26,867 | Benchmark whether we can get better performance out of ZeRO-3 vs. ZeRO-2 on a single GPU | {
"login": "rajveer43",
"id": 64583161,
"node_id": "MDQ6VXNlcjY0NTgzMTYx",
"avatar_url": "https://avatars.githubusercontent.com/u/64583161?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rajveer43",
"html_url": "https://github.com/rajveer43",
"followers_url": "https://api.github.com/users/rajveer43/followers",
"following_url": "https://api.github.com/users/rajveer43/following{/other_user}",
"gists_url": "https://api.github.com/users/rajveer43/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rajveer43/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rajveer43/subscriptions",
"organizations_url": "https://api.github.com/users/rajveer43/orgs",
"repos_url": "https://api.github.com/users/rajveer43/repos",
"events_url": "https://api.github.com/users/rajveer43/events{/privacy}",
"received_events_url": "https://api.github.com/users/rajveer43/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @rajveer43, what is this issue? Please open issues relevant to what we're trying to do with this repository. These issues are not helpful and do not seem related to `transformers`. I appreciate your understanding.",
"Actually I was working on one of the documentation issues for a while and saw this thing marked as `todo`I understand your concern about it. I obey to follow the contributing guidelines strictly., @LysandreJik "
] | 1,697 | 1,697 | 1,697 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26867/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26867/timeline | not_planned | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26866 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26866/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26866/comments | https://api.github.com/repos/huggingface/transformers/issues/26866/events | https://github.com/huggingface/transformers/issues/26866 | 1,947,205,531 | I_kwDOCUB6oc50D_-b | 26,866 | Consider Adding Practical Usage Examples from Issue comment #8771 to Documentation | {
"login": "rajveer43",
"id": 64583161,
"node_id": "MDQ6VXNlcjY0NTgzMTYx",
"avatar_url": "https://avatars.githubusercontent.com/u/64583161?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rajveer43",
"html_url": "https://github.com/rajveer43",
"followers_url": "https://api.github.com/users/rajveer43/followers",
"following_url": "https://api.github.com/users/rajveer43/following{/other_user}",
"gists_url": "https://api.github.com/users/rajveer43/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rajveer43/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rajveer43/subscriptions",
"organizations_url": "https://api.github.com/users/rajveer43/orgs",
"repos_url": "https://api.github.com/users/rajveer43/repos",
"events_url": "https://api.github.com/users/rajveer43/events{/privacy}",
"received_events_url": "https://api.github.com/users/rajveer43/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I think we can keep it as is",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,697 | 1,700 | 1,700 | CONTRIBUTOR | null | I was going through the documentation file at the path `docs/source/en/main_classes/deepspeed.md` and this line
https://github.com/huggingface/transformers/blob/4b423e607455a7aca1edc4beaa713da58e78ef0b/docs/source/en/main_classes/deepspeed.md?plain=1#L206
and saw below comment in documentation
`For some practical usage examples, please, see this [post](https://github.com/huggingface/transformers/issues/8771#issuecomment-759248400).`
https://github.com/huggingface/transformers/issues/8771#issuecomment-759248400
so I was wondering whether we should add those examples to a dedicated file or as a tutorial.
P.S. I did not check whether this issue already exists or whether a solution has already been made. Thank you for your patience.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26866/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26866/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26865 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26865/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26865/comments | https://api.github.com/repos/huggingface/transformers/issues/26865/events | https://github.com/huggingface/transformers/pull/26865 | 1,947,171,876 | PR_kwDOCUB6oc5dAQNw | 26,865 | Fix `resize_token_embeddings` (#26861) | {
"login": "czy-orange",
"id": 55686423,
"node_id": "MDQ6VXNlcjU1Njg2NDIz",
"avatar_url": "https://avatars.githubusercontent.com/u/55686423?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/czy-orange",
"html_url": "https://github.com/czy-orange",
"followers_url": "https://api.github.com/users/czy-orange/followers",
"following_url": "https://api.github.com/users/czy-orange/following{/other_user}",
"gists_url": "https://api.github.com/users/czy-orange/gists{/gist_id}",
"starred_url": "https://api.github.com/users/czy-orange/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/czy-orange/subscriptions",
"organizations_url": "https://api.github.com/users/czy-orange/orgs",
"repos_url": "https://api.github.com/users/czy-orange/repos",
"events_url": "https://api.github.com/users/czy-orange/events{/privacy}",
"received_events_url": "https://api.github.com/users/czy-orange/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Perhaps I should explain how I found this problem and why this fix looks necessary in my situation. \r\n\r\nLet's say there is a model, `model = AutoModel.from_pretrained('bert-base-uncased')`. The `model` is updated by ordinary `backward` and `step` methods in the training phase. At the same time, there is another momentum model `model_m`, whose parameters' `requires_grad` are set to `False`. The momentum model `model_m` is updated by weighted sum of its parameters and `model`’s parameters. \r\n\r\nWhen transferring `model` and `model_m` to other tasks (where updating two models is still needed), vocabulary tables may change and resizing word embeddings is needed. After applying `resize_token_embeddings` to `model_m`, `model_m.embeddings.word_embeddings.weight.requires_grad` is silently changed from `False` to `True` (as discussed in #26861 ).\r\n\r\nI was able to find this problem only because an error says that “your module has parameters that were not used in producing loss”."
] | 1,697 | 1,700 | 1,700 | CONTRIBUTOR | null | * Fix `resize_token_embeddings` about `requires_grad`
The method `resize_token_embeddings` should keep `requires_grad` unchanged for all parameters in embeddings.
Previously, `resize_token_embeddings` always set `requires_grad` to `True`. After fixed, `resize_token_embeddings` copy the `requires_grad` attribute in the old embeddings.
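A minimal sketch of the behavior this PR targets (the checkpoint and the new vocabulary size are illustrative, taken from the linked issue):
```python
from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-uncased")
model.embeddings.word_embeddings.weight.requires_grad = False  # freeze the embeddings

model.resize_token_embeddings(30526)

# with this fix, the resized embedding stays frozen instead of silently becoming trainable
assert model.embeddings.word_embeddings.weight.requires_grad is False
```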
# What does this PR do?
Fixes #26861
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- issue: #26861
- forum: [forum discussion](https://discuss.huggingface.co/t/how-to-freez-a-model/12819/2)
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26865/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26865/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26865",
"html_url": "https://github.com/huggingface/transformers/pull/26865",
"diff_url": "https://github.com/huggingface/transformers/pull/26865.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26865.patch",
"merged_at": 1700589108000
} |
https://api.github.com/repos/huggingface/transformers/issues/26864 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26864/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26864/comments | https://api.github.com/repos/huggingface/transformers/issues/26864/events | https://github.com/huggingface/transformers/issues/26864 | 1,947,130,755 | I_kwDOCUB6oc50DtuD | 26,864 | CLAP Finetuning not working | {
"login": "LeoPerelli",
"id": 40518973,
"node_id": "MDQ6VXNlcjQwNTE4OTcz",
"avatar_url": "https://avatars.githubusercontent.com/u/40518973?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LeoPerelli",
"html_url": "https://github.com/LeoPerelli",
"followers_url": "https://api.github.com/users/LeoPerelli/followers",
"following_url": "https://api.github.com/users/LeoPerelli/following{/other_user}",
"gists_url": "https://api.github.com/users/LeoPerelli/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LeoPerelli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LeoPerelli/subscriptions",
"organizations_url": "https://api.github.com/users/LeoPerelli/orgs",
"repos_url": "https://api.github.com/users/LeoPerelli/repos",
"events_url": "https://api.github.com/users/LeoPerelli/events{/privacy}",
"received_events_url": "https://api.github.com/users/LeoPerelli/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey! I seems that there is a `return_loss` argument that can be set when calling the forward of the model. I am not really sure how the trainer handle's it internally but you should be able to pass it as an argument I believe",
"Indeed, see CLIP's training script, which is very similar to CLAP: https://github.com/huggingface/transformers/blob/main/examples/pytorch/contrastive-image-text/run_clip.py#L237.\r\n\r\nOne needs to additionally pass the `return_loss` argument, which the script does in the `collate_fn`, which is then passed to the `Trainer`.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,697 | 1,700 | 1,700 | NONE | null | ### System Info
Hello! I am trying to finetune a CLAP model, without much luck. I am getting the following error:
`ValueError: The model did not return a loss from the inputs, only the following keys: logits_per_audio,logits_per_text,text_embeds,audio_embeds,text_model_output,audio_model_output. For reference, the inputs it received are input_ids,attention_mask,input_features.`
Is this expected? Could you also clarify which checkpoint this pre-trained CLAP model corresponds to? There are multiple checkpoints in LAION's GitHub CLAP repo.
Thanks for your time! @ArthurZucker @younesbelkada
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [x] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
```python
from datasets import load_dataset
from transformers import AutoProcessor, ClapModel, Trainer, TrainingArguments

def preprocess_function(data):
    audios = [audio['array'] for audio in data['audio']]
    text = [text for text in data['transcription']]
    inputs = processor(
        text=text,
        audios=audios,
        padding=True,
        return_tensors='pt'
    )
    return inputs
dataset = load_dataset("audiofolder", data_dir="clap_data")
batch_size = 4
model = ClapModel.from_pretrained("laion/clap-htsat-unfused")
processor = AutoProcessor.from_pretrained("laion/clap-htsat-unfused")
dataset_processed_test = dataset['test'].map(preprocess_function, remove_columns=["audio", "transcription"] , batched=True)
dataset_processed_train = dataset['train'].map(preprocess_function,remove_columns=["audio", "transcription"] , batched=True)
dataset_processed_test.set_format("torch", columns=["input_ids","attention_mask","input_features"], output_all_columns=True)
dataset_processed_train.set_format("torch", columns=["input_ids","attention_mask","input_features"], output_all_columns=True)
args = TrainingArguments(
    # the original snippet had two adjacent string literals for output_dir; merged into one illustrative path
    output_dir="clap_trainings/debug",
    evaluation_strategy="epoch",
    save_strategy="epoch",
    learning_rate=1e-5,
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    num_train_epochs=50,
    warmup_ratio=0.05,
    logging_steps=10
)
trainer = Trainer(
    model,
    args,
    train_dataset=dataset_processed_train,
    eval_dataset=dataset_processed_test
)
trainer.train()
```
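Following the pointers in the comments above (CLAP's `forward` accepts a `return_loss` flag, and the analogous CLIP example script passes it through the data collator), here is a hedged sketch of one possible fix; the collator below is illustrative, not a confirmed solution:
```python
from transformers import Trainer, default_data_collator

def collate_fn(features):
    batch = default_data_collator(features)
    batch["return_loss"] = True  # ask ClapModel to compute the contrastive loss
    return batch

trainer = Trainer(
    model,
    args,
    train_dataset=dataset_processed_train,
    eval_dataset=dataset_processed_test,
    data_collator=collate_fn,
)
```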
### Expected behavior
The model should train normally. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26864/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26864/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26863 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26863/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26863/comments | https://api.github.com/repos/huggingface/transformers/issues/26863/events | https://github.com/huggingface/transformers/pull/26863 | 1,947,057,577 | PR_kwDOCUB6oc5c_3Br | 26,863 | Generate: improve docstrings for custom stopping criteria | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,697 | 1,697 | 1,697 | MEMBER | null | # What does this PR do?
Addresses https://github.com/huggingface/transformers/issues/18068, and a few other GH issues with the same problem
When a user defines a custom stopping criterion that makes use of the `scores` input, an exception will be thrown if `return_dict_in_generate=True, output_scores=True` are not passed (`scores` is `None`). This is because we only keep track of `scores` when those two flags are passed.
I've added this requirement in several places in the docstrings, to reduce the odds of it being missed.
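For illustration, a hedged sketch of a score-based stopping criterion together with the flags it needs; the criterion's logic and threshold are illustrative and not part of this PR:
```python
import torch
from transformers import StoppingCriteria, StoppingCriteriaList

class ConfidentStop(StoppingCriteria):
    """Stop once the model assigns very high probability to its latest token."""

    def __call__(self, input_ids, scores, **kwargs):
        if scores is None:
            # happens when generate() is called without the two flags below
            raise ValueError("Pass return_dict_in_generate=True, output_scores=True to generate()")
        return bool(torch.softmax(scores[-1], dim=-1).max() > 0.99)

# outputs = model.generate(
#     **inputs,
#     stopping_criteria=StoppingCriteriaList([ConfidentStop()]),
#     return_dict_in_generate=True,
#     output_scores=True,
# )
```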
For completeness: I've also considered always keeping track of scores. However, this adds a non-negligible overhead -- in BF16 for the 7B Mistral model, we would need an extra ~50MB per 1000 generated tokens (and batching would act as a multiplier here). Ditto for 4-bit BNB. Since custom stopping criteria are seldom used, this didn't look like a good tradeoff :) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26863/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26863/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26863",
"html_url": "https://github.com/huggingface/transformers/pull/26863",
"diff_url": "https://github.com/huggingface/transformers/pull/26863.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26863.patch",
"merged_at": 1697619301000
} |
https://api.github.com/repos/huggingface/transformers/issues/26862 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26862/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26862/comments | https://api.github.com/repos/huggingface/transformers/issues/26862/events | https://github.com/huggingface/transformers/issues/26862 | 1,947,030,063 | I_kwDOCUB6oc50DVIv | 26,862 | Support reverse_prompt | {
"login": "louisabraham",
"id": 13174805,
"node_id": "MDQ6VXNlcjEzMTc0ODA1",
"avatar_url": "https://avatars.githubusercontent.com/u/13174805?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/louisabraham",
"html_url": "https://github.com/louisabraham",
"followers_url": "https://api.github.com/users/louisabraham/followers",
"following_url": "https://api.github.com/users/louisabraham/following{/other_user}",
"gists_url": "https://api.github.com/users/louisabraham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/louisabraham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/louisabraham/subscriptions",
"organizations_url": "https://api.github.com/users/louisabraham/orgs",
"repos_url": "https://api.github.com/users/louisabraham/repos",
"events_url": "https://api.github.com/users/louisabraham/events{/privacy}",
"received_events_url": "https://api.github.com/users/louisabraham/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | open | false | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hey! I am not sure I understand what you are requesting here, the custom logits processors are specifically designed for this! ",
"I'm interested in addressing this issue. Here are two concise solutions:\r\n\r\n1. `stopping_criteria Parameter`: Introduce a stopping_criteria parameter directly in the generate method. Users can provide a list of stop words, and the generation process will halt when any of these words are encountered.\r\n2. `Extend StoppingCriteria Class` : Enhance the StoppingCriteria class to include a predefined set of stop words and logic for checking against the generated text.\r\n\r\nI also encountered a similar issue where it was necessary to stop the chatbot when specific user inputs like \"bye\" or \"thank you\" were detected. Due to the absence of this feature, I had to manually check for these user inputs to end the chatbot with a fixed response. This highlights the importance of implementing a more automated stopping mechanism. \r\n\r\nI would greatly appreciate your guidance in the process. Please assign this to me for implementation.",
"> Hey! I am not sure I understand what you are requesting here, the custom logits processors are specifically designed for this!\r\n\r\nWhat is your idea? Using a custom logit processor to detect the stopping word and output EOS instead? I think this is a bit overkill and such a common use case could benefit from a custom parameter or a pre-defined LogitsProcessor.\r\n\r\n> I'm interested in addressing this issue. Here are two concise solutions:\r\n> \r\n> 1. `stopping_criteria Parameter`: Introduce a stopping_criteria parameter directly in the generate method. Users can provide a list of stop words, and the generation process will halt when any of these words are encountered.\r\n> \r\n> 2. `Extend StoppingCriteria Class` : Enhance the StoppingCriteria class to include a predefined set of stop words and logic for checking against the generated text.\r\n> \r\n> \r\n> I also encountered a similar issue where it was necessary to stop the chatbot when specific user inputs like \"bye\" or \"thank you\" were detected. Due to the absence of this feature, I had to manually check for these user inputs to end the chatbot with a fixed response. This highlights the importance of implementing a more automated stopping mechanism.\r\n> \r\n> I would greatly appreciate your guidance in the process. Please assign this to me for implementation.\r\n\r\nIMO a `StoppingCriteria` is not able to do this for batched generation queries as they don't get the prompt. A `LogitsProcessor` cannot filter words with multiple tokens either as they would need to erase the end of the output. Another filtering step / condition is needed to erase the last words after the `LogitsProcessor` stopped the generation.",
"As I mentioned in #23852, (see [this comment](https://github.com/huggingface/transformers/issues/23852#issuecomment-1578399293)) a StoppingCriteria is a good way to stop on a user specific list of tokens and is not an overkill. \r\nGenerate already has a `stopping_criteria` parameter. See the doc [here](https://huggingface.co/docs/transformers/internal/generation_utils#transformers.StoppingCriteria).\r\n\r\nNow that chat template are becoming a lot more, this simple criteria could make it's way to the library, with a more general `generation_config.eos_sequence`. But from my understanding, the[ `eos_token_id`](https://huggingface.co/docs/transformers/main_classes/text_generation#transformers.GenerationConfig.eos_token_id) can already be a list:\r\n\r\n> eos_token_id (`Union[int, List[int]]`, *optional*):\r\n> The id of the *end-of-sequence* token. Optionally, use a list to set multiple *end-of-sequence* tokens.\r\n\r\nso I don't really see the point of creating a new list. \r\n\r\nI don't understand why it would not work for batches could you enlighten me?\r\n",
"Maybe my understanding is wrong but as I said above, `StoppingCriteria` just doesn't get enough information to do the job.\r\n\r\nI think `eos_token_id` can check for multiple _posible_ stopping tokens, but not a _sequence_ of tokens. Please tell me if I'm mistaken. It will work for batches but not for reverse prompts with multiple tokens.\r\n\r\n",
"cc @gante who is more expert on this! \r\n",
"Adding support for multiple tokens as EOS would be useful indeed. I've seen similar requests (often under the name of sentinel token). \r\n\r\nHowever, it requires a bit of engineering, as:\r\na) a single EOS token is not a `StoppingCriteria`, but it should be\r\nb) we could then expand it to accept arbitrary lists of tokens -- please not that it has to be done at a token level, not at a text level, otherwise it will be painfully slow\r\n\r\nI've added this to my todo list :) "
] | 1,697 | 1,698 | null | NONE | null | ### Feature request
Add a simple option to provide a stopping criteria in the generate method. This option could also be a stopping criteria.
### Motivation
Llama.cpp supports an option `--reverse-prompt`. It is very useful for chat models to stop at words like "USER:".
### Your contribution
This is how I implemented it, but it is not ideal:
```python
from collections import Counter

from transformers import StoppingCriteria, StoppingCriteriaList

def count_stop_words(text, stop_words):
    return Counter({expr: text.count(expr) for expr in stop_words})

class MyStoppingCriteria(StoppingCriteria):
    def __init__(self, stop_counter):
        self.stop_counter = stop_counter

    def __call__(self, input_ids, scores, **kwargs):
        # Get the generated text as a string (tokenizer is assumed to be defined elsewhere)
        generated_text = tokenizer.decode(input_ids[0])
        # Stop as soon as any reverse-prompt word appears more often than in the original prompt
        counter = count_stop_words(generated_text, self.stop_counter)
        if counter - self.stop_counter:
            return True
        return False
```
Then you can use this parameter:
```python
stopping_criteria=StoppingCriteriaList(
    [
        MyStoppingCriteria(
            count_stop_words(PROMPT, ["ASSISTANT:", "USER:", "SYSTEM:"])
        )
    ]
),
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26862/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26862/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26861 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26861/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26861/comments | https://api.github.com/repos/huggingface/transformers/issues/26861/events | https://github.com/huggingface/transformers/issues/26861 | 1,946,713,993 | I_kwDOCUB6oc50CH-J | 26,861 | `resize_token_embeddings` always sets `requires_grad` (of parameters in the resized embedding) to `True` | {
"login": "czy-orange",
"id": 55686423,
"node_id": "MDQ6VXNlcjU1Njg2NDIz",
"avatar_url": "https://avatars.githubusercontent.com/u/55686423?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/czy-orange",
"html_url": "https://github.com/czy-orange",
"followers_url": "https://api.github.com/users/czy-orange/followers",
"following_url": "https://api.github.com/users/czy-orange/following{/other_user}",
"gists_url": "https://api.github.com/users/czy-orange/gists{/gist_id}",
"starred_url": "https://api.github.com/users/czy-orange/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/czy-orange/subscriptions",
"organizations_url": "https://api.github.com/users/czy-orange/orgs",
"repos_url": "https://api.github.com/users/czy-orange/repos",
"events_url": "https://api.github.com/users/czy-orange/events{/privacy}",
"received_events_url": "https://api.github.com/users/czy-orange/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue was raised in forum but was not solved. This is the [original post](https://discuss.huggingface.co/t/how-to-freez-a-model/12819/2).",
"Hey! Seems like this has been the case for quite a while, but I think it makes sense to keep this info as well, preventing surprises. Would you like to open a PR for a fix? ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,697 | 1,700 | 1,700 | CONTRIBUTOR | null | ### System Info
- `transformers` version: 4.26.1
- Platform: Linux-3.10.0-1160.90.1.el7.x86_64-x86_64-with-glibc2.31
- Python version: 3.11.3
- Huggingface_hub version: 0.17.2
- PyTorch version (GPU?): 2.0.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
``` Python console
In [1]: from transformers import AutoModel
In [2]: model = AutoModel.from_pretrained('bert-base-uncased')
In [3]: model.embeddings.word_embeddings.weight.requires_grad
Out[3]: True
In [4]: model.embeddings.word_embeddings.weight.requires_grad = False
In [5]: model.embeddings.word_embeddings.weight.requires_grad
Out[5]: False
In [6]: model.resize_token_embeddings(30526)
Out[6]: Embedding(30526, 768)
In [7]: model.embeddings.word_embeddings.weight.requires_grad
Out[7]: True
```
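Until this is fixed, a hedged workaround (continuing the session above) is to remember and re-apply the flag around the call:
```python
keep = model.get_input_embeddings().weight.requires_grad   # remember the flag
model.resize_token_embeddings(30526)
model.get_input_embeddings().weight.requires_grad_(keep)   # restore it on the new embedding
```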
### Expected behavior
The `resize_token_embeddings` method is not supposed to change the `requires_grad` attribute of parameters in the embedding layer. However, the current version of `resize_token_embeddings` always sets `requires_grad` to `True`, which is not desirable when `requires_grad` in the old embedding is `False`. In my view, `resize_token_embeddings` should only perform the resizing and keep all other attributes unchanged.
After reading the source code carefully, I found that `resize_token_embeddings` internally creates a new embedding, which does not copy attributes from the old embedding except for `dtype` and `device`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26861/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26861/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26860 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26860/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26860/comments | https://api.github.com/repos/huggingface/transformers/issues/26860/events | https://github.com/huggingface/transformers/pull/26860 | 1,946,647,664 | PR_kwDOCUB6oc5c-esb | 26,860 | fix DPR token_type_ids | {
"login": "Hannibal046",
"id": 38466901,
"node_id": "MDQ6VXNlcjM4NDY2OTAx",
"avatar_url": "https://avatars.githubusercontent.com/u/38466901?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Hannibal046",
"html_url": "https://github.com/Hannibal046",
"followers_url": "https://api.github.com/users/Hannibal046/followers",
"following_url": "https://api.github.com/users/Hannibal046/following{/other_user}",
"gists_url": "https://api.github.com/users/Hannibal046/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Hannibal046/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Hannibal046/subscriptions",
"organizations_url": "https://api.github.com/users/Hannibal046/orgs",
"repos_url": "https://api.github.com/users/Hannibal046/repos",
"events_url": "https://api.github.com/users/Hannibal046/events{/privacy}",
"received_events_url": "https://api.github.com/users/Hannibal046/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,697 | 1,700 | 1,700 | NONE | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes https://github.com/huggingface/transformers/issues/26817
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@Arthur
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26860/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26860/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26860",
"html_url": "https://github.com/huggingface/transformers/pull/26860",
"diff_url": "https://github.com/huggingface/transformers/pull/26860.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26860.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26859 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26859/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26859/comments | https://api.github.com/repos/huggingface/transformers/issues/26859/events | https://github.com/huggingface/transformers/issues/26859 | 1,946,527,580 | I_kwDOCUB6oc50Badc | 26,859 | Running official `apply_chat_template` throws warnings | {
"login": "Boltzmachine",
"id": 56542320,
"node_id": "MDQ6VXNlcjU2NTQyMzIw",
"avatar_url": "https://avatars.githubusercontent.com/u/56542320?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Boltzmachine",
"html_url": "https://github.com/Boltzmachine",
"followers_url": "https://api.github.com/users/Boltzmachine/followers",
"following_url": "https://api.github.com/users/Boltzmachine/following{/other_user}",
"gists_url": "https://api.github.com/users/Boltzmachine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Boltzmachine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Boltzmachine/subscriptions",
"organizations_url": "https://api.github.com/users/Boltzmachine/orgs",
"repos_url": "https://api.github.com/users/Boltzmachine/repos",
"events_url": "https://api.github.com/users/Boltzmachine/events{/privacy}",
"received_events_url": "https://api.github.com/users/Boltzmachine/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey this is a tokenizer issue and will be fixed by #26570",
"Any local workaround I can make to avoid this message?",
"Yes you can initialize the tokenizer with `verbose = False`:\r\n```python \r\n>> from transformers import AutoTokenizer\r\n>> tokenizer = AutoTokenizer.from_pretrained(\"meta-llama/Llama-2-7b-chat-hf\", verbose = False)\r\n```"
] | 1,697 | 1,697 | 1,697 | NONE | null | ### System Info
- `transformers` version: 4.34.0
- Platform: Linux-5.10.0-26-cloud-amd64-x86_64-with-glibc2.31
- Python version: 3.9.18
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.4.0
- Accelerate version: 0.23.0
- Accelerate config: not found
- PyTorch version (GPU?): 1.13.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@ArthurZucker
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run the official scripts from https://huggingface.co/docs/transformers/chat_templating
```python
>> from transformers import AutoTokenizer
>> tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
>> chat = [
{"role": "user", "content": "Hello, how are you?"},
{"role": "assistant", "content": "I'm doing great. How can I help you today?"},
{"role": "user", "content": "I'd like to show off how chat templating works!"},
]
>> tokenizer.use_default_system_prompt = False
>> tokenizer.apply_chat_template(chat, tokenize=False)
"<s>[INST] Hello, how are you? [/INST] I'm doing great. How can I help you today? </s><s>[INST] I'd like to show off how chat templating works! [/INST]"
```
This throws warnings on my end
```
>>> tokenizer.apply_chat_template(chat, tokenize=False)
Using sep_token, but it is not set yet.
Using pad_token, but it is not set yet.
Using cls_token, but it is not set yet.
Using mask_token, but it is not set yet.
```
### Expected behavior
No warnings | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26859/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26859/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26858 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26858/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26858/comments | https://api.github.com/repos/huggingface/transformers/issues/26858/events | https://github.com/huggingface/transformers/pull/26858 | 1,946,486,257 | PR_kwDOCUB6oc5c972K | 26,858 | [docstring] Fix docstring for LukeConfig | {
"login": "louietouie",
"id": 80791777,
"node_id": "MDQ6VXNlcjgwNzkxNzc3",
"avatar_url": "https://avatars.githubusercontent.com/u/80791777?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/louietouie",
"html_url": "https://github.com/louietouie",
"followers_url": "https://api.github.com/users/louietouie/followers",
"following_url": "https://api.github.com/users/louietouie/following{/other_user}",
"gists_url": "https://api.github.com/users/louietouie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/louietouie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/louietouie/subscriptions",
"organizations_url": "https://api.github.com/users/louietouie/orgs",
"repos_url": "https://api.github.com/users/louietouie/repos",
"events_url": "https://api.github.com/users/louietouie/events{/privacy}",
"received_events_url": "https://api.github.com/users/louietouie/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@ydshieh , thanks! I waited for the 7 CI tests to be green before tagging."
] | 1,697 | 1,697 | 1,697 | CONTRIBUTOR | null | Part of Docstring Sprint #26638
One small bug I noticed, which can be seen in my first commit, was that check_docstrings.py added a duplicate default specification for use_entity_aware_attention, which I deleted in my second commit.
## Before submitting
- [Y] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [Y] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [Y] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [N] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [N] Did you write any new necessary tests? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26858/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26858/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26858",
"html_url": "https://github.com/huggingface/transformers/pull/26858",
"diff_url": "https://github.com/huggingface/transformers/pull/26858.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26858.patch",
"merged_at": 1697563846000
} |
https://api.github.com/repos/huggingface/transformers/issues/26857 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26857/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26857/comments | https://api.github.com/repos/huggingface/transformers/issues/26857/events | https://github.com/huggingface/transformers/issues/26857 | 1,946,480,811 | I_kwDOCUB6oc50BPCr | 26,857 | Request support about the flash attention 2 for the gpt_bigcode models | {
"login": "DongHande",
"id": 45357817,
"node_id": "MDQ6VXNlcjQ1MzU3ODE3",
"avatar_url": "https://avatars.githubusercontent.com/u/45357817?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DongHande",
"html_url": "https://github.com/DongHande",
"followers_url": "https://api.github.com/users/DongHande/followers",
"following_url": "https://api.github.com/users/DongHande/following{/other_user}",
"gists_url": "https://api.github.com/users/DongHande/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DongHande/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DongHande/subscriptions",
"organizations_url": "https://api.github.com/users/DongHande/orgs",
"repos_url": "https://api.github.com/users/DongHande/repos",
"events_url": "https://api.github.com/users/DongHande/events{/privacy}",
"received_events_url": "https://api.github.com/users/DongHande/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey! Will be fixed by #26479",
"Hope it will be soon fixed and merged. "
] | 1,697 | 1,697 | 1,697 | NONE | null | When I run the code,
```
checkpoint = "bigcode/starcoder"
model = AutoModelForCausalLM.from_pretrained(checkpoint, use_flash_attention_2=True)
```
it shows that "ValueError: The current architecture does not support Flash Attention 2.0. Please open an issue on GitHub to request support for this architecture: https://github.com/huggingface/transformers/issues/new".
The reason is that the gpt_bigcode (https://github.com/huggingface/transformers/blob/main/src/transformers/models/gpt_bigcode/modeling_gpt_bigcode.py) does not support flash attention 2 yet.
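For reference, a quick way to check whether an architecture already advertises Flash Attention 2 support is to look at the (private) `_supports_flash_attn_2` class attribute that `PreTrainedModel._check_and_enable_flash_attn_2` consults — this is an internal detail, so treat the snippet below as a sketch only:
```python
from transformers import GPTBigCodeForCausalLM, LlamaForCausalLM

# Private flag read by PreTrainedModel._check_and_enable_flash_attn_2;
# False means no Flash Attention 2 attention class is wired up for the architecture yet.
print(GPTBigCodeForCausalLM._supports_flash_attn_2)  # False on v4.34
print(LlamaForCausalLM._supports_flash_attn_2)       # True on v4.34
```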
I am opening this issue to request Flash Attention 2 support for the gpt_bigcode models. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26857/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26857/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26856 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26856/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26856/comments | https://api.github.com/repos/huggingface/transformers/issues/26856/events | https://github.com/huggingface/transformers/pull/26856 | 1,946,442,304 | PR_kwDOCUB6oc5c9yog | 26,856 | fix set_transform link docs | {
"login": "diegulio",
"id": 68176588,
"node_id": "MDQ6VXNlcjY4MTc2NTg4",
"avatar_url": "https://avatars.githubusercontent.com/u/68176588?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/diegulio",
"html_url": "https://github.com/diegulio",
"followers_url": "https://api.github.com/users/diegulio/followers",
"following_url": "https://api.github.com/users/diegulio/following{/other_user}",
"gists_url": "https://api.github.com/users/diegulio/gists{/gist_id}",
"starred_url": "https://api.github.com/users/diegulio/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/diegulio/subscriptions",
"organizations_url": "https://api.github.com/users/diegulio/orgs",
"repos_url": "https://api.github.com/users/diegulio/repos",
"events_url": "https://api.github.com/users/diegulio/events{/privacy}",
"received_events_url": "https://api.github.com/users/diegulio/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,697 | 1,697 | 1,697 | CONTRIBUTOR | null | # What does this PR do?
This PR simply fixes a link to the set_transform documentation.
<!-- Remove if not applicable -->
Fixes # (issue)
Broken link to set_transform docs
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@stevhliu @MKhalusova
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26856/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26856/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26856",
"html_url": "https://github.com/huggingface/transformers/pull/26856",
"diff_url": "https://github.com/huggingface/transformers/pull/26856.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26856.patch",
"merged_at": 1697793397000
} |
https://api.github.com/repos/huggingface/transformers/issues/26855 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26855/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26855/comments | https://api.github.com/repos/huggingface/transformers/issues/26855/events | https://github.com/huggingface/transformers/pull/26855 | 1,946,357,504 | PR_kwDOCUB6oc5c9gaP | 26,855 | Added hindi language code in language list | {
"login": "AaryaBalwadkar",
"id": 142713127,
"node_id": "U_kgDOCIGhJw",
"avatar_url": "https://avatars.githubusercontent.com/u/142713127?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AaryaBalwadkar",
"html_url": "https://github.com/AaryaBalwadkar",
"followers_url": "https://api.github.com/users/AaryaBalwadkar/followers",
"following_url": "https://api.github.com/users/AaryaBalwadkar/following{/other_user}",
"gists_url": "https://api.github.com/users/AaryaBalwadkar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AaryaBalwadkar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AaryaBalwadkar/subscriptions",
"organizations_url": "https://api.github.com/users/AaryaBalwadkar/orgs",
"repos_url": "https://api.github.com/users/AaryaBalwadkar/repos",
"events_url": "https://api.github.com/users/AaryaBalwadkar/events{/privacy}",
"received_events_url": "https://api.github.com/users/AaryaBalwadkar/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Closing since this is fixed in #26837 "
] | 1,697 | 1,698 | 1,698 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Added the Hindi language code to the language list in .github/workflows/build_documentation.yml and .github/workflows/build_pr_documentation.yml.
Fixes #26837
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
## Who can review?
@stevhliu and @MKhalusova please review this PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26855/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26855/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26855",
"html_url": "https://github.com/huggingface/transformers/pull/26855",
"diff_url": "https://github.com/huggingface/transformers/pull/26855.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26855.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26854 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26854/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26854/comments | https://api.github.com/repos/huggingface/transformers/issues/26854/events | https://github.com/huggingface/transformers/pull/26854 | 1,946,281,086 | PR_kwDOCUB6oc5c9QQ6 | 26,854 | Add script to convert HF Whisper models back to OpenAI format | {
"login": "zuazo",
"id": 1878434,
"node_id": "MDQ6VXNlcjE4Nzg0MzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1878434?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zuazo",
"html_url": "https://github.com/zuazo",
"followers_url": "https://api.github.com/users/zuazo/followers",
"following_url": "https://api.github.com/users/zuazo/following{/other_user}",
"gists_url": "https://api.github.com/users/zuazo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zuazo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zuazo/subscriptions",
"organizations_url": "https://api.github.com/users/zuazo/orgs",
"repos_url": "https://api.github.com/users/zuazo/repos",
"events_url": "https://api.github.com/users/zuazo/events{/privacy}",
"received_events_url": "https://api.github.com/users/zuazo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Linking #20953 as it was asked quite a while ago. We don't usually add these in transformers and would rather add it to the `## resource` section, as a link to a your repo with the script. WDYT @sanchit-gandhi ",
"Probably in the Resource section would be best here @zuazo! It becomes a bit difficult to maintain Transformers if we become a one-to-many export library (e.g. export Transformers format to any number of other libraries).\r\n\r\nCurious to hear what parameters you need from the OpenAI implementation that we don't offer in Transformers though! We can certainly discuss on GitHub adding them to Transformers to improve the experience for you. Currently, we're a lot faster than OpenAI: https://twitter.com/reach_vb/status/1714741554030481786",
"Absolutely, it sounds reasonable. I will open a new PR to add it to the `## Resources` section once we finish PR #26834 to avoid any merge conflicts.\r\n\r\nRegarding our usage, we have experimented with [HF fine-tuned Whisper models](https://huggingface.co/blog/fine-tune-whisper) coupled with n-gram LMs. It seemed straightforward in the whisper library due to their existing [`BeamSearchDecoder`](https://github.com/openai/whisper/blob/b38a1f20f4b23f3f3099af2c3e0ca95627276ddf/whisper/decoding.py#L299), making it simple to incorporate a KenLM.\r\n\r\nIf there is a similar feature in Transformers that I overlooked, I apologize. Navigating through such a comprehensive library can sometimes be challenging.",
"Interesting use-case! Did you find that the Whisper decoder was not enough to predict accurate spelling/transcriptions? The Whisper decoder is in effect an \"internal\" LM, since it plays the role of generating the text conditional on the encoder hidden-states. Is your n-gram LM trained with the same vocab size as Whisper, i.e. you use the Whisper logits in combination with the n-gram model to get your final transcription? We have something similar to this with Wav2Vec2 + n-gram here: https://github.com/huggingface/transformers/blob/34a640642be61e0e2eb622e497140be3aea202bc/src/transformers/models/wav2vec2_with_lm/processing_wav2vec2_with_lm.py#L284"
] | 1,697 | 1,700 | 1,700 | CONTRIBUTOR | null | # What does this PR do?
This PR introduces a new script named `convert_hf_to_openai.py` that allows for the conversion of Hugging Face Whisper models back to the original OpenAI Whisper format. This just does the opposite of the [`convert_openai_to_hf.py`](https://github.com/huggingface/transformers/blob/main/src/transformers/models/whisper/convert_openai_to_hf.py) script.
While the Hugging Face implementation is easier to use (for example, [for fine-tuning](https://huggingface.co/blog/fine-tune-whisper)) and has many integrations, the original OpenAI Whisper library provides more fine-grained control over this specific model, which makes it easier to test new approaches and certain algorithms (at least in our case).
## Doctests
I added a doctest at the beginning that passes, but it requires the `openai-whisper` package to be installed, so I left it disabled with the double `>>`. I'm not sure how you prefer to handle this case: leave it like that, add the Whisper package somewhere in the CI (like [`.github/workflows/doctests.yml`](https://github.com/huggingface/transformers/blob/main/.github/workflows/doctests.yml)), or handle it some other way.
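If it helps the discussion, one option would be to guard the optional dependency at runtime with an explicit error — this is only a sketch of the general pattern, not necessarily what the script should end up doing:
```python
try:
    import whisper  # provided by the `openai-whisper` package
except ImportError as exc:
    raise ImportError(
        "Converting to the OpenAI format requires `openai-whisper`: pip install openai-whisper"
    ) from exc
```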
Besides, even though the original `convert_openai_to_hf.py` script did not have them, let me know if you want me to add some tests to this. I have tested it myself to work with all the Whisper model sizes, even the Large V2.
## Before submitting
- [ ] This PR fixes a typo or improves the docs.
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Possible candidates:
- `convert_openai_to_hf.py` script creator: @ArthurZucker
- Speech models: @sanchit-gandhi
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26854/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26854/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26854",
"html_url": "https://github.com/huggingface/transformers/pull/26854",
"diff_url": "https://github.com/huggingface/transformers/pull/26854.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26854.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26853 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26853/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26853/comments | https://api.github.com/repos/huggingface/transformers/issues/26853/events | https://github.com/huggingface/transformers/issues/26853 | 1,946,226,997 | I_kwDOCUB6oc50ARE1 | 26,853 | Hey! The warning is a general warning. `Left` padding is the usual recommendation, but the original Llama codebase (and Code-Llama is part of the Llama codebase) use a default `right` padding. Our goal is to have similar results out of the box (so right padding) but still allow users to have the best results and we thus give recommendation on the padding side. | {
"login": "rafa852",
"id": 59406764,
"node_id": "MDQ6VXNlcjU5NDA2NzY0",
"avatar_url": "https://avatars.githubusercontent.com/u/59406764?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rafa852",
"html_url": "https://github.com/rafa852",
"followers_url": "https://api.github.com/users/rafa852/followers",
"following_url": "https://api.github.com/users/rafa852/following{/other_user}",
"gists_url": "https://api.github.com/users/rafa852/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rafa852/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rafa852/subscriptions",
"organizations_url": "https://api.github.com/users/rafa852/orgs",
"repos_url": "https://api.github.com/users/rafa852/repos",
"events_url": "https://api.github.com/users/rafa852/events{/privacy}",
"received_events_url": "https://api.github.com/users/rafa852/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,697 | 1,697 | 1,697 | NONE | null | Hey! The warning is a general warning. `Left` padding is the usual recommendation, but the original Llama codebase (and Code-Llama is part of the Llama codebase) use a default `right` padding. Our goal is to have similar results out of the box (so right padding) but still allow users to have the best results and we thus give recommendation on the padding side.
There is a guideline: CodeLlama is the same as Llama. Would it be clearer if the tip section is copied over to CodeLlama?
_Originally posted by @ArthurZucker in https://github.com/huggingface/transformers/issues/26072#issuecomment-1729231672_
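For reference, a minimal sketch of setting the padding side explicitly when batching (the checkpoint name is just an example, and the pad token is set manually here only because Llama-family tokenizers ship without one):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("codellama/CodeLlama-7b-hf", padding_side="left")
tokenizer.pad_token = tokenizer.eos_token  # assumption for this sketch; use whatever pad token you train with
batch = tokenizer(["def add(a, b):", "print('hello')"], padding=True, return_tensors="pt")
```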
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26853/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26853/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26852 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26852/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26852/comments | https://api.github.com/repos/huggingface/transformers/issues/26852/events | https://github.com/huggingface/transformers/pull/26852 | 1,946,124,033 | PR_kwDOCUB6oc5c8tmb | 26,852 | [`FA2`] Fix flash attention 2 fine-tuning with Falcon | {
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@younesbelkada so the status right now is that falcon 180 b should work because it uses RoPE but 7b won't work until the alibi issue is fixed?\r\n\r\nFalcon 40b instruct also seems to be rope, so I expected that to work but I get this error:\r\n```\r\nFile /usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py:1270, in PreTrainedModel._check_and_enable_flash_attn_2(cls, config, torch_dtype, device_map)\r\n 1255 \"\"\"\r\n 1256 If you don't know about Flash Attention, check out the official repository of flash attention:\r\n 1257 https://github.com/Dao-AILab/flash-attention\r\n (...)\r\n 1267 can initialize the correct attention module\r\n 1268 \"\"\"\r\n 1269 if not cls._supports_flash_attn_2:\r\n-> 1270 raise ValueError(\r\n 1271 \"The current architecture does not support Flash Attention 2.0. Please open an issue on GitHub to \"\r\n 1272 \"request support for this architecture: https://github.com/huggingface/transformers/issues/new\"\r\n 1273 )\r\n 1275 if not is_flash_attn_2_available():\r\n 1276 raise ImportError(\r\n 1277 \"Flash Attention 2 is not available. Please refer to the documentation of https://github.com/Dao-AILab/flash-attention for\"\r\n 1278 \" installing it. Make sure to have at least the version 2.1.0\"\r\n 1279 )\r\n\r\nValueError: The current architecture does not support Flash Attention 2.0. Please open an issue on GitHub to request support for this architecture: https://github.com/huggingface/transformers/issues/new\r\n```\r\nafter trying:\r\n```\r\nbnb_config = BitsAndBytesConfig(\r\n load_in_4bit=True,\r\n bnb_4bit_use_double_quant=True, #adds speed with minimal loss of quality.\r\n bnb_4bit_quant_type=\"nf4\",\r\n bnb_4bit_compute_dtype=torch.bfloat16,\r\n)\r\n\r\nconfig = transformers.AutoConfig.from_pretrained(model_name_A, trust_remote_code=True)\r\nconfig.max_seq_len = 4096 # (input + output) tokens can now be up to 4096\r\n\r\n## Model A\r\nmodel_A = AutoModelForCausalLM.from_pretrained(\r\n model_name_A,\r\n config=config,\r\n quantization_config=bnb_config,\r\n # rope_scaling={\"type\": \"linear\", \"factor\": 2.0},\r\n device_map='auto',\r\n trust_remote_code=True,\r\n use_flash_attention_2=True, # works with Llama models and reduces memory reqs\r\n cache_dir=cache_dir)\r\n```",
"Hi @RonanKMcGovern \r\nThe error is expected, you cannot use FA-2 + `trust_remote_code=True`. Can you please try again without `trust_remote_code=True`",
"Thanks @younesbelkada my mistake, that worked.",
"Thanks @RonanKMcGovern !",
"@younesbelkada this issue appears to be back now.\r\n\r\nI had to revert to end of day yesterday commit to fix the issue:\r\n```\r\n!pip install git+https://github.com/huggingface/transformers.git@d39352d12c3f6a301e6c9d99218d1770eb513aca\r\n```\r\n\r\nBTW, seems like there are a lot of commits happening on main. Is that the usual approach? I would have thought the latest release would be somehow stable and new commits go to a dev branch or version.",
"Hmmm that's strange, I just checked and on the main branch https://github.com/huggingface/transformers/blob/main/src/transformers/models/falcon/modeling_falcon.py#L600 the falcon model contains the fix I introduced in this PR. Can you try:\r\n\r\n```bash\r\npip install -U git+https://github.com/huggingface/transformers.git\r\n```",
"Thanks, that's working. Perhaps what was causing the issue was me using:\r\n```\r\npip install -q -U transformers\r\n```\r\nwhich probably only includes the last release but not commits on top?",
"Yes this forces you to use the last version on pypi which do not includes the commits we push on main"
] | 1,697 | 1,698 | 1,697 | CONTRIBUTOR | null | # What does this PR do?
Fixes https://github.com/huggingface/trl/issues/832
Fixes https://github.com/huggingface/trl/issues/875
and all issues related to FA-2 + Falcon fine-tuning
Before this PR we were passing a `nn.Dropout` to the flash attention forward method, leading to an error since the dropout argument is expected to be a float.
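To make the failure concrete (illustrative only — not the exact Falcon code):
```python
import torch.nn as nn

attention_dropout = nn.Dropout(0.1)
# Flash Attention kernels expect a float probability such as 0.1,
# so forwarding the module itself (as the Falcon layer did) is rejected:
print(isinstance(attention_dropout, float))  # False -> the type the kernel complains about
print(attention_dropout.p)                   # 0.1   -> the value that should be passed instead
```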
Also modified the test a bit to cover that use case for future models
cc @ArthurZucker @lewtun
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26852/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26852/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26852",
"html_url": "https://github.com/huggingface/transformers/pull/26852",
"diff_url": "https://github.com/huggingface/transformers/pull/26852.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26852.patch",
"merged_at": 1697549883000
} |
https://api.github.com/repos/huggingface/transformers/issues/26851 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26851/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26851/comments | https://api.github.com/repos/huggingface/transformers/issues/26851/events | https://github.com/huggingface/transformers/pull/26851 | 1,945,993,866 | PR_kwDOCUB6oc5c8Qom | 26,851 | [docstring] Fix bit image processor | {
"login": "przemL",
"id": 24912415,
"node_id": "MDQ6VXNlcjI0OTEyNDE1",
"avatar_url": "https://avatars.githubusercontent.com/u/24912415?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/przemL",
"html_url": "https://github.com/przemL",
"followers_url": "https://api.github.com/users/przemL/followers",
"following_url": "https://api.github.com/users/przemL/following{/other_user}",
"gists_url": "https://api.github.com/users/przemL/gists{/gist_id}",
"starred_url": "https://api.github.com/users/przemL/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/przemL/subscriptions",
"organizations_url": "https://api.github.com/users/przemL/orgs",
"repos_url": "https://api.github.com/users/przemL/repos",
"events_url": "https://api.github.com/users/przemL/events{/privacy}",
"received_events_url": "https://api.github.com/users/przemL/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@ydshieh Could you please review the PR?\r\nThank you.",
"Hi @przemL I see you closed this (and another) PR. Any particular reason? Feel free to reopen is you still want to have your contribution being merged. Thank you."
] | 1,697 | 1,697 | 1,697 | CONTRIBUTOR | null | # What does this PR do?
Fix docstring for bit image processor.
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@ydshieh | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26851/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26851/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26851",
"html_url": "https://github.com/huggingface/transformers/pull/26851",
"diff_url": "https://github.com/huggingface/transformers/pull/26851.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26851.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26850 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26850/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26850/comments | https://api.github.com/repos/huggingface/transformers/issues/26850/events | https://github.com/huggingface/transformers/pull/26850 | 1,945,958,845 | PR_kwDOCUB6oc5c8I0_ | 26,850 | Bring back `set_epoch` for Accelerate-based dataloaders | {
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,697 | 1,698 | 1,698 | CONTRIBUTOR | null | # What does this PR do?
This PR brings back the `set_epoch` logic and solves the last remaining regression from the Accelerate integration (introduced by an incomplete implementation in https://github.com/huggingface/transformers/pull/24028). Resuming training should now be exactly identical.
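To illustrate why this matters (schematic sketch, not the actual `Trainer` code): the sampler has to be re-seeded every epoch so that a resumed run shuffles batches in the same order as an uninterrupted one.
```python
from torch.utils.data import DataLoader
from accelerate import Accelerator

accelerator = Accelerator()
dataloader = accelerator.prepare(DataLoader(list(range(8)), batch_size=2, shuffle=True))

for epoch in range(3):
    if hasattr(dataloader, "set_epoch"):  # exposed by Accelerate-prepared dataloaders with the linked PR
        dataloader.set_epoch(epoch)       # reseed the sampler -> reproducible shuffling across resumes
    for batch in dataloader:
        pass
```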
Linked with https://github.com/huggingface/accelerate/pull/2057
Fixes # (issue)
fixes https://github.com/huggingface/transformers/issues/26541
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@LysandreJik @ArthurZucker | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26850/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26850/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26850",
"html_url": "https://github.com/huggingface/transformers/pull/26850",
"diff_url": "https://github.com/huggingface/transformers/pull/26850.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26850.patch",
"merged_at": 1698312012000
} |
https://api.github.com/repos/huggingface/transformers/issues/26849 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26849/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26849/comments | https://api.github.com/repos/huggingface/transformers/issues/26849/events | https://github.com/huggingface/transformers/pull/26849 | 1,945,947,626 | PR_kwDOCUB6oc5c8GVg | 26,849 | Bugfix device map detr model | {
"login": "pedrogengo",
"id": 27240528,
"node_id": "MDQ6VXNlcjI3MjQwNTI4",
"avatar_url": "https://avatars.githubusercontent.com/u/27240528?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pedrogengo",
"html_url": "https://github.com/pedrogengo",
"followers_url": "https://api.github.com/users/pedrogengo/followers",
"following_url": "https://api.github.com/users/pedrogengo/following{/other_user}",
"gists_url": "https://api.github.com/users/pedrogengo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pedrogengo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pedrogengo/subscriptions",
"organizations_url": "https://api.github.com/users/pedrogengo/orgs",
"repos_url": "https://api.github.com/users/pedrogengo/repos",
"events_url": "https://api.github.com/users/pedrogengo/events{/privacy}",
"received_events_url": "https://api.github.com/users/pedrogengo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @SunMarc :) I don't have 2 GPUs unfortunately. Can you run on your side? In the meantime I can remove the tests I added + fix the style issue",
"@SunMarc addressed your comment on my side. Could you test?\r\nI also noticed a bad pattern maybe on `check_copies.py`. At first, I tried this:\r\n\r\n`# Copied from transformers.models.deformable_detr.modeling_deformable_detr.DeformableDetrPreTrainedModel with DeformableDetr->Deta,DeformableDetrConvEncoder->DetaBackboneWithPositionalEncodings`\r\n\r\nAnd it was breaking the CI, because it first replace `DeformableDetr` to `Deta`, and after search for the next pair, which was `DeformableDetrConvEncoder->DetaBackboneWithPositionalEncodings`. However, as we first replaced, it turns that in the code `DeformableDetrConvEncoder` was changed to `DetaConvEncoder`. Maybe we can sort by length and start replacing from the largest string, and also replace with a placeholder in both codes to avoid replacing more than one time. Just something that I saw haha I can propose a PR for that, but I don't think it is that much important.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26849). All of your documentation changes will be reflected on that endpoint."
] | 1,697 | 1,698 | 1,698 | CONTRIBUTOR | null | # What does this PR do?
Fixes #26700 #23145
## Who can review?
@LysandreJik @SunMarc | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26849/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 2,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26849/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26849",
"html_url": "https://github.com/huggingface/transformers/pull/26849",
"diff_url": "https://github.com/huggingface/transformers/pull/26849.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26849.patch",
"merged_at": 1698086067000
} |
https://api.github.com/repos/huggingface/transformers/issues/26848 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26848/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26848/comments | https://api.github.com/repos/huggingface/transformers/issues/26848/events | https://github.com/huggingface/transformers/pull/26848 | 1,945,935,264 | PR_kwDOCUB6oc5c8Dnq | 26,848 | [docstring] Fix bert japanese tokenizer | {
"login": "przemL",
"id": 24912415,
"node_id": "MDQ6VXNlcjI0OTEyNDE1",
"avatar_url": "https://avatars.githubusercontent.com/u/24912415?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/przemL",
"html_url": "https://github.com/przemL",
"followers_url": "https://api.github.com/users/przemL/followers",
"following_url": "https://api.github.com/users/przemL/following{/other_user}",
"gists_url": "https://api.github.com/users/przemL/gists{/gist_id}",
"starred_url": "https://api.github.com/users/przemL/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/przemL/subscriptions",
"organizations_url": "https://api.github.com/users/przemL/orgs",
"repos_url": "https://api.github.com/users/przemL/repos",
"events_url": "https://api.github.com/users/przemL/events{/privacy}",
"received_events_url": "https://api.github.com/users/przemL/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@ydshieh Could you please review the PR?\r\nThank you."
] | 1,697 | 1,697 | 1,697 | CONTRIBUTOR | null | # What does this PR do?
Fix docstring for bert japanese tokenizer.
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@ydshieh | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26848/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26848/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26848",
"html_url": "https://github.com/huggingface/transformers/pull/26848",
"diff_url": "https://github.com/huggingface/transformers/pull/26848.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26848.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26847 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26847/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26847/comments | https://api.github.com/repos/huggingface/transformers/issues/26847/events | https://github.com/huggingface/transformers/pull/26847 | 1,945,898,367 | PR_kwDOCUB6oc5c77l_ | 26,847 | Fix Mistral OOM again ... | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,697 | 1,697 | 1,697 | COLLABORATOR | null | # What does this PR do?
missing `torch.no_grad` 😭 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26847/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26847/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26847",
"html_url": "https://github.com/huggingface/transformers/pull/26847",
"diff_url": "https://github.com/huggingface/transformers/pull/26847.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26847.patch",
"merged_at": 1697489240000
} |
https://api.github.com/repos/huggingface/transformers/issues/26846 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26846/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26846/comments | https://api.github.com/repos/huggingface/transformers/issues/26846/events | https://github.com/huggingface/transformers/pull/26846 | 1,945,877,903 | PR_kwDOCUB6oc5c73Ir | 26,846 | [`FA-2`] Final fix for FA2 dtype | {
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26846). All of your documentation changes will be reflected on that endpoint."
] | 1,697 | 1,697 | 1,697 | CONTRIBUTOR | null | # What does this PR do?
Replaces https://github.com/huggingface/transformers/pull/26560
Fixes https://github.com/huggingface/transformers/issues/26451
Proposes a simpler fix for dealing with FA-2 + PEFT + quantization fine-tuning, where users usually cast all other modules (e.g. LayerNorms) to fp32 for training stability.
With https://github.com/huggingface/transformers/pull/26761 introduced, it is now much simpler to retrieve the model's original dtype. Note also that `self.config._pre_quantization_dtype` remains the single source of truth, since `to` is not supported for quantized models
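Schematically, the recast boils down to something like this (sketch only — names may differ from the actual diff):
```python
import torch


def cast_for_flash_attention(hidden_states, config, fallback_weight):
    """Sketch: recast fp32 activations to the pre-quantization (or weight) dtype before the FA-2 kernel."""
    if hidden_states.dtype == torch.float32:
        target_dtype = getattr(config, "_pre_quantization_dtype", fallback_weight.dtype)
        return hidden_states.to(target_dtype)
    return hidden_states
```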
cc @ArthurZucker @pacman100
Added also a nice test | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26846/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26846/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26846",
"html_url": "https://github.com/huggingface/transformers/pull/26846",
"diff_url": "https://github.com/huggingface/transformers/pull/26846.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26846.patch",
"merged_at": 1697651336000
} |
https://api.github.com/repos/huggingface/transformers/issues/26845 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26845/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26845/comments | https://api.github.com/repos/huggingface/transformers/issues/26845/events | https://github.com/huggingface/transformers/issues/26845 | 1,945,807,862 | I_kwDOCUB6oc5z-qv2 | 26,845 | ValueError from NougatImageProcessor using example from docs | {
"login": "lucasmccabe",
"id": 12187602,
"node_id": "MDQ6VXNlcjEyMTg3NjAy",
"avatar_url": "https://avatars.githubusercontent.com/u/12187602?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lucasmccabe",
"html_url": "https://github.com/lucasmccabe",
"followers_url": "https://api.github.com/users/lucasmccabe/followers",
"following_url": "https://api.github.com/users/lucasmccabe/following{/other_user}",
"gists_url": "https://api.github.com/users/lucasmccabe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lucasmccabe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lucasmccabe/subscriptions",
"organizations_url": "https://api.github.com/users/lucasmccabe/orgs",
"repos_url": "https://api.github.com/users/lucasmccabe/repos",
"events_url": "https://api.github.com/users/lucasmccabe/events{/privacy}",
"received_events_url": "https://api.github.com/users/lucasmccabe/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Note: this is resolved by explicitly passing `data_format=\"channels_first\"` when calling `NougatProcessor`, although it's not clear to me why this is the case, since this is ostensibly the default anyway.",
"Hello, thanks for reporting, this was fixed in #26608",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,697 | 1,700 | 1,700 | NONE | null | ### System Info
- `transformers` version: 4.34.0
- Platform: macOS-13.5.2-arm64-arm-64bit
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.2
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@amyeroberts @ArthurZucker
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I am running the example code from the Nougat documentation [here](https://huggingface.co/docs/transformers/v4.34.0/model_doc/nougat) in a Jupyter notebook:
```python
from huggingface_hub import hf_hub_download
import re
from PIL import Image
from transformers import NougatProcessor, VisionEncoderDecoderModel
from datasets import load_dataset
import torch
processor = NougatProcessor.from_pretrained("facebook/nougat-base")
model = VisionEncoderDecoderModel.from_pretrained("facebook/nougat-base")
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
# prepare PDF image for the model
filepath = hf_hub_download(repo_id="hf-internal-testing/fixtures_docvqa", filename="nougat_paper.png", repo_type="dataset")
image = Image.open(filepath)
pixel_values = processor(image, return_tensors="pt").pixel_values
# generate transcription (here we only generate 30 tokens)
outputs = model.generate(
pixel_values.to(device),
min_length=1,
max_new_tokens=30,
bad_words_ids=[[processor.tokenizer.unk_token_id]],
)
sequence = processor.batch_decode(outputs, skip_special_tokens=True)[0]
sequence = processor.post_process_generation(sequence, fix_markdown=False)
# note: we're using repr here such for the sake of printing the \n characters, feel free to just print the sequence
print(repr(sequence))
```
### Expected behavior
**Expected Output**
The documentation indicates the above should return the following string:
_'\n\n# Nougat: Neural Optical Understanding for Academic Documents\n\n Lukas Blecher\n\nCorrespondence to: lblecher@'_
**Observed Output**
Running the following as a code block raises the following:
```bash
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[1], line 17
15 filepath = hf_hub_download(repo_id="hf-internal-testing/fixtures_docvqa", filename="nougat_paper.png", repo_type="dataset")
16 image = Image.open(filepath)
---> 17 pixel_values = processor(image, return_tensors="pt").pixel_values
19 # generate transcription (here we only generate 30 tokens)
20 outputs = model.generate(
21 pixel_values.to(device),
22 min_length=1,
23 max_new_tokens=30,
24 bad_words_ids=[[processor.tokenizer.unk_token_id]],
25 )
File /opt/miniconda3/envs/dogger-dev/lib/python3.10/site-packages/transformers/models/nougat/processing_nougat.py:91, in NougatProcessor.__call__(self, images, text, do_crop_margin, do_resize, size, resample, do_thumbnail, do_align_long_axis, do_pad, do_rescale, rescale_factor, do_normalize, image_mean, image_std, data_format, input_data_format, text_pair, text_target, text_pair_target, add_special_tokens, padding, truncation, max_length, stride, is_split_into_words, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose)
88 raise ValueError("You need to specify either an `images` or `text` input to process.")
90 if images is not None:
---> 91 inputs = self.image_processor(
92 images,
93 do_crop_margin=do_crop_margin,
94 do_resize=do_resize,
95 size=size,
96 resample=resample,
97 do_thumbnail=do_thumbnail,
98 do_align_long_axis=do_align_long_axis,
99 do_pad=do_pad,
100 do_rescale=do_rescale,
101 rescale_factor=rescale_factor,
102 do_normalize=do_normalize,
103 image_mean=image_mean,
104 image_std=image_std,
105 return_tensors=return_tensors,
106 data_format=data_format,
107 input_data_format=input_data_format,
108 )
109 if text is not None:
110 encodings = self.tokenizer(
111 text,
112 text_pair=text_pair,
(...)
129 verbose=verbose,
130 )
File /opt/miniconda3/envs/dogger-dev/lib/python3.10/site-packages/transformers/image_processing_utils.py:546, in BaseImageProcessor.__call__(self, images, **kwargs)
544 def __call__(self, images, **kwargs) -> BatchFeature:
545 """Preprocess an image or a batch of images."""
--> 546 return self.preprocess(images, **kwargs)
File /opt/miniconda3/envs/dogger-dev/lib/python3.10/site-packages/transformers/models/nougat/image_processing_nougat.py:505, in NougatImageProcessor.preprocess(self, images, do_crop_margin, do_resize, size, resample, do_thumbnail, do_align_long_axis, do_pad, do_rescale, rescale_factor, do_normalize, image_mean, image_std, return_tensors, data_format, input_data_format, **kwargs)
499 if do_normalize:
500 images = [
501 self.normalize(image=image, mean=image_mean, std=image_std, input_data_format=input_data_format)
502 for image in images
503 ]
--> 505 images = [
506 to_channel_dimension_format(image, data_format, input_channel_dim=input_data_format) for image in images
507 ]
509 data = {"pixel_values": images}
510 return BatchFeature(data=data, tensor_type=return_tensors)
File /opt/miniconda3/envs/dogger-dev/lib/python3.10/site-packages/transformers/models/nougat/image_processing_nougat.py:506, in <listcomp>(.0)
499 if do_normalize:
500 images = [
501 self.normalize(image=image, mean=image_mean, std=image_std, input_data_format=input_data_format)
502 for image in images
503 ]
505 images = [
--> 506 to_channel_dimension_format(image, data_format, input_channel_dim=input_data_format) for image in images
507 ]
509 data = {"pixel_values": images}
510 return BatchFeature(data=data, tensor_type=return_tensors)
File /opt/miniconda3/envs/dogger-dev/lib/python3.10/site-packages/transformers/image_transforms.py:78, in to_channel_dimension_format(image, channel_dim, input_channel_dim)
75 if input_channel_dim is None:
76 input_channel_dim = infer_channel_dimension_format(image)
---> 78 target_channel_dim = ChannelDimension(channel_dim)
79 if input_channel_dim == target_channel_dim:
80 return image
File /opt/miniconda3/envs/dogger-dev/lib/python3.10/enum.py:385, in EnumMeta.__call__(cls, value, names, module, qualname, type, start)
360 """
361 Either returns an existing member, or creates a new enum class.
362
(...)
382 `type`, if set, will be mixed in as the first base class.
383 """
384 if names is None: # simple value lookup
--> 385 return cls.__new__(cls, value)
386 # otherwise, functional API: we're creating a new Enum type
387 return cls._create_(
388 value,
389 names,
(...)
393 start=start,
394 )
File /opt/miniconda3/envs/dogger-dev/lib/python3.10/enum.py:718, in Enum.__new__(cls, value)
716 if not isinstance(exc, ValueError):
717 exc.__context__ = ve_exc
--> 718 raise exc
719 finally:
720 # ensure all variables that could hold an exception are destroyed
721 exc = None
File /opt/miniconda3/envs/dogger-dev/lib/python3.10/enum.py:700, in Enum.__new__(cls, value)
698 try:
699 exc = None
--> 700 result = cls._missing_(value)
701 except Exception as e:
702 exc = e
File /opt/miniconda3/envs/dogger-dev/lib/python3.10/site-packages/transformers/utils/generic.py:433, in ExplicitEnum._missing_(cls, value)
431 @classmethod
432 def _missing_(cls, value):
--> 433 raise ValueError(
434 f"{value} is not a valid {cls.__name__}, please select one of {list(cls._value2member_map_.keys())}"
435 )
ValueError: ChannelDimension.FIRST is not a valid ChannelDimension, please select one of ['channels_first', 'channels_last']
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26845/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26845/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26844 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26844/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26844/comments | https://api.github.com/repos/huggingface/transformers/issues/26844/events | https://github.com/huggingface/transformers/issues/26844 | 1,945,764,193 | I_kwDOCUB6oc5z-gFh | 26,844 | Mask2Former get_num_masks does not average across the nodes | {
"login": "tommiekerssies",
"id": 6392002,
"node_id": "MDQ6VXNlcjYzOTIwMDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/6392002?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tommiekerssies",
"html_url": "https://github.com/tommiekerssies",
"followers_url": "https://api.github.com/users/tommiekerssies/followers",
"following_url": "https://api.github.com/users/tommiekerssies/following{/other_user}",
"gists_url": "https://api.github.com/users/tommiekerssies/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tommiekerssies/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tommiekerssies/subscriptions",
"organizations_url": "https://api.github.com/users/tommiekerssies/orgs",
"repos_url": "https://api.github.com/users/tommiekerssies/repos",
"events_url": "https://api.github.com/users/tommiekerssies/events{/privacy}",
"received_events_url": "https://api.github.com/users/tommiekerssies/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @rafaelpadilla might be worth checking? ",
"Hi @tommiekerssies ,\r\n\r\nWe appreciate you bringing this to our attention.\r\n\r\nThe original Mask2Former code handles two losses (labels and masks) as seen [here](https://github.com/facebookresearch/Mask2Former/blob/main/mask2former/modeling/criterion.py#L204). In HF Transformers, we compute both losses separately using two distinct functions `loss_masks` and `loss_labels`, as shown [here](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/mask2former/modeling_mask2former.py#L782).\r\n\r\nCould you kindly provide an example illustrating when the results might differ between the two implementations? ",
"Hi @tommiekerssies,\r\n\r\nI'm aware of this, I remember we had to pull out distributed-related code when adding Mask2Former to the Transformers library, as otherwise it wouldn't work with libraries like Accelerate or the Trainer class. So to make it work, I'm afraid you will need to fork the library and make the necessary changes.\r\n\r\ncc'ing @ArthurZucker here, wondering whether pulling out distributed-related code makes sense cause it makes it harder for people to do distributed training",
"Yeah I don't think we allow distributed related code nowdays and rather output the loss if ask / leave the computation to the user ! ",
"I think the difference in performance is negligible so it shouldn't be a big deal. However, the docblock is confusing, because the code is not actually computing the average across the batch, but is instead computing the total number of masks in the batch.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,697 | 1,701 | 1,701 | NONE | null | ### System Info
The original Mask2Former does this:

```python
# Compute the average number of target boxes accross all nodes, for normalization purposes
num_masks = sum(len(t["labels"]) for t in targets)
num_masks = torch.as_tensor(
    [num_masks], dtype=torch.float, device=next(iter(outputs.values())).device
)
if is_dist_avail_and_initialized():
    torch.distributed.all_reduce(num_masks)
num_masks = torch.clamp(num_masks / get_world_size(), min=1).item()
```

https://github.com/facebookresearch/Mask2Former/blob/main/mask2former/modeling/criterion.py

Whereas the HF implementation does this:

```python
def get_num_masks(self, class_labels: torch.Tensor, device: torch.device) -> torch.Tensor:
    """
    Computes the average number of target masks across the batch, for normalization purposes.
    """
    num_masks = sum([len(classes) for classes in class_labels])
    num_masks_pt = torch.as_tensor([num_masks], dtype=torch.float, device=device)
    return num_masks_pt
```

https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/mask2former/modeling_mask2former.py
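For illustration, a hedged sketch of how a fork could layer the cross-node averaging from the original repository on top of the Transformers version (the function name is hypothetical and this is not part of the actual library code):

```python
import torch
import torch.distributed as dist


def get_num_masks_distributed(class_labels, device):
    # Same per-batch count as the Transformers implementation...
    num_masks = sum(len(classes) for classes in class_labels)
    num_masks = torch.as_tensor([num_masks], dtype=torch.float, device=device)
    # ...plus the all-reduce / world-size normalization done in the original repository.
    if dist.is_available() and dist.is_initialized():
        dist.all_reduce(num_masks)
        num_masks = num_masks / dist.get_world_size()
    return torch.clamp(num_masks, min=1)
```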
### Who can help?
@alaradirik @shivalikasingh95
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
-
### Expected behavior
- | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26844/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26844/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26843 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26843/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26843/comments | https://api.github.com/repos/huggingface/transformers/issues/26843/events | https://github.com/huggingface/transformers/pull/26843 | 1,945,751,371 | PR_kwDOCUB6oc5c7cTQ | 26,843 | 🚨🚨 Generate: change order of ops in beam sample to avoid nans | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,697 | 1,697 | 1,697 | MEMBER | null | # What does this PR do?
Fixes https://github.com/huggingface/transformers/issues/26332 -- see [this](https://github.com/huggingface/transformers/issues/26332#issuecomment-1764736970) comment for a full explanation.
TL;DR: in `beam_sample`, `logits_warper` are now applied BEFORE adding the beam scores. We have been postponing this change to avoid introducing output differences, but the truth is that the order of operations causes issues (e.g. [1](https://github.com/huggingface/transformers/issues/26332) [2](https://github.com/huggingface/transformers/issues/22914))
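For illustration, a minimal sketch of the reordering with toy tensors and a simplified temperature warper (this is not the actual `beam_sample` code, only the order of operations it changes):

```python
import torch

def temperature_warp(scores, temperature=0.5):
    # simplified stand-in for a temperature logits warper
    return torch.nn.functional.log_softmax(scores / temperature, dim=-1)

logits = torch.randn(2, 10)                   # (num_beams, vocab_size), toy values
beam_scores = torch.tensor([-200.0, -250.0])  # running log-prob of each beam
log_probs = torch.nn.functional.log_softmax(logits, dim=-1)

# previous order: the warper also rescales the large cumulative beam scores
old_scores = temperature_warp(log_probs + beam_scores[:, None])

# new order: warp only the current step's distribution, then add the beam scores
new_scores = temperature_warp(log_probs) + beam_scores[:, None]
```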
This is technically a bug fix (`beam_sample` is unusable for long generations with `temperature < 1.0` before this change), hence the lack of a deprecation cycle. However, it may alter some generated outputs (short generations with low temperature), hence the 🚨🚨.
Please note that other common operators, like `top_p` and `top_k`, are unaltered by this change. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26843/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26843/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26843",
"html_url": "https://github.com/huggingface/transformers/pull/26843",
"diff_url": "https://github.com/huggingface/transformers/pull/26843.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26843.patch",
"merged_at": 1697535170000
} |
https://api.github.com/repos/huggingface/transformers/issues/26842 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26842/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26842/comments | https://api.github.com/repos/huggingface/transformers/issues/26842/events | https://github.com/huggingface/transformers/pull/26842 | 1,945,700,768 | PR_kwDOCUB6oc5c7RTg | 26,842 | Fix TensorFlow package check | {
"login": "jayfurmanek",
"id": 8660918,
"node_id": "MDQ6VXNlcjg2NjA5MTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/8660918?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jayfurmanek",
"html_url": "https://github.com/jayfurmanek",
"followers_url": "https://api.github.com/users/jayfurmanek/followers",
"following_url": "https://api.github.com/users/jayfurmanek/following{/other_user}",
"gists_url": "https://api.github.com/users/jayfurmanek/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jayfurmanek/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jayfurmanek/subscriptions",
"organizations_url": "https://api.github.com/users/jayfurmanek/orgs",
"repos_url": "https://api.github.com/users/jayfurmanek/repos",
"events_url": "https://api.github.com/users/jayfurmanek/events{/privacy}",
"received_events_url": "https://api.github.com/users/jayfurmanek/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Taking this one!",
"Hey @jayfurmanek, I checked this out! Adding `tf-nightly-rocm` to the candidate list is appreciated, but I don't think the other changes are required, and they seem to break a lot of the codebase! Can we revert that part of the PR and just keep the package name addition, or did that not work for you?\r\n\r\nUpdate: Part of the issue might be that `importlib.metadata` is typoed as `importlib_metadata` - but I think we still should be okay if we just revert that change!",
"The problem with the check as-is is that it does _is_package_available() with \"tensorflow\" and then does the meta data check there as well. For tensorflow-rocm that part will fail and _is_available will be marked as False, so the candidates list below is skipped altogether.",
"Hi @jayfurmanek I just tested locally and installing with `pip install tensorflow-rocm` works fine, and causes `transformers.is_tf_available()` to correctly return `True`. Can you tell me where I can get a wheel for `tf-nightly-rocm` so I can test that?",
"Try: http://ml-ci.amd.com:21096/job/tensorflow/job/nightly-rocmfork-develop-upstream/job/nightly-build-whl/",
"@jayfurmanek I confirmed after installing one of those wheels that just adding `tf-nightly-rocm` to the list is sufficient! Can we revert the rest of the PR and just keep that bit?",
"If you want to test it yourself, you can run `transformers.is_tf_available()` to see if `transformers` can detect a TF installation. It works fine for me once the list is updated, with no other changes.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26842). All of your documentation changes will be reflected on that endpoint.",
"I was using `is_package_available()` for tf not `is_tf_available`. With `is_tf_available()` it works.\r\nI still think `is_package_available()` is a bit wrong though.\r\nPR updated.\r\n\r\nthx",
"Yeah, that code is a little wonky, and we'll have to do a proper refactor eventually. I think this resolves the issue here for now, though, so thanks for the PR!"
] | 1,697 | 1,697 | 1,697 | CONTRIBUTOR | null | # What does this PR do?
This fixes the `_is_package_available` check for tensorflow variants (like tensorflow-rocm) whose metadata differs from the package name. There is a list of available candidate names there, but it is never used, since `_is_package_available` will return false, preventing the candidate list from being checked.
This also adds `tf-nightly-rocm` to the list of candidates.
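For context, a rough sketch of the detection pattern being discussed (this is not the exact `transformers` code; the candidate tuple and function name are illustrative):

```python
import importlib.metadata
import importlib.util


def tf_metadata_found(
    candidates=("tensorflow", "tensorflow-cpu", "tensorflow-rocm", "tf-nightly", "tf-nightly-rocm")
):
    # the import name is always "tensorflow", but the installed distribution
    # (and therefore its metadata) can be published under several names
    if importlib.util.find_spec("tensorflow") is None:
        return False
    for name in candidates:
        try:
            importlib.metadata.version(name)
            return True
        except importlib.metadata.PackageNotFoundError:
            continue
    return False
```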
Fixes # (issue)
N/A
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
This fixes a small functional error; no documentation updates are required.
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
- tensorflow review: @gante and @Rocketknight1
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26842/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26842/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26842",
"html_url": "https://github.com/huggingface/transformers/pull/26842",
"diff_url": "https://github.com/huggingface/transformers/pull/26842.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26842.patch",
"merged_at": 1697580950000
} |
https://api.github.com/repos/huggingface/transformers/issues/26841 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26841/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26841/comments | https://api.github.com/repos/huggingface/transformers/issues/26841/events | https://github.com/huggingface/transformers/pull/26841 | 1,945,656,909 | PR_kwDOCUB6oc5c7HxR | 26,841 | Fix Seq2seqTrainer decoder attention mask | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @gante and @ydshieh because I see you in the `git blame` near here - let me know if this is okay, or if I'm breaking anything with this fix!",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26841). All of your documentation changes will be reflected on that endpoint."
] | 1,697 | 1,697 | 1,697 | MEMBER | null | The Seq2SeqTrainer drops `decoder_input_ids` during the generation step for metrics that expect text generation (like `rouge`) when `labels` is present. However, it doesn't drop `decoder_attention_mask` when it does this, which means that in some cases, we pass `decoder_attention_mask` with no `decoder_input_ids`, resulting in the model getting very confused and throwing a shape error.
This PR fixes the issue.
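A minimal sketch of the kind of guard involved (the helper name is hypothetical and the real prediction-step code differs):

```python
def prepare_generation_inputs(inputs, has_labels):
    gen_inputs = dict(inputs)
    if has_labels and "decoder_input_ids" in gen_inputs:
        # when decoder_input_ids are dropped for generation, drop the matching
        # mask as well so we never pass a decoder_attention_mask with no ids
        gen_inputs.pop("decoder_input_ids")
        gen_inputs.pop("decoder_attention_mask", None)
    return gen_inputs
```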
Fixes #24567 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26841/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26841/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26841",
"html_url": "https://github.com/huggingface/transformers/pull/26841",
"diff_url": "https://github.com/huggingface/transformers/pull/26841.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26841.patch",
"merged_at": 1697632096000
} |
https://api.github.com/repos/huggingface/transformers/issues/26840 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26840/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26840/comments | https://api.github.com/repos/huggingface/transformers/issues/26840/events | https://github.com/huggingface/transformers/issues/26840 | 1,945,405,524 | I_kwDOCUB6oc5z9IhU | 26,840 | log interval | {
"login": "KwangryeolPark",
"id": 48284967,
"node_id": "MDQ6VXNlcjQ4Mjg0OTY3",
"avatar_url": "https://avatars.githubusercontent.com/u/48284967?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KwangryeolPark",
"html_url": "https://github.com/KwangryeolPark",
"followers_url": "https://api.github.com/users/KwangryeolPark/followers",
"following_url": "https://api.github.com/users/KwangryeolPark/following{/other_user}",
"gists_url": "https://api.github.com/users/KwangryeolPark/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KwangryeolPark/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KwangryeolPark/subscriptions",
"organizations_url": "https://api.github.com/users/KwangryeolPark/orgs",
"repos_url": "https://api.github.com/users/KwangryeolPark/repos",
"events_url": "https://api.github.com/users/KwangryeolPark/events{/privacy}",
"received_events_url": "https://api.github.com/users/KwangryeolPark/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | [
"@KwangryeolPark Please assign this issue to me"
] | 1,697 | 1,697 | null | NONE | null | ### Feature request
I want you to add a "log_interval" argument for logging every N iterations instead of every epoch.
### Motivation
Some fine-tuning tasks/datasets, such as CoLA, do not require many epochs. They only need 2~3 epochs, which means the log data (e.g. metrics) is produced only 2~3 times.
I want more logs during training with 2~3 epochs, so I hope you add a feature which logs every N iterations.
I already added the argument in my project by adding the code below, but it is bothersome to add it to every project whenever I create a new project based on your code.
```python
if completed_steps % args.log_interval == 0:
    # presented evaluation code in original code.
```
### Your contribution
```python
if completed_steps % args.log_interval == 0:
    model.eval()
    samples_seen = 0
    for step, batch in enumerate(eval_dataloader):
        with torch.no_grad():
            outputs = model(**batch)
        predictions = outputs.logits.argmax(dim=-1) if not is_regression else outputs.logits.squeeze()
        predictions, references = accelerator.gather((predictions, batch["labels"]))
        # If we are in a multiprocess environment, the last batch has duplicates
        if accelerator.num_processes > 1:
            if step == len(eval_dataloader) - 1:
                predictions = predictions[: len(eval_dataloader.dataset) - samples_seen]
                references = references[: len(eval_dataloader.dataset) - samples_seen]
            else:
                samples_seen += references.shape[0]
        metric.add_batch(
            predictions=predictions,
            references=references,
        )
    eval_metric = metric.compute()
    logger.info(f"epoch {epoch}: {eval_metric}")
    if args.with_tracking:
        accelerator.log(
            {
                "accuracy" if args.task_name is not None else "glue": eval_metric,
                "train_loss": total_loss.item() / args.log_interval,
                "train_mem": total_memory / args.log_interval,
                "epoch": epoch,
                "step": completed_steps,
                "lr": optimizer.param_groups[0]["lr"],
            },
            step=completed_steps,
        )
    total_loss = 0
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26840/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26840/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26839 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26839/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26839/comments | https://api.github.com/repos/huggingface/transformers/issues/26839/events | https://github.com/huggingface/transformers/pull/26839 | 1,945,345,300 | PR_kwDOCUB6oc5c6CRl | 26,839 | Idefics: Fix information leak with cross attention gate in modeling | {
"login": "leot13",
"id": 17809020,
"node_id": "MDQ6VXNlcjE3ODA5MDIw",
"avatar_url": "https://avatars.githubusercontent.com/u/17809020?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leot13",
"html_url": "https://github.com/leot13",
"followers_url": "https://api.github.com/users/leot13/followers",
"following_url": "https://api.github.com/users/leot13/following{/other_user}",
"gists_url": "https://api.github.com/users/leot13/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leot13/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leot13/subscriptions",
"organizations_url": "https://api.github.com/users/leot13/orgs",
"repos_url": "https://api.github.com/users/leot13/repos",
"events_url": "https://api.github.com/users/leot13/events{/privacy}",
"received_events_url": "https://api.github.com/users/leot13/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Let me run the slow tests to see if this PR also fix them.",
"When I ran `TF_FORCE_GPU_ALLOW_GROWTH=true RUN_SLOW=1 python3 -m pytest -v tests/models/idefics/` on a single T4 GPU machine, there are following errors. The `xxx_offload` could be ignored as it is likely due to the `safetensors` PRs.\r\n\r\n`test_cross_attention_gates` and `test_feed_forward_chunking` need a look: I am not sure about `test_feed_forward_chunking` as I don't know if it was already failing before this PR.\r\n\r\n\r\n```bash\r\nFAILED tests/models/idefics/test_modeling_idefics.py::IdeficsModelTest::test_cpu_offload - AssertionError: False is not true\r\nFAILED tests/models/idefics/test_modeling_idefics.py::IdeficsModelTest::test_cross_attention_gates - RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same\r\nFAILED tests/models/idefics/test_modeling_idefics.py::IdeficsModelTest::test_determinism - ValueError: zero-size array to reduction operation maximum which has no identity\r\nFAILED tests/models/idefics/test_modeling_idefics.py::IdeficsModelTest::test_disk_offload_bin - AssertionError: False is not true\r\nFAILED tests/models/idefics/test_modeling_idefics.py::IdeficsModelTest::test_disk_offload_safetensors - AssertionError: False is not true\r\nFAILED tests/models/idefics/test_modeling_idefics.py::IdeficsModelTest::test_feed_forward_chunking - AssertionError: False is not true\r\nFAILED tests/models/idefics/test_modeling_idefics.py::IdeficsForVisionText2TextTest::test_cpu_offload - AssertionError: False is not true\r\nFAILED tests/models/idefics/test_modeling_idefics.py::IdeficsForVisionText2TextTest::test_cross_attention_gates - RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same\r\nFAILED tests/models/idefics/test_modeling_idefics.py::IdeficsForVisionText2TextTest::test_determinism - ValueError: zero-size array to reduction operation maximum which has no identity\r\nFAILED tests/models/idefics/test_modeling_idefics.py::IdeficsForVisionText2TextTest::test_disk_offload_bin - AssertionError: False is not true\r\nFAILED tests/models/idefics/test_modeling_idefics.py::IdeficsForVisionText2TextTest::test_disk_offload_safetensors - AssertionError: False is not true\r\n\r\n```",
"Thanks for running this! Last update should fix `test_cross_attention_gate`. I am not sure about `test_feed_forward_chunking` though",
"test `test_feed_forward_chunking though` starts to fail (on GPU) after we switch to torch 2.1, so I think it's the same issue of SDPA.\r\nCould you check with torch 2.0.1 (on GPU) and see if it pass? If so, this implies either there is more to do in this PR, or the test has to be updated if that makes more sense.\r\n\r\nFor offload tests, it's unclear as there are also the recent changes in `safetensors`.",
"Sorry for the delay @ydshieh, I did the slow tests with both torch 2.1 and torch 2.0.1, and double checked the attention values. \r\nWith torch 2.1 I do get `nans` for both `test_feed_forward_chunking` and `cross_attention_gate`, this is why they both fail in slow tests currently.\r\nWith torch 2.0.1 both tests pass now",
"Hi @leot13 Thank you for double checking.",
"@leot13 I think this PR tries to zero-out some hidden states, right? So intuitively, I expect there should be no more `nan` value (as this happens when a token has no image to attend, if I still remember correctly). Do I miss any detail here - I mean maybe I misunderstand the goal of this PR.",
"This PR zeros-out a given `hidden_state` coming out of the cross-attention if the `image_attention_mask` is filled with zeros for this particular `hidden_state` . However, it won't solve the nan issue, because the gate is multiplying the `hidden_states` by 1 or 0. So if a hidden_state has `nans`, they will still stay `nans`.",
"I see. But maybe we can use the gate as a kind of mask, and set the corresponding places in `hidden_state` to be `zero` if the mask (i.e. the gate) says a places is `image_attention_mask is filled with zeros`. So no multiplication, but instead `simply replace values`. This should avoid the `nan` IMO. Does this sound make sense ?\r\n\r\n",
"The last update should have fixed the slow tests as well. Now the hidden_states are filled with zeros instead of being multiplied by zero when they should attend to no images",
"Félicitations, @leot13 \r\n\r\n<img width=\"212\" alt=\"Screenshot 2023-11-16 165134\" src=\"https://github.com/huggingface/transformers/assets/2521628/16728ce7-e7e3-4a29-b80e-242389d1e4c1\">\r\n\r\n🎉 "
] | 1,697 | 1,700 | 1,700 | CONTRIBUTOR | null | # What does this PR do?
Fixes #26428 reported by @momergul
This PR fixes the cross-attention hidden states of tokens attending no images.
This issue had no detrimental effect on inference; however, the fix should be useful for the few tokens attending no images during training.
This PR should also prevent the cross-attentions from generating NaNs when using torch>=2.1
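For illustration, a hedged sketch of the gating idea (tensor names and shapes are simplified and this is not the actual Idefics code):

```python
import torch


def apply_cross_attention_gate(hidden_states, image_attention_mask):
    # image_attention_mask: (batch, seq_len, num_image_tokens); a row that is all
    # zeros means the text token attends to no image at all
    attends_to_an_image = image_attention_mask.sum(dim=-1) > 0  # (batch, seq_len)
    # fill (rather than multiply) so stale or non-finite values cannot leak through
    return hidden_states.masked_fill(~attends_to_an_image[..., None], 0.0)
```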
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@ArthurZucker @amyeroberts @ydshieh
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26839/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26839/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26839",
"html_url": "https://github.com/huggingface/transformers/pull/26839",
"diff_url": "https://github.com/huggingface/transformers/pull/26839.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26839.patch",
"merged_at": 1700569562000
} |
https://api.github.com/repos/huggingface/transformers/issues/26838 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26838/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26838/comments | https://api.github.com/repos/huggingface/transformers/issues/26838/events | https://github.com/huggingface/transformers/issues/26838 | 1,945,204,582 | I_kwDOCUB6oc5z8Xdm | 26,838 | Seeking Guidance on Using Codecov to Transformers Repository | {
"login": "yunbolyu",
"id": 101203664,
"node_id": "U_kgDOBgg-0A",
"avatar_url": "https://avatars.githubusercontent.com/u/101203664?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yunbolyu",
"html_url": "https://github.com/yunbolyu",
"followers_url": "https://api.github.com/users/yunbolyu/followers",
"following_url": "https://api.github.com/users/yunbolyu/following{/other_user}",
"gists_url": "https://api.github.com/users/yunbolyu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yunbolyu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yunbolyu/subscriptions",
"organizations_url": "https://api.github.com/users/yunbolyu/orgs",
"repos_url": "https://api.github.com/users/yunbolyu/repos",
"events_url": "https://api.github.com/users/yunbolyu/events{/privacy}",
"received_events_url": "https://api.github.com/users/yunbolyu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Pinging @LysandreJik as he was here during the good old times when Codecov was used!",
"Thanks for pinging out. I've seen him a lot in previous discussions and appreciate his ongoing contributions to the open-source community and your timely response!",
"Hey @yunbolyu! Setting up codecov right now for transformers would likely be a significant engineering endeavor given the different flavors of tests that we support. We rarely run the entire test suite at once on all supported hardware, so making sense of the different workflows would likely be tough for codecov.\r\n\r\n@ydshieh, who's managing our testing infra can probably share a bit more than I can.",
"Hi @yunbolyu As Lysandre said, the CI only runs a subset of tests (and would be different subsets in each run, except for the nightly CI run). Furthermore, I never used `Codecov` before and don't have useful insights to help you.\r\n\r\nA quick look tell me we need the following (but I think you know this already) in each CircleCI job\r\n\r\n```\r\nrun: pip install codecov pytest-cov\r\nrun: pytest ... `--cov`\r\nrun: codecov \r\n```\r\nBut I don't know what problems you encountered here. If you mean multiple reports presented, it's probably better to treat them as different CI and report them (well, this probably doesn't make sense...), or you will have to manually to merge those reports.\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,697 | 1,700 | 1,700 | NONE | null | ### Feature request
Hello Transformers maintainers and community,
In an effort to use the Codecov module and obtain code coverage, I attempted to base my changes on the previous removal process. However, I've encountered challenges since the tests appear to have been divided into multiple segments [PR #7970](https://github.com/huggingface/transformers/pull/7970). I also came across [PR #8093](https://github.com/huggingface/transformers/pull/8093), which suggests merging two slow tests (TF and PT) for coverage. While this seems to be a lead, I'm uncertain about the actual implementation.
### Motivation
I am currently conducting an empirical study focusing on code coverage in deep-learning open-source projects. While trying to use Codecov for assessing the code coverage of Transformers, I observed that Codecov was previously utilized by the project. However, as per [PR #7970](https://github.com/huggingface/transformers/pull/7970), due to the workflow refactoring and the diminishing utility of Codecov reports, the Codecov module was removed.
### Your contribution
Could anyone provide insights or suggestions on how best to use Codecov and achieve correct code coverage for Transformers?
Thank you in advance for your assistance and guidance. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26838/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26838/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26837 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26837/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26837/comments | https://api.github.com/repos/huggingface/transformers/issues/26837/events | https://github.com/huggingface/transformers/pull/26837 | 1,945,078,083 | PR_kwDOCUB6oc5c5Hcx | 26,837 | Hindi translation of pipeline_tutorial.md | {
"login": "AaryaBalwadkar",
"id": 142713127,
"node_id": "U_kgDOCIGhJw",
"avatar_url": "https://avatars.githubusercontent.com/u/142713127?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AaryaBalwadkar",
"html_url": "https://github.com/AaryaBalwadkar",
"followers_url": "https://api.github.com/users/AaryaBalwadkar/followers",
"following_url": "https://api.github.com/users/AaryaBalwadkar/following{/other_user}",
"gists_url": "https://api.github.com/users/AaryaBalwadkar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AaryaBalwadkar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AaryaBalwadkar/subscriptions",
"organizations_url": "https://api.github.com/users/AaryaBalwadkar/orgs",
"repos_url": "https://api.github.com/users/AaryaBalwadkar/repos",
"events_url": "https://api.github.com/users/AaryaBalwadkar/events{/privacy}",
"received_events_url": "https://api.github.com/users/AaryaBalwadkar/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26837). All of your documentation changes will be reflected on that endpoint.",
"Hi @AaryaBalwadkar! Thank you for working on translating the doc page into Hindi. \r\nFrom what I see, this is the first time a doc page has been translated into Hindi. As such, you'll need to modify two more files in order to build the docs in a new language - `.github/workflows/build_documentation.yml` and `.github/workflows/build_pr_documentation.yml`.
\r\n\r\nAdd the two-letter code for your language to the list of languages. You can look up the code [here](https://www.loc.gov/standards/iso639-2/php/code_list.php). \r\n",
"Done the modification in both the files. Please have a look.\r\n",
"Ok\r\n",
"Made the changes. Please have a look",
"Thanks! Can you include the changes you made in https://github.com/huggingface/transformers/pull/26855 in this PR as well?"
] | 1,697 | 1,698 | 1,698 | CONTRIBUTOR | null | # What does this PR do?
This PR translates pipeline_tutorial.md into Hindi and references it in _toctree.yml in the doc/source/hi/ folder. Thus fixes #26787
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
## Who can review?
@stevhliu and @MKhalusova please review this PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26837/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26837/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26837",
"html_url": "https://github.com/huggingface/transformers/pull/26837",
"diff_url": "https://github.com/huggingface/transformers/pull/26837.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26837.patch",
"merged_at": 1698258109000
} |
https://api.github.com/repos/huggingface/transformers/issues/26836 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26836/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26836/comments | https://api.github.com/repos/huggingface/transformers/issues/26836/events | https://github.com/huggingface/transformers/pull/26836 | 1,945,064,637 | PR_kwDOCUB6oc5c5EgO | 26,836 | Fix key dtype in GPTJ and CodeGen | {
"login": "fxmarty",
"id": 9808326,
"node_id": "MDQ6VXNlcjk4MDgzMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxmarty",
"html_url": "https://github.com/fxmarty",
"followers_url": "https://api.github.com/users/fxmarty/followers",
"following_url": "https://api.github.com/users/fxmarty/following{/other_user}",
"gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions",
"organizations_url": "https://api.github.com/users/fxmarty/orgs",
"repos_url": "https://api.github.com/users/fxmarty/repos",
"events_url": "https://api.github.com/users/fxmarty/events{/privacy}",
"received_events_url": "https://api.github.com/users/fxmarty/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thank you @ArthurZucker for the notice. Indeed in the original codebase of GPTJ and codegen `sincos` is kept as fp32, and the ROPE computation as well as `_attn` computation is done with the `key` always in fp32. The looks very much like a bug in the original implementation of those models.\r\n\r\nAnyway, in the interest of keeping the compute similar to the original implementation, I simply cast the key cache to the correct dtype.",
"It is still float16 x float32. We simply store the KV cache as float16. This `key = torch.cat((past_key, key), dim=-2)` basically casts everything to float32 (while `past_key` is fp16, `key` is fp32)\r\n\r\n(edit: But I see the point. Maybe casting the key cache to fp16 can bring instabilities. I would be surprised though given that it is not an issue for similar decoder models).",
"it is for llama 🙄 "
] | 1,697 | 1,698 | 1,698 | COLLABORATOR | null | As per title.
The issue is that
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model_id = "hf-internal-testing/tiny-random-GPTJForCausalLM"
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained(model_id)
inp = tokenizer("this is me", return_tensors="pt").to("cuda")
model = model.to("cuda")
model = model.eval()
with torch.no_grad():
res = model(**inp, use_cache=True)
print("res.past_key_values[0][0].dtype", res.past_key_values[0][0].dtype)
print("res.past_key_values[0][1].dtype", res.past_key_values[0][1].dtype)
```
gives
```
res.past_key_values[0][0].dtype torch.float32
res.past_key_values[0][1].dtype torch.float16
```
the reason being that `sin` and `cos` are not properly cast to the correct dtype, contrary to the implementation in e.g. Llama. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26836/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26836/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26836",
"html_url": "https://github.com/huggingface/transformers/pull/26836",
"diff_url": "https://github.com/huggingface/transformers/pull/26836.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26836.patch",
"merged_at": 1698134114000
} |
https://api.github.com/repos/huggingface/transformers/issues/26835 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26835/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26835/comments | https://api.github.com/repos/huggingface/transformers/issues/26835/events | https://github.com/huggingface/transformers/pull/26835 | 1,945,037,210 | PR_kwDOCUB6oc5c4-ex | 26,835 | testing doc-builder | {
"login": "mishig25",
"id": 11827707,
"node_id": "MDQ6VXNlcjExODI3NzA3",
"avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mishig25",
"html_url": "https://github.com/mishig25",
"followers_url": "https://api.github.com/users/mishig25/followers",
"following_url": "https://api.github.com/users/mishig25/following{/other_user}",
"gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mishig25/subscriptions",
"organizations_url": "https://api.github.com/users/mishig25/orgs",
"repos_url": "https://api.github.com/users/mishig25/repos",
"events_url": "https://api.github.com/users/mishig25/events{/privacy}",
"received_events_url": "https://api.github.com/users/mishig25/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26835). All of your documentation changes will be reflected on that endpoint."
] | 1,697 | 1,697 | 1,697 | CONTRIBUTOR | null | testing https://github.com/huggingface/doc-builder/pull/427
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26835/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26835/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26835",
"html_url": "https://github.com/huggingface/transformers/pull/26835",
"diff_url": "https://github.com/huggingface/transformers/pull/26835.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26835.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26834 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26834/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26834/comments | https://api.github.com/repos/huggingface/transformers/issues/26834/events | https://github.com/huggingface/transformers/pull/26834 | 1,945,017,755 | PR_kwDOCUB6oc5c46K- | 26,834 | Fix Whisper Conversion Script: Correct decoder_attention_heads and _download function | {
"login": "zuazo",
"id": 1878434,
"node_id": "MDQ6VXNlcjE4Nzg0MzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1878434?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zuazo",
"html_url": "https://github.com/zuazo",
"followers_url": "https://api.github.com/users/zuazo/followers",
"following_url": "https://api.github.com/users/zuazo/following{/other_user}",
"gists_url": "https://api.github.com/users/zuazo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zuazo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zuazo/subscriptions",
"organizations_url": "https://api.github.com/users/zuazo/orgs",
"repos_url": "https://api.github.com/users/zuazo/repos",
"events_url": "https://api.github.com/users/zuazo/events{/privacy}",
"received_events_url": "https://api.github.com/users/zuazo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26834). All of your documentation changes will be reflected on that endpoint.",
"Thanks for the review!\r\n\r\n> Seems like the docstring of the config is also wrong as the tiny has 6 heads but 4 layers not 6 layers 4 heads (if you want to update this as well!)\r\n\r\nNice catch! Indeed, the default values are also wrong, including the `d_model` (Width) value.\r\n\r\nI updated the code with the following:\r\n* Fix those default values of the `WhisperConfig` to match the Tiny size.\r\n* Add a docstring on top with a doctest.\r\n* Add a shebang and `+x`.\r\n* Change the `_download()` logic to avoid loading it twice (as you advised).\r\n\r\nLet me know if you don't agree with any of the changes and thanks again for all the feedback!",
"Thanks for the review, @sanchit-gandhi. I think I have made all the changes requested. Please, could you check [our comments on the transcription differences](https://github.com/huggingface/transformers/pull/26834#discussion_r1370567138)?",
"If you can just fix the conflict @zuazo we'll be able to merge! "
] | 1,697 | 1,699 | 1,699 | CONTRIBUTOR | null | # What does this PR do?
This PR addresses two issues in the `convert_openai_whisper_to_hf.py` script for it to work correctly.
1. It corrects the `decoder_attention_heads` value. This did not produce any error, but the models converted did not transcribe correctly.
2. It also fixes the `_download()` function:
* Adds the `root` parameter, previously gave the following error:
```
$ python convert_openai_to_hf.py \
--checkpoint_path tiny \
--pytorch_dump_folder_path pytorch_model_hf.bin
Traceback (most recent call last):
File "convert_openai_to_hf.py", line 184, in <module>
convert_openai_whisper_to_tfms(args.checkpoint_path, args.pytorch_dump_folder_path)
File "convert_openai_to_hf.py", line 133, in convert_openai_whisper_to_tfms
original_checkpoint = _download(_MODELS[checkpoint_path])
TypeError: _download() missing 1 required positional argument: 'root'
```
* Returns the download path instead of the model bytes, it produced the following error before:
```
$ python convert_openai_to_hf.py \
--checkpoint_path tiny \
--pytorch_dump_folder_path pytorch_model_hf.bin
100%|████████████████████████████████| 72.1M/72.1M [00:01<00:00, 41.8MiB/s]
Traceback (most recent call last):
File "convert_openai_to_hf.py", line 185, in <module>
convert_openai_whisper_to_tfms(args.checkpoint_path, args.pytorch_dump_folder_path)
File "convert_openai_to_hf.py", line 137, in convert_openai_whisper_to_tfms
dimensions = original_checkpoint["dims"]
TypeError: byte indices must be integers or slices, not str
```
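Putting the two `_download()` points together, a hedged sketch of the intended behaviour (simplified; the real helper in the conversion script differs in detail):

```python
import os
import urllib.request


def _download(url: str, root: str) -> str:
    os.makedirs(root, exist_ok=True)
    target = os.path.join(root, os.path.basename(url))
    if not os.path.exists(target):
        urllib.request.urlretrieve(url, target)
    # return the checkpoint path so the caller can torch.load() it,
    # instead of returning the raw downloaded bytes
    return target
```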
## Before submitting
- [x] I've read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests).
- [ ] This was discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/). Please add a link to it if that's the case.
- [x] I have updated the documentation with my changes where necessary.
- [x] I have written any new necessary tests.
## Who can review?
- Library:
- tokenizers: @ArthurZucker (based on issue #20600). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26834/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26834/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26834",
"html_url": "https://github.com/huggingface/transformers/pull/26834",
"diff_url": "https://github.com/huggingface/transformers/pull/26834.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26834.patch",
"merged_at": 1699360782000
} |
https://api.github.com/repos/huggingface/transformers/issues/26833 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26833/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26833/comments | https://api.github.com/repos/huggingface/transformers/issues/26833/events | https://github.com/huggingface/transformers/pull/26833 | 1,944,953,648 | PR_kwDOCUB6oc5c4sEA | 26,833 | Chore: Typo fixed in multiple files of docs/source/en/model_doc | {
"login": "SusheelThapa",
"id": 83917129,
"node_id": "MDQ6VXNlcjgzOTE3MTI5",
"avatar_url": "https://avatars.githubusercontent.com/u/83917129?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SusheelThapa",
"html_url": "https://github.com/SusheelThapa",
"followers_url": "https://api.github.com/users/SusheelThapa/followers",
"following_url": "https://api.github.com/users/SusheelThapa/following{/other_user}",
"gists_url": "https://api.github.com/users/SusheelThapa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SusheelThapa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SusheelThapa/subscriptions",
"organizations_url": "https://api.github.com/users/SusheelThapa/orgs",
"repos_url": "https://api.github.com/users/SusheelThapa/repos",
"events_url": "https://api.github.com/users/SusheelThapa/events{/privacy}",
"received_events_url": "https://api.github.com/users/SusheelThapa/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@a-r-r-o-w Can you review again?",
"Hey! Feel free to ping me when this is ready! ",
"@ArthurZucker Can you review this PR and provide me necessary feedback?",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26833). All of your documentation changes will be reflected on that endpoint."
] | 1,697 | 1,697 | 1,697 | CONTRIBUTOR | null | # What does this PR do?
This PR fixes the typos present in the files inside the `docs/source/en/model_doc` folder
```pseudo
optimizaton > optimization
Similiar > Similar
exaclty > exactly
conditionning > conditioning
outupt > output
avalable > avaliable
```
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26833/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26833/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26833",
"html_url": "https://github.com/huggingface/transformers/pull/26833",
"diff_url": "https://github.com/huggingface/transformers/pull/26833.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26833.patch",
"merged_at": 1697519408000
} |