url (stringlengths 62-66) | repository_url (stringclasses, 1 value) | labels_url (stringlengths 76-80) | comments_url (stringlengths 71-75) | events_url (stringlengths 69-73) | html_url (stringlengths 50-56) | id (int64, 377M-2.15B) | node_id (stringlengths 18-32) | number (int64, 1-29.2k) | title (stringlengths 1-487) | user (dict) | labels (list) | state (stringclasses, 2 values) | locked (bool, 2 classes) | assignee (dict) | assignees (list) | comments (sequence) | created_at (int64, 1.54k-1.71k) | updated_at (int64, 1.54k-1.71k) | closed_at (int64, 1.54k-1.71k, β) | author_association (stringclasses, 4 values) | active_lock_reason (stringclasses, 2 values) | body (stringlengths 0-234k, β) | reactions (dict) | timeline_url (stringlengths 71-75) | state_reason (stringclasses, 3 values) | draft (bool, 2 classes) | pull_request (dict) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/26831 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26831/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26831/comments | https://api.github.com/repos/huggingface/transformers/issues/26831/events | https://github.com/huggingface/transformers/pull/26831 | 1,944,858,788 | PR_kwDOCUB6oc5c4XGM | 26,831 | Support custom scheduler in deepspeed training | {
"login": "VeryLazyBoy",
"id": 18899212,
"node_id": "MDQ6VXNlcjE4ODk5MjEy",
"avatar_url": "https://avatars.githubusercontent.com/u/18899212?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VeryLazyBoy",
"html_url": "https://github.com/VeryLazyBoy",
"followers_url": "https://api.github.com/users/VeryLazyBoy/followers",
"following_url": "https://api.github.com/users/VeryLazyBoy/following{/other_user}",
"gists_url": "https://api.github.com/users/VeryLazyBoy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VeryLazyBoy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VeryLazyBoy/subscriptions",
"organizations_url": "https://api.github.com/users/VeryLazyBoy/orgs",
"repos_url": "https://api.github.com/users/VeryLazyBoy/repos",
"events_url": "https://api.github.com/users/VeryLazyBoy/events{/privacy}",
"received_events_url": "https://api.github.com/users/VeryLazyBoy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26831). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"cc @pacman100 ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"cc @pacman100 ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"gently pinging @pacman100 ",
"Could you just rebase on main to make sure the latest changes (to the CIs) are incorporated! ",
"Thank you all for the reivew! Hi @ArthurZucker , I have rebased my code on the lasted main branch.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hey @VeryLazyBoy sorry I think we forgot to merge it π’ Thanks for this! "
] | 1,697 | 1,707 | 1,707 | CONTRIBUTOR | null | # What does this PR do?
PR https://github.com/huggingface/transformers/pull/25863 added support for using the HF built-in scheduler with the DeepSpeed optimizer. This PR goes a step further by enabling users to use custom schedulers with the DeepSpeed optimizer. It achieves this by routing the DeepSpeed setup through the `trainer.create_scheduler` function, allowing users to override it and create their own custom scheduler.
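For illustration, here is a minimal sketch of what such an override could look like. The `CustomSchedulerTrainer` name and the inverse-square-root schedule are illustrative assumptions, not part of the PR; any torch LR scheduler could be returned instead.

```python
from torch.optim.lr_scheduler import LambdaLR

from transformers import Trainer


class CustomSchedulerTrainer(Trainer):
    # With this change, the DeepSpeed setup also routes scheduler creation
    # through this hook, so overriding it is enough to plug in a custom schedule.
    def create_scheduler(self, num_training_steps, optimizer=None):
        optimizer = optimizer if optimizer is not None else self.optimizer
        # Toy inverse-square-root schedule as a stand-in for a real custom scheduler.
        self.lr_scheduler = LambdaLR(optimizer, lambda step: 1.0 / max(1.0, step**0.5))
        return self.lr_scheduler
```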
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@muellerzr and @pacman100
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26831/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26831/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26831",
"html_url": "https://github.com/huggingface/transformers/pull/26831",
"diff_url": "https://github.com/huggingface/transformers/pull/26831.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26831.patch",
"merged_at": 1707100435000
} |
https://api.github.com/repos/huggingface/transformers/issues/26830 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26830/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26830/comments | https://api.github.com/repos/huggingface/transformers/issues/26830/events | https://github.com/huggingface/transformers/issues/26830 | 1,944,686,359 | I_kwDOCUB6oc5z6Y8X | 26,830 | Possible error in text classification example code for multi-label classification | {
"login": "subhalingamd",
"id": 48081462,
"node_id": "MDQ6VXNlcjQ4MDgxNDYy",
"avatar_url": "https://avatars.githubusercontent.com/u/48081462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/subhalingamd",
"html_url": "https://github.com/subhalingamd",
"followers_url": "https://api.github.com/users/subhalingamd/followers",
"following_url": "https://api.github.com/users/subhalingamd/following{/other_user}",
"gists_url": "https://api.github.com/users/subhalingamd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/subhalingamd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/subhalingamd/subscriptions",
"organizations_url": "https://api.github.com/users/subhalingamd/orgs",
"repos_url": "https://api.github.com/users/subhalingamd/repos",
"events_url": "https://api.github.com/users/subhalingamd/events{/privacy}",
"received_events_url": "https://api.github.com/users/subhalingamd/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"True yes, this seems to be a good point - cc @ranchlai who added this script",
"Indeed. This will cause higher precision and lower recall in multi-label classification. Thanks @subhalingamd for pointing out. I can give a quick fix. what do you think? @younesbelkada ",
"That would be great @ranchlai ! Thanks a lot !"
] | 1,697 | 1,698 | 1,698 | NONE | null | ### System Info
N/A *(system config independent)*
### Who can help?
@ArthurZucker and @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
N/A
### Expected behavior
Shouldn't the default condition be `p > 0` in the following lines for multi-label classification? The sequence classification model returns the logits and sigmoid has not been applied anywhere.
https://github.com/huggingface/transformers/blob/5c081e29930466ecf9a478727039d980131076d9/examples/pytorch/text-classification/run_classification.py#L658
https://github.com/huggingface/transformers/blob/5c081e29930466ecf9a478727039d980131076d9/examples/pytorch/text-classification/run_classification.py#L724
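For context, thresholding a probability at 0.5 after a sigmoid is equivalent to thresholding the raw logit at 0, while thresholding raw logits at any higher value acts as a stricter cutoff (higher precision, lower recall, as noted in the comments below). A small standalone illustration, not taken from the script:

```python
import numpy as np


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


logits = np.array([-1.2, 0.3, 2.0])
print(sigmoid(logits) > 0.5)  # [False  True  True]
print(logits > 0)             # [False  True  True] -> same decisions as a 0.5 probability cutoff
print(logits > 0.5)           # [False False  True] -> stricter, drops the second positive
```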
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26830/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26830/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26829 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26829/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26829/comments | https://api.github.com/repos/huggingface/transformers/issues/26829/events | https://github.com/huggingface/transformers/issues/26829 | 1,944,646,282 | I_kwDOCUB6oc5z6PKK | 26,829 | use_safetensors doesn't work for Falcon models? | {
"login": "RonanKMcGovern",
"id": 78278410,
"node_id": "MDQ6VXNlcjc4Mjc4NDEw",
"avatar_url": "https://avatars.githubusercontent.com/u/78278410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RonanKMcGovern",
"html_url": "https://github.com/RonanKMcGovern",
"followers_url": "https://api.github.com/users/RonanKMcGovern/followers",
"following_url": "https://api.github.com/users/RonanKMcGovern/following{/other_user}",
"gists_url": "https://api.github.com/users/RonanKMcGovern/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RonanKMcGovern/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RonanKMcGovern/subscriptions",
"organizations_url": "https://api.github.com/users/RonanKMcGovern/orgs",
"repos_url": "https://api.github.com/users/RonanKMcGovern/repos",
"events_url": "https://api.github.com/users/RonanKMcGovern/events{/privacy}",
"received_events_url": "https://api.github.com/users/RonanKMcGovern/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi @RonanKMcGovern \r\nThanks for the issue\r\nI tried to reproduce but the `Trelis/falcon-7b-chat-llama-style-adapters` does not exists on the Hub (or it is private), would it be possible to make that repo public or push random weights on the Hub? π ",
"Thanks! Public now @younesbelkada ",
"OK thanks! WIll try now and let you know",
"Hi @RonanKMcGovern \r\nI just managed to push the merged weights on the hub: https://huggingface.co/ybelkada/test-falcon-st/tree/main I suspect it is a CPU OOM issue that you are getting. What is the max VRAM of your CPU? Can you perhaps try with smaller shard size?",
"Thanks!\n\nWere you able to push safetensors?\n\nYeah could have been OOM but also could have been an issue with weights format owing to the SFT issue that was recently solved?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"This is working now"
] | 1,697 | 1,700 | 1,700 | NONE | null | ### System Info
System Information
Operating System: Linux
OS Version: 5.4.0-155-generic
Machine: x86_64
Processor: x86_64
Python Environment
Python Version: 3.10.6 (main, May 29 2023, 11:10:38) [GCC 11.3.0]
Executable: /usr/bin/python
CUDA & Hardware Information
CUDA Available: True
CUDA Version: 11.8
CUDA Device Count: 1
Current CUDA Device: 0
CUDA Device Name: NVIDIA RTX A6000
Transformers Library Information
Transformers Version: 4.34.0
### Who can help?
@younesbelkada @ArthurZucker I've just tried to push an unloaded and merged model to hub using:
```
model_to_push.push_to_hub(new_model, use_auth_token=True, max_shard_size="10GB", use_safetensors=True)
```
However, the files pushed are `.bin`. Is safetensors export supported for Falcon?
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
model_id = "tiiuae/falcon-7b"
adapter_to_push = "Trelis/falcon-7b-chat-llama-style-adapters"
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map='cuda',
torch_dtype=torch.bfloat16,
cache_dir=cache_dir)
model_to_push = PeftModel.from_pretrained(
model,
adapter_to_push,
)
model_to_push = model_to_push.merge_and_unload() # merge adapters with the base model.
model_to_push.push_to_hub(new_model, use_auth_token=True, max_shard_size="10GB", use_safetensors=True)
```
This works but pushes `.bin` rather than safetensors files.
Interestingly, if I try to save before pushing, like this:
```
model_to_push.save_pretrained(new_model, push_to_hub=True)
```
I get this error:
```
Cell In[22], line 1
----> 1 model_to_push.save_pretrained(new_model, push_to_hub=True)
File /usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py:2116, in PreTrainedModel.save_pretrained(self, save_directory, is_main_process, state_dict, save_function, push_to_hub, max_shard_size, safe_serialization, variant, token, save_peft_format, **kwargs)
2114 safe_save_file(shard, os.path.join(save_directory, shard_file), metadata={"format": "pt"})
2115 else:
-> 2116 save_function(shard, os.path.join(save_directory, shard_file))
2118 if index is None:
2119 path_to_weights = os.path.join(save_directory, _add_variant(WEIGHTS_NAME, variant))
File /usr/local/lib/python3.10/dist-packages/torch/serialization.py:618, in save(obj, f, pickle_module, pickle_protocol, _use_new_zipfile_serialization, _disable_byteorder_record)
615 _check_save_filelike(f)
617 if _use_new_zipfile_serialization:
--> 618 with _open_zipfile_writer(f) as opened_zipfile:
619 _save(obj, opened_zipfile, pickle_module, pickle_protocol, _disable_byteorder_record)
620 return
File /usr/local/lib/python3.10/dist-packages/torch/serialization.py:466, in _open_zipfile_writer_file.__exit__(self, *args)
465 def __exit__(self, *args) -> None:
--> 466 self.file_like.write_end_of_file()
467 if self.file_stream is not None:
468 self.file_stream.close()
RuntimeError: [enforce fail at inline_container.cc:424] . unexpected pos 1199971712 vs 1199971600
```
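One possible avenue, not confirmed in this thread: in `transformers`, the on-disk format when saving or pushing is controlled by the `safe_serialization` argument of `save_pretrained`/`push_to_hub`, whereas `use_safetensors` is a loading-side option of `from_pretrained`, so the flag in the reproduction may simply not reach the saving code. A hedged sketch continuing from the reproduction above (`"falcon-7b-merged"` is a placeholder standing in for the undefined `new_model`):

```python
# Assumption: safe_serialization selects the safetensors format on save/push.
model_to_push.save_pretrained("falcon-7b-merged", safe_serialization=True)
model_to_push.push_to_hub("falcon-7b-merged", max_shard_size="10GB", safe_serialization=True)
```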
### Expected behavior
Typically this approach results in safetensors rather than `.bin` files being pushed. Also, there are typically no issues saving the model before pushing.
As a side note, inference on the PEFT model works well (both pre- and post-merge). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26829/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26829/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26828 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26828/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26828/comments | https://api.github.com/repos/huggingface/transformers/issues/26828/events | https://github.com/huggingface/transformers/pull/26828 | 1,944,498,357 | PR_kwDOCUB6oc5c3I_9 | 26,828 | Added Telugu [te] translations | {
"login": "hakunamatata1997",
"id": 24734119,
"node_id": "MDQ6VXNlcjI0NzM0MTE5",
"avatar_url": "https://avatars.githubusercontent.com/u/24734119?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hakunamatata1997",
"html_url": "https://github.com/hakunamatata1997",
"followers_url": "https://api.github.com/users/hakunamatata1997/followers",
"following_url": "https://api.github.com/users/hakunamatata1997/following{/other_user}",
"gists_url": "https://api.github.com/users/hakunamatata1997/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hakunamatata1997/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hakunamatata1997/subscriptions",
"organizations_url": "https://api.github.com/users/hakunamatata1997/orgs",
"repos_url": "https://api.github.com/users/hakunamatata1997/repos",
"events_url": "https://api.github.com/users/hakunamatata1997/events{/privacy}",
"received_events_url": "https://api.github.com/users/hakunamatata1997/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey! feel free to ping @stevhliu for a review once this is ready! ",
"@stevhliu It's ready for initial PR",
"@stevhliu Done!",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26828). All of your documentation changes will be reflected on that endpoint."
] | 1,697 | 1,698 | 1,697 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #26786
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26828/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26828/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26828",
"html_url": "https://github.com/huggingface/transformers/pull/26828",
"diff_url": "https://github.com/huggingface/transformers/pull/26828.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26828.patch",
"merged_at": 1697840875000
} |
https://api.github.com/repos/huggingface/transformers/issues/26827 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26827/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26827/comments | https://api.github.com/repos/huggingface/transformers/issues/26827/events | https://github.com/huggingface/transformers/issues/26827 | 1,944,336,867 | I_kwDOCUB6oc5z5Dnj | 26,827 | Scheduler num_warmup_steps and num_training_steps | {
"login": "KwangryeolPark",
"id": 48284967,
"node_id": "MDQ6VXNlcjQ4Mjg0OTY3",
"avatar_url": "https://avatars.githubusercontent.com/u/48284967?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KwangryeolPark",
"html_url": "https://github.com/KwangryeolPark",
"followers_url": "https://api.github.com/users/KwangryeolPark/followers",
"following_url": "https://api.github.com/users/KwangryeolPark/following{/other_user}",
"gists_url": "https://api.github.com/users/KwangryeolPark/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KwangryeolPark/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KwangryeolPark/subscriptions",
"organizations_url": "https://api.github.com/users/KwangryeolPark/orgs",
"repos_url": "https://api.github.com/users/KwangryeolPark/repos",
"events_url": "https://api.github.com/users/KwangryeolPark/events{/privacy}",
"received_events_url": "https://api.github.com/users/KwangryeolPark/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey! Thanks for reporting, it would be great if you can isolate the bug by creating a very small reproducer with a dummy model for example! This way we can be sure that indeed the problem is from the lr scheduler on multi GPU and not the custom training loop / dataset / model π€ ",
"> Hey! Thanks for reporting, it would be great if you can isolate the bug by creating a very small reproducer with a dummy model for example! This way we can be sure that indeed the problem is from the lr scheduler on multi GPU and not the custom training loop / dataset / model π€\r\n\r\nThanks for reply. What is reproducer and dummy model? Do you mean the whole code and corresponding args?",
"Sorry, actually it's a bit expected that the example codes should be updated to each's custom usage. By isolating the bug I mean testing with your proposed changes and see if that fixes the problem or if the issue is from the scheduler itself, in which case creating a small reproducer is usually easier for us to debug! \r\nIf your proposed fix works as expected you can open a PR π \r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,697 | 1,700 | 1,700 | NONE | null | ### System Info
transformers: 4.34.0
Ubuntu
torch: 2.1
2 x RTX 3090 GPUs
t5-small, translation on wmt14 en -> de task
metric: BLEU every 1,000 training iterations.
### Who can help?
@ArthurZucker
I'm running the run_translation_no_trainer.py script on 2 GPUs, using the accelerate setup provided in the script.
I'm using the linear scheduler.
I find that the learning rate reaches 0 after half of args.max_train_steps, and the warmup also only appears to run until half of args.num_warmup_steps. The related code is here:
```python
lr_scheduler = get_scheduler(
    name=args.lr_scheduler_type,
    optimizer=optimizer,
    num_warmup_steps=args.num_warmup_steps,
    num_training_steps=args.max_train_steps,
)
```
I think the problem is caused by the multi-GPU setting, which the scheduler setup does not take into account.
So I think the code should be modified as below:
```python
lr_scheduler = get_scheduler(
    name=args.lr_scheduler_type,
    optimizer=optimizer,
    num_warmup_steps=args.num_warmup_steps * accelerator.num_processes,
    num_training_steps=args.max_train_steps * accelerator.num_processes,
)
```
Here is my learning-rate graph. As you can see, the learning rate becomes zero after half of the max training steps (and the warmup likewise ends at half of the warmup steps). I logged:

And below are the BLEU scores every 1,000 iterations.

### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Change the example code as below to log every 1,000 iterations. (Not much is changed; I only modified the logging interval from once per epoch to once every 1,000 iterations.)
```python
for epoch in range(starting_epoch, args.num_train_epochs):
model.train()
if args.resume_from_checkpoint and epoch == starting_epoch and resume_step is not None:
# We skip the first `n` batches in the dataloader when resuming from a checkpoint
active_dataloader = accelerator.skip_first_batches(train_dataloader, resume_step)
else:
active_dataloader = train_dataloader
for step, batch in enumerate(active_dataloader):
outputs = model(**batch)
loss = outputs.loss
# We keep track of the loss at each epoch
if args.with_tracking:
total_loss += loss.detach().float()
loss = loss / args.gradient_accumulation_steps
accelerator.backward(loss)
if step % args.gradient_accumulation_steps == 0 or step == len(train_dataloader) - 1:
optimizer.step()
lr_scheduler.step()
optimizer.zero_grad()
progress_bar.update(1)
completed_steps += 1
if isinstance(checkpointing_steps, int):
if completed_steps % checkpointing_steps == 0:
output_dir = f"step_{completed_steps}"
if args.output_dir is not None:
output_dir = os.path.join(args.output_dir, output_dir)
accelerator.save_state(output_dir)
if args.with_tracking:
total_memory += torch.cuda.max_memory_allocated() / 1024.0 / 1024.0
if completed_steps >= args.max_train_steps:
break
# Log metrics
if completed_steps % 1000 == 0:
model.eval()
if args.val_max_target_length is None:
args.val_max_target_length = args.max_target_length
gen_kwargs = {
"max_length": args.val_max_target_length if args is not None else config.max_length,
"num_beams": args.num_beams,
}
samples_seen = 0
for step, batch in enumerate(eval_dataloader):
with torch.no_grad():
generated_tokens = accelerator.unwrap_model(model).generate(
batch["input_ids"],
attention_mask=batch["attention_mask"],
**gen_kwargs,
)
generated_tokens = accelerator.pad_across_processes(
generated_tokens, dim=1, pad_index=tokenizer.pad_token_id
)
labels = batch["labels"]
if not args.pad_to_max_length:
# If we did not pad to max length, we need to pad the labels too
labels = accelerator.pad_across_processes(batch["labels"], dim=1, pad_index=tokenizer.pad_token_id)
generated_tokens = accelerator.gather(generated_tokens).cpu().numpy()
labels = accelerator.gather(labels).cpu().numpy()
if args.ignore_pad_token_for_loss:
# Replace -100 in the labels as we can't decode them.
labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
decoded_preds = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
decoded_preds, decoded_labels = postprocess_text(decoded_preds, decoded_labels)
# If we are in a multiprocess environment, the last batch has duplicates
if accelerator.num_processes > 1:
if step == len(eval_dataloader) - 1:
decoded_preds = decoded_preds[: len(eval_dataloader.dataset) - samples_seen]
decoded_labels = decoded_labels[: len(eval_dataloader.dataset) - samples_seen]
else:
samples_seen += len(decoded_labels)
metric.add_batch(predictions=decoded_preds, references=decoded_labels)
eval_metric = metric.compute()
logger.info({"bleu": eval_metric["score"]})
if args.with_tracking:
accelerator.log(
{
"bleu": eval_metric["score"],
"train_loss": total_loss.item() / args.log_interval,
"epoch": epoch,
"step": completed_steps,
"train_mem": total_memory / args.log_interval,
"lr": optimizer.param_groups[0]["lr"],
},
step=completed_steps,
)
logger.info("Upload complete")
total_loss = 0
total_memory = 0
if args.push_to_hub and epoch < args.num_train_epochs - 1:
accelerator.wait_for_everyone()
unwrapped_model = accelerator.unwrap_model(model)
unwrapped_model.save_pretrained(
args.output_dir, is_main_process=accelerator.is_main_process, save_function=accelerator.save
)
if accelerator.is_main_process:
tokenizer.save_pretrained(args.output_dir)
repo.push_to_hub(
commit_message=f"Training in progress epoch {epoch}", blocking=False, auto_lfs_prune=True
)
if args.checkpointing_steps == "epoch":
output_dir = f"epoch_{epoch}"
if args.output_dir is not None:
output_dir = os.path.join(args.output_dir, output_dir)
accelerator.save_state(output_dir)
model.train()
```
RTX 3090 x 2, without quantization and without any additional features such as DeepSpeed.
Args:
```bash
accelerate launch run_translation.py \
--model_name_or_path t5-small\
--dataset_name wmt14 \
--source_lang en \
--target_lang de \
--source_prefix "translate English to German: " \
--preprocessing_num_workers 16 \
--per_device_train_batch_size 32 \
--per_device_eval_batch_size 32 \
--max_train_steps 262144 \
--dataset_config_name de-en \
--with_tracking \
--log_interval 1000 \
--max_length 512 \
--num_beams 7 \
--report_to wandb \
--learning_rate 0.001 \
--lr_scheduler_type linear \
--num_warmup_steps 10000 \
--wandb_id fasldkfh \
--output_dir ./
```
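For completeness, a minimal standalone check of the proposed scheduler fix, assuming the 2-GPU setup and the warmup/total step counts from the args above (no `accelerate launch` required):

```python
import torch

from transformers import get_scheduler

num_processes = 2  # assumption: 2 GPUs as in this report
optimizer = torch.optim.AdamW([torch.nn.Parameter(torch.zeros(1))], lr=0.001)
scheduler = get_scheduler(
    "linear",
    optimizer=optimizer,
    num_warmup_steps=10_000 * num_processes,
    num_training_steps=262_144 * num_processes,
)

# Stepping the scheduler once per process for every optimizer step mirrors how
# accelerate advances a prepared scheduler, so with the multiplied totals the
# learning rate should peak at step 10,000 and only reach zero at step 262,144.
for step in range(1, 262_145):
    optimizer.step()
    for _ in range(num_processes):
        scheduler.step()
    if step in (10_000, 262_144):
        print(step, optimizer.param_groups[0]["lr"])
```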
### Expected behavior
I expect the scheduler to work as intended: warming up over the full args.num_warmup_steps and decaying to zero only at args.max_train_steps. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26827/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26827/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26826 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26826/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26826/comments | https://api.github.com/repos/huggingface/transformers/issues/26826/events | https://github.com/huggingface/transformers/issues/26826 | 1,944,311,180 | I_kwDOCUB6oc5z49WM | 26,826 | Pipelines should yield results incrementally even when their input is a list | {
"login": "uyhcire",
"id": 27897696,
"node_id": "MDQ6VXNlcjI3ODk3Njk2",
"avatar_url": "https://avatars.githubusercontent.com/u/27897696?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/uyhcire",
"html_url": "https://github.com/uyhcire",
"followers_url": "https://api.github.com/users/uyhcire/followers",
"following_url": "https://api.github.com/users/uyhcire/following{/other_user}",
"gists_url": "https://api.github.com/users/uyhcire/gists{/gist_id}",
"starred_url": "https://api.github.com/users/uyhcire/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/uyhcire/subscriptions",
"organizations_url": "https://api.github.com/users/uyhcire/orgs",
"repos_url": "https://api.github.com/users/uyhcire/repos",
"events_url": "https://api.github.com/users/uyhcire/events{/privacy}",
"received_events_url": "https://api.github.com/users/uyhcire/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Makes sense to me, however this changes the default behaviour, and does not take into account batching, which is what's recommended if you are running with a lot of inputs. \r\nAlso depending on the pipeline this might already be implemented ! \r\nI am not sure how @Narsil feels about it ? (he is off for now but this can wait, in the meantime feel free to open a PR to see what kind of changes are required for this!)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,697 | 1,700 | 1,700 | NONE | null | ### System Info
- `transformers` version: 4.34.0
- Platform: Linux-5.15.120+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.4.0
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu118 (False)
- Tensorflow version (GPU?): 2.13.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.7.4 (cpu)
- Jax version: 0.4.16
- JaxLib version: 0.4.16
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@nar
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
```
from transformers import pipeline
pipe = pipeline("text-classification")
def input_generator():
for _ in range(100000):
yield "This restaurant is awesome"
# Finishes quickly
for result in pipe(input_generator()):
print(result)
break
# Takes basically forever
for result in pipe(["This restaurant is awesome"] * 100000):
print(result)
break
```
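Until the list behaviour changes, a workaround consistent with the reproduction above is to hand the pipeline a generator over the list rather than the list itself (sketch reusing `pipe` from the snippet above):

```python
inputs = ["This restaurant is awesome"] * 100000

# A generator expression puts the pipeline on its streaming path, so each
# result is yielded as soon as it is ready instead of after the full pass.
for result in pipe(text for text in inputs):
    print(result)
    break
```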
### Expected behavior
Pipelines should yield results as soon as they are available, regardless of whether the inputs to be processed come in the form of a list or a generator. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26826/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26826/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26825 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26825/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26825/comments | https://api.github.com/repos/huggingface/transformers/issues/26825/events | https://github.com/huggingface/transformers/issues/26825 | 1,944,079,616 | I_kwDOCUB6oc5z4E0A | 26,825 | RuntimeError: Failed to import transformers.data.data_collator | {
"login": "Soumyajyotidutta",
"id": 47978486,
"node_id": "MDQ6VXNlcjQ3OTc4NDg2",
"avatar_url": "https://avatars.githubusercontent.com/u/47978486?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Soumyajyotidutta",
"html_url": "https://github.com/Soumyajyotidutta",
"followers_url": "https://api.github.com/users/Soumyajyotidutta/followers",
"following_url": "https://api.github.com/users/Soumyajyotidutta/following{/other_user}",
"gists_url": "https://api.github.com/users/Soumyajyotidutta/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Soumyajyotidutta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Soumyajyotidutta/subscriptions",
"organizations_url": "https://api.github.com/users/Soumyajyotidutta/orgs",
"repos_url": "https://api.github.com/users/Soumyajyotidutta/repos",
"events_url": "https://api.github.com/users/Soumyajyotidutta/events{/privacy}",
"received_events_url": "https://api.github.com/users/Soumyajyotidutta/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,697 | 1,697 | 1,697 | NONE | null | ### System Info

### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Create New Conda Environment (Ubuntu 20.04 LTS)
2. Install PyTorch, Transformers, Datasets, Evaluate
3. Run Script that Contains **from transformers import DataCollatorForSeq2Seq**
### Expected behavior
RuntimeError: Failed to import transformers.data.data_collator | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26825/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26825/timeline | not_planned | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26824 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26824/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26824/comments | https://api.github.com/repos/huggingface/transformers/issues/26824/events | https://github.com/huggingface/transformers/pull/26824 | 1,944,017,497 | PR_kwDOCUB6oc5c1h9o | 26,824 | adding smaller or equal rather than greater | {
"login": "MostHumble",
"id": 56939432,
"node_id": "MDQ6VXNlcjU2OTM5NDMy",
"avatar_url": "https://avatars.githubusercontent.com/u/56939432?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MostHumble",
"html_url": "https://github.com/MostHumble",
"followers_url": "https://api.github.com/users/MostHumble/followers",
"following_url": "https://api.github.com/users/MostHumble/following{/other_user}",
"gists_url": "https://api.github.com/users/MostHumble/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MostHumble/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MostHumble/subscriptions",
"organizations_url": "https://api.github.com/users/MostHumble/orgs",
"repos_url": "https://api.github.com/users/MostHumble/repos",
"events_url": "https://api.github.com/users/MostHumble/events{/privacy}",
"received_events_url": "https://api.github.com/users/MostHumble/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"`num_beam_groups` has to be smaller than `num_beams`, thus if `num_beam_groups > num_beams` the error should be thrown -- I believe the original code is correct :)"
] | 1,697 | 1,698 | 1,698 | NONE | null | according to description : "`num_beam_groups` has to be an integer smaller or equal than `num_beams` and `num_beams` has to be"
f" divisible by `num_beam_groups`, but is {num_beam_groups} with `num_beams` being {num_beams}."
## Before submitting
- [ X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ -] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [- ] Did you write any new necessary tests?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26824/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26824/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26824",
"html_url": "https://github.com/huggingface/transformers/pull/26824",
"diff_url": "https://github.com/huggingface/transformers/pull/26824.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26824.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26823 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26823/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26823/comments | https://api.github.com/repos/huggingface/transformers/issues/26823/events | https://github.com/huggingface/transformers/issues/26823 | 1,943,986,864 | I_kwDOCUB6oc5z3uKw | 26,823 | Segmentation fault when initializing training arguments with imported tensorflow | {
"login": "zeionara",
"id": 22661961,
"node_id": "MDQ6VXNlcjIyNjYxOTYx",
"avatar_url": "https://avatars.githubusercontent.com/u/22661961?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zeionara",
"html_url": "https://github.com/zeionara",
"followers_url": "https://api.github.com/users/zeionara/followers",
"following_url": "https://api.github.com/users/zeionara/following{/other_user}",
"gists_url": "https://api.github.com/users/zeionara/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zeionara/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zeionara/subscriptions",
"organizations_url": "https://api.github.com/users/zeionara/orgs",
"repos_url": "https://api.github.com/users/zeionara/repos",
"events_url": "https://api.github.com/users/zeionara/events{/privacy}",
"received_events_url": "https://api.github.com/users/zeionara/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @Rocketknight1 as this might be a TF issue ",
"Hi @zeionara, I ran that code here on TF 2.14 and did not get a segfault. Can you try replacing `TrainingArguments` with `TFTrainingArguments` to see if the issue persists? The `TrainingArguments` class is usually used for Torch examples, and might do some torch initializations, which could cause a segfault if Torch is incorrectly configured and TF has allocated GPU memory already.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,697 | 1,702 | 1,702 | NONE | null | ### System Info
- `transformers` version: 4.34.0
- Platform: Linux-6.2.0-33-generic-x86_64-with-glibc2.35
- Python version: 3.11.5
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.3.2
- Accelerate version: 0.23.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0 (True)
- Tensorflow version (GPU?): 2.14.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@muellerzr @pacman100
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
import tensorflow as tf
from transformers import TrainingArguments
print('Listing tf gpus...')
print(tf.config.list_physical_devices("GPU"))
print('Initializing training arguments...')
args = TrainingArguments(output_dir="output")
print('Initialization has completed')
```
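For comparison, and following the suggestion later in the comments, a hedged variant of this reproduction that stays on the TensorFlow side would look like the sketch below; whether it avoids the segfault on this particular machine is untested here.

```python
import tensorflow as tf

from transformers import TFTrainingArguments

print('Listing tf gpus...')
print(tf.config.list_physical_devices("GPU"))

print('Initializing TF training arguments...')
# TFTrainingArguments skips the torch-side device initialization that the
# maintainers suspect is triggering the crash with TrainingArguments.
args = TFTrainingArguments(output_dir="output")
print('Initialization has completed')
```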
### Expected behavior
Expected output:
```
Listing tf gpus...
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU'), PhysicalDevice(name='/physical_device:GPU:1', device_type='GPU')]
Initializing training arguments...
Initialization has completed
```
Observed output:
```
Listing tf gpus...
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU'), PhysicalDevice(name='/physical_device:GPU:1', device_type='GPU')]
Initializing training arguments...
Segmentation fault (core dumped)
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26823/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26823/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26822 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26822/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26822/comments | https://api.github.com/repos/huggingface/transformers/issues/26822/events | https://github.com/huggingface/transformers/pull/26822 | 1,943,972,084 | PR_kwDOCUB6oc5c1ZA_ | 26,822 | [OWL-ViT, OWLv2] Add resources | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,697 | 1,697 | 1,697 | CONTRIBUTOR | null | # What does this PR do?
This PR improves the docs of OWL-ViT and OWLv2 by including a figure as well as a link to demo notebooks. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26822/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26822/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26822",
"html_url": "https://github.com/huggingface/transformers/pull/26822",
"diff_url": "https://github.com/huggingface/transformers/pull/26822.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26822.patch",
"merged_at": 1697464065000
} |
https://api.github.com/repos/huggingface/transformers/issues/26821 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26821/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26821/comments | https://api.github.com/repos/huggingface/transformers/issues/26821/events | https://github.com/huggingface/transformers/pull/26821 | 1,943,933,852 | PR_kwDOCUB6oc5c1Rje | 26,821 | [docstring] Fix docstrings for `CodeGenConfig`, `CodeGenTokenizer`, `CodeGenTokenizerFast` | {
"login": "daniilgaltsev",
"id": 75201860,
"node_id": "MDQ6VXNlcjc1MjAxODYw",
"avatar_url": "https://avatars.githubusercontent.com/u/75201860?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/daniilgaltsev",
"html_url": "https://github.com/daniilgaltsev",
"followers_url": "https://api.github.com/users/daniilgaltsev/followers",
"following_url": "https://api.github.com/users/daniilgaltsev/following{/other_user}",
"gists_url": "https://api.github.com/users/daniilgaltsev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/daniilgaltsev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/daniilgaltsev/subscriptions",
"organizations_url": "https://api.github.com/users/daniilgaltsev/orgs",
"repos_url": "https://api.github.com/users/daniilgaltsev/repos",
"events_url": "https://api.github.com/users/daniilgaltsev/events{/privacy}",
"received_events_url": "https://api.github.com/users/daniilgaltsev/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@ydshieh Hi, can you review this?",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26821). All of your documentation changes will be reflected on that endpoint."
] | 1,697 | 1,698 | 1,697 | CONTRIBUTOR | null | # What does this PR do?
Fixes the docstrings for CodeGen following #26638
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26821/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26821/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26821",
"html_url": "https://github.com/huggingface/transformers/pull/26821",
"diff_url": "https://github.com/huggingface/transformers/pull/26821.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26821.patch",
"merged_at": 1697718100000
} |
https://api.github.com/repos/huggingface/transformers/issues/26820 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26820/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26820/comments | https://api.github.com/repos/huggingface/transformers/issues/26820/events | https://github.com/huggingface/transformers/pull/26820 | 1,943,916,390 | PR_kwDOCUB6oc5c1OFL | 26,820 | [docstring] Fix bert generation tokenizer | {
"login": "przemL",
"id": 24912415,
"node_id": "MDQ6VXNlcjI0OTEyNDE1",
"avatar_url": "https://avatars.githubusercontent.com/u/24912415?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/przemL",
"html_url": "https://github.com/przemL",
"followers_url": "https://api.github.com/users/przemL/followers",
"following_url": "https://api.github.com/users/przemL/following{/other_user}",
"gists_url": "https://api.github.com/users/przemL/gists{/gist_id}",
"starred_url": "https://api.github.com/users/przemL/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/przemL/subscriptions",
"organizations_url": "https://api.github.com/users/przemL/orgs",
"repos_url": "https://api.github.com/users/przemL/repos",
"events_url": "https://api.github.com/users/przemL/events{/privacy}",
"received_events_url": "https://api.github.com/users/przemL/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@ydshieh Could you please review the PR?\r\nThank you.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26820). All of your documentation changes will be reflected on that endpoint."
] | 1,697 | 1,697 | 1,697 | CONTRIBUTOR | null | # What does this PR do?
Fix docstring for bert generation tokenizer.
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@ydshieh | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26820/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26820/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26820",
"html_url": "https://github.com/huggingface/transformers/pull/26820",
"diff_url": "https://github.com/huggingface/transformers/pull/26820.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26820.patch",
"merged_at": 1697473616000
} |
https://api.github.com/repos/huggingface/transformers/issues/26819 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26819/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26819/comments | https://api.github.com/repos/huggingface/transformers/issues/26819/events | https://github.com/huggingface/transformers/pull/26819 | 1,943,902,671 | PR_kwDOCUB6oc5c1LXS | 26,819 | [docstring] Fix docstring for bert generation tokenizer | {
"login": "przemL",
"id": 24912415,
"node_id": "MDQ6VXNlcjI0OTEyNDE1",
"avatar_url": "https://avatars.githubusercontent.com/u/24912415?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/przemL",
"html_url": "https://github.com/przemL",
"followers_url": "https://api.github.com/users/przemL/followers",
"following_url": "https://api.github.com/users/przemL/following{/other_user}",
"gists_url": "https://api.github.com/users/przemL/gists{/gist_id}",
"starred_url": "https://api.github.com/users/przemL/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/przemL/subscriptions",
"organizations_url": "https://api.github.com/users/przemL/orgs",
"repos_url": "https://api.github.com/users/przemL/repos",
"events_url": "https://api.github.com/users/przemL/events{/privacy}",
"received_events_url": "https://api.github.com/users/przemL/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,697 | 1,697 | 1,697 | CONTRIBUTOR | null | # What does this PR do?
It fixes docstring for bert generation tokenizer.
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@ydshieh
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26819/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26819/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26819",
"html_url": "https://github.com/huggingface/transformers/pull/26819",
"diff_url": "https://github.com/huggingface/transformers/pull/26819.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26819.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26818 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26818/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26818/comments | https://api.github.com/repos/huggingface/transformers/issues/26818/events | https://github.com/huggingface/transformers/issues/26818 | 1,943,817,421 | I_kwDOCUB6oc5z3EzN | 26,818 | Distributed predictions are generated repeatedly | {
"login": "Sakurakdx",
"id": 48399040,
"node_id": "MDQ6VXNlcjQ4Mzk5MDQw",
"avatar_url": "https://avatars.githubusercontent.com/u/48399040?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Sakurakdx",
"html_url": "https://github.com/Sakurakdx",
"followers_url": "https://api.github.com/users/Sakurakdx/followers",
"following_url": "https://api.github.com/users/Sakurakdx/following{/other_user}",
"gists_url": "https://api.github.com/users/Sakurakdx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Sakurakdx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sakurakdx/subscriptions",
"organizations_url": "https://api.github.com/users/Sakurakdx/orgs",
"repos_url": "https://api.github.com/users/Sakurakdx/repos",
"events_url": "https://api.github.com/users/Sakurakdx/events{/privacy}",
"received_events_url": "https://api.github.com/users/Sakurakdx/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @SunMarc if you have some time have a look! ",
"Hi @Sakurakdx, were you able to fix the issue ? If not, could you provide a minimal reproducer ? It will help me a lot to debug the issue I see that you are using your own trainer. Does this issue also happen on HF Trainer ? Thanks. ",
"@SunMarc, I didn't actually solve the problem; I just did some post-processing after the prediction was completed. The trainer I used was inherited from Hugging Face's trainer, and theoretically, it's a bug in the native trainer. I have already left the project, so I'm unable to provide a reproducer, sorry.",
"Thanks for your reply and sorry to not have answered earlier. I'm closing this issue if you are fine if that. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,697 | 1,708 | 1,708 | NONE | null | ### System Info
- `transformers` version: 4.35.0.dev0
- Platform: Linux-5.4.143.bsk.8-amd64-x86_64-with-glibc2.28
- Python version: 3.10.13
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.4.0
- Accelerate version: 0.23.0
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: DEEPSPEED
- use_cpu: False
- debug: False
- num_processes: 1
- machine_rank: 0
- num_machines: 1
- rdzv_backend: static
- same_network: True
- main_training_function: main
- deepspeed_config: {'deepspeed_config_file': 'none', 'zero3_init_flag': False}
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- dynamo_config: {'dynamo_backend': 'INDUCTOR', 'dynamo_mode': 'default', 'dynamo_use_dynamic': False, 'dynamo_use_fullgraph': False}
- PyTorch version (GPU?): 2.1.0+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@ArthurZucker @younesbelkada @muellerzr @pacman100
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I am loading a LoRA fine-tuned LLaMA for prediction. When I predict on multiple GPUs, there are duplicate predictions, but on a single GPU this problem does not occur.
My trainer:
```python
class CausalFtTrainer(Seq2SeqTrainer):
# https://discuss.huggingface.co/t/trainer-vs-seq2seqtrainer/3145/2
def __init__(
self,
params: Dict[str, Any],
model: torch.nn.Module,
train_dataset: Optional[Dataset] = None,
eval_dataset: Optional[Dataset] = None,
data_collator: Optional[Callable] = None,
tokenizer: Optional[PreTrainedTokenizerBase] = None,
callback: Optional[LLMCallback] = None,
compute_metrics: Optional[Callable] = None,
optimizers: Tuple[torch.optim.Optimizer, torch.optim.lr_scheduler.LambdaLR] = (None, None),
) -> None:
# TODO: change the base class to Trainer
self.combine_causal_loss_factor = params.pop("combine_causal_loss_factor", 0)
self.params = params
if train_dataset and not eval_dataset:
# generation_config should be a generation_config instance or None,
# not a dict, so if eval_dataset is None, set this key to None
self.params["generation_config"] = None
if eval_dataset:
# https://huggingface.co/docs/transformers/main/main_classes/text_generation
generation_config = GenerationConfig(**self.params.pop("generation_config"))
self.params["generation_config"] = generation_config
self.arguments = CausalFtTrainingArguments(**self.params)
super().__init__(
model=model,
args=self.arguments,
train_dataset=train_dataset,
eval_dataset=eval_dataset,
data_collator=data_collator,
tokenizer=tokenizer,
callbacks=[LLMCallback] if not callback else None,
compute_metrics=compute_metrics,
optimizers=optimizers,
)
def predict(
self,
test_dataset: Dataset,
ignore_keys: Optional[List[str]] = None,
metric_key_prefix: str = "test",
**gen_kwargs,
) -> PredictionOutput:
data_size = len(test_dataset)
# for arnold
if os.environ.get("ARNOLD_WORKER_GPU", None):
gpu_number = int(os.environ.get("ARNOLD_WORKER_GPU", 1)) * int(os.environ.get("ARNOLD_WORKER_NUM", 1))
else:
gpu_number = torch.cuda.device_count()
self.total_step = math.ceil(data_size // (self.params["per_device_eval_batch_size"] * gpu_number))
self.current_step = 0
return super().predict(test_dataset, ignore_keys, metric_key_prefix, **gen_kwargs)
def prediction_step(
self, model: torch.nn.Module, inputs: Dict[str, torch.Tensor], prediction_loss_only: bool, ignore_keys: Optional[List[str]] = None, **kwargs
) -> Tuple[float, torch.Tensor, torch.Tensor]:
y_true = inputs.pop("ground_truth_labels", None)
if not self.args.predict_with_generate or prediction_loss_only:
return super().prediction_step(model, inputs, prediction_loss_only=prediction_loss_only, ignore_keys=ignore_keys)
has_labels = "labels" in inputs
inputs = self._prepare_inputs(inputs)
gen_kwargs = self._gen_kwargs.copy()
if gen_kwargs.get("max_length") is None and gen_kwargs.get("max_new_tokens") is None:
gen_kwargs["max_length"] = self.model.config.max_length
gen_kwargs["num_beams"] = gen_kwargs["num_beams"] if gen_kwargs.get("num_beams") is not None else self.model.config.num_beams
default_synced_gpus = True if is_deepspeed_zero3_enabled() else False
gen_kwargs["synced_gpus"] = gen_kwargs["synced_gpus"] if gen_kwargs.get("synced_gpus") is not None else default_synced_gpus
# If the `decoder_input_ids` was created from `labels`, evict the former, so that the model can freely generate
# (otherwise, it would continue generating from the padded `decoder_input_ids`)
if "labels" in inputs and "decoder_input_ids" in inputs and inputs["labels"].shape == inputs["decoder_input_ids"].shape:
inputs = {k: v for k, v in inputs.items() if k != "decoder_input_ids"}
# extra params for generation_config
if not isinstance(self.tokenizer, CasterTokenizer):
self.params["generation_config"].bos_token_id = self.tokenizer.bos_token_id
self.params["generation_config"].eos_token_id = self.tokenizer.eos_token_id
self.params["generation_config"].pad_token_id = self.tokenizer.pad_token_id
gen_kwargs["generation_config"] = self.params["generation_config"]
scores = None
if not getattr(self.params["generation_config"], "output_scores", False):
generated_tokens = self.model.generate(**inputs, **gen_kwargs)
else:
# set return_dict_in_generate to True in order to get the output dict
self.params["generation_config"].return_dict_in_generate = True
outputs = self.model.generate(**inputs, **gen_kwargs)
generated_tokens, scores = outputs["sequences"], outputs["scores"]
# generated_tokens: [batch_size, seq_len]
# scores: ([1, vocab_size], [1, vocab_size], ...)
generated_str_length = generated_tokens.size(0)
generated_token_probs = [[] for _ in range(generated_str_length)]
if scores:
generated_token_length = len(scores)
for index_ in range(-generated_token_length, 0):
probs = (
torch.nn.functional.softmax(scores[index_], dim=1).type(torch.float32).cpu().numpy()
) # bf16 should change type to torch.float32, so use torch.float32 to convert
for _ in range(generated_str_length):
index_probs = generated_tokens[_][index_]
if self.args.care_tokens:
index_probs = self.tokenizer.encode(self.args.care_tokens)
generated_token_probs[_].append(probs[_][index_probs])
generated_token_probs = np.array(generated_token_probs).tolist()
del scores
# Temporary hack to ensure the generation config is not initialized for each iteration of the evaluation loop
# TODO: remove this hack when the legacy code that initializes generation_config from a model config is
# removed in https://github.com/huggingface/transformers/blob/98d88b23f54e5a23e741833f1e973fdf600cc2c5/src/transformers/generation/utils.py#L1183
if self.model.generation_config._from_model_config:
self.model.generation_config._from_model_config = False
# Retrieves GenerationConfig from model.generation_config
gen_config = self.model.generation_config
# in case the batch is shorter than max length, the output should be padded
if generated_tokens.shape[-1] < gen_config.max_length:
generated_tokens = self._pad_tensors_to_max_len(generated_tokens, gen_config.max_length)
elif gen_config.max_new_tokens is not None and generated_tokens.shape[-1] < gen_config.max_new_tokens + 1:
generated_tokens = self._pad_tensors_to_max_len(generated_tokens, gen_config.max_new_tokens + 1)
with torch.no_grad():
if has_labels:
with self.compute_loss_context_manager():
outputs = model(**inputs)
if self.label_smoother is not None:
loss = self.label_smoother(outputs, inputs["labels"]).mean().detach()
else:
loss = (outputs["loss"] if isinstance(outputs, dict) else outputs[0]).mean().detach()
else:
loss = None
if self.args.prediction_loss_only:
return loss, None, None
if has_labels:
labels = inputs["labels"]
if labels.shape[-1] < gen_config.max_length:
labels = self._pad_tensors_to_max_len(labels, gen_config.max_length)
elif gen_config.max_new_tokens is not None and labels.shape[-1] < gen_config.max_new_tokens + 1:
labels = self._pad_tensors_to_max_len(labels, gen_config.max_new_tokens + 1)
else:
labels = None
# process the generated tokens
if dist.get_rank() == 0 and not self.args.do_eval:
if self.current_step % self.params["logging_steps"] == 0:
ground_truth_ = self.decode([y_true[0]]) if y_true is not None else None
output = self.decode([generated_tokens[0]])
logger.info(f"[GPU_{dist.get_rank()}, {self.current_step}/{self.total_step}]: \n ground_truth: {ground_truth_[0]} \n output: {output[0]}")
self.current_step += 1
self._output_generate_results(inputs, generated_tokens, y_true, generated_token_probs)
# generated_tokens = torch.tensor([1]).cuda(f"cuda:{dist.get_rank()}")
return loss, generated_tokens, labels
```
main function
```python
predictor = CausalFtTrainer(
params=trainer_params,
model=causal_model,
eval_dataset=dataset,
tokenizer=tokenizer,
data_collator=data_collator,
)
predictor.predict(dataset, **predict_params)
```
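(A minimal post-processing sketch, assuming the `predictor` and `dataset` above: distributed evaluation can pad/repeat samples so that every rank sees the same number of batches, so one workaround is to truncate the gathered outputs back to the dataset length. This is only a workaround, not a root-cause fix.)
```python
outputs = predictor.predict(dataset, **predict_params)

# keep only the first len(dataset) rows; assumes outputs.predictions is a single array,
# a tuple of arrays would need per-element slicing
preds = outputs.predictions[: len(dataset)]
label_ids = outputs.label_ids[: len(dataset)] if outputs.label_ids is not None else None
```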
Another problem is that when running on multiple cards, it will get stuck in the pred_gather function or report an error.


### Expected behavior


| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26818/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26818/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26817 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26817/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26817/comments | https://api.github.com/repos/huggingface/transformers/issues/26817/events | https://github.com/huggingface/transformers/issues/26817 | 1,943,812,394 | I_kwDOCUB6oc5z3Dkq | 26,817 | Possible bug for DPRContextEncoderTokenizer | {
"login": "Hannibal046",
"id": 38466901,
"node_id": "MDQ6VXNlcjM4NDY2OTAx",
"avatar_url": "https://avatars.githubusercontent.com/u/38466901?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Hannibal046",
"html_url": "https://github.com/Hannibal046",
"followers_url": "https://api.github.com/users/Hannibal046/followers",
"following_url": "https://api.github.com/users/Hannibal046/following{/other_user}",
"gists_url": "https://api.github.com/users/Hannibal046/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Hannibal046/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Hannibal046/subscriptions",
"organizations_url": "https://api.github.com/users/Hannibal046/orgs",
"repos_url": "https://api.github.com/users/Hannibal046/repos",
"events_url": "https://api.github.com/users/Hannibal046/events{/privacy}",
"received_events_url": "https://api.github.com/users/Hannibal046/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@lhoestq Could you please check this?",
"Also, I found another mismatched point where original DPR would pad to `max length`(256) and insert an `[SEP]` token at the end:\r\nhttps://github.com/facebookresearch/DPR/blob/a31212dc0a54dfa85d8bfa01e1669f149ac832b7/dpr/models/hf_models.py#L310-L315",
"Hello, would you like to open a PR for a fix? π ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,697 | 1,700 | 1,700 | NONE | null | ### System Info
- `transformers` version: 4.34.0
- Platform: Linux-4.15.0-213-generic-x86_64-with-glibc2.27
- Python version: 3.9.18
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.4.0
- Accelerate version: 0.23.0
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: MULTI_GPU
- mixed_precision: no
- use_cpu: False
- debug: False
- num_processes: 4
- machine_rank: 0
- num_machines: 1
- gpu_ids: 0,1,2,3
- rdzv_backend: static
- same_network: True
- main_training_function: main
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- PyTorch version (GPU?): 1.12.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@ArthurZucker
@younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Hi, the original DPR code uses `all zeros` for `token_type_ids`, while the HF version uses 0 and 1 (the original BERT convention) for tokens before and after the `[SEP]` token.
https://github.com/facebookresearch/DPR/blob/a31212dc0a54dfa85d8bfa01e1669f149ac832b7/dpr/models/biencoder.py#L230-L231
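For reference, a minimal sketch of the mismatch described above (assuming the `facebook/dpr-ctx_encoder-single-nq-base` checkpoint; the last line only illustrates the original DPR behaviour and is not an official fix):
```python
import torch
from transformers import DPRContextEncoderTokenizer

tokenizer = DPRContextEncoderTokenizer.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")
enc = tokenizer("passage title", "passage text", return_tensors="pt")

# the HF tokenizer marks the second segment with 1s (BERT convention)
print(enc["token_type_ids"])

# the original DPR code would instead pass all zeros, which one can mimic like this
enc["token_type_ids"] = torch.zeros_like(enc["token_type_ids"])
```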
### Expected behavior
give all zeros for `token_type_ids` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26817/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26817/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26816 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26816/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26816/comments | https://api.github.com/repos/huggingface/transformers/issues/26816/events | https://github.com/huggingface/transformers/issues/26816 | 1,943,801,770 | I_kwDOCUB6oc5z3A-q | 26,816 | problem on finetuning llama and baichuan with new version transformers | {
"login": "gxy-gxy",
"id": 57594446,
"node_id": "MDQ6VXNlcjU3NTk0NDQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/57594446?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gxy-gxy",
"html_url": "https://github.com/gxy-gxy",
"followers_url": "https://api.github.com/users/gxy-gxy/followers",
"following_url": "https://api.github.com/users/gxy-gxy/following{/other_user}",
"gists_url": "https://api.github.com/users/gxy-gxy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gxy-gxy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gxy-gxy/subscriptions",
"organizations_url": "https://api.github.com/users/gxy-gxy/orgs",
"repos_url": "https://api.github.com/users/gxy-gxy/repos",
"events_url": "https://api.github.com/users/gxy-gxy/events{/privacy}",
"received_events_url": "https://api.github.com/users/gxy-gxy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I also observed this phenomenon when I tried to fine-tune baichuan model.\r\nhere is the loss curve trained with transformers 4.32:\r\n\r\n\r\nthis is the loss curve trained with transformers 4.28:\r\n\r\n",
"I finetuned all the models above with the code in FastChat repository on A100-80G.\r\nhere is my code:\r\n```bash\r\ntorchrun --nproc_per_node=8 --master_port=20001 fastchat/train/train_xformers.py \\\r\n --model_name_or_path llama-7b \\\r\n --data_path fschat.json \\\r\n --bf16 True \\\r\n --output_dir output\\\r\n --num_train_epochs 3 \\\r\n --per_device_train_batch_size 2 \\\r\n --gradient_accumulation_steps 8 \\\r\n --save_strategy \"epoch\" \\\r\n --learning_rate 2e-5 \\\r\n --weight_decay 0. \\\r\n --warmup_ratio 0.04 \\\r\n --lr_scheduler_type \"cosine\" \\\r\n --logging_steps 1 \\\r\n --fsdp \"full_shard auto_wrap\" \\\r\n --fsdp_transformer_layer_cls_to_wrap 'LlamaDecoderLayer' \\\r\n --model_max_length 4096 \\\r\n --gradient_checkpointing True \\\r\n --lazy_preprocess True \\\r\n --report_to wandb\r\n```",
"Hey π€ thanks for opening an issue! We try to keep the github issues for bugs/feature requests. We had a similar issue being tracked here #26498 where you can find good tips!\r\n\r\nOtherwise could you ask your question on the [forum](https://discuss.huggingface.co/) instead? I'm sure the community will be of help!\r\n\r\nThanks!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"I was using transformers 4.33.2 (along with fsdp implemented in pytorch and the accelerate package from HF) and also observed the issue when pretraining llama from scratch: a quickly failing loss when using fsdp+bf16. There's no issue with fsdp+fp32 or ddp+bf16. I upgraded to 4.35.2 and the issue seems to be resolved. I don't know the exact reason behind this though.\r\n\r\nBefore upgrading transformers, I incorporated many tips from #26498 but they didn't help much in my case.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,697 | 1,703 | 1,703 | NONE | null | When I tried to finetune llama model with sharegpt dataset, I got these loss curves:

The green loss curve was trained with transformers 4.33.2 and the orange loss curve with transformers 4.28.1.
Obviously, the green one is abnormal and the orange one is correct. I wonder why this happens? The only thing I changed is the Transformers version. Is this a bug in transformers, or did I do something wrong? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26816/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26816/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26815 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26815/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26815/comments | https://api.github.com/repos/huggingface/transformers/issues/26815/events | https://github.com/huggingface/transformers/issues/26815 | 1,943,797,624 | I_kwDOCUB6oc5z2_94 | 26,815 | Trainer for devided model | {
"login": "yeonju7kim",
"id": 95571735,
"node_id": "U_kgDOBbJPFw",
"avatar_url": "https://avatars.githubusercontent.com/u/95571735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yeonju7kim",
"html_url": "https://github.com/yeonju7kim",
"followers_url": "https://api.github.com/users/yeonju7kim/followers",
"following_url": "https://api.github.com/users/yeonju7kim/following{/other_user}",
"gists_url": "https://api.github.com/users/yeonju7kim/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yeonju7kim/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yeonju7kim/subscriptions",
"organizations_url": "https://api.github.com/users/yeonju7kim/orgs",
"repos_url": "https://api.github.com/users/yeonju7kim/repos",
"events_url": "https://api.github.com/users/yeonju7kim/events{/privacy}",
"received_events_url": "https://api.github.com/users/yeonju7kim/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @muellerzr ",
"No, you cannot use the big model inference API to do training in this way (hence training in the name). You can use methods like DeepSpeed or FSDP to split the work between multiple GPUs or perform offloading during training, which the trainer already supports.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,697 | 1,702 | 1,702 | NONE | null | ### Feature request
Trainer class settings for divided model.
If I use trainer.predict, the trainer package's code loads the model onto the device again.
This makes it hard to manage a model that was already loaded manually across devices.
I think it would be better to avoid loading the model twice.
### Motivation
https://huggingface.co/docs/accelerate/usage_guides/big_modeling
I read the above. Then I divided the LLM and loaded the model parts on the different GPUs.
My code used Trainer for training.
Because of the lines below, the already loaded model is loaded again onto `args.device`.
https://github.com/huggingface/transformers/blob/21dc5859421cf0d7d82d374b10f533611745a8c5/src/transformers/trainer.py#L3072-L3076
I manually erased the line to use Trainer with the divided LLM.
Is there any other way to use Trainer with a divided model?
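A minimal sketch of one possible alternative (the checkpoint name is a placeholder, not a real repo): loading the model with a `device_map` splits it across GPUs and makes `Trainer` treat it as model-parallel, so it does not move the whole model to a single `args.device`. Whether this fully avoids the extra load depends on the Trainer version.
```python
from transformers import AutoModelForCausalLM, Trainer, TrainingArguments

# "my-llm-checkpoint" is a placeholder; device_map="auto" dispatches layers across the GPUs
model = AutoModelForCausalLM.from_pretrained("my-llm-checkpoint", device_map="auto")

# Trainer sees model.hf_device_map spanning several devices and flags it as model-parallel
training_args = TrainingArguments(output_dir="out", per_device_eval_batch_size=1)
trainer = Trainer(model=model, args=training_args)
predictions = trainer.predict(my_dataset)  # my_dataset is assumed to exist
```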
### Your contribution
I'm a novice for trainer class. I'm afraid that my question seems stupid. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26815/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26815/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26814 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26814/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26814/comments | https://api.github.com/repos/huggingface/transformers/issues/26814/events | https://github.com/huggingface/transformers/issues/26814 | 1,943,680,181 | I_kwDOCUB6oc5z2jS1 | 26,814 | Loading tokenized dataset stucks for multi-gpu but works for multi-node single-gpu on some computing nodes | {
"login": "BiEchi",
"id": 60613238,
"node_id": "MDQ6VXNlcjYwNjEzMjM4",
"avatar_url": "https://avatars.githubusercontent.com/u/60613238?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BiEchi",
"html_url": "https://github.com/BiEchi",
"followers_url": "https://api.github.com/users/BiEchi/followers",
"following_url": "https://api.github.com/users/BiEchi/following{/other_user}",
"gists_url": "https://api.github.com/users/BiEchi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BiEchi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BiEchi/subscriptions",
"organizations_url": "https://api.github.com/users/BiEchi/orgs",
"repos_url": "https://api.github.com/users/BiEchi/repos",
"events_url": "https://api.github.com/users/BiEchi/events{/privacy}",
"received_events_url": "https://api.github.com/users/BiEchi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey! Thanks for reporting. \r\nI am not really sure we can debug this for you, but if you are looking for tips on how to debug this I would recommend you to ask this on [the forum](https://discuss.huggingface.co/)! π€ ",
"I've solved it by prepending a NCCL environment varaible to disable the p2p communication. Thanks!",
"Thanks for sharing the fix π "
] | 1,697 | 1,697 | 1,697 | NONE | null | ### System Info
[2023-10-14 20:15:10,690] [INFO] [real_accelerator.py:158:get_accelerator] Setting ds_accelerator to cuda (auto detect)
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.32.0.dev0
- Platform: Linux-3.10.0-1160.31.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.9.0
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- Accelerate version: 0.21.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu117 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Single-node Multi-GPU
### Who can help?
@ArthurZucker
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I run the `run_mlm.py` example provided in the latest `transformers` release, and the process gets stuck at this line: https://github.com/huggingface/transformers/blob/21dc5859421cf0d7d82d374b10f533611745a8c5/examples/pytorch/language-modeling/run_mlm.py#L513
For example, if I run using 2 GPUs on a single node using this code:
```
function pretrain_cont(){
python -m torch.distributed.launch \
--nproc_per_node=8 \
--nnodes=1 \
--node_rank=0 \
--master_port=19500 \
--use_env \
transformers/examples/pytorch/language-modeling/run_mlm.py \
--config_name $MODEL \
--tokenizer_name $TOKENIZER \
--dataset_name $PT_DATASET \
--max_steps $MAX_STEPS \
--preprocessing_num_workers 32 \
--ddp_timeout 180000 \
--save_strategy steps \
--save_steps 0.05 \
--fp16 \
--cache_dir $CACHE_DIR \
--per_device_train_batch_size $BATCH_SIZE \
--per_device_eval_batch_size $BATCH_SIZE \
--gradient_accumulation_steps $ACCUMULATE \
--adam_epsilon $PT_ADAM_EPS \
--adam_beta1 $PT_ADAM_BETA1 \
--adam_beta2 $PT_ADAM_BETA2 \
--weight_decay $PT_ADAM_WEIGHT_DECAY \
--warmup_steps $WARMUP_STEPS \
--learning_rate $PT_PEAK_LR \
--lr_scheduler_type $PT_LR_DECAY \
--max_seq_length 512 \
--do_train \
--do_eval \
--output_dir $OUTPUT_DIR \
--overwrite_output_dir
}
```
The tokenization process gets stuck at the line I mentioned above and gives output like this (as I've tokenized before, it simply loads the preprocessed dataset):
```
...
.cache/User___bert_pretrain_datasets/default/0.0.0/9107755b15521c04/cache-f6e24321e136c7cb_*_of_00032.arrow
Concatenating 32 shards
10/14/2023 20:15:51 - INFO - datasets.arrow_dataset - Concatenating 32 shards
```
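For reference, the comments on this issue report that the hang came from NCCL peer-to-peer communication on that particular machine, so a minimal sketch of the workaround is to disable it before the distributed run starts (`NCCL_P2P_DISABLE` is NCCL's documented switch for this):
```python
import os

# must be set before torch.distributed initialises NCCL on the affected node;
# equivalently, prepend NCCL_P2P_DISABLE=1 to the torch.distributed.launch command above
os.environ["NCCL_P2P_DISABLE"] = "1"
```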
### Expected behavior
The same code works on all of my other machines (it doesn't get stuck) but doesn't work on this one, so I'm writing to ask how I can debug this issue. Looking forward to your support! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26814/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26814/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26813 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26813/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26813/comments | https://api.github.com/repos/huggingface/transformers/issues/26813/events | https://github.com/huggingface/transformers/issues/26813 | 1,943,632,517 | I_kwDOCUB6oc5z2XqF | 26,813 | Port Jax weights to PyTorch | {
"login": "jxiw",
"id": 16102460,
"node_id": "MDQ6VXNlcjE2MTAyNDYw",
"avatar_url": "https://avatars.githubusercontent.com/u/16102460?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jxiw",
"html_url": "https://github.com/jxiw",
"followers_url": "https://api.github.com/users/jxiw/followers",
"following_url": "https://api.github.com/users/jxiw/following{/other_user}",
"gists_url": "https://api.github.com/users/jxiw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jxiw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jxiw/subscriptions",
"organizations_url": "https://api.github.com/users/jxiw/orgs",
"repos_url": "https://api.github.com/users/jxiw/repos",
"events_url": "https://api.github.com/users/jxiw/events{/privacy}",
"received_events_url": "https://api.github.com/users/jxiw/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey! This is most probably because the weights for this are shared with the embedding layer. Would recommend you to dive in the code and check for `tied_weigth_keys` or if the config has `tie_word_embedding` attribute π \r\nSome [info on the forum](https://discuss.huggingface.co/t/what-is-the-tie-word-embeddings-option-exactly-doing/8483) should also be easy to access! ",
"See here how we initialise a linear LM head under the attribute `self.decoder`: https://github.com/huggingface/transformers/blob/ab0ddc99e853c974949d823dbfaa732202696f3e/src/transformers/models/bert/modeling_flax_bert.py#L707\r\nHowever if we pass `shared_embedding`, then we use the kernel/bias weights from this shared embedding that is passed as an argument: https://github.com/huggingface/transformers/blob/ab0ddc99e853c974949d823dbfaa732202696f3e/src/transformers/models/bert/modeling_flax_bert.py#L713-L714\r\nNow if you look at `FlaxBertForMaskedLMModule`, you see that we pass the input word embeddings as our `shared_embedding`: https://github.com/huggingface/transformers/blob/ab0ddc99e853c974949d823dbfaa732202696f3e/src/transformers/models/bert/modeling_flax_bert.py#L1176-L1177\r\n=> this means that if we tie the word embeddings (`config.tie_word_embeddings=True)`, then we never use the `self.decoder` kernel/bias weights in the LM head, but rather the kernel/bias weights from our input word embeddings!\r\n\r\nSo in short, if you have the word embeddings tied (`config.tie_word_embeddings=True`), then the `self.decoder` weights can be ignored, since we use the input word embedding ones here.",
"Closing since this was discussed offline with @jxiw and the issue is now resolved π€",
"you said it is solved . so could you tell me how to convert a .msgpack (flax/jax format) model weights to .bin (huggingface format) or .pth (pytorch format)? thanks a lot !!!!!!!",
"You just need to instantiate a model like `FlaxBertModel` using `from_pretrained(...., from_pt = True)` or `BertModel.from_pretrained(....)` which will automatically convert the `msg` checkpoint and then save the model. ",
"Load a PyTorch model (`.bin`) into Flax:\r\n\r\n```python\r\nfrom transformers import FlaxBertModel\r\n\r\nmodel = FlaxBertModel(\"hf-internal-testing/tiny-random-bert\", from_pt=True)\r\n\r\n# save the flax model\r\nmodel.save_pretrained(\"./tiny-random-bert-flax\")\r\n```\r\n\r\nLoad the Flax model (`.msgpack`) into PyTorch:\r\n```python\r\nfrom transformers import BertModel\r\n\r\nmodel = BertModel(\"./tiny-random-bert-flax\", from_flax=True)\r\n```"
] | 1,697 | 1,702 | 1,697 | NONE | null | I have a question when convert JAX weights to torch weights.
I follow this to convert the weights.
https://github.com/huggingface/transformers/blob/main/src/transformers/modeling_flax_pytorch_utils.py#L331-L441
However, there are two parameters that I cannot find in the JAX weights ["cls.predictions.decoder.bias", "cls.predictions.decoder.weight"] in my model's msgpack.
And I found BERT also has this problem.
```
bert=FlaxBertForMaskedLM.from_pretrained("bert-large-uncased")
bert.params['cls']['predictions'].keys()
dict_keys(['bias', 'transform'])
```
So there is no decoder variable here, but I can find it in the class:
https://github.com/huggingface/transformers/blob/main/src/transformers/models/bert/modeling_flax_bert.py#L707
Do you know why that is? And do I also need to copy the weights of this decoder into the PyTorch model when converting a JAX MaskedLM to a PyTorch MaskedLM?
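A small sketch of a quick check (assuming `bert-large-uncased`): per the discussion, when `tie_word_embeddings` is `True` the MLM head's decoder kernel is shared with the input word embeddings, so no separate `cls/predictions/decoder` entry is stored in the Flax params.
```python
from transformers import BertConfig

config = BertConfig.from_pretrained("bert-large-uncased")
# True by default: the decoder weight is tied to the word embeddings,
# which is why only 'bias' and 'transform' show up under params['cls']['predictions']
print(config.tie_word_embeddings)
```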
@sanchit-gandhi helped me to fix this! I just post it here! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26813/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26813/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26812 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26812/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26812/comments | https://api.github.com/repos/huggingface/transformers/issues/26812/events | https://github.com/huggingface/transformers/pull/26812 | 1,943,631,286 | PR_kwDOCUB6oc5c0TUc | 26,812 | Add PvT-v2 Model | {
"login": "FoamoftheSea",
"id": 50897218,
"node_id": "MDQ6VXNlcjUwODk3MjE4",
"avatar_url": "https://avatars.githubusercontent.com/u/50897218?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FoamoftheSea",
"html_url": "https://github.com/FoamoftheSea",
"followers_url": "https://api.github.com/users/FoamoftheSea/followers",
"following_url": "https://api.github.com/users/FoamoftheSea/following{/other_user}",
"gists_url": "https://api.github.com/users/FoamoftheSea/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FoamoftheSea/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FoamoftheSea/subscriptions",
"organizations_url": "https://api.github.com/users/FoamoftheSea/orgs",
"repos_url": "https://api.github.com/users/FoamoftheSea/repos",
"events_url": "https://api.github.com/users/FoamoftheSea/events{/privacy}",
"received_events_url": "https://api.github.com/users/FoamoftheSea/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"FYI @rafaelpadilla ",
"@FoamoftheSea - awesome work! Let us know when you're ready for review. \r\n\r\nFor the code quality checks, running `make fixup` and pushing the changes should resolve them. ",
"@amyeroberts - Thanks! Let me try that and do one final sweep over things, then I will get back to you shortly for review :)",
"@amyeroberts I believe this is ready for review now. I made changes so all the quality checks pass, and I also added integration with AutoBackbone so that the PVTv2 can be used with Deformable DETR and other models that use AutoBackbone.",
"Hey @amyeroberts - just adding a new comment here since the others are pretty well buried. This PR should be ready to go after removing the two lines from the check_docstrings.py. Let me know if you agree and we can make that last commit and merge π ",
"@amyeroberts is off I'll be reviewing this! π€ ",
"Great, thanks @ArthurZucker! Let me know if there's any action needed from my side, do you want me to address the failure from the documentation build?",
"Sorry I started reviewing and did not finish yet\r\n",
"@ArthurZucker Thanks for the review! I've gone through and made commits to address most of your comments, and I've responded to a couple that await your review. Let me know when you have a chance to go through the updates.",
"Hey! Sure, can you first make sure the CIs are green? π€ ",
"Thanks, will review ",
"Hey @ArthurZucker - CIs are green! I went through and made all of the adjustments, as well as filling out the documentation to be much more comprehensive. I marked all the obviously fixed conversations as \"resolved\", but I left a few open that are more opinion-based. Let me know what you think.",
"Alright! Sorry running a bit late but on my TODO for the day π "
] | 1,697 | 1,708 | null | CONTRIBUTOR | null | ## Description
- **Motivation** - The PvT is a useful backbone for computer vision tasks, but only the outdated v1 is available in Hugging Face.
- **What this PR Does** - Full integration of PvT-v2 model (works with AutoModel and AutoBackbone).
- **Notes** - Like the original implementation, the config allows for using either Spatial Reduction "SR" or average pooling "AP" to reduce complexity in the attention layer, default is using SRA (as in the original code).
@amyeroberts
## Resources
**Model paper**
- [PVT v2: Improved Baselines with Pyramid Vision Transformer](https://arxiv.org/abs/2106.13797)
**Open Source Implementations**
- [Original](https://github.com/whai362/PVT/blob/v2/classification/pvt_v2.py)
- [Panoptic Segformer](https://github.com/zhiqi-li/Panoptic-SegFormer/blob/master/easymd/models/backbones/pvt_v2.py)
## Checks
- Add PvT-v2 to model docs β
- Build pytests and have them pass β
- Formatting (make fix-copies, make fixup) β
- Convert open-source weights and test expected logits, uploaded to hub β
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26812/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26812/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26812",
"html_url": "https://github.com/huggingface/transformers/pull/26812",
"diff_url": "https://github.com/huggingface/transformers/pull/26812.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26812.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26811 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26811/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26811/comments | https://api.github.com/repos/huggingface/transformers/issues/26811/events | https://github.com/huggingface/transformers/pull/26811 | 1,943,543,850 | PR_kwDOCUB6oc5c0BdT | 26,811 | Add function that can get models by prefix | {
"login": "Ujcgjlm",
"id": 48871356,
"node_id": "MDQ6VXNlcjQ4ODcxMzU2",
"avatar_url": "https://avatars.githubusercontent.com/u/48871356?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ujcgjlm",
"html_url": "https://github.com/Ujcgjlm",
"followers_url": "https://api.github.com/users/Ujcgjlm/followers",
"following_url": "https://api.github.com/users/Ujcgjlm/following{/other_user}",
"gists_url": "https://api.github.com/users/Ujcgjlm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ujcgjlm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ujcgjlm/subscriptions",
"organizations_url": "https://api.github.com/users/Ujcgjlm/orgs",
"repos_url": "https://api.github.com/users/Ujcgjlm/repos",
"events_url": "https://api.github.com/users/Ujcgjlm/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ujcgjlm/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey! Feel free to ping me or @ydshieh for a review when this is ready! ",
"> Hey! Feel free to ping me or @ydshieh for a review when this is ready!\r\n\r\nI apologize for not responding. It's done",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,697 | 1,701 | 1,701 | NONE | null | A useful feature has been added:
A function can retrieve models with a matching prefix in a particular module
Reduced cyclomatic complexity of the existing function: get_all_auto_configured_models | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26811/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26811/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26811",
"html_url": "https://github.com/huggingface/transformers/pull/26811",
"diff_url": "https://github.com/huggingface/transformers/pull/26811.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26811.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26810 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26810/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26810/comments | https://api.github.com/repos/huggingface/transformers/issues/26810/events | https://github.com/huggingface/transformers/pull/26810 | 1,943,470,884 | PR_kwDOCUB6oc5czzfS | 26,810 | Fixed typos | {
"login": "Zhreyu",
"id": 96978606,
"node_id": "U_kgDOBcfGrg",
"avatar_url": "https://avatars.githubusercontent.com/u/96978606?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Zhreyu",
"html_url": "https://github.com/Zhreyu",
"followers_url": "https://api.github.com/users/Zhreyu/followers",
"following_url": "https://api.github.com/users/Zhreyu/following{/other_user}",
"gists_url": "https://api.github.com/users/Zhreyu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Zhreyu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Zhreyu/subscriptions",
"organizations_url": "https://api.github.com/users/Zhreyu/orgs",
"repos_url": "https://api.github.com/users/Zhreyu/repos",
"events_url": "https://api.github.com/users/Zhreyu/events{/privacy}",
"received_events_url": "https://api.github.com/users/Zhreyu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,697 | 1,697 | 1,697 | CONTRIBUTOR | null | e.g. > e.g.,
image image > image
Numpy > NumPy
cropping image image files > cropping image files
Log-Mel Spectrogram features, feature extraction from > generate Log-Mel Spectrogram features, feature extraction from | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26810/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26810/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26810",
"html_url": "https://github.com/huggingface/transformers/pull/26810",
"diff_url": "https://github.com/huggingface/transformers/pull/26810.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26810.patch",
"merged_at": 1697442750000
} |
https://api.github.com/repos/huggingface/transformers/issues/26809 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26809/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26809/comments | https://api.github.com/repos/huggingface/transformers/issues/26809/events | https://github.com/huggingface/transformers/issues/26809 | 1,943,428,097 | I_kwDOCUB6oc5z1lwB | 26,809 | Add Mistral Models to Flax | {
"login": "kiansierra",
"id": 47116198,
"node_id": "MDQ6VXNlcjQ3MTE2MTk4",
"avatar_url": "https://avatars.githubusercontent.com/u/47116198?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kiansierra",
"html_url": "https://github.com/kiansierra",
"followers_url": "https://api.github.com/users/kiansierra/followers",
"following_url": "https://api.github.com/users/kiansierra/following{/other_user}",
"gists_url": "https://api.github.com/users/kiansierra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kiansierra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kiansierra/subscriptions",
"organizations_url": "https://api.github.com/users/kiansierra/orgs",
"repos_url": "https://api.github.com/users/kiansierra/repos",
"events_url": "https://api.github.com/users/kiansierra/events{/privacy}",
"received_events_url": "https://api.github.com/users/kiansierra/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | closed | false | null | [] | [
"I think this could be interesting! Feel free to open a PR and ping @sanchit-gandhi π ",
"Hey @kiansierra - there's already a PR for Flax LLaMA that is pretty much ready to be merged: https://github.com/huggingface/transformers/pull/24587 Feel free to check it out!\r\n\r\nBut we'd love contributions for other LLM's in the library where there's only PyTorch support and not Flax π€ If there are particular checkpoints on the HF Hub that you see getting a lot of usage (downloads) where there's only PyTorch support but not Flax, definitely let us know here and we can get going with a PR! π",
"Thansk for the Heads up @sanchit-gandhi, I'll see if there is any other model I think I can add to Flax and tag you on the next issue",
"Oups I even reviewed the PR π
sorry @kiansierra π€ ",
"@kiansierra sorry to scoop Flax Llama from you! If you want any suggestions, I think [Mistral](https://github.com/huggingface/transformers/blob/main/src/transformers/models/mistral/modeling_mistral.py) is a pretty popular model right now without a Flax port.",
"Hey no worries, I think I will give Mistral a go, it seems some of the work can be ported",
"Happy to see that a couple of people are interested in porting these models to flax! I was also interested in contributing! Is there any other model that would be interesting? On a side note: I guess flash-attention only works for the pytorch models atm (?) Is there any fundamental reason why porting the flash-attention implementation to jax would be difficult?",
"hello, guys I have created both llama and mistral models in flax if you want you can use them [modelling_mistral_flax.py](https://github.com/erfanzar/EasyDeL/blob/main/lib/python/EasyDel/modules/mistral/modelling_mistral_flax.py)",
"Yes Flash Attention relies on dispatching optimised CUDA kernels, which as far as I'm aware haven't been implemented in JAX. You could look into Pallas and see if someone's written Flash Attention kernels for JAX using this library? https://jax.readthedocs.io/en/latest/pallas/design.html",
"Indeed there's an effort to write FlashAttention in Pallas, https://github.com/google/jax/blob/main/jax/experimental/pallas/ops/attention.py although it's still a work in progress https://github.com/google/jax/pull/17328 . @sanchit-gandhi I'd be happy to try to port another model. For example, Yarn-Mistral seems to have some traction, though it's not part of the transformers library atm. Any other suggestions are welcome!"
] | 1,697 | 1,701 | 1,701 | CONTRIBUTOR | null | ### Feature request
I would like to implement the ~~Llama~~ Mistral model in flax
### Motivation
I've been trying to get familiar with JAX, and as such I started migrating the Llama model; I think I am at a point where both models are quite comparable in their outputs.
### Your contribution
Yes I could submit a PR with the model implementation | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26809/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26809/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26808 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26808/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26808/comments | https://api.github.com/repos/huggingface/transformers/issues/26808/events | https://github.com/huggingface/transformers/pull/26808 | 1,943,388,911 | PR_kwDOCUB6oc5czkEP | 26,808 | Update activations.py | {
"login": "sheetalneeraj",
"id": 42382485,
"node_id": "MDQ6VXNlcjQyMzgyNDg1",
"avatar_url": "https://avatars.githubusercontent.com/u/42382485?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sheetalneeraj",
"html_url": "https://github.com/sheetalneeraj",
"followers_url": "https://api.github.com/users/sheetalneeraj/followers",
"following_url": "https://api.github.com/users/sheetalneeraj/following{/other_user}",
"gists_url": "https://api.github.com/users/sheetalneeraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sheetalneeraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sheetalneeraj/subscriptions",
"organizations_url": "https://api.github.com/users/sheetalneeraj/orgs",
"repos_url": "https://api.github.com/users/sheetalneeraj/repos",
"events_url": "https://api.github.com/users/sheetalneeraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/sheetalneeraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,697 | 1,697 | 1,697 | NONE | null |
# What does this PR do?
- Simplified the code by removing redundant checks and using more concise syntax.
- Avoided redundant instantiations by using the classes directly rather than instantiating them into separate variables first.
- Made variable naming consistent.
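As a rough, hypothetical illustration of the kind of simplification described above (this is not the actual diff; the names and mapping below are made up for the example):

```python
import torch.nn as nn

# Hypothetical before/after sketch of the described refactor (not the real diff).

# Before: each activation is first bound to its own intermediate variable.
gelu_act = nn.GELU()
relu_act = nn.ReLU()
ACT2FN_OLD = {"gelu": gelu_act, "relu": relu_act}

# After: map names to the classes directly and instantiate on lookup.
ACT2CLS = {"gelu": nn.GELU, "relu": nn.ReLU}

def get_activation(name: str) -> nn.Module:
    if name not in ACT2CLS:
        raise KeyError(f"Unknown activation: {name}")
    return ACT2CLS[name]()
```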
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26808/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26808/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26808",
"html_url": "https://github.com/huggingface/transformers/pull/26808",
"diff_url": "https://github.com/huggingface/transformers/pull/26808.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26808.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26807 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26807/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26807/comments | https://api.github.com/repos/huggingface/transformers/issues/26807/events | https://github.com/huggingface/transformers/issues/26807 | 1,943,280,315 | I_kwDOCUB6oc5z1Bq7 | 26,807 | Getting error "TypeError: 'NoneType' object is not callable" while using pretrained model checkpoints in TFAutoModelForQuestionAnswering | {
"login": "pksX01",
"id": 26363494,
"node_id": "MDQ6VXNlcjI2MzYzNDk0",
"avatar_url": "https://avatars.githubusercontent.com/u/26363494?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pksX01",
"html_url": "https://github.com/pksX01",
"followers_url": "https://api.github.com/users/pksX01/followers",
"following_url": "https://api.github.com/users/pksX01/following{/other_user}",
"gists_url": "https://api.github.com/users/pksX01/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pksX01/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pksX01/subscriptions",
"organizations_url": "https://api.github.com/users/pksX01/orgs",
"repos_url": "https://api.github.com/users/pksX01/repos",
"events_url": "https://api.github.com/users/pksX01/events{/privacy}",
"received_events_url": "https://api.github.com/users/pksX01/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey! As you mention that this works on colab, I would recommend you to make sure your mac install of tensorflow is compatible. Running this on my M1 works as expected. I used `pip install tensforflow-macos tensorflow-metal` ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,697 | 1,700 | 1,700 | NONE | null | ### System Info
- `transformers` version: 4.34.0
- Platform: macOS-14.0-arm64-arm-64bit
- Python version: 3.10.12
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.4.0
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0 (False)
- Tensorflow version (GPU?): 2.15.0-dev20231007 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run following code in Python 3 terminal.
```
from transformers import AutoTokenizer
from transformers import TFAutoModelForQuestionAnswering
model_checkpoint = "distilbert-base-cased"
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
model = TFAutoModelForQuestionAnswering.from_pretrained(model_checkpoint)
```
### Expected behavior
The model safetensors should be downloaded. But instead I get the error below:
```
>>> model = TFAutoModelForQuestionAnswering.from_pretrained(model_checkpoint)
Using TensorFlow backend
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/rudra/mambaforge/envs/keras_tf/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 565, in from_pretrained
return model_class.from_pretrained(
File "/Users/rudra/mambaforge/envs/keras_tf/lib/python3.10/site-packages/transformers/modeling_tf_utils.py", line 2912, in from_pretrained
model.build() # build the network with dummy inputs
File "/Users/rudra/mambaforge/envs/keras_tf/lib/python3.10/site-packages/keras/src/layers/layer.py", line 222, in build_wrapper
original_build_method(*args, **kwargs)
File "/Users/rudra/mambaforge/envs/keras_tf/lib/python3.10/site-packages/transformers/modeling_tf_utils.py", line 1131, in build
if self.built or call_context().in_call:
TypeError: 'NoneType' object is not callable
```
Note: The same code works in Google Colab but fails locally on Mac M1. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26807/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26807/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26806 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26806/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26806/comments | https://api.github.com/repos/huggingface/transformers/issues/26806/events | https://github.com/huggingface/transformers/issues/26806 | 1,943,250,045 | I_kwDOCUB6oc5z06R9 | 26,806 | BLIP2 inference error: Expected all tensors to be on the same device, but found at least two devices, cuda:7 and cuda:2 | {
"login": "YongLD",
"id": 33275244,
"node_id": "MDQ6VXNlcjMzMjc1MjQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/33275244?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YongLD",
"html_url": "https://github.com/YongLD",
"followers_url": "https://api.github.com/users/YongLD/followers",
"following_url": "https://api.github.com/users/YongLD/following{/other_user}",
"gists_url": "https://api.github.com/users/YongLD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YongLD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YongLD/subscriptions",
"organizations_url": "https://api.github.com/users/YongLD/orgs",
"repos_url": "https://api.github.com/users/YongLD/repos",
"events_url": "https://api.github.com/users/YongLD/events{/privacy}",
"received_events_url": "https://api.github.com/users/YongLD/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"pinging @SunMarc and @younesbelkada as well! ",
"Hi @YongLD, please make sure to have the latest version of transformers. We fixed a similar issue to this one in the [past](https://github.com/huggingface/transformers/pull/25735). On my side, I'm able to run on 2 GPUs. LMK how it goes. If it doesn't work, please provide you environnement config. ",
"Environment Config\r\n```shell\r\ntransformers== 4.34.0\r\naccelerate== 0.23.0\r\ntorch== 2.0.1+cu117\r\n```\r\nBeside, I found a warning when I run with `device_map=\"auto\"`:\r\n```shell \r\nThe `language_model` is not in the `hf_device_map` dictionary and you are running your script in a multi-GPU environment. \r\nthis may lead to unexpected behavior when using `accelerate`. Please pass a `device_map` that contains `language_model` to remove this warning.\r\n```\r\nDoes `accelerate-large-model` support `blip2-flan-t5-xl` or `blip2-flan-t5-xxl`?\r\n\r\nAnorther Question (Although it's a CUDA bug.)\r\nI found DeferredCudaCallError When I use `to(\"cuda\")` with multi-gpu, Do you know why?\r\n```python\r\nos.environ[\"CUDA_VISIBLE_DEVICES\"] = \"0,1\"\r\nimport torch\r\nmodel = Blip2ForConditionalGeneration.from_pretrained(\"Salesforce/blip2-opt-2.7b\").to(\"cuda\")\r\n```\r\n```shell\r\nError:torch.cuda.DeferredCudaCallError: CUDA call failed lazily at initialization with error: device >= 0 && device < num_gpus\r\n```\r\n",
"@SunMarc How to lock the usage of `device_map=\"auto\"` to a specific GPU?\r\nI have used `os.environ[\"CUDA_VISIBLE_DEVICES\"] = \"0,1,2\"`, but it does not work;\r\nThe command `torchrun test.py --nproc_per_node=3` does not work, either.",
"I think it is a problem with torch and cuda. In the past, we had a similar [case](https://github.com/huggingface/accelerate/issues/1927). Can you reinstall and try again ? \r\n\r\nAlso, the following code snippet works on my side: \r\n```\r\nos.environ[\"CUDA_VISIBLE_DEVICES\"] = \"0,1\"\r\nimport torch\r\nmodel = Blip2ForConditionalGeneration.from_pretrained(\"Salesforce/blip2-opt-2.7b\").to(\"cuda\")\r\n```\r\nAs for the warning, this is something we need to fix. It shouldn't show the warning. ",
"@SunMarc yes, I can use `Salesforce/blip2-opt-2.7b` with `to(\"cuda\")`, but I can not use `Salesforce--blip2-flan-t5-xxl` in 1 gpu with 16GB.\r\nThere is always a RuntimeError when I use `device_map=\"auto\"` for Blip2 multi-gpu test, but I can use it for T5 model\r\n```python\r\ntokenizer = T5Tokenizer.from_pretrained(\"google/flan-t5-xxl\")\r\nmodel = T5ForConditionalGeneration.from_pretrained(\"google/flan-t5-xxl\", device_map=\"auto\")\r\n```\r\nSo I wonder is it a problem with `Salesforce--blip2-flan-t5-xxl` or other Blip2 model? ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,697 | 1,700 | 1,700 | NONE | null | ### System Info
**Describe the bug**
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:7 and cuda:2.
**Screenshots**
```shell
Traceback (most recent call last):
File "/home/cike/ldy/ner/test-blip2-1.py", line 18, in <module>
out = model.generate(**inputs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/cike/anaconda3/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
......
^^^^^^^^^^^^^^^^^^^^^
File "/home/cike/.local/lib/python3.11/site-packages/transformers/generation/utils.py", line 2494, in greedy_search
next_tokens = next_tokens * unfinished_sequences + pad_token_id * (1 - unfinished_sequences)
~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:7 and cuda:2
```
**System info (please complete the following information):**
- OS: 18.04.2 LTS
- One machine with 8x Tesla P100-PCIE-16GB
How can I fix this bug?
### Who can help?
@pacman100
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
**To Reproduce**
I am trying to enable multi-GPU inference on the BLIP2 model.
I tried the following code snippet:
```python
model_path = "/home/cike/.cache/huggingface/hub/models--Salesforce--blip2-flan-t5-xl/snapshots/cc2bb7bce2f7d4d1c37753c7e9c05a443a226614/"
processor = Blip2Processor.from_pretrained(model_path)
model = Blip2ForConditionalGeneration.from_pretrained(model_path, device_map="auto")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt").to("cuda")
print("model: ",model.hf_device_map)
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```
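A hedged sketch of the kind of workaround discussed in the comments above (this is an assumption, not a verified fix; the actual resolution may simply be upgrading `transformers`, as noted there). It reuses `processor`, `raw_image`, `question`, and `model` from the snippet above and keeps the inputs on the device of the model's first parameters instead of a bare `"cuda"`:

```python
# Sketch of a possible workaround (assumption, not a verified fix): move the inputs
# to the device holding the model's first parameters instead of a bare "cuda", so
# that generate() does not end up mixing tensors across GPUs.
inputs = processor(raw_image, question, return_tensors="pt").to(model.device)
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```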
### Expected behavior
The BLIP2 model loads and runs successfully across multiple GPUs. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26806/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26806/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26805 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26805/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26805/comments | https://api.github.com/repos/huggingface/transformers/issues/26805/events | https://github.com/huggingface/transformers/pull/26805 | 1,943,221,589 | PR_kwDOCUB6oc5czH6v | 26,805 | Added Typing | {
"login": "Siddhesh-Agarwal",
"id": 68057995,
"node_id": "MDQ6VXNlcjY4MDU3OTk1",
"avatar_url": "https://avatars.githubusercontent.com/u/68057995?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Siddhesh-Agarwal",
"html_url": "https://github.com/Siddhesh-Agarwal",
"followers_url": "https://api.github.com/users/Siddhesh-Agarwal/followers",
"following_url": "https://api.github.com/users/Siddhesh-Agarwal/following{/other_user}",
"gists_url": "https://api.github.com/users/Siddhesh-Agarwal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Siddhesh-Agarwal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Siddhesh-Agarwal/subscriptions",
"organizations_url": "https://api.github.com/users/Siddhesh-Agarwal/orgs",
"repos_url": "https://api.github.com/users/Siddhesh-Agarwal/repos",
"events_url": "https://api.github.com/users/Siddhesh-Agarwal/events{/privacy}",
"received_events_url": "https://api.github.com/users/Siddhesh-Agarwal/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for the review. I'll do the necessary changes.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,697 | 1,701 | 1,701 | NONE | null | # What does this PR do?
- Added Typing
- Fixes #26745
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@LysandreJik
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26805/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26805/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26805",
"html_url": "https://github.com/huggingface/transformers/pull/26805",
"diff_url": "https://github.com/huggingface/transformers/pull/26805.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26805.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26804 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26804/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26804/comments | https://api.github.com/repos/huggingface/transformers/issues/26804/events | https://github.com/huggingface/transformers/issues/26804 | 1,943,184,140 | I_kwDOCUB6oc5z0qMM | 26,804 | transformers.onnx support mistral | {
"login": "xiaoyaolangzhi",
"id": 15037766,
"node_id": "MDQ6VXNlcjE1MDM3NzY2",
"avatar_url": "https://avatars.githubusercontent.com/u/15037766?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xiaoyaolangzhi",
"html_url": "https://github.com/xiaoyaolangzhi",
"followers_url": "https://api.github.com/users/xiaoyaolangzhi/followers",
"following_url": "https://api.github.com/users/xiaoyaolangzhi/following{/other_user}",
"gists_url": "https://api.github.com/users/xiaoyaolangzhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xiaoyaolangzhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xiaoyaolangzhi/subscriptions",
"organizations_url": "https://api.github.com/users/xiaoyaolangzhi/orgs",
"repos_url": "https://api.github.com/users/xiaoyaolangzhi/repos",
"events_url": "https://api.github.com/users/xiaoyaolangzhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/xiaoyaolangzhi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey! I think you should ask this on the Optimum repo where the onnx conversio should already be supported no @fxmarty ",
"Thank you @xiaoyaolangzhi. As indicated in the documentation (https://huggingface.co/docs/transformers/v4.34.0/en/serialization#export-to-onnx), the ONNX export through transformers.onnx has been deprecated and moved to the Optimum library. The support for Mistral ONNX export has been added here: https://github.com/huggingface/optimum/pull/1425, but is not yet available in a release of Optimum, so you would need to install th lib from source.\r\n\r\nThen you can use:\r\n```\r\noptimum-cli export onnx --help\r\noptimum-cli export onnx --model mistralai/Mistral-7B-v0.1 mistral_onnx\r\n```",
"@fxmarty \r\n\r\nI ran the as stated by you in Colab. But execution halts \r\n\r\npip install optimum[exporters-tf]\r\n!optimum-cli export onnx --model mistralai/Mistral-7B-v0.1 mistral_onnx\r\n\r\n<img width=\"659\" alt=\"image\" src=\"https://github.com/huggingface/transformers/assets/141845438/8ac1f077-0d34-43c8-af2d-85cb5d7b7963\">\r\n"
] | 1,697 | 1,707 | 1,697 | NONE | null | ### Feature request
mistral is not supported yet
### Motivation
*
### Your contribution
* | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26804/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26804/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26803 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26803/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26803/comments | https://api.github.com/repos/huggingface/transformers/issues/26803/events | https://github.com/huggingface/transformers/issues/26803 | 1,943,109,278 | I_kwDOCUB6oc5z0X6e | 26,803 | [i18n-zh] Translating docs to Chinese (Simplified) | {
"login": "jiaqiw09",
"id": 60021713,
"node_id": "MDQ6VXNlcjYwMDIxNzEz",
"avatar_url": "https://avatars.githubusercontent.com/u/60021713?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiaqiw09",
"html_url": "https://github.com/jiaqiw09",
"followers_url": "https://api.github.com/users/jiaqiw09/followers",
"following_url": "https://api.github.com/users/jiaqiw09/following{/other_user}",
"gists_url": "https://api.github.com/users/jiaqiw09/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiaqiw09/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiaqiw09/subscriptions",
"organizations_url": "https://api.github.com/users/jiaqiw09/orgs",
"repos_url": "https://api.github.com/users/jiaqiw09/repos",
"events_url": "https://api.github.com/users/jiaqiw09/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiaqiw09/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | open | false | null | [] | [
"**Here is a simple summary of finished works**\r\n\r\n- _it's just a simple summary, thansk all contributors translating these docs. You can check the file to see who did the work._\r\n- _I list those files for everyone who want to do translation work. It may not follow PRs in time. So just double check before starting work._\r\n\r\n[accelerate.md](https://github.com/huggingface/transformers/blob/main/docs/source/zh/accelerate.md)\r\n[autoclass_tutorial.md](https://github.com/huggingface/transformers/blob/main/docs/source/zh/autoclass_tutorial.md)\r\n[big_models.md](https://github.com/huggingface/transformers/blob/main/docs/source/zh/big_models.md)\r\n[create_a_model.md](https://github.com/huggingface/transformers/blob/main/docs/source/zh/create_a_model.md)\r\n[custom_models.md](https://github.com/huggingface/transformers/blob/main/docs/source/zh/custom_models.md)\r\n[debugging.md](https://github.com/huggingface/transformers/blob/main/docs/source/zh/debugging.md)\r\n[fast_tokenizers.md](https://github.com/huggingface/transformers/blob/main/docs/source/zh/fast_tokenizers.md)\r\n[index.md](https://github.com/huggingface/transformers/blob/main/docs/source/zh/index.md)\r\n[installation.md](https://github.com/huggingface/transformers/blob/main/docs/source/zh/installation.md)\r\n[llm_tutorial.md](https://github.com/huggingface/transformers/blob/main/docs/source/zh/llm_tutorial.md)\r\n[model_sharing.md](https://github.com/huggingface/transformers/blob/main/docs/source/zh/model_sharing.md)\r\n[multilingual.md](https://github.com/huggingface/transformers/blob/main/docs/source/zh/multilingual.md)\r\n[peft.md](https://github.com/huggingface/transformers/blob/main/docs/source/zh/peft.md)\r\n[perf_torch_compile.md](https://github.com/huggingface/transformers/blob/main/docs/source/zh/perf_torch_compile.md)\r\n[performance.md](https://github.com/huggingface/transformers/blob/main/docs/source/zh/performance.md)\r\n[pipeline_tutorial.md](https://github.com/huggingface/transformers/blob/main/docs/source/zh/pipeline_tutorial.md)\r\n[preprocessing.md](https://github.com/huggingface/transformers/blob/main/docs/source/zh/preprocessing.md)\r\n[quicktour.md](https://github.com/huggingface/transformers/blob/main/docs/source/zh/quicktour.md)\r\n[run_scripts.md](https://github.com/huggingface/transformers/blob/main/docs/source/zh/run_scripts.md)\r\n[serialization.md](https://github.com/huggingface/transformers/blob/main/docs/source/zh/serialization.md)\r\n[task_summary.md](https://github.com/huggingface/transformers/blob/main/docs/source/zh/task_summary.md)\r\n[tf_xla.md](https://github.com/huggingface/transformers/blob/main/docs/source/zh/tf_xla.md)\r\n[tflite.md](https://github.com/huggingface/transformers/blob/main/docs/source/zh/tflite.md)\r\n[tokenizer_summary.md](https://github.com/huggingface/transformers/blob/main/docs/source/zh/tokenizer_summary.md)\r\n[training.md](https://github.com/huggingface/transformers/blob/main/docs/source/zh/training.md)\r\n[transformers_agents.md](https://github.com/huggingface/transformers/blob/main/docs/source/zh/transformers_agents.md)",
"@stevhliu\r\n\r\nI will continue to work following translation work:\r\n- transformers_agents.md\r\n- troubleshooting.md\r\n\r\nBest",
"I'd like to work on `testing.md`"
] | 1,697 | 1,699 | null | CONTRIBUTOR | null | <!--
Note: Please search to see if an issue already exists for the language you are trying to translate.
-->
Hi!
Let's bring the documentation to all the Chinese (Simplified)-speaking community (currently 0 out of 267 complete).
Who would want to translate? Please follow the π€ [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list.
Some notes:
* Please translate using an informal tone (imagine you are talking with a friend about transformers π€).
* Please translate in a gender-neutral way.
* Add your translations to the folder called `zh` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source).
* Register your translation in `zh/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml).
* Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @stevhliu and @MKhalusova for review.
* π If you'd like others to help you with the translation, you can also post in the π€ [forums](https://discuss.huggingface.co/).
And I can see that some of the work has already been done (great work!). I just list all the files below.
## Get Started section(keep updating)
Here is what I am working on now; you can just skip these docs.
[perf_train_gpu_one.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/perf_train_gpu_one.md)
[perf_train_gpu_many.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/perf_train_gpu_many.md)
[perf_train_cpu.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/perf_train_cpu.md)
[perf_train_cpu_many.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/perf_train_cpu_many.md)
[perf_train_tpu.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/perf_train_tpu.md)
[perf_train_tpu_tf.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/perf_train_tpu_tf.md)
[perf_train_special.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/perf_train_special.md)
[perf_infer_cpu.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/perf_infer_cpu.md)
[deepspeed.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/main_classes/deepspeed.md)
[trainer.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/main_classes/trainer.md)
<!--
Keep on adding more as you go π₯
-->
@stevhliu | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26803/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26803/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26802 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26802/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26802/comments | https://api.github.com/repos/huggingface/transformers/issues/26802/events | https://github.com/huggingface/transformers/issues/26802 | 1,943,026,864 | I_kwDOCUB6oc5z0Dyw | 26,802 | [Feature Request] We might need a function to change the sampler used in trainer dataloader | {
"login": "dumpmemory",
"id": 64742282,
"node_id": "MDQ6VXNlcjY0NzQyMjgy",
"avatar_url": "https://avatars.githubusercontent.com/u/64742282?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dumpmemory",
"html_url": "https://github.com/dumpmemory",
"followers_url": "https://api.github.com/users/dumpmemory/followers",
"following_url": "https://api.github.com/users/dumpmemory/following{/other_user}",
"gists_url": "https://api.github.com/users/dumpmemory/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dumpmemory/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dumpmemory/subscriptions",
"organizations_url": "https://api.github.com/users/dumpmemory/orgs",
"repos_url": "https://api.github.com/users/dumpmemory/repos",
"events_url": "https://api.github.com/users/dumpmemory/events{/privacy}",
"received_events_url": "https://api.github.com/users/dumpmemory/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | [] | 1,697 | 1,697 | null | CONTRIBUTOR | null | ### Feature request
We need a way to customize the sampler used in the `Trainer` dataloader.
### Motivation
Sometimes we might want to use https://github.com/imoneoi/multipack_sampler/blob/master/multipack_sampler.py#L94 for efficiency reasons.
Another example is the DoReMi paper, where we might also want to change the sampling distribution across different task sources. A sketch of the workaround available today is shown below.
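As a hedged sketch of what is possible today without an official hook: one can subclass `Trainer` and override the private `_get_train_sampler` method. `LengthSortedSampler` below is a toy placeholder (not the multipack sampler itself), and relying on a private method is exactly the kind of workaround a public API would avoid.

```python
from typing import Iterator, Optional

from torch.utils.data import Sampler
from transformers import Trainer


class LengthSortedSampler(Sampler[int]):
    """Toy stand-in for a custom sampler such as the multipack sampler linked above."""

    def __init__(self, lengths):
        # Yield indices from shortest to longest example.
        self.order = sorted(range(len(lengths)), key=lambda i: lengths[i])

    def __iter__(self) -> Iterator[int]:
        return iter(self.order)

    def __len__(self) -> int:
        return len(self.order)


class CustomSamplerTrainer(Trainer):
    # Overriding the private `_get_train_sampler` works today, but a public,
    # supported way to inject a sampler is what this issue asks for.
    def _get_train_sampler(self) -> Optional[Sampler]:
        lengths = [len(example["input_ids"]) for example in self.train_dataset]
        return LengthSortedSampler(lengths)
```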
### Your contribution
I | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26802/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26802/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26801 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26801/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26801/comments | https://api.github.com/repos/huggingface/transformers/issues/26801/events | https://github.com/huggingface/transformers/issues/26801 | 1,943,003,736 | I_kwDOCUB6oc5zz-JY | 26,801 | Assertion Error during Text Generation (model inference): assert quant_state is not None | {
"login": "akjindal53244",
"id": 5215386,
"node_id": "MDQ6VXNlcjUyMTUzODY=",
"avatar_url": "https://avatars.githubusercontent.com/u/5215386?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/akjindal53244",
"html_url": "https://github.com/akjindal53244",
"followers_url": "https://api.github.com/users/akjindal53244/followers",
"following_url": "https://api.github.com/users/akjindal53244/following{/other_user}",
"gists_url": "https://api.github.com/users/akjindal53244/gists{/gist_id}",
"starred_url": "https://api.github.com/users/akjindal53244/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/akjindal53244/subscriptions",
"organizations_url": "https://api.github.com/users/akjindal53244/orgs",
"repos_url": "https://api.github.com/users/akjindal53244/repos",
"events_url": "https://api.github.com/users/akjindal53244/events{/privacy}",
"received_events_url": "https://api.github.com/users/akjindal53244/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @younesbelkada and @SunMarc ",
"I verified that it doesn't fail, if I remove `quantization_config` parameter while calling `AutoModelForCausalLM.from_pretrained()` to load the model.\r\nSpecifically: \r\n\r\n- `model = AutoModelForCausalLM.from_pretrained(model_id, device_map=\"{\"\": \"cpu\"}\")` works \r\n- But `model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config, device_map=\"{\"\": \"cpu\"}\")` fails with error mentioned above.",
"Hi @akjindal53244 \r\nThanks for the issue, you are facing that issue because quantization is not supported on CPU. Please make sure to have access to a GPU device to use quantization feature. \r\nWe should probably add a stronger check on `from_pretrained` to properly guide users quantization is not supported on CPU.",
"I got the same error when I set the device to \"cuda\", without specifying the index of the gpu. I updated it to \"cuda:0\" and it worked after.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,697 | 1,701 | 1,701 | NONE | null | ### System Info
(tempPy39) minimalist@minimalist-pc:~$ transformers-cli env
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.35.0.dev0
- Platform: Linux-5.15.0-86-generic-x86_64-with-glibc2.31
- Python version: 3.9.18
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.4.0
- Accelerate version: 0.24.0.dev0
- Accelerate config: not found
- PyTorch version (GPU?): 2.2.0.dev20231013+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: RTX 4090
- Using distributed or parallel set-up in script?: No
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
**Steps to Reproduce:**
1. **Installations**:
```
conda create -n <env_name> python=3.9
conda activate <env_name>
pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cu118
pip install scipy
pip install -q -U bitsandbytes
pip install -q -U git+https://github.com/huggingface/transformers.git
pip install -q -U git+https://github.com/huggingface/peft.git
pip install -q -U git+https://github.com/huggingface/accelerate.git
```
2. **Code**
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_use_double_quant=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.bfloat16
)
model_id = "mistralai/Mistral-7B-Instruct-v0.1"
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config, device_map={"": "cpu"})
tokenizer = AutoTokenizer.from_pretrained(model_id)
PROMPT= """ ### Instruction: Act as a data science expert.
### Question:
Explain to me what is Large Language Model. Assume that I am a 5-year-old child.
### Answer:
"""
model_inputs = tokenizer(PROMPT, return_tensors="pt", add_special_tokens=True)
generated_ids = model.generate(**model_inputs, max_new_tokens=1000, do_sample=True, pad_token_id=tokenizer.eos_token_id)
```
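For reference, a hypothetical corrected load based on the comments above (an assumption, not verified here): bitsandbytes 4-bit quantization needs a CUDA device, so the model should be mapped onto an explicit GPU index rather than the CPU or a bare `"cuda"`.

```python
# Hypothetical corrected load based on the comments above (assumption, not verified):
# 4-bit bitsandbytes quantization requires a CUDA device, so place the whole model
# on an explicit GPU instead of the CPU.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map={"": 0},  # or device_map="cuda:0"
)
```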
### Expected behavior
It shouldn't fail.
Error log:
```
FP4 quantization state not initialized. Please call .cuda() or .to(device) on the LinearFP4 layer first.
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/minimalist/miniconda3/envs/tempPy39/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/home/minimalist/miniconda3/envs/tempPy39/lib/python3.9/site-packages/transformers/generation/utils.py", line 1693, in generate
return self.sample(
File "/home/minimalist/miniconda3/envs/tempPy39/lib/python3.9/site-packages/transformers/generation/utils.py", line 2775, in sample
outputs = self(
File "/home/minimalist/miniconda3/envs/tempPy39/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1519, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/minimalist/miniconda3/envs/tempPy39/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1528, in _call_impl
return forward_call(*args, **kwargs)
File "/home/minimalist/miniconda3/envs/tempPy39/lib/python3.9/site-packages/accelerate/hooks.py", line 164, in new_forward
output = module._old_forward(*args, **kwargs)
File "/home/minimalist/miniconda3/envs/tempPy39/lib/python3.9/site-packages/transformers/models/mistral/modeling_mistral.py", line 1042, in forward
outputs = self.model(
File "/home/minimalist/miniconda3/envs/tempPy39/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1519, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/minimalist/miniconda3/envs/tempPy39/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1528, in _call_impl
return forward_call(*args, **kwargs)
File "/home/minimalist/miniconda3/envs/tempPy39/lib/python3.9/site-packages/accelerate/hooks.py", line 164, in new_forward
output = module._old_forward(*args, **kwargs)
File "/home/minimalist/miniconda3/envs/tempPy39/lib/python3.9/site-packages/transformers/models/mistral/modeling_mistral.py", line 929, in forward
layer_outputs = decoder_layer(
File "/home/minimalist/miniconda3/envs/tempPy39/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1519, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/minimalist/miniconda3/envs/tempPy39/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1528, in _call_impl
return forward_call(*args, **kwargs)
File "/home/minimalist/miniconda3/envs/tempPy39/lib/python3.9/site-packages/accelerate/hooks.py", line 164, in new_forward
output = module._old_forward(*args, **kwargs)
File "/home/minimalist/miniconda3/envs/tempPy39/lib/python3.9/site-packages/transformers/models/mistral/modeling_mistral.py", line 618, in forward
hidden_states, self_attn_weights, present_key_value = self.self_attn(
File "/home/minimalist/miniconda3/envs/tempPy39/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1519, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/minimalist/miniconda3/envs/tempPy39/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1528, in _call_impl
return forward_call(*args, **kwargs)
File "/home/minimalist/miniconda3/envs/tempPy39/lib/python3.9/site-packages/accelerate/hooks.py", line 164, in new_forward
output = module._old_forward(*args, **kwargs)
File "/home/minimalist/miniconda3/envs/tempPy39/lib/python3.9/site-packages/transformers/models/mistral/modeling_mistral.py", line 258, in forward
query_states = self.q_proj(hidden_states)
File "/home/minimalist/miniconda3/envs/tempPy39/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1519, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/minimalist/miniconda3/envs/tempPy39/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1528, in _call_impl
return forward_call(*args, **kwargs)
File "/home/minimalist/miniconda3/envs/tempPy39/lib/python3.9/site-packages/accelerate/hooks.py", line 164, in new_forward
output = module._old_forward(*args, **kwargs)
File "/home/minimalist/miniconda3/envs/tempPy39/lib/python3.9/site-packages/bitsandbytes/nn/modules.py", line 248, in forward
out = bnb.matmul_4bit(x, self.weight.t(), bias=bias, quant_state=self.weight.quant_state)
File "/home/minimalist/miniconda3/envs/tempPy39/lib/python3.9/site-packages/bitsandbytes/autograd/_functions.py", line 567, in matmul_4bit
assert quant_state is not None
AssertionError
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26801/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26801/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26800 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26800/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26800/comments | https://api.github.com/repos/huggingface/transformers/issues/26800/events | https://github.com/huggingface/transformers/pull/26800 | 1,942,730,140 | PR_kwDOCUB6oc5cx0xb | 26,800 | fix: when window_size is passed as array | {
"login": "dotneet",
"id": 370602,
"node_id": "MDQ6VXNlcjM3MDYwMg==",
"avatar_url": "https://avatars.githubusercontent.com/u/370602?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dotneet",
"html_url": "https://github.com/dotneet",
"followers_url": "https://api.github.com/users/dotneet/followers",
"following_url": "https://api.github.com/users/dotneet/following{/other_user}",
"gists_url": "https://api.github.com/users/dotneet/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dotneet/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dotneet/subscriptions",
"organizations_url": "https://api.github.com/users/dotneet/orgs",
"repos_url": "https://api.github.com/users/dotneet/repos",
"events_url": "https://api.github.com/users/dotneet/events{/privacy}",
"received_events_url": "https://api.github.com/users/dotneet/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@ArthurZucker \r\nThanks for the reply.\r\n\r\nSee the existing code here.\r\nhttps://github.com/huggingface/transformers/blob/21dc5859421cf0d7d82d374b10f533611745a8c5/src/transformers/models/swinv2/modeling_swinv2.py#L437\r\n\r\nOriginally, window_size was obviously an implementation that could be passed as an array.\r\nHowever, due to some code, it was not possible to actually pass the array. I fixed that.\r\n\r\nI want to train optimized for horizontal images, not square images. To do this, I believe image_size, patch_size, and window_size would all need to be set to separate values for height and width. I don't think this is a rare case.\r\n\r\nIn fact, there are people who need it here in the repository.\r\n\r\nhttps://github.com/microsoft/Swin-Transformer/issues/44\r\nhttps://github.com/microsoft/Swin-Transformer/issues/141\r\n",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26800). All of your documentation changes will be reflected on that endpoint."
] | 1,697 | 1,697 | 1,697 | CONTRIBUTOR | null | # What does this PR do?
The window_size should have been implemented to allow separate values for height and width, but it was not working. This fixes that problem.
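A hypothetical usage sketch of what this fix is meant to enable (the values are illustrative and untested here; the assumption is that `window_size` can now be passed as a `[height, width]` pair):

```python
from transformers import Swinv2Config, Swinv2Model

# Illustrative only (assumes the fix lets window_size be a [height, width] pair):
# useful for models meant to be trained on landscape, non-square images.
config = Swinv2Config(window_size=[6, 12])
model = Swinv2Model(config)
```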
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@amyeroberts @nandwalritik @SatyaJandhyalaAtMS
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26800/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26800/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26800",
"html_url": "https://github.com/huggingface/transformers/pull/26800",
"diff_url": "https://github.com/huggingface/transformers/pull/26800.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26800.patch",
"merged_at": 1697527563000
} |
https://api.github.com/repos/huggingface/transformers/issues/26799 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26799/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26799/comments | https://api.github.com/repos/huggingface/transformers/issues/26799/events | https://github.com/huggingface/transformers/pull/26799 | 1,942,665,484 | PR_kwDOCUB6oc5cxoWc | 26,799 | Add Japanese translation | {
"login": "shinshin86",
"id": 8216064,
"node_id": "MDQ6VXNlcjgyMTYwNjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/8216064?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shinshin86",
"html_url": "https://github.com/shinshin86",
"followers_url": "https://api.github.com/users/shinshin86/followers",
"following_url": "https://api.github.com/users/shinshin86/following{/other_user}",
"gists_url": "https://api.github.com/users/shinshin86/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shinshin86/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shinshin86/subscriptions",
"organizations_url": "https://api.github.com/users/shinshin86/orgs",
"repos_url": "https://api.github.com/users/shinshin86/repos",
"events_url": "https://api.github.com/users/shinshin86/events{/privacy}",
"received_events_url": "https://api.github.com/users/shinshin86/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26799). All of your documentation changes will be reflected on that endpoint."
] | 1,697 | 1,697 | 1,697 | CONTRIBUTOR | null | # What does this PR do?
Hello!
I found a description in `README_ja.md` that is not translated into Japanese.
Therefore, I have only translated the relevant sections.
Thank you!
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Documentation: @stevhliu and @MKhalusova
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26799/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26799/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26799",
"html_url": "https://github.com/huggingface/transformers/pull/26799",
"diff_url": "https://github.com/huggingface/transformers/pull/26799.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26799.patch",
"merged_at": 1697443824000
} |
https://api.github.com/repos/huggingface/transformers/issues/26798 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26798/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26798/comments | https://api.github.com/repos/huggingface/transformers/issues/26798/events | https://github.com/huggingface/transformers/pull/26798 | 1,942,636,455 | PR_kwDOCUB6oc5cxh-v | 26,798 | Enable split_batches through TrainingArguments | {
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2155169140,
"node_id": "MDU6TGFiZWwyMTU1MTY5MTQw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/trainer",
"name": "trainer",
"color": "2ef289",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,697 | 1,698 | 1,698 | CONTRIBUTOR | null | # What does this PR do?
This PR passes `split_batches` in the `TrainingArguments` to the `Accelerator()`, the same way that `dispatch_batches` is currently done.
In the future we should consider whether we want to make a dataclass with the args for `Accelerator` that a user can pass instead (or potentially accept a raw `Accelerator`, with guards).
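If this lands, usage from the user side would look roughly like the sketch below; the argument name is taken from this PR and is not a verified final API.
```python
# Rough sketch, assuming `split_batches` is exposed on TrainingArguments as this PR proposes.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=32,
    split_batches=True,  # forwarded to Accelerator(split_batches=True): the dataloader batch
                         # is split across processes instead of multiplied by the world size
)
```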
Fixes # (issue)
Solves https://github.com/huggingface/accelerate/issues/2023
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker @LysandreJik @pacman100 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26798/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26798/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26798",
"html_url": "https://github.com/huggingface/transformers/pull/26798",
"diff_url": "https://github.com/huggingface/transformers/pull/26798.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26798.patch",
"merged_at": 1698864158000
} |
https://api.github.com/repos/huggingface/transformers/issues/26797 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26797/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26797/comments | https://api.github.com/repos/huggingface/transformers/issues/26797/events | https://github.com/huggingface/transformers/issues/26797 | 1,942,597,033 | I_kwDOCUB6oc5zya2p | 26,797 | Streaming text output from a pipeline. | {
"login": "lukolszewski",
"id": 43611481,
"node_id": "MDQ6VXNlcjQzNjExNDgx",
"avatar_url": "https://avatars.githubusercontent.com/u/43611481?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lukolszewski",
"html_url": "https://github.com/lukolszewski",
"followers_url": "https://api.github.com/users/lukolszewski/followers",
"following_url": "https://api.github.com/users/lukolszewski/following{/other_user}",
"gists_url": "https://api.github.com/users/lukolszewski/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lukolszewski/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lukolszewski/subscriptions",
"organizations_url": "https://api.github.com/users/lukolszewski/orgs",
"repos_url": "https://api.github.com/users/lukolszewski/repos",
"events_url": "https://api.github.com/users/lukolszewski/events{/privacy}",
"received_events_url": "https://api.github.com/users/lukolszewski/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey! Would recommend you to read [this section](https://huggingface.co/docs/transformers/generation_strategies#streaming) of the documentation about streaming when generating, I think it should help! ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,697 | 1,700 | 1,700 | NONE | null | ### Feature request
Implement a way to obtain streaming text output from a pipeline, one token at a time.
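For context, `generate()` already accepts a streamer object (this is what the generation-strategies documentation describes), so a pipeline-level option could build on it. A minimal sketch, where the model name is only an example:
```python
# Minimal sketch of the existing streamer hook on generate(); "gpt2" is just an example model.
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The answer is", return_tensors="pt")
streamer = TextStreamer(tokenizer, skip_prompt=True)  # prints each new token as it is generated
model.generate(**inputs, streamer=streamer, max_new_tokens=20)
```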
### Motivation
To be able to see the response as it is being generated instead of having to wait for the entire thing.
### Your contribution
I can test it if it gets implemented. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26797/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26797/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26796 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26796/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26796/comments | https://api.github.com/repos/huggingface/transformers/issues/26796/events | https://github.com/huggingface/transformers/issues/26796 | 1,942,562,098 | I_kwDOCUB6oc5zySUy | 26,796 | (False?) warning about weight_g/weight_v missing on WeightNorm on PyTorch | {
"login": "sterlind",
"id": 55418321,
"node_id": "MDQ6VXNlcjU1NDE4MzIx",
"avatar_url": "https://avatars.githubusercontent.com/u/55418321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sterlind",
"html_url": "https://github.com/sterlind",
"followers_url": "https://api.github.com/users/sterlind/followers",
"following_url": "https://api.github.com/users/sterlind/following{/other_user}",
"gists_url": "https://api.github.com/users/sterlind/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sterlind/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sterlind/subscriptions",
"organizations_url": "https://api.github.com/users/sterlind/orgs",
"repos_url": "https://api.github.com/users/sterlind/repos",
"events_url": "https://api.github.com/users/sterlind/events{/privacy}",
"received_events_url": "https://api.github.com/users/sterlind/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Hey @sterlind - sorry for the delay in getting back to you! You are indeed correct in that the warning shouldn't be triggered. The state dict is copied correctly with the PyTorch weight norm refactoring, but the warning thrown in `from_pretrained` since this hasn't yet been updated. I'll open a PR to fix this!",
"I see similar warning when importing Wav2Vec2.0: `facebook/wav2vec2-base`:\r\n\r\n```\r\nSome weights of ClassifierModel were not initialized from the model checkpoint at facebook/wav2vec2-base and are newly initialized: ['classifier.out_proj.weight', 'wav2vec2.encoder.pos_conv_embed.conv.parametrizations.weight.original0', 'classifier.out_proj.bias', 'wav2vec2.encoder.pos_conv_embed.conv.parametrizations.weight.original1']\r\n```\r\n\r\nIn the end it works correctly and I should just ignore the warning, right?",
"Just to follow up on this, it may be related. When trying to convert a _wav2vec2-conformer_ from the `fairseq` version to `transformers` I got an error with transformers version > 4.29.2 (4.29.2 works fine). I report the error below:\r\n\r\n```python\r\nTraceback (most recent call last):\r\n File \"MY_UTILITIES_PATH/convert_wav2vec2_conformer_original_pytorch_checkpoint_to_pytorch.py\", line 308, in <module>\r\n convert_wav2vec2_conformer_checkpoint(\r\n File \"MY_ENV_PATH//lib/python3.9/site-packages/torch/utils/_contextlib.py\", line 115, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"MY_UTILITIES_PATH/convert_wav2vec2_conformer_original_pytorch_checkpoint_to_pytorch.py\", line 293, in convert_wav2vec2_conformer_checkpoint\r\n recursively_load_weights(model, hf_wav2vec, not is_finetuned)\r\n File \"MY_UTILITIES_PATH/convert_wav2vec2_conformer_original_pytorch_checkpoint_to_pytorch.py\", line 167, in recursively_load_weights\r\n set_recursively(hf_model, mapped_key, value, name, weight_type)\r\n File \"MY_UTILITIES_PATH/convert_wav2vec2_conformer_original_pytorch_checkpoint_to_pytorch.py\", line 87, in set_recursively\r\n hf_shape = getattr(hf_pointer, weight_type).shape\r\n File \"MY_ENV_PATH//lib/python3.9/site-packages/torch/nn/modules/module.py\", line 1695, in __getattr__\r\n raise AttributeError(f\"'{type(self).__name__}' object has no attribute '{name}'\")\r\nAttributeError: 'ParametrizedConv1d' object has no attribute 'weight_g'\r\n```\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"cc @sanchit-gandhi we can close this now that the PR was merged no? "
] | 1,697 | 1,706 | null | NONE | null | ### System Info
- `transformers` version: 4.34.0
- Platform: Linux-5.15.90.2-microsoft-standard-WSL2-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.3
- Accelerate version: 0.23.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.2.0.dev20231005 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@sanchit-gandhi
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Simply running:
```python
from transformers import AutoProcessor, HubertModel
model = HubertModel.from_pretrained("facebook/hubert-base-ls960")
```
Produces the following warning:
```
Some weights of the model checkpoint at facebook/hubert-base-ls960 were not used when initializing HubertModel: ['encoder.pos_conv_embed.conv.weight_v', 'encoder.pos_conv_embed.conv.weight_g']
- This IS expected if you are initializing HubertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing HubertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of HubertModel were not initialized from the model checkpoint at facebook/hubert-base-ls960 and are newly initialized: ['encoder.pos_conv_embed.conv.parametrizations.weight.original0', 'encoder.pos_conv_embed.conv.parametrizations.weight.original1']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```
What I gather from the [PyTorch documentation](https://pytorch.org/docs/stable/generated/torch.nn.utils.weight_norm.html)
and [updated code](https://pytorch.org/docs/stable/_modules/torch/nn/utils/parametrizations) is that the PyTorch folks decided to migrate the `weight_v` and `weight_g` params of WeightNorm to `original0` and `original1`.
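For reference, the renaming is easy to see directly in PyTorch. A minimal sketch, assuming PyTorch >= 2.1 where the parametrization-based API is available:
```python
import torch.nn as nn
from torch.nn.utils import weight_norm                         # legacy API
from torch.nn.utils.parametrizations import weight_norm as weight_norm_new

legacy = weight_norm(nn.Conv1d(2, 2, 3))
print(sorted(legacy.state_dict()))   # ['bias', 'weight_g', 'weight_v']

new = weight_norm_new(nn.Conv1d(2, 2, 3))
print(sorted(new.state_dict()))      # ['bias', 'parametrizations.weight.original0',
                                     #  'parametrizations.weight.original1']
```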
Initially I thought the model was simply broken by this breaking change in PyTorch, however I was confused since I saw discussions that it should have been fixed [by this PR in transformers](https://github.com/huggingface/transformers/pull/24030), as discussed here: https://github.com/huggingface/transformers/issues/24692
So I attached my debugger to `_weight_norm_compat_hook`, and sure enough it activated and seems to have migrated the state:
(during debug)
```
> state_dict[g_key]
tensor([[[0.3022, 0.1198, 0.1031, 0.1000, 0.0945, 0.0891, 0.0939, 0.0933, ...
```
(after model load, in Jupyter):
```
> model.encoder.pos_conv_embed.conv.parametrizations.weight.original0
Parameter containing:
tensor([[[0.3022, 0.1198, 0.1031, 0.1000, 0.0945, 0.0891, 0.0939, 0.0933, ...
```
So I'm pretty sure the warning is a false alarm, but I'm also confused since the migration happens *before* the warning is traced, so I wanted to check.
### Expected behavior
No warning should have appeared. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26796/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26796/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26795 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26795/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26795/comments | https://api.github.com/repos/huggingface/transformers/issues/26795/events | https://github.com/huggingface/transformers/pull/26795 | 1,942,200,457 | PR_kwDOCUB6oc5cwLAg | 26,795 | Conversation pipeline fixes | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Also cc @lewtun - now you should actually be able to just use this pipeline in the docstrings instead of needing to do it manually in the docstrings with `text-generation` and `apply_chat_template`!"
] | 1,697 | 1,697 | 1,697 | MEMBER | null | This PR makes a couple of fixes to `ConversationalPipeline` to make it a lot easier to use:
- Inputs can now just be conversations in standard list-of-dicts format. I think the `Conversation` class is quite hard for users to discover, and this is a lot more intuitive.
- We no longer read `max_length` because very few models set this parameter, and so it's almost always the default `PretrainedConfig` value of 20, which is very low. Before this change, most calls to `ConversationalPipeline` produced no output or unnecessarily truncated the input because this limit was hit. We change the pipeline to use `max_new_tokens` instead, which is more modern.
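To illustrate both points, a hedged sketch of the simplified call this PR enables (the model name is only an example):
```python
from transformers import pipeline

chatbot = pipeline("conversational", model="facebook/blenderbot-400M-distill")
result = chatbot(
    [{"role": "user", "content": "What is the best way to learn Python?"}],
    max_new_tokens=64,  # replaces the old reliance on config.max_length
)
print(result)
```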
cc @arthurzucker for pipeline review and @gante if he has any comments about setting the generation parameters properly! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26795/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26795/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26795",
"html_url": "https://github.com/huggingface/transformers/pull/26795",
"diff_url": "https://github.com/huggingface/transformers/pull/26795.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26795.patch",
"merged_at": 1697473665000
} |
https://api.github.com/repos/huggingface/transformers/issues/26794 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26794/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26794/comments | https://api.github.com/repos/huggingface/transformers/issues/26794/events | https://github.com/huggingface/transformers/issues/26794 | 1,942,164,704 | I_kwDOCUB6oc5zwxTg | 26,794 | [i18n-<kan>] Translating docs to <Kannada> | {
"login": "praj-pawar",
"id": 99607337,
"node_id": "U_kgDOBe_jKQ",
"avatar_url": "https://avatars.githubusercontent.com/u/99607337?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/praj-pawar",
"html_url": "https://github.com/praj-pawar",
"followers_url": "https://api.github.com/users/praj-pawar/followers",
"following_url": "https://api.github.com/users/praj-pawar/following{/other_user}",
"gists_url": "https://api.github.com/users/praj-pawar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/praj-pawar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/praj-pawar/subscriptions",
"organizations_url": "https://api.github.com/users/praj-pawar/orgs",
"repos_url": "https://api.github.com/users/praj-pawar/repos",
"events_url": "https://api.github.com/users/praj-pawar/events{/privacy}",
"received_events_url": "https://api.github.com/users/praj-pawar/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | open | false | null | [] | [
"Hey! I'd like to translate the index.md file first."
] | 1,697 | 1,697 | null | NONE | null | <!--
Note: Please search to see if an issue already exists for the language you are trying to translate.
-->
Hi!
Let's bring the documentation to all the <Kannada>-speaking community π (currently 0 out of 267 complete)
Who would want to translate? Please follow the π€ [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list.
Some notes:
* Please translate using an informal tone (imagine you are talking with a friend about transformers π€).
* Please translate in a gender-neutral way.
* Add your translations to the folder called `<kan>` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source).
* Register your translation in `<kan>/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml).
* Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @stevhliu and @MKhalusova for review.
* π If you'd like others to help you with the translation, you can also post in the π€ [forums](https://discuss.huggingface.co/).
## Get Started section
- [ ] [index.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.md) https://github.com/huggingface/transformers/pull/20180
- [ ] [quicktour.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.md) (waiting for initial PR to go through)
- [ ] [installation.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.md).
## Tutorial section
- [ ] [pipeline_tutorial.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/pipeline_tutorial.md)
- [ ] [autoclass_tutorial.md](https://github.com/huggingface/transformers/blob/master/docs/source/autoclass_tutorial.md)
- [ ] [preprocessing.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/preprocessing.md)
- [ ] [training.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/training.md)
- [ ] [accelerate.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/accelerate.md)
- [ ] [model_sharing.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_sharing.md)
- [ ] [multilingual.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/multilingual.md)
<!--
Keep on adding more as you go π₯
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26794/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26794/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26793 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26793/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26793/comments | https://api.github.com/repos/huggingface/transformers/issues/26793/events | https://github.com/huggingface/transformers/issues/26793 | 1,942,026,700 | I_kwDOCUB6oc5zwPnM | 26,793 | Failing to load 'Llama-2-70b-chat-hf' in transformers.LlamaForCausalLM | {
"login": "mmc31",
"id": 7207683,
"node_id": "MDQ6VXNlcjcyMDc2ODM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7207683?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mmc31",
"html_url": "https://github.com/mmc31",
"followers_url": "https://api.github.com/users/mmc31/followers",
"following_url": "https://api.github.com/users/mmc31/following{/other_user}",
"gists_url": "https://api.github.com/users/mmc31/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mmc31/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mmc31/subscriptions",
"organizations_url": "https://api.github.com/users/mmc31/orgs",
"repos_url": "https://api.github.com/users/mmc31/repos",
"events_url": "https://api.github.com/users/mmc31/events{/privacy}",
"received_events_url": "https://api.github.com/users/mmc31/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I should also mention that this works fine for 7b and 13b",
"cc @SunMarc as well!",
"hi @mmc31 \r\nOut of curiosity does the issue still persists with the latest transformers and accelerate?\r\n\r\n```bash\r\npip install -U accelerate transformers\r\n```",
"Still the same error. Now i have:\r\ntransformers.__version__ = '4.34.0'\r\naccelerate.__version__ = '0.23.0'",
"@younesbelkada any other ideas why this layer's parameter is empty?",
"I'm going to close this as it seems to be related to your setup and I cannot reproduce it on `main`. Your local files are probably corrupted"
] | 1,697 | 1,705 | 1,705 | NONE | null | ### System Info
Ubuntu 22.04
Python 3.9.18
torch.__version__ = '2.0.1+cu117'
transformers.__version__ = '4.31.0'
dowloaded files from here:
https://huggingface.co/meta-llama/Llama-2-70b-chat-hf/tree/main
run this:
```
import torch
from transformers import LlamaForCausalLM
base_model = "Llama-2-70b-chat-hf"
model = LlamaForCausalLM.from_pretrained(
base_model,
load_in_8bit=True,
torch_dtype=torch.float16,
device_map="auto",
local_files_only=True,
)
```
and the error:
*** ValueError: weight is on the meta device, we need a `value` to put in on 1.
/lib/python3.9/site-packages/accelerate/hooks.py(253)
`set_module_tensor_to_device(module, name, self.execution_device)`
module is a LlamaDecoderLayer
name = 'mlp.up_proj.weight'
self.execution_device = 1
```
module.mlp.up_proj.weight
Parameter containing:
Parameter(Int8Params(..., device='meta', size=(28672, 8192), dtype=torch.float16))
```
So it appears this parameter is empty and has no data. Why is this not happening to anyone else?
I have enough GPU memory
nvidia-smi
```
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.199.02 Driver Version: 470.199.02 CUDA Version: 11.4 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA RTX A6000 Off | 00000000:18:00.0 Off | Off |
| 30% 41C P0 83W / 300W | 0MiB / 48685MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 1 NVIDIA RTX A6000 Off | 00000000:C3:00.0 Off | Off |
| 30% 42C P0 86W / 300W | 0MiB / 48685MiB | 1% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
```
@ArthurZucker @younesbelkada
Any ideas?
thanks!
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
import torch
from transformers import LlamaForCausalLM
base_model = "Llama-2-70b-chat-hf"
model = LlamaForCausalLM.from_pretrained(
base_model,
load_in_8bit=True,
torch_dtype=torch.float16,
device_map="auto",
local_files_only=True,
)
```
### Expected behavior
The model should load rather than error. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26793/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26793/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26792 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26792/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26792/comments | https://api.github.com/repos/huggingface/transformers/issues/26792/events | https://github.com/huggingface/transformers/pull/26792 | 1,941,960,537 | PR_kwDOCUB6oc5cvWf- | 26,792 | Remove ambiguous `padding_mask` and instead use a 2D->4D Attn Mask Mapper | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"> Thank you! I am happy with it, just wondering whether changing the `attention_mask` input from being 4D to 2D in `LlamaDecoderLayer` & `LlamaAttention` is considered a breaking change or not.\r\n\r\nTo me they are internal classes and thus changing the format of the attention_mask is ok. @LysandreJik @younesbelkada @ArthurZucker what do you think? \r\n\r\n\r\n",
"This PR should help make the following PRs nicer / cleaner:\r\n- https://github.com/huggingface/transformers/pull/26722\r\n- https://github.com/huggingface/transformers/pull/26572",
"> LGTM, we might have to keep some logic to pop the padding mask for 1 release for BC. let's do a deprecation cycle no?\r\n\r\nThey are all internal classes so BC doesn't really apply here IMO. Also it's only been in one release, so not sure we need to have a deprecation cycle here",
"Ok, it's true that it has only been 2 weeks, good for me\r\n",
"@patrickvonplaten I disagree that the non-top-level classes are internal. They are publicly exposed (no `_`), and have docstrings. There are a few libraries that were broken by the addition of padding_mask here: https://github.com/huggingface/transformers/pull/25598\r\n\r\ne.g.\r\nhttps://github.com/casper-hansen/AutoAWQ/pull/88\r\nhttps://github.com/mosaicml/llm-foundry/pull/643\r\nhttps://github.com/hpcaitech/ColossalAI/pull/4908\r\n\r\nso IMO modifying the non-top level signature / expected args (or args format, e.g. changing an arg from 4D to 2D) classes should be done with great care.",
"> @patrickvonplaten I disagree that the non-top-level classes are internal. They are publicly exposed (no `_`), and have docstrings. There are a few libraries that were broken by the addition of padding_mask here: #25598\r\n> \r\n> e.g. [casper-hansen/AutoAWQ#88](https://github.com/casper-hansen/AutoAWQ/pull/88) [mosaicml/llm-foundry#643](https://github.com/mosaicml/llm-foundry/pull/643) [hpcaitech/ColossalAI#4908](https://github.com/hpcaitech/ColossalAI/pull/4908)\r\n> \r\n> so IMO modifying the non-top level signature / expected args (or args format, e.g. changing an arg from 4D to 2D) classes should be done with great care.\r\n\r\nThis PRs all kind of show that no-one understood really how to use the padding_mask if it's just set to None in all the PRs. \r\n\r\nIf we force ourselves to deprecate every internal function that doesn't have a `_` prefix (which many people still use in their codebases BTW) , it'll will be very hard to stay agile/dynamic. Ok for me to deprecate (just adding `kwargs` to all forward methods) even though I prefer not to and just state that we correct the `attention_mask` here.\r\n\r\n@LysandreJik wdyt? \r\n\r\n",
"Anything that isn't in the public API is private to us as maintainers, but after spending a bit of time looking in the documentation, I can't find where we have written it (but I remember us writing it down).\r\n\r\nIn any case the deprecation cycle doesn't cost much and I would keep it for 1-2 versions max before removing it. It was present for very few releases (one) and concerns very few users, so let's do a deprecation cycle with very explicit `v4.37.0` as a cutoff date.",
"This PR is ready for a review. All models that implemented FA2 are now refactored.\r\n\r\nI've tested:\r\nAll slow tests for Falcon, Mistral, Llama they all pass on GPU.\r\n```\r\nCUDA_VISIBLE_DEVICES=\"0\" RUN_SLOW=1 pytest tests/models/llama/test_modeling_llama.py\r\n```\r\n\r\nAll flash attn tests:\r\n```\r\nCUDA_VISIBLE_DEVICES=\"0\" RUN_SLOW=1 pytest -m flash_attn_test tests/models\r\n```\r\n\r\nThere is one slow test that fails for Falcon, but it also fails on \"main\":\r\n```\r\nFAILED tests/models/falcon/test_modeling_falcon.py::FalconModelTest::test_left_padding_compatibility - AssertionError: False is not true\r\n```\r\n(might also be just my GPU)\r\n\r\nAs explained here: https://github.com/huggingface/transformers/pull/26792#discussion_r1366021714 this PR also fixes a bug with the sliding window mask for Mistral.\r\n\r\n\r\n@ArthurZucker @younesbelkada @LysandreJik @fxmarty would be great if you could do a final review and run some tests to be sure everything works as expected! ",
"**Update**: I removed all the cache logic and instead just pass the attention_mask in the format that's needed. This is cleaner than caching tensors according to their shape, memory_id, etc...\r\n\r\nAll the benefits are kept including much improved readability and comprehensive attention mask class that can be copied / re-used by other models.\r\n\r\nAll tests pass just like before!",
"Then I am not sure what is the point of the class AttnMaskConverter? By the way, for SDPA, ideally we need both the information of 1/ is padding used 2/ transformers custom attention mask. This is because if custom masking is not used, we may dispatch on flash attention. So passing only a 4D mask for SDPA is suboptimal in my opinion.\r\n\r\nOr I could just always pass the 4D attention mask to SDPA, but that kind of defeats the point given that dispatch to FA is then impossible.",
"> Are we adding the sliding window as a new feature for these models? Otherwise would just use two different classes for Mistral and the other\r\n\r\nIt's not possible really to use sliding window in Llama because it's hardcoded at initialization \"sliding_window=....\" for Mistral. So the user can't (and should not use) `sliding_window` for Llama via any config parameters (in the same way `is_causal` is hardcoded to `True`). It is true that that we only have a couple of architectures that use sliding windows (Mistral, Longformer, ...) so we could move it out of the attention mask converter class and instead put it directly into the forward for Mistral. I think it's better though to leave as is because:\r\n- It's just a one-liner to add it to the attention converter and can be very nicely tested (which allowed us to spot the bug in Mistral)\r\n- There is a high chance that we'll have more models with windowed attention if models build on Mistral\r\n- We don't allow the user to configure windowed attention, so it's not like we adding a windowed attention feature to Llama or Falcon & thus making them more complicated.\r\n\r\nBut I do see how sliding window is arguably a bit exotic for the mask converter and if people feel strongly I can put it in Mistral's forward method instead.\r\n\r\nOverall, we do move away a bit from \"single-file\" policy here as the attention converter is is a general class that has more features that needed for some models. But it does make sense here since there is really not much variation for attention mask across models and it greatly helps with readability.\r\n\r\n",
"No problem for me to leave the sliding window in the mask converter class, I indeed think we'll get to see more models leveraging the sliding window (or users that want it supported) in other architectures."
] | 1,697 | 1,698 | 1,698 | MEMBER | null | # What does this PR do?
For models that have Flash Attention 2 (FA2) implemented we currently pass both `padding_mask` and `attention_mask` to the respective vanilla attention class, *e.g.* `LlamaAttention` and to the FA2 class, *e.g.* `LlamaFlashAttention2`.
**However**, `padding_mask` is **not** used for `LlamaAttention` and `attention_mask` is not used for `LlamaFlashAttention2`. Conceptually the two masks are the same, only that `attention_mask` is a 4D mask while `padding_mask` is a 2D mask.
Passing around both masks and having both masks as concepts in our codebase is ambiguous and hurts readability. In this PR, I propose to remove the concept of `padding_mask` completely and instead just pass either a 2D or 4D `attention_mask` depending on whether we use FA2 or not.
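As a toy illustration of the two formats (shapes only, not the code in this PR): a 2D mask just marks real vs. padded tokens, while the 4D mask is the additive causal bias consumed by vanilla attention.
```python
import torch

batch, seq_len = 2, 5
mask_2d = torch.ones(batch, seq_len, dtype=torch.long)  # padding info only, what FA2 needs
mask_4d = torch.zeros(batch, 1, seq_len, seq_len)       # additive bias for vanilla attention
causal = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), diagonal=1)
mask_4d = mask_4d.masked_fill(causal, float("-inf"))
print(mask_2d.shape, mask_4d.shape)  # torch.Size([2, 5]) torch.Size([2, 1, 5, 5])
```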
**Note**: An additional benefit of this PR is that it will improve the performance when using FA2 as we will not create a 4D attention mask anymore.
**Benchmarks**:
The following script was used to benchmark the effect this mask implementation has on forward and generate.
```py
#!/usr/bin/env python3
from transformers import AutoTokenizer, AutoModelForCausalLM
import time
import torch
DEVICE = "cuda:1"
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf", torch_dtype=torch.float16, low_cpu_mem_usage=True)
model.to(DEVICE)
# forward
print("Forward benchmarks")
print(50 * "=")
for batch_size in (1, 4, 16):
for input_seq in (4, 16, 256):
input_ids = torch.ones((batch_size, input_seq), dtype=torch.long, device=DEVICE)
attention_mask = torch.ones_like(input_ids)
attention_mask[0, 3] = 0
times = []
for _ in range(3):
start_time = time.time()
with torch.no_grad():
logits = model(input_ids=input_ids, attention_mask=attention_mask).logits
times.append(time.time() - start_time)
result = min(times)
print(f"Forward bsz={batch_size}, input_seq={input_seq}: {result}")
# generate
print("Generate benchmarks")
print(50 * "=")
for batch_size in (1, 16):
for input_seq in (4, 256):
input_ids = torch.ones((batch_size, input_seq), dtype=torch.long, device=DEVICE)
attention_mask = torch.ones_like(input_ids)
attention_mask[0, 3] = 0
times = []
for _ in range(3):
start_time = time.time()
out = model.generate(input_ids=input_ids, max_new_tokens=256, do_sample=False)
times.append(time.time() - start_time)
result = min(times)
print(f"Generate bsz={batch_size}, input_seq={input_seq}: {result}")
```
**This PR:**
```
Forward benchmarks
==================================================
Forward bsz=1, input_seq=4: 0.012479066848754883
Forward bsz=1, input_seq=16: 0.011297464370727539
Forward bsz=1, input_seq=256: 0.01240849494934082
Forward bsz=4, input_seq=4: 0.011190414428710938
Forward bsz=4, input_seq=16: 0.013025283813476562
Forward bsz=4, input_seq=256: 0.03526663780212402
Forward bsz=16, input_seq=4: 0.01126551628112793
Forward bsz=16, input_seq=16: 0.012389421463012695
Forward bsz=16, input_seq=256: 0.1560053825378418
Generate benchmarks
==================================================
Generate bsz=1, input_seq=4: 4.527426719665527
Generate bsz=1, input_seq=256: 4.667049169540405
Generate bsz=16, input_seq=4: 5.524803400039673
Generate bsz=16, input_seq=256: 7.931211709976196
```
**Current main:**
```
Forward benchmarks
==================================================
Forward bsz=1, input_seq=4: 0.017528295516967773
Forward bsz=1, input_seq=16: 0.012105464935302734
Forward bsz=1, input_seq=256: 0.01315617561340332
Forward bsz=4, input_seq=4: 0.011912107467651367
Forward bsz=4, input_seq=16: 0.013910531997680664
Forward bsz=4, input_seq=256: 0.035504817962646484
Forward bsz=16, input_seq=4: 0.012083053588867188
Forward bsz=16, input_seq=16: 0.012537956237792969
Forward bsz=16, input_seq=256: 0.15653300285339355
Generate benchmarks
==================================================
Generate bsz=1, input_seq=4: 4.554980516433716
Generate bsz=1, input_seq=256: 4.695344686508179
Generate bsz=16, input_seq=4: 5.55778431892395
Generate bsz=16, input_seq=256: 7.969247102737427
```
=> We don't see any drop in performance at all.
I've verified that the following tests all pass on a single GPU (RTX4090):
FA2:
```
RUN_SLOW=1 pytest -m flash_attn_test tests/models/llama/test_modeling_llama.py
```
and all Llama fast tests:
```
CUDA_VISIBLE_DEVICES="0" RUN_SLOW=1 pytest tests/models/llama/test_modeling_llama.py
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26792/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26792/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26792",
"html_url": "https://github.com/huggingface/transformers/pull/26792",
"diff_url": "https://github.com/huggingface/transformers/pull/26792.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26792.patch",
"merged_at": 1698080040000
} |
https://api.github.com/repos/huggingface/transformers/issues/26791 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26791/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26791/comments | https://api.github.com/repos/huggingface/transformers/issues/26791/events | https://github.com/huggingface/transformers/pull/26791 | 1,941,858,121 | PR_kwDOCUB6oc5cu_qt | 26,791 | [docs] Performance docs refactor p.2 | {
"login": "MKhalusova",
"id": 1065417,
"node_id": "MDQ6VXNlcjEwNjU0MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1065417?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MKhalusova",
"html_url": "https://github.com/MKhalusova",
"followers_url": "https://api.github.com/users/MKhalusova/followers",
"following_url": "https://api.github.com/users/MKhalusova/following{/other_user}",
"gists_url": "https://api.github.com/users/MKhalusova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MKhalusova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MKhalusova/subscriptions",
"organizations_url": "https://api.github.com/users/MKhalusova/orgs",
"repos_url": "https://api.github.com/users/MKhalusova/repos",
"events_url": "https://api.github.com/users/MKhalusova/events{/privacy}",
"received_events_url": "https://api.github.com/users/MKhalusova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,697 | 1,698 | 1,698 | CONTRIBUTOR | null | This PR continues performance docs refactor in the transformers docs. It focuses mainly on the "Efficient Training on Multiple GPUs" doc and contains the following changes:
* Improves clarity and readability
* Adds links to Accelerate where relevant
* Removes a duplicated chunk of content
* Resolves some formatting issues | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26791/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26791/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26791",
"html_url": "https://github.com/huggingface/transformers/pull/26791",
"diff_url": "https://github.com/huggingface/transformers/pull/26791.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26791.patch",
"merged_at": 1698167407000
} |
https://api.github.com/repos/huggingface/transformers/issues/26790 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26790/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26790/comments | https://api.github.com/repos/huggingface/transformers/issues/26790/events | https://github.com/huggingface/transformers/issues/26790 | 1,941,774,786 | I_kwDOCUB6oc5zvSHC | 26,790 | I want to convert DINOv2 model to onnx, but error occur. | {
"login": "PeterKim1",
"id": 57930520,
"node_id": "MDQ6VXNlcjU3OTMwNTIw",
"avatar_url": "https://avatars.githubusercontent.com/u/57930520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PeterKim1",
"html_url": "https://github.com/PeterKim1",
"followers_url": "https://api.github.com/users/PeterKim1/followers",
"following_url": "https://api.github.com/users/PeterKim1/following{/other_user}",
"gists_url": "https://api.github.com/users/PeterKim1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PeterKim1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PeterKim1/subscriptions",
"organizations_url": "https://api.github.com/users/PeterKim1/orgs",
"repos_url": "https://api.github.com/users/PeterKim1/repos",
"events_url": "https://api.github.com/users/PeterKim1/events{/privacy}",
"received_events_url": "https://api.github.com/users/PeterKim1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Check This! \r\n\r\nhttps://github.com/facebookresearch/dinov2/issues/19"
] | 1,697 | 1,697 | 1,697 | NONE | null | Hi.
Thanks for your great work.
I want to use DINOv2 for a segmentation task, so I am trying to use DINOv2 from HF Transformers.
I use https://github.com/NielsRogge/Transformers-Tutorials/blob/master/DINOv2/Train_a_linear_classifier_on_top_of_DINOv2_for_semantic_segmentation.ipynb <- this ipynb notebook.
If you run that ipynb notebook, you can build the same model as mine.
I need to convert this model to ONNX, so I use this code:
```
torch.onnx.export(model,
torch.randn(1, 3, 448, 448, device = 'cuda'),
'./huggingface_DINOv2.onnx',
input_names = ['input_0'],
output_names = ['output_0'],
opset_version=11)
```
But this error occurs:
============= Diagnostic Run torch.onnx.export version 2.0.0+cu117 =============
verbose: False, log level: Level.ERROR
======================= 0 NONE 0 NOTE 0 WARNING 0 ERROR ========================
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[101], line 1
----> 1 torch.onnx.export(model,
2 torch.randn(1, 3, 448, 448, device = 'cuda'),
3 './huggingface_DINOv2.onnx',
4 input_names = ['input_0'],
5 output_names = ['output_0'],
6 opset_version=11)
File /usr/local/lib/python3.8/dist-packages/torch/onnx/utils.py:506, in export(model, args, f, export_params, verbose, training, input_names, output_names, operator_export_type, opset_version, do_constant_folding, dynamic_axes, keep_initializers_as_inputs, custom_opsets, export_modules_as_functions)
188 @_beartype.beartype
189 def export(
190 model: Union[torch.nn.Module, torch.jit.ScriptModule, torch.jit.ScriptFunction],
(...)
206 export_modules_as_functions: Union[bool, Collection[Type[torch.nn.Module]]] = False,
207 ) -> None:
208 r"""Exports a model into ONNX format.
209
210 If ``model`` is not a :class:`torch.jit.ScriptModule` nor a
(...)
503 All errors are subclasses of :class:`errors.OnnxExporterError`.
504 """
--> 506 _export(
507 model,
508 args,
509 f,
510 export_params,
511 verbose,
512 training,
513 input_names,
514 output_names,
515 operator_export_type=operator_export_type,
516 opset_version=opset_version,
517 do_constant_folding=do_constant_folding,
518 dynamic_axes=dynamic_axes,
519 keep_initializers_as_inputs=keep_initializers_as_inputs,
520 custom_opsets=custom_opsets,
521 export_modules_as_functions=export_modules_as_functions,
522 )
File /usr/local/lib/python3.8/dist-packages/torch/onnx/utils.py:1548, in _export(model, args, f, export_params, verbose, training, input_names, output_names, operator_export_type, export_type, opset_version, do_constant_folding, dynamic_axes, keep_initializers_as_inputs, fixed_batch_size, custom_opsets, add_node_names, onnx_shape_inference, export_modules_as_functions)
1545 dynamic_axes = {}
1546 _validate_dynamic_axes(dynamic_axes, model, input_names, output_names)
-> 1548 graph, params_dict, torch_out = _model_to_graph(
1549 model,
1550 args,
1551 verbose,
1552 input_names,
1553 output_names,
1554 operator_export_type,
1555 val_do_constant_folding,
1556 fixed_batch_size=fixed_batch_size,
1557 training=training,
1558 dynamic_axes=dynamic_axes,
1559 )
1561 # TODO: Don't allocate a in-memory string for the protobuf
1562 defer_weight_export = (
1563 export_type is not _exporter_states.ExportTypes.PROTOBUF_FILE
1564 )
File /usr/local/lib/python3.8/dist-packages/torch/onnx/utils.py:1113, in _model_to_graph(model, args, verbose, input_names, output_names, operator_export_type, do_constant_folding, _disable_torch_constant_prop, fixed_batch_size, training, dynamic_axes)
1110 args = (args,)
1112 model = _pre_trace_quant_model(model, args)
-> 1113 graph, params, torch_out, module = _create_jit_graph(model, args)
1114 params_dict = _get_named_param_dict(graph, params)
1116 try:
File /usr/local/lib/python3.8/dist-packages/torch/onnx/utils.py:989, in _create_jit_graph(model, args)
984 graph = _C._propagate_and_assign_input_shapes(
985 graph, flattened_args, param_count_list, False, False
986 )
987 return graph, params, torch_out, None
--> 989 graph, torch_out = _trace_and_get_graph_from_model(model, args)
990 _C._jit_pass_onnx_lint(graph)
991 state_dict = torch.jit._unique_state_dict(model)
File /usr/local/lib/python3.8/dist-packages/torch/onnx/utils.py:893, in _trace_and_get_graph_from_model(model, args)
891 prev_autocast_cache_enabled = torch.is_autocast_cache_enabled()
892 torch.set_autocast_cache_enabled(False)
--> 893 trace_graph, torch_out, inputs_states = torch.jit._get_trace_graph(
894 model,
895 args,
896 strict=False,
897 _force_outplace=False,
898 _return_inputs_states=True,
899 )
900 torch.set_autocast_cache_enabled(prev_autocast_cache_enabled)
902 warn_on_static_input_change(inputs_states)
File /usr/local/lib/python3.8/dist-packages/torch/jit/_trace.py:1268, in _get_trace_graph(f, args, kwargs, strict, _force_outplace, return_inputs, _return_inputs_states)
1266 if not isinstance(args, tuple):
1267 args = (args,)
-> 1268 outs = ONNXTracedModule(f, strict, _force_outplace, return_inputs, _return_inputs_states)(*args, **kwargs)
1269 return outs
File /usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
1496 # If we don't have any hooks, we want to skip the rest of the logic in
1497 # this function, and just call forward.
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
File /usr/local/lib/python3.8/dist-packages/torch/jit/_trace.py:127, in ONNXTracedModule.forward(self, *args)
124 else:
125 return tuple(out_vars)
--> 127 graph, out = torch._C._create_graph_by_tracing(
128 wrapper,
129 in_vars + module_state,
130 _create_interpreter_name_lookup_fn(),
131 self.strict,
132 self._force_outplace,
133 )
135 if self._return_inputs:
136 return graph, outs[0], ret_inputs[0]
File /usr/local/lib/python3.8/dist-packages/torch/jit/_trace.py:118, in ONNXTracedModule.forward.<locals>.wrapper(*args)
116 if self._return_inputs_states:
117 inputs_states.append(_unflatten(in_args, in_desc))
--> 118 outs.append(self.inner(*trace_inputs))
119 if self._return_inputs_states:
120 inputs_states[0] = (inputs_states[0], trace_inputs)
File /usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
1496 # If we don't have any hooks, we want to skip the rest of the logic in
1497 # this function, and just call forward.
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
File /usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py:1488, in Module._slow_forward(self, *input, **kwargs)
1486 recording_scopes = False
1487 try:
-> 1488 result = self.forward(*input, **kwargs)
1489 finally:
1490 if recording_scopes:
Cell In[91], line 32, in Dinov2ForSemanticSegmentation.forward(self, pixel_values, output_hidden_states, output_attentions, labels)
28 def forward(self, pixel_values, output_hidden_states=False, output_attentions=False, labels=None):
29 #print(pixel_values.shape)
30 #print(labels.shape)
31 # use frozen features
---> 32 outputs = self.dinov2(pixel_values,
33 output_hidden_states=output_hidden_states,
34 output_attentions=output_attentions)
36 #print("outputs shape? :", outputs.shape)
37 #print("?? : ", type(outputs))
38 # get the patch embeddings - so we exclude the CLS token
39 patch_embeddings = outputs.last_hidden_state[:,1:,:]
File /usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
1496 # If we don't have any hooks, we want to skip the rest of the logic in
1497 # this function, and just call forward.
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
File /usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py:1488, in Module._slow_forward(self, *input, **kwargs)
1486 recording_scopes = False
1487 try:
-> 1488 result = self.forward(*input, **kwargs)
1489 finally:
1490 if recording_scopes:
File ~/.local/lib/python3.8/site-packages/transformers/models/dinov2/modeling_dinov2.py:645, in Dinov2Model.forward(self, pixel_values, bool_masked_pos, head_mask, output_attentions, output_hidden_states, return_dict)
638 # Prepare head mask if needed
639 # 1.0 in head_mask indicate we keep the head
640 # attention_probs has shape bsz x n_heads x N x N
641 # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads]
642 # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length]
643 head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers)
--> 645 embedding_output = self.embeddings(pixel_values, bool_masked_pos=bool_masked_pos)
647 encoder_outputs = self.encoder(
648 embedding_output,
649 head_mask=head_mask,
(...)
652 return_dict=return_dict,
653 )
654 sequence_output = encoder_outputs[0]
File /usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
1496 # If we don't have any hooks, we want to skip the rest of the logic in
1497 # this function, and just call forward.
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
File /usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py:1488, in Module._slow_forward(self, *input, **kwargs)
1486 recording_scopes = False
1487 try:
-> 1488 result = self.forward(*input, **kwargs)
1489 finally:
1490 if recording_scopes:
File ~/.local/lib/python3.8/site-packages/transformers/models/dinov2/modeling_dinov2.py:131, in Dinov2Embeddings.forward(self, pixel_values, bool_masked_pos)
128 embeddings = torch.cat((cls_tokens, embeddings), dim=1)
130 # add positional encoding to each token
--> 131 embeddings = embeddings + self.interpolate_pos_encoding(embeddings, height, width)
133 embeddings = self.dropout(embeddings)
135 return embeddings
File ~/.local/lib/python3.8/site-packages/transformers/models/dinov2/modeling_dinov2.py:106, in Dinov2Embeddings.interpolate_pos_encoding(self, embeddings, height, width)
104 patch_pos_embed = patch_pos_embed.reshape(1, int(math.sqrt(num_positions)), int(math.sqrt(num_positions)), dim)
105 patch_pos_embed = patch_pos_embed.permute(0, 3, 1, 2)
--> 106 patch_pos_embed = nn.functional.interpolate(
107 patch_pos_embed,
108 scale_factor=(height / math.sqrt(num_positions), width / math.sqrt(num_positions)),
109 mode="bicubic",
110 align_corners=False,
111 )
112 if int(height) != patch_pos_embed.shape[-2] or int(width) != patch_pos_embed.shape[-1]:
113 raise ValueError("Width or height does not match with the interpolated position embeddings")
File /usr/local/lib/python3.8/dist-packages/torch/nn/functional.py:3967, in interpolate(input, size, scale_factor, mode, align_corners, recompute_scale_factor, antialias)
3965 if antialias:
3966 return torch._C._nn._upsample_bicubic2d_aa(input, output_size, align_corners, scale_factors)
-> 3967 return torch._C._nn.upsample_bicubic2d(input, output_size, align_corners, scale_factors)
3969 if input.dim() == 3 and mode == "bilinear":
3970 raise NotImplementedError("Got 3D input, but bilinear mode needs 4D input")
TypeError: upsample_bicubic2d() received an invalid combination of arguments - got (Tensor, NoneType, bool, tuple), but expected one of:
* (Tensor input, tuple of ints output_size, bool align_corners, tuple of floats scale_factors)
didn't match because some of the arguments have invalid types: (Tensor, !NoneType!, bool, !tuple of (Tensor, Tensor)!)
* (Tensor input, tuple of ints output_size, bool align_corners, float scales_h, float scales_w, *, Tensor out)
I don't know the DINOv2 code well, but from this error message I understand that some code may need to be modified.
https://github.com/huggingface/transformers/blob/main/src/transformers/models/dinov2/modeling_dinov2.py#L106:L111
```
patch_pos_embed = nn.functional.interpolate(
patch_pos_embed,
scale_factor=(height / math.sqrt(num_positions), width / math.sqrt(num_positions)),
mode="bicubic",
align_corners=False,
)
```
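For what it's worth, a workaround often suggested for tracing/ONNX export (e.g. in the facebookresearch/dinov2 issue linked in the comment above) is to pass an explicit integer `size` to `interpolate` instead of float `scale_factor`s. A hedged sketch with toy shapes, not the official fix:
```python
import torch
import torch.nn as nn

patch_pos_embed = torch.randn(1, 768, 32, 32)  # toy stand-in for the reshaped position embeddings
new_height, new_width = 448 // 14, 448 // 14   # image size // patch size, as plain ints
patch_pos_embed = nn.functional.interpolate(
    patch_pos_embed,
    size=(new_height, new_width),
    mode="bicubic",
    align_corners=False,
)
print(patch_pos_embed.shape)  # torch.Size([1, 768, 32, 32])
```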
I don't know if I'm right, but can you review the code snippets I pointed out?
Thanks.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26790/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26790/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26789 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26789/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26789/comments | https://api.github.com/repos/huggingface/transformers/issues/26789/events | https://github.com/huggingface/transformers/pull/26789 | 1,941,674,712 | PR_kwDOCUB6oc5cuXb- | 26,789 | [`Flava`] Fix flava doc | {
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,697 | 1,697 | 1,697 | CONTRIBUTOR | null | # What does this PR do?
Fixes: https://github.com/huggingface/transformers/issues/26078
In fact, the current Flava docstring example is broken; running:
```bash
pytest --doctest-modules src/transformers/models/flava/modeling_flava.py::transformers.models.flava.modeling_flava.FlavaModel.forward
```
leads to a failure. This PR fixes that.
`contrastive_logits_per_image` cannot be retrieved from `FlavaModel` as detailed here: https://github.com/huggingface/transformers/issues/26078#issuecomment-1713369568, so I have decided to modify the docstring to reflect the correct way of retrieving outputs from `FlavaModel`.
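For context, the usage the updated docstring points towards looks roughly like this (a sketch only; it may differ slightly from the final docstring wording):
```python
import requests
from PIL import Image

from transformers import FlavaModel, FlavaProcessor

model = FlavaModel.from_pretrained("facebook/flava-full")
processor = FlavaProcessor.from_pretrained("facebook/flava-full")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(text=["a photo of a cat"], images=image, return_tensors="pt", padding=True)

outputs = model(**inputs)
image_embeddings = outputs.image_embeddings            # image tower features
text_embeddings = outputs.text_embeddings              # text tower features
multimodal_embeddings = outputs.multimodal_embeddings  # fused features
```
The contrastive logits themselves live on the `FlavaForPreTraining` output, not on `FlavaModel`'s.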
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26789/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26789/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26789",
"html_url": "https://github.com/huggingface/transformers/pull/26789",
"diff_url": "https://github.com/huggingface/transformers/pull/26789.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26789.patch",
"merged_at": 1697215117000
} |
https://api.github.com/repos/huggingface/transformers/issues/26788 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26788/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26788/comments | https://api.github.com/repos/huggingface/transformers/issues/26788/events | https://github.com/huggingface/transformers/pull/26788 | 1,941,590,124 | PR_kwDOCUB6oc5cuE9G | 26,788 | Llama tokenizer: remove space in template comment | {
"login": "pcuenca",
"id": 1177582,
"node_id": "MDQ6VXNlcjExNzc1ODI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1177582?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pcuenca",
"html_url": "https://github.com/pcuenca",
"followers_url": "https://api.github.com/users/pcuenca/followers",
"following_url": "https://api.github.com/users/pcuenca/following{/other_user}",
"gists_url": "https://api.github.com/users/pcuenca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pcuenca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pcuenca/subscriptions",
"organizations_url": "https://api.github.com/users/pcuenca/orgs",
"repos_url": "https://api.github.com/users/pcuenca/repos",
"events_url": "https://api.github.com/users/pcuenca/events{/privacy}",
"received_events_url": "https://api.github.com/users/pcuenca/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Our template gives identical results as llama 2's original tokenization, I just double-checked by comparing the tokens using a three-turn conversation. The code has always been correct, there's no space between the `eos`/`bos` tokens.\r\n\r\nNot sure what you mean by _linking to meta's original code_, should we include a link in the docs? This would be the place where the user/assistant turns are parsed: \r\nhttps://github.com/facebookresearch/llama/blob/556949fdfb72da27c2f4a40b7f0e4cf0b8153a28/llama/generation.py#L342\r\n",
"Yep if you feel like it! This way they can compare with their own eyes the source of truth! (even if it's code)",
"Thanks to you both for your quick reviews! Feel free to merge when you see fit, I don't have write permissions in this repo :)",
"Done!"
] | 1,697 | 1,697 | 1,697 | MEMBER | null | I think the space between the eos and bos tokens is not present in the actual template output. I'm using this documentation as a reference for everyone asking about prompting, so would like to clarify whether there's a space or not :)
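For anyone who wants to check this themselves, something along these lines should print the rendered conversation string (assuming access to the gated Llama 2 chat checkpoint; this is just how I would verify it, not part of the PR):
```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
chat = [
    {"role": "user", "content": "Hi"},
    {"role": "assistant", "content": "Hello!"},
    {"role": "user", "content": "How are you?"},
]
rendered = tok.apply_chat_template(chat, tokenize=False)
print(repr(rendered))  # look for "</s><s>" vs. "</s> <s>" between turns
```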
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@Rocketknight1, @ArthurZucker
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26788/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26788/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26788",
"html_url": "https://github.com/huggingface/transformers/pull/26788",
"diff_url": "https://github.com/huggingface/transformers/pull/26788.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26788.patch",
"merged_at": 1697465764000
} |
https://api.github.com/repos/huggingface/transformers/issues/26787 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26787/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26787/comments | https://api.github.com/repos/huggingface/transformers/issues/26787/events | https://github.com/huggingface/transformers/issues/26787 | 1,941,565,705 | I_kwDOCUB6oc5zufEJ | 26,787 | [i18n-<hi>] Translating docs to <Hindi> | {
"login": "hakunamatata1997",
"id": 24734119,
"node_id": "MDQ6VXNlcjI0NzM0MTE5",
"avatar_url": "https://avatars.githubusercontent.com/u/24734119?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hakunamatata1997",
"html_url": "https://github.com/hakunamatata1997",
"followers_url": "https://api.github.com/users/hakunamatata1997/followers",
"following_url": "https://api.github.com/users/hakunamatata1997/following{/other_user}",
"gists_url": "https://api.github.com/users/hakunamatata1997/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hakunamatata1997/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hakunamatata1997/subscriptions",
"organizations_url": "https://api.github.com/users/hakunamatata1997/orgs",
"repos_url": "https://api.github.com/users/hakunamatata1997/repos",
"events_url": "https://api.github.com/users/hakunamatata1997/events{/privacy}",
"received_events_url": "https://api.github.com/users/hakunamatata1997/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | open | false | null | [] | [
"@hakunamatata1997 assign this issue to me ",
"hey i am willing to work on translating [index.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.md)",
"I am interested to translate the pipeline_tutorial.md. Please assign it to me.\r\n",
"@shubhusion @ciaokitty @AaryaBalwadkar fork this [repo](https://github.com/hakunamatata1997/transformers.git) and work on contributions and checklist what you have done and after open a pull request to the mentioned repo. Later I'll open it to official repo.",
"I am interested to translate. Please assign this to me."
] | 1,697 | 1,697 | null | CONTRIBUTOR | null | <!--
Note: Please search to see if an issue already exists for the language you are trying to translate.
-->
Hi!
Let's bring the documentation to all the Hindi-speaking community 🌐 (currently 0 out of 267 complete)
Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list.
Some notes:
* Please translate using an informal tone (imagine you are talking with a friend about transformers 🤗).
* Please translate in a gender-neutral way.
* Add your translations to the folder called `hi` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source).
* Register your translation in `hi/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml).
* Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @stevhliu and @MKhalusova for review.
* 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/).
## Get Started section
- [ ] [index.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.md) https://github.com/huggingface/transformers/pull/20180
- [ ] [quicktour.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.md) (waiting for initial PR to go through)
- [ ] [installation.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.md).
## Tutorial section
- [ ] [pipeline_tutorial.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/pipeline_tutorial.md)
- [ ] [autoclass_tutorial.md](https://github.com/huggingface/transformers/blob/master/docs/source/autoclass_tutorial.md)
- [ ] [preprocessing.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/preprocessing.md)
- [ ] [training.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/training.md)
- [ ] [accelerate.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/accelerate.md)
- [ ] [model_sharing.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_sharing.md)
- [ ] [multilingual.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/multilingual.md)
<!--
Keep on adding more as you go 🔥
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26787/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26787/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26786 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26786/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26786/comments | https://api.github.com/repos/huggingface/transformers/issues/26786/events | https://github.com/huggingface/transformers/issues/26786 | 1,941,550,709 | I_kwDOCUB6oc5zubZ1 | 26,786 | [i18n-<te>] Translating docs to <Telugu> | {
"login": "hakunamatata1997",
"id": 24734119,
"node_id": "MDQ6VXNlcjI0NzM0MTE5",
"avatar_url": "https://avatars.githubusercontent.com/u/24734119?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hakunamatata1997",
"html_url": "https://github.com/hakunamatata1997",
"followers_url": "https://api.github.com/users/hakunamatata1997/followers",
"following_url": "https://api.github.com/users/hakunamatata1997/following{/other_user}",
"gists_url": "https://api.github.com/users/hakunamatata1997/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hakunamatata1997/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hakunamatata1997/subscriptions",
"organizations_url": "https://api.github.com/users/hakunamatata1997/orgs",
"repos_url": "https://api.github.com/users/hakunamatata1997/repos",
"events_url": "https://api.github.com/users/hakunamatata1997/events{/privacy}",
"received_events_url": "https://api.github.com/users/hakunamatata1997/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | closed | false | null | [] | [] | 1,697 | 1,697 | 1,697 | CONTRIBUTOR | null | <!--
Note: Please search to see if an issue already exists for the language you are trying to translate.
-->
Hi!
Let's bring the documentation to all the Telugu-speaking community 🌐 (currently 0 out of 267 complete)
Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list.
Some notes:
* Please translate using an informal tone (imagine you are talking with a friend about transformers 🤗).
* Please translate in a gender-neutral way.
* Add your translations to the folder called `te` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source).
* Register your translation in `te/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml).
* Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @stevhliu and @MKhalusova for review.
* 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/).
## Get Started section
- [x] [index.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.md) https://github.com/huggingface/transformers/pull/20180
- [x] [quicktour.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.md) (waiting for initial PR to go through)
- [ ] [installation.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.md).
## Tutorial section
- [ ] [pipeline_tutorial.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/pipeline_tutorial.md)
- [ ] [autoclass_tutorial.md](https://github.com/huggingface/transformers/blob/master/docs/source/autoclass_tutorial.md)
- [ ] [preprocessing.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/preprocessing.md)
- [ ] [training.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/training.md)
- [ ] [accelerate.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/accelerate.md)
- [ ] [model_sharing.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_sharing.md)
- [ ] [multilingual.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/multilingual.md)
<!--
Keep on adding more as you go 🔥
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26786/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26786/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26785 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26785/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26785/comments | https://api.github.com/repos/huggingface/transformers/issues/26785/events | https://github.com/huggingface/transformers/pull/26785 | 1,941,549,812 | PR_kwDOCUB6oc5ct8Ft | 26,785 | [`core`] Fix fa-2 import | {
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Can confirm on my end that installing an old version of FA now does not lead to an error! Merging !"
] | 1,697 | 1,697 | 1,697 | CONTRIBUTOR | null | # What does this PR do?
Fixes: https://github.com/huggingface/transformers/issues/26778
For users that have FA-1 installed in their environment, importing some modules will lead to errors, making `transformers` unusable. This PR fixes the issue by changing `is_flash_attn_available()` to `is_flash_attn_2_available()`.
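Conceptually, the check needs to gate on the installed `flash-attn` major version rather than on mere importability. A rough sketch of such a check (illustrative only, not the exact code in the library):
```python
import importlib.metadata
import importlib.util

from packaging import version


def flash_attn_2_is_installed() -> bool:
    # Only report availability when flash-attn >= 2.0.0 is installed,
    # so that a flash-attn v1 install no longer breaks imports.
    if importlib.util.find_spec("flash_attn") is None:
        return False
    try:
        installed = version.parse(importlib.metadata.version("flash_attn"))
    except importlib.metadata.PackageNotFoundError:
        return False
    return installed >= version.parse("2.0.0")
```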
"url": "https://api.github.com/repos/huggingface/transformers/issues/26785/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26785/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26785",
"html_url": "https://github.com/huggingface/transformers/pull/26785",
"diff_url": "https://github.com/huggingface/transformers/pull/26785.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26785.patch",
"merged_at": 1697194610000
} |
https://api.github.com/repos/huggingface/transformers/issues/26784 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26784/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26784/comments | https://api.github.com/repos/huggingface/transformers/issues/26784/events | https://github.com/huggingface/transformers/pull/26784 | 1,941,501,197 | PR_kwDOCUB6oc5ctxm7 | 26,784 | Update logits_process.py docstrings to clarify penalty and reward cases (attempt #2) | {
"login": "larekrow",
"id": 127832774,
"node_id": "U_kgDOB56Sxg",
"avatar_url": "https://avatars.githubusercontent.com/u/127832774?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/larekrow",
"html_url": "https://github.com/larekrow",
"followers_url": "https://api.github.com/users/larekrow/followers",
"following_url": "https://api.github.com/users/larekrow/following{/other_user}",
"gists_url": "https://api.github.com/users/larekrow/gists{/gist_id}",
"starred_url": "https://api.github.com/users/larekrow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/larekrow/subscriptions",
"organizations_url": "https://api.github.com/users/larekrow/orgs",
"repos_url": "https://api.github.com/users/larekrow/repos",
"events_url": "https://api.github.com/users/larekrow/events{/privacy}",
"received_events_url": "https://api.github.com/users/larekrow/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Could you run `make fixup` to pass the failing test",
"Hey @ArthurZucker, I tried to run `make fixup` but encountered an error in `repo-consistency`. \r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/users/user/temp/transformers/utils/update_metadata.py\", line 337, in <module>\r\n check_pipeline_tags()\r\n File \"/home/users/user/temp/transformers/utils/update_metadata.py\", line 316, in check_pipeline_tags\r\n model = model[0]\r\nIndexError: tuple index out of range\r\nmake: *** [Makefile:44: repo-consistency] Error 1\r\n```\r\n\r\nAnyway I thought I should not be encountering this in the first place since I only modified docstrings in a single file. So, I ran `black src/transformers/generation/logits_process.py` which gave:\r\n```\r\nAll done! β¨ π° β¨\r\n1 file left unchanged.\r\n```\r\n\r\nEven `ruff src/transformers/generation/logits_process.py --fix` does nothing. Not sure what I should do to pass the failing test? I'm using `black==23.9.1` and `ruff==0.0.292` if that helps.",
"Hey! we have \"ruff>=0.0.241,<=0.0.259\"` so the version you are using is probably too far! Try downgrading or doing something like `pip install -e \".[quality]\"` and then run `make style`",
"Thanks for the guidance @ArthurZucker! The test has passed π ",
"Cool thanks for the contribution ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26784). All of your documentation changes will be reflected on that endpoint."
] | 1,697 | 1,697 | 1,697 | CONTRIBUTOR | null | This PR fixes point 3 of https://github.com/huggingface/transformers/issues/25970 by clarifying the penalty and reward cases for RepetitionPenaltyLogitsProcessor and EncoderRepetitionPenaltyLogitsProcessor within the docstrings.
PR https://github.com/huggingface/transformers/pull/26129 was the original copy, but I have accidentally deleted my repo that submitted the PR, so I cannot reopen that PR π
@gante, for your review please. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26784/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26784/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26784",
"html_url": "https://github.com/huggingface/transformers/pull/26784",
"diff_url": "https://github.com/huggingface/transformers/pull/26784.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26784.patch",
"merged_at": 1697530417000
} |
https://api.github.com/repos/huggingface/transformers/issues/26783 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26783/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26783/comments | https://api.github.com/repos/huggingface/transformers/issues/26783/events | https://github.com/huggingface/transformers/issues/26783 | 1,941,354,106 | I_kwDOCUB6oc5ztrZ6 | 26,783 | Inconsistent input signatures for gpt2-medium tensorflow | {
"login": "Spycsh",
"id": 39623753,
"node_id": "MDQ6VXNlcjM5NjIzNzUz",
"avatar_url": "https://avatars.githubusercontent.com/u/39623753?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Spycsh",
"html_url": "https://github.com/Spycsh",
"followers_url": "https://api.github.com/users/Spycsh/followers",
"following_url": "https://api.github.com/users/Spycsh/following{/other_user}",
"gists_url": "https://api.github.com/users/Spycsh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Spycsh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Spycsh/subscriptions",
"organizations_url": "https://api.github.com/users/Spycsh/orgs",
"repos_url": "https://api.github.com/users/Spycsh/repos",
"events_url": "https://api.github.com/users/Spycsh/events{/privacy}",
"received_events_url": "https://api.github.com/users/Spycsh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @Spycsh, this was caused by a refactor in our TF input signatures, where they're now autodetected in almost all cases. As a result, GPT2 got `token_type_ids` in its input signature because the model does actually support them as an input. However, it's easy to save with a different signature if required!\r\n\r\nTry this code snippet:\r\n\r\n```python\r\nimport tensorflow as tf\r\nimport transformers\r\n\r\nmodel = transformers.TFAutoModelForCausalLM.from_pretrained(\"gpt2\")\r\ndefault_signature = model.input_signature # This is just a dict\r\ndel default_signature['token_type_ids'] # Remove any keys you don't want\r\nserving_default = model.serving.get_concrete_function(default_signature)\r\n\r\nmodel.save(\"cur\", signatures={\"serving_default\": serving_default})\r\n```\r\n\r\nAnd you're done! By changing the `default_signature` dict here you can use whatever signature you want, adding or removing keys or setting custom dtypes, etc.\r\n\r\n\r\n",
"Thanks!",
"No probs! I'm going to close the issue now, but feel free to comment or reopen it if you hit any other trouble with our input sigs.",
"Hi @Rocketknight1 , can we directly update the input signatures of the model without save and load? If we save and load that model, the model will be a tensorflow.python.saved_model.load.Loader._recreate_base_user_object.<locals>._UserObject object, which will be hard to deal with in our further logics.",
"I suspect not, unfortunately! When a TF model is traced and compiled and saved as SavedModel, it's probably quite hard to do 'model surgery' on it afterwards to remove the token_type_ids. It might be easier to create a new model in Python using `TFAutoModelForCausalLM` and then maybe load the weights from the checkpoint into the Python model, and then finally save the Python model with the desired signature?",
"Thanks @Rocketknight1 , maybe the only thing from our side that we are able to do is to accept that now the signature is `signature_wrapper(*, attention_mask, input_ids, token_type_ids)`. Saving another signature to it will change the model type and is infeasible from our side. You say that:\r\n\r\n> GPT2 got token_type_ids in its input signature because the model does actually support them as an input.\r\n\r\nMy question is, how to use the current infer signature to simulate the previous invocation. IMO, we should give a feed_dict like {\"input_ids\": xxx, \"attention_mask\": xxx, \"token_type_ids\": None} to the concrete function, right? However, we cannot pass a `None` to the `token_type_ids` as a parameter of the serving function because it is not allowed to do so in TF v1.\r\n\r\nOverall, could you give me some hints on whether you think it is possible to mimic the original call that set the token_type_ids to None. More concretely, if we use the original GPT2 model forward call, we should use it like \r\n```\r\nmodel(input_ids=xxx,attention_mask=xxx, token_type_ids=None)\r\n```\r\n\r\nSo what should be the equivalent of that by using the current infer signature function `signature_wrapper(*, attention_mask, input_ids, token_type_ids)`, when TF does not support a feed_dict `token_type_ids` value to be None?\r\n\r\nHere is a real example code with latest Transformers that you can review to understand the problem, where we now have to feed token_type_ids as None to the serving function https://github.com/intel/neural-compressor/blob/master/examples/tensorflow/nlp/large_language_models/quantization/ptq/smoothquant/main.py#L82\r\n\r\nWelcome to any suggestions!",
"Hi @Spycsh, I dug into this and it's actually trickier than I thought. For most models that support `token_type_ids`, when the value isn't set then the default is just to create an all-zeros array, something like `tf.zeros_like(input_ids)`. Therefore, for those models you could easily mimic `token_type_ids=None` by instead just passing a `tf.zeros()` array.\r\n\r\nHowever, GPT-2 is an old model now, and so its implementation of token type IDs is quite odd! Specifically, the input embeddings are also used as the token type embeddings, but when `token_type_ids=None`, then token type embeddings are completely skipped. This means that **any** token_type_ids you pass will affect the embeddings, and so there is no tensor value you can pass that will yield the same results as `token_type_ids=None`.\r\n\r\nGiven this, there's not a lot you can do when a model has already been exported with `token_type_ids` in its input signature, except to load the weights from it into a new `TFGPT2Model` and then save that model with the correct signature. I can definitely see how annoying this is in the specific case of GPT-2, so I'll probably make a PR to remove `token_type_ids` from its default input signature, so at least this stops happening in future.",
"Thanks @Rocketknight1 ! I will try to verify our example on your branch:)",
"@Rocketknight1 , our example works correctly with the branch to your latest [PR](https://github.com/huggingface/transformers/pull/26962) that explicitly set the signature. Thanks and hope to see it merged!",
"@spycsh merged!"
] | 1,697 | 1,698 | 1,697 | CONTRIBUTOR | null | ### System Info
- `transformers` version: 4.33.0.dev0
- Platform: Linux-5.16.0-rc8-intel-next-01534-g53cb5f883cf7-x86_64-with-glibc2.10
- Python version: 3.8.0
- Huggingface_hub version: 0.17.1
- Safetensors version: 0.3.3
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cpu (False)
- Tensorflow version (GPU?): 2.12.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@gante @Rocketknight1
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Here are the minimal reproduction steps:
1. Using transformers 4.33.0, run the following code:
```
import tensorflow as tf
import transformers
model = transformers.TFAutoModelForCausalLM.from_pretrained("gpt2")
model.save("cur")
loaded = tf.saved_model.load("cur")
print(loaded.signatures["serving_default"])
```
And you can see the input signature as:
```
ConcreteFunction signature_wrapper(*, attention_mask, input_ids, token_type_ids)
```
2. Reinstall the old transformers version 4.29.2 and run nearly the same code (just change the name 'cur' to 'prev' here):
```
import tensorflow as tf
import transformers
model = transformers.TFAutoModelForCausalLM.from_pretrained("gpt2")
model.save("prev")
loaded = tf.saved_model.load("prev")
print(loaded.signatures["serving_default"])
```
you will get
```
ConcreteFunction signature_wrapper(*, attention_mask, input_ids)
```
Could you give me some hints as to why the latest version includes `token_type_ids` in the input signature, and how to get rid of it? I tried passing None for that parameter in the feed_dict, but it did not work.
### Expected behavior
I expect the input signature to be consistent with the old version (4.29.2), where only input_ids and attention_mask are needed.
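(For completeness, this is how I am inspecting the expected inputs of the exported signature, reusing the `cur` directory saved above; it is just a sketch of my debugging, not part of the repro itself.)
```python
import tensorflow as tf

loaded = tf.saved_model.load("cur")
fn = loaded.signatures["serving_default"]
# structured_input_signature lists the keyword arguments the signature expects
print(fn.structured_input_signature)
print(fn.structured_outputs)
```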
"url": "https://api.github.com/repos/huggingface/transformers/issues/26783/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26783/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26782 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26782/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26782/comments | https://api.github.com/repos/huggingface/transformers/issues/26782/events | https://github.com/huggingface/transformers/pull/26782 | 1,941,348,984 | PR_kwDOCUB6oc5ctQlH | 26,782 | [docstring] Fix docstring for `RwkvConfig` | {
"login": "Bojun-Feng",
"id": 102875484,
"node_id": "U_kgDOBiHBXA",
"avatar_url": "https://avatars.githubusercontent.com/u/102875484?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bojun-Feng",
"html_url": "https://github.com/Bojun-Feng",
"followers_url": "https://api.github.com/users/Bojun-Feng/followers",
"following_url": "https://api.github.com/users/Bojun-Feng/following{/other_user}",
"gists_url": "https://api.github.com/users/Bojun-Feng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bojun-Feng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bojun-Feng/subscriptions",
"organizations_url": "https://api.github.com/users/Bojun-Feng/orgs",
"repos_url": "https://api.github.com/users/Bojun-Feng/repos",
"events_url": "https://api.github.com/users/Bojun-Feng/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bojun-Feng/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> Spuer clean, thank you @Bojun-Feng again!\r\n> \r\n> (Could you remove the Draft status - I think it's ready to be merged?)\r\n\r\nOf course!",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26782). All of your documentation changes will be reflected on that endpoint."
] | 1,697 | 1,697 | 1,697 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #26638
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26782/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26782/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26782",
"html_url": "https://github.com/huggingface/transformers/pull/26782",
"diff_url": "https://github.com/huggingface/transformers/pull/26782.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26782.patch",
"merged_at": 1697185231000
} |
https://api.github.com/repos/huggingface/transformers/issues/26781 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26781/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26781/comments | https://api.github.com/repos/huggingface/transformers/issues/26781/events | https://github.com/huggingface/transformers/issues/26781 | 1,941,261,569 | I_kwDOCUB6oc5ztU0B | 26,781 | MaskCLIP | {
"login": "sushmanthreddy",
"id": 73489688,
"node_id": "MDQ6VXNlcjczNDg5Njg4",
"avatar_url": "https://avatars.githubusercontent.com/u/73489688?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sushmanthreddy",
"html_url": "https://github.com/sushmanthreddy",
"followers_url": "https://api.github.com/users/sushmanthreddy/followers",
"following_url": "https://api.github.com/users/sushmanthreddy/following{/other_user}",
"gists_url": "https://api.github.com/users/sushmanthreddy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sushmanthreddy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sushmanthreddy/subscriptions",
"organizations_url": "https://api.github.com/users/sushmanthreddy/orgs",
"repos_url": "https://api.github.com/users/sushmanthreddy/repos",
"events_url": "https://api.github.com/users/sushmanthreddy/events{/privacy}",
"received_events_url": "https://api.github.com/users/sushmanthreddy/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | open | false | null | [] | [
"I would like to work on this issue ."
] | 1,697 | 1,697 | null | CONTRIBUTOR | null | ### Model description
MaskCLIP represents a transformative step in the realm of open-vocabulary universal image segmentation. Built upon the robust foundation of pre-trained CLIP models, it negates the need for additional finetuning or distillation. The core of MaskCLIP is its innovative Transformer-based MaskCLIP Visual Encoder. This encoder is meticulously designed to integrate mask tokens with a pre-trained ViT CLIP model, making it adept at both semantic and instance segmentation, as well as class prediction. One of MaskCLIP's standout features is its ability to efficiently harness the power of pre-trained dense and local CLIP features within its Visual Encoder. This design choice not only streamlines the segmentation process but also sidesteps the traditionally lengthy student-teacher training phase. Demonstrating its prowess, MaskCLIP has consistently outperformed existing methods on renowned datasets like ADE20K and PASCAL, especially in tasks of semantic, instance, and panoptic segmentation.
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
Project page: https://maskclip.github.io
GitHub: https://github.com/mlpc-ucsd/MaskCLIP
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26781/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26781/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26780 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26780/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26780/comments | https://api.github.com/repos/huggingface/transformers/issues/26780/events | https://github.com/huggingface/transformers/issues/26780 | 1,941,232,771 | I_kwDOCUB6oc5ztNyD | 26,780 | CoaT | {
"login": "sushmanthreddy",
"id": 73489688,
"node_id": "MDQ6VXNlcjczNDg5Njg4",
"avatar_url": "https://avatars.githubusercontent.com/u/73489688?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sushmanthreddy",
"html_url": "https://github.com/sushmanthreddy",
"followers_url": "https://api.github.com/users/sushmanthreddy/followers",
"following_url": "https://api.github.com/users/sushmanthreddy/following{/other_user}",
"gists_url": "https://api.github.com/users/sushmanthreddy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sushmanthreddy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sushmanthreddy/subscriptions",
"organizations_url": "https://api.github.com/users/sushmanthreddy/orgs",
"repos_url": "https://api.github.com/users/sushmanthreddy/repos",
"events_url": "https://api.github.com/users/sushmanthreddy/events{/privacy}",
"received_events_url": "https://api.github.com/users/sushmanthreddy/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | [] | 1,697 | 1,697 | 1,697 | CONTRIBUTOR | null | ### Model description
Co-scale conv-attentional image Transformers (CoaT) is a Transformer-based image classifier equipped with co-scale and conv-attentional mechanisms. First, the co-scale mechanism maintains the integrity of Transformers' encoder branches at individual scales, while allowing representations learned at different scales to effectively communicate with each other; we design a series of serial and parallel blocks to realize the co-scale mechanism. Second, we devise a conv-attentional mechanism by realizing a relative position embedding formulation in the factorized attention module with an efficient convolution-like implementation. CoaT empowers image Transformers with enriched multi-scale and contextual modeling capabilities. On ImageNet, relatively small CoaT models attain superior classification results compared with similar-sized convolutional neural networks and image/vision Transformers.
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
https://arxiv.org/pdf/2104.06399.pdf
https://github.com/mlpc-ucsd/CoaT | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26780/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26780/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26779 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26779/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26779/comments | https://api.github.com/repos/huggingface/transformers/issues/26779/events | https://github.com/huggingface/transformers/pull/26779 | 1,941,171,174 | PR_kwDOCUB6oc5csqMr | 26,779 | Update expect outputs of `IdeficsProcessorTest.test_tokenizer_padding` | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,697 | 1,697 | 1,697 | COLLABORATOR | null | # What does this PR do?
There is a change on this Hub repo.
https://huggingface.co/HuggingFaceM4/tiny-random-idefics/commit/cb18d7776a2e725eb20a2b9f8addf0991b9194b6
that is used for testing.
cc @leot13: you and @ArthurZucker know this better than I do.
"url": "https://api.github.com/repos/huggingface/transformers/issues/26779/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26779/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26779",
"html_url": "https://github.com/huggingface/transformers/pull/26779",
"diff_url": "https://github.com/huggingface/transformers/pull/26779.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26779.patch",
"merged_at": 1697183531000
} |
https://api.github.com/repos/huggingface/transformers/issues/26778 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26778/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26778/comments | https://api.github.com/repos/huggingface/transformers/issues/26778/events | https://github.com/huggingface/transformers/issues/26778 | 1,941,168,833 | I_kwDOCUB6oc5zs-LB | 26,778 | Many things are not importable in an environment with flash attention v1 installed | {
"login": "dakinggg",
"id": 43149077,
"node_id": "MDQ6VXNlcjQzMTQ5MDc3",
"avatar_url": "https://avatars.githubusercontent.com/u/43149077?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dakinggg",
"html_url": "https://github.com/dakinggg",
"followers_url": "https://api.github.com/users/dakinggg/followers",
"following_url": "https://api.github.com/users/dakinggg/following{/other_user}",
"gists_url": "https://api.github.com/users/dakinggg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dakinggg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dakinggg/subscriptions",
"organizations_url": "https://api.github.com/users/dakinggg/orgs",
"repos_url": "https://api.github.com/users/dakinggg/repos",
"events_url": "https://api.github.com/users/dakinggg/events{/privacy}",
"received_events_url": "https://api.github.com/users/dakinggg/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @dakinggg \r\nThanks for reporting, https://github.com/huggingface/transformers/pull/26785 should hopefully resolve the issue"
] | 1,697 | 1,697 | 1,697 | CONTRIBUTOR | null | ### System Info
- `transformers` version: 4.34.0
- Platform: Linux-5.4.0-148-generic-x86_64-with-glibc2.31
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- Accelerate version: 0.21.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
and
```
Name: flash-attn
Version: 1.0.9
Summary: Flash Attention: Fast and Memory-Efficient Exact Attention
Home-page: https://github.com/HazyResearch/flash-attention
Author: Tri Dao
Author-email: [email protected]
License: UNKNOWN
Location: /mnt/workdisk/danielking/miniconda3/envs/foundry-3.10/lib/python3.10/site-packages
Requires: einops, ninja, packaging, torch
Required-by:
```
### Who can help?
@younesbelkada
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
In [1]: from transformers.models.auto.modeling_auto import MODEL_FOR_CAUSAL_LM_MAPPING
In [3]: lll = list(MODEL_FOR_CAUSAL_LM_MAPPING.values())
```
or
```
mistral = transformers.AutoModelForCausalLM.from_pretrained('mistralai/Mistral-7B-v0.1')
```
result in
```
File /mnt/workdisk/danielking/miniconda3/envs/foundry-3.10/lib/python3.10/site-packages/transformers/utils/import_utils.py:1284, in _LazyModule._get_module(self, module_name)
1282 return importlib.import_module("." + module_name, self.__name__)
1283 except Exception as e:
-> 1284 raise RuntimeError(
1285 f"Failed to import {self.__name__}.{module_name} because of the following error (look up to see its"
1286 f" traceback):\n{e}"
1287 ) from e
RuntimeError: Failed to import transformers.models.mistral.modeling_mistral because of the following error (look up to see its traceback):
cannot import name 'flash_attn_func' from 'flash_attn' (/mnt/workdisk/danielking/miniconda3/envs/foundry-3.10/lib/python3.10/site-packages/flash_attn/__init__.py)
```
### Expected behavior
I should still be able to import things when i have flash attention v1 installed, even if i can't make use of it directly in transformers. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26778/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26778/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26777 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26777/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26777/comments | https://api.github.com/repos/huggingface/transformers/issues/26777/events | https://github.com/huggingface/transformers/issues/26777 | 1,941,157,458 | I_kwDOCUB6oc5zs7ZS | 26,777 | Custom tokenizer no longer compatible with transformers 4.34 | {
"login": "dakinggg",
"id": 43149077,
"node_id": "MDQ6VXNlcjQzMTQ5MDc3",
"avatar_url": "https://avatars.githubusercontent.com/u/43149077?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dakinggg",
"html_url": "https://github.com/dakinggg",
"followers_url": "https://api.github.com/users/dakinggg/followers",
"following_url": "https://api.github.com/users/dakinggg/following{/other_user}",
"gists_url": "https://api.github.com/users/dakinggg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dakinggg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dakinggg/subscriptions",
"organizations_url": "https://api.github.com/users/dakinggg/orgs",
"repos_url": "https://api.github.com/users/dakinggg/repos",
"events_url": "https://api.github.com/users/dakinggg/events{/privacy}",
"received_events_url": "https://api.github.com/users/dakinggg/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"There's pretty much nothing we can do here, this custom model needs to be updated. The break is expected, mostly put the call to super.__init__() at the end of the __init__",
"Yeah understood this is a custom tokenizer. Thanks for the fix suggestion! Passing along to replit.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,697 | 1,700 | 1,700 | CONTRIBUTOR | null | ### System Info
- `transformers` version: 4.34.0
- Platform: macOS-13.5-arm64-arm-64bit
- Python version: 3.10.12
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.4.0
- Accelerate version: 0.20.3
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
In [1]: import transformers
In [2]: replitt = transformers.AutoTokenizer.from_pretrained('replit/replit-code-v1-3b', trust_remote_code=True)
...
File ~/.cache/huggingface/modules/transformers_modules/replit/replit-code-v1-3b/cc0a4f17a8d72b71d62ea53cb0e23e4dac352067/replit_lm_tokenizer.py:76, in ReplitLMTokenizer.get_vocab(self)
75 def get_vocab(self):
---> 76 vocab = {self.convert_ids_to_tokens(i): i for i in range(self.vocab_size)}
77 vocab.update(self.added_tokens_encoder)
78 return vocab
File ~/.cache/huggingface/modules/transformers_modules/replit/replit-code-v1-3b/cc0a4f17a8d72b71d62ea53cb0e23e4dac352067/replit_lm_tokenizer.py:73, in ReplitLMTokenizer.vocab_size(self)
71 @property
72 def vocab_size(self):
---> 73 return self.sp_model.get_piece_size()
AttributeError: 'ReplitLMTokenizer' object has no attribute 'sp_model'
### Expected behavior
I don't actually know if it is expected to work or not, but maybe you can advise on the fix. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26777/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26777/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26776 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26776/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26776/comments | https://api.github.com/repos/huggingface/transformers/issues/26776/events | https://github.com/huggingface/transformers/issues/26776 | 1,941,155,203 | I_kwDOCUB6oc5zs62D | 26,776 | Cannot save Adafactor optimizer when using Trainer with accelerate and fsdp | {
"login": "scinerd68",
"id": 62432739,
"node_id": "MDQ6VXNlcjYyNDMyNzM5",
"avatar_url": "https://avatars.githubusercontent.com/u/62432739?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/scinerd68",
"html_url": "https://github.com/scinerd68",
"followers_url": "https://api.github.com/users/scinerd68/followers",
"following_url": "https://api.github.com/users/scinerd68/following{/other_user}",
"gists_url": "https://api.github.com/users/scinerd68/gists{/gist_id}",
"starred_url": "https://api.github.com/users/scinerd68/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/scinerd68/subscriptions",
"organizations_url": "https://api.github.com/users/scinerd68/orgs",
"repos_url": "https://api.github.com/users/scinerd68/repos",
"events_url": "https://api.github.com/users/scinerd68/events{/privacy}",
"received_events_url": "https://api.github.com/users/scinerd68/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"It seems that this will only be an issue when `fsdp_use_orig_params` is set to `true`, I was able to run the script when it is set to `false`"
] | 1,697 | 1,697 | 1,697 | NONE | null | ### System Info
version:
pytorch==2.0.1
transformers==4.34.0
accelerate==0.23.0
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
# Import stuff
...
# Arguments
args = TrainingArguments(
output_dir="/checkpoints/test_train_llm", # set to "/checkpoints/train_llm" when training
per_device_train_batch_size=4,
logging_steps=1, # Set to 50 when training
max_steps=4, # Set to 10_000 when training
gradient_accumulation_steps=8,
weight_decay=0.1,
warmup_ratio = 0.01,
lr_scheduler_type="cosine",
learning_rate=1e-5,
save_steps=2, # Set to 500 when training
fp16=True,
push_to_hub=False,
gradient_checkpointing=True,
optim="adafactor",
save_total_limit=5
)
model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name,
local_files_only=False,
cache_dir='/checkpoints')
model = AutoModelForCausalLM.from_pretrained(model_name,
local_files_only=False,
cache_dir='/checkpoints')
# Load and prepare dataset
lm_datasets = ...
# Train
trainer = Trainer(
model=model,
args=args,
train_dataset=lm_datasets,
data_collator=default_data_collator
)
result = trainer.train()
```
It seems that the Adafactor optimizer state cannot be saved:
```python
Traceback (most recent call last):
File "/app/main3.py", line 214, in <module>
main()
File "/app/main3.py", line 209, in main
result = trainer.train()
File "/opt/conda/lib/python3.10/site-packages/transformers/trainer.py", line 1591, in train
return inner_training_loop(
File "/opt/conda/lib/python3.10/site-packages/transformers/trainer.py", line 1984, in _inner_training_loop
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "/opt/conda/lib/python3.10/site-packages/transformers/trainer.py", line 2339, in _maybe_log_save_evaluate
self._save_checkpoint(model, trial, metrics=metrics)
File "/opt/conda/lib/python3.10/site-packages/transformers/trainer.py", line 2408, in _save_checkpoint
save_fsdp_optimizer(
File "/opt/conda/lib/python3.10/site-packages/accelerate/utils/fsdp_utils.py", line 138, in save_fsdp_optimizer
optim_state = FSDP.optim_state_dict(model, optimizer)
File "/opt/conda/lib/python3.10/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 1753, in optim_state_dict
return FullyShardedDataParallel._optim_state_dict_impl(
File "/opt/conda/lib/python3.10/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 1154, in _optim_state_dict_impl
return _optim_state_dict(
File "/opt/conda/lib/python3.10/site-packages/torch/distributed/fsdp/_optim_utils.py", line 1455, in _optim_state_dict
_gather_orig_param_state(
File "/opt/conda/lib/python3.10/site-packages/torch/distributed/fsdp/_optim_utils.py", line 1690, in _gather_orig_param_state
gathered_state = _all_gather_optim_state(fsdp_state, optim_state)
File "/opt/conda/lib/python3.10/site-packages/torch/distributed/fsdp/_optim_utils.py", line 1637, in _all_gather_optim_state
for name, non_tensor_value in object_state.non_tensors.items():
AttributeError: 'int' object has no attribute 'items'
```
My config file to run with `accelerate launch`:
```
compute_environment: LOCAL_MACHINE
debug: false
distributed_type: FSDP
downcast_bf16: 'no'
fsdp_config:
fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
fsdp_backward_prefetch_policy: BACKWARD_PRE
fsdp_forward_prefetch: false
fsdp_offload_params: false
fsdp_sharding_strategy: 1
fsdp_state_dict_type: FULL_STATE_DICT
fsdp_sync_module_states: true
fsdp_use_orig_params: true
machine_rank: 0
main_training_function: main
mixed_precision: fp16
num_machines: 1
num_processes: 2
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```
### Expected behavior
The optimizer is saved successfully and training of the model continues | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26776/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26776/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26775 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26775/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26775/comments | https://api.github.com/repos/huggingface/transformers/issues/26775/events | https://github.com/huggingface/transformers/issues/26775 | 1,941,145,107 | I_kwDOCUB6oc5zs4YT | 26,775 | bos_token does not persist through save_pretrained in 4.34 | {
"login": "dakinggg",
"id": 43149077,
"node_id": "MDQ6VXNlcjQzMTQ5MDc3",
"avatar_url": "https://avatars.githubusercontent.com/u/43149077?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dakinggg",
"html_url": "https://github.com/dakinggg",
"followers_url": "https://api.github.com/users/dakinggg/followers",
"following_url": "https://api.github.com/users/dakinggg/following{/other_user}",
"gists_url": "https://api.github.com/users/dakinggg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dakinggg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dakinggg/subscriptions",
"organizations_url": "https://api.github.com/users/dakinggg/orgs",
"repos_url": "https://api.github.com/users/dakinggg/repos",
"events_url": "https://api.github.com/users/dakinggg/events{/privacy}",
"received_events_url": "https://api.github.com/users/dakinggg/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks will be fixed by #26570 "
] | 1,697 | 1,697 | 1,697 | CONTRIBUTOR | null | ### System Info
- `transformers` version: 4.34.0
- Platform: macOS-13.5-arm64-arm-64bit
- Python version: 3.10.12
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.4.0
- Accelerate version: 0.20.3
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
In [1]: import transformers
In [2]: t0tt = transformers.AutoTokenizer.from_pretrained('bigscience/T0pp')
You are using the default legacy behaviour of the <class 'transformers.models.t5.tokenization_t5.T5Tokenizer'>. This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thouroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565
In [3]: t0tt.add_special_tokens({'bos_token': '[NEWSPECIAL]'})
Out[3]: 1
In [4]: t0tt.save_pretrained('saved-tokenizer')
Out[4]:
('saved-tokenizer/tokenizer_config.json',
'saved-tokenizer/special_tokens_map.json',
'saved-tokenizer/spiece.model',
'saved-tokenizer/added_tokens.json',
'saved-tokenizer/tokenizer.json')
In [5]: loaded_t0tt = transformers.AutoTokenizer.from_pretrained('saved-tokenizer')
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
In [6]: t0tt.bos_token
Out[6]: '[NEWSPECIAL]'
In [7]: loaded_t0tt.bos_token
Using bos_token, but it is not set yet.
```
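A possible stop-gap until the fix referenced in the comments lands (my own suggestion, not something confirmed by the maintainers): pass the token again when reloading, since `from_pretrained` forwards extra keyword arguments to the tokenizer's constructor.
```python
import transformers

loaded_t0tt = transformers.AutoTokenizer.from_pretrained(
    'saved-tokenizer', bos_token='[NEWSPECIAL]'
)
print(loaded_t0tt.bos_token)  # expected to print '[NEWSPECIAL]' again
```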
### Expected behavior
Expected that an added bos_token persists when saving and then reloading | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26775/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26775/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26774 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26774/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26774/comments | https://api.github.com/repos/huggingface/transformers/issues/26774/events | https://github.com/huggingface/transformers/pull/26774 | 1,941,140,150 | PR_kwDOCUB6oc5csjkL | 26,774 | deprecate function `get_default_device` in `tools/base.py` | {
"login": "statelesshz",
"id": 28150734,
"node_id": "MDQ6VXNlcjI4MTUwNzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/28150734?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/statelesshz",
"html_url": "https://github.com/statelesshz",
"followers_url": "https://api.github.com/users/statelesshz/followers",
"following_url": "https://api.github.com/users/statelesshz/following{/other_user}",
"gists_url": "https://api.github.com/users/statelesshz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/statelesshz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/statelesshz/subscriptions",
"organizations_url": "https://api.github.com/users/statelesshz/orgs",
"repos_url": "https://api.github.com/users/statelesshz/repos",
"events_url": "https://api.github.com/users/statelesshz/events{/privacy}",
"received_events_url": "https://api.github.com/users/statelesshz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26774). All of your documentation changes will be reflected on that endpoint.",
"this PR is ready to be merged :) @muellerzr @ArthurZucker ",
"cc @amyeroberts "
] | 1,697 | 1,698 | 1,698 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
As per title.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26774/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26774/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26774",
"html_url": "https://github.com/huggingface/transformers/pull/26774",
"diff_url": "https://github.com/huggingface/transformers/pull/26774.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26774.patch",
"merged_at": 1698743740000
} |
https://api.github.com/repos/huggingface/transformers/issues/26773 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26773/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26773/comments | https://api.github.com/repos/huggingface/transformers/issues/26773/events | https://github.com/huggingface/transformers/issues/26773 | 1,941,136,116 | I_kwDOCUB6oc5zs2L0 | 26,773 | Saving and loading a tokenizer does not produce an identical tokenizer in 4.34 | {
"login": "dakinggg",
"id": 43149077,
"node_id": "MDQ6VXNlcjQzMTQ5MDc3",
"avatar_url": "https://avatars.githubusercontent.com/u/43149077?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dakinggg",
"html_url": "https://github.com/dakinggg",
"followers_url": "https://api.github.com/users/dakinggg/followers",
"following_url": "https://api.github.com/users/dakinggg/following{/other_user}",
"gists_url": "https://api.github.com/users/dakinggg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dakinggg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dakinggg/subscriptions",
"organizations_url": "https://api.github.com/users/dakinggg/orgs",
"repos_url": "https://api.github.com/users/dakinggg/repos",
"events_url": "https://api.github.com/users/dakinggg/events{/privacy}",
"received_events_url": "https://api.github.com/users/dakinggg/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@ArthurZucker ",
"Hey! Also fixed in #26570 THanks for reporting! "
] | 1,697 | 1,697 | 1,697 | CONTRIBUTOR | null | ### System Info
- `transformers` version: 4.34.0
- Platform: macOS-13.5-arm64-arm-64bit
- Python version: 3.10.12
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.4.0
- Accelerate version: 0.20.3
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
In [1]: import transformers
In [2]: t0tt = transformers.AutoTokenizer.from_pretrained('bigscience/T0pp')
You are using the default legacy behaviour of the <class 'transformers.models.t5.tokenization_t5.T5Tokenizer'>. This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thouroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565
In [3]: t0tt.save_pretrained('saved-tokenizer')
Out[3]:
('saved-tokenizer/tokenizer_config.json',
'saved-tokenizer/special_tokens_map.json',
'saved-tokenizer/spiece.model',
'saved-tokenizer/added_tokens.json',
'saved-tokenizer/tokenizer.json')
In [4]: loaded_t0tt = transformers.AutoTokenizer.from_pretrained('saved-tokenizer')
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
In [6]: t0tt._eos_token
Out[6]: AddedToken("</s>", rstrip=True, lstrip=True, single_word=False, normalized=True, special=True)
In [7]: loaded_t0tt._eos_token
Out[7]: AddedToken("</s>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True)
In [8]: t0tt.eos_token
Out[8]: '</s>'
In [9]: t0tt('hello </s> goodbye')
Out[9]: {'input_ids': [21820, 1, 23281, 1], 'attention_mask': [1, 1, 1, 1]}
In [10]: loaded_t0tt('hello </s> goodbye')
Out[10]: {'input_ids': [21820, 3, 1, 23281, 1], 'attention_mask': [1, 1, 1, 1, 1]}
```
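A possible interim workaround (my own suggestion, not from the maintainers) is to re-specify the token with the intended flags when reloading; whether this restores the exact original tokenization is not verified here.
```python
import transformers
from transformers import AddedToken

loaded_t0tt = transformers.AutoTokenizer.from_pretrained(
    'saved-tokenizer',
    eos_token=AddedToken("</s>", rstrip=True, lstrip=True, normalized=True),
)
print(loaded_t0tt('hello </s> goodbye'))  # intended to match the original tokenizer's output above
```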
### Expected behavior
When saving and loading a tokenizer, it
(1) behaves the same
(2) has the same config details on the AddedToken | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26773/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26773/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26772 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26772/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26772/comments | https://api.github.com/repos/huggingface/transformers/issues/26772/events | https://github.com/huggingface/transformers/issues/26772 | 1,941,123,386 | I_kwDOCUB6oc5zszE6 | 26,772 | Adding new special tokens does not seem to work properly in 4.34 | {
"login": "dakinggg",
"id": 43149077,
"node_id": "MDQ6VXNlcjQzMTQ5MDc3",
"avatar_url": "https://avatars.githubusercontent.com/u/43149077?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dakinggg",
"html_url": "https://github.com/dakinggg",
"followers_url": "https://api.github.com/users/dakinggg/followers",
"following_url": "https://api.github.com/users/dakinggg/following{/other_user}",
"gists_url": "https://api.github.com/users/dakinggg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dakinggg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dakinggg/subscriptions",
"organizations_url": "https://api.github.com/users/dakinggg/orgs",
"repos_url": "https://api.github.com/users/dakinggg/repos",
"events_url": "https://api.github.com/users/dakinggg/events{/privacy}",
"received_events_url": "https://api.github.com/users/dakinggg/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@ArthurZucker ",
"Hey, this is expected for a fast tokenizer because you are not changing the `processor` . You need to run `llamat.update_post_processor()` before to make sure this is taken into account. \r\nInitializing the tokenizer would be the best way to have the expected results for fast tokenizers\r\n",
"Thanks, some clarifications:\r\n(1) Is it expected that this behavior changed in 4.34? Its a bit scary of a behavior to silently change and could lead to very tricky bugs in peoples' workflows.\r\n(2) Is the recommendation that I always call `update_post_processor` after adding new special tokens?",
"1. It's not necessarily from 4.34. I tested with 4.33 and this does not work either. \r\n2. Fast tokenizers as supposed to be stateless, and we usually don't support post init updates. My recommendation is rather to do everything at init time rather than post init for fast tokenizers π ",
"Huh, I wonder why my unit test was passing before π
but anyway, makes sense, thank you!"
] | 1,697 | 1,697 | 1,697 | CONTRIBUTOR | null | ### System Info
- `transformers` version: 4.34.0
- Platform: macOS-13.5-arm64-arm-64bit
- Python version: 3.10.12
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.4.0
- Accelerate version: 0.20.3
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
In [10]: import transformers
In [11]: llamat = transformers.AutoTokenizer.from_pretrained('meta-llama/Llama-2-7b-hf')
In [12]: llamat('hello')
Out[12]: {'input_ids': [1, 22172], 'attention_mask': [1, 1]}
In [13]: llamat.add_special_tokens({'bos_token': '[NEWSPECIAL]'})
Out[13]: 1
In [14]: llamat('hello')
Out[14]: {'input_ids': [1, 22172], 'attention_mask': [1, 1]}
```
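For reference, a sketch of the workaround described in the comments above for the fast (Rust-backed) tokenizer: either rebuild the post-processor after adding the token, or pass the token at init time (the approach the maintainer prefers).
```python
import transformers

tokenizer = transformers.AutoTokenizer.from_pretrained('meta-llama/Llama-2-7b-hf')
tokenizer.add_special_tokens({'bos_token': '[NEWSPECIAL]'})
tokenizer.update_post_processor()  # rebuild the template processor so the new BOS id is used
print(tokenizer('hello'))

# Alternative, preferred per the comments: set everything at init time.
tokenizer = transformers.AutoTokenizer.from_pretrained(
    'meta-llama/Llama-2-7b-hf', bos_token='[NEWSPECIAL]'
)
```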
### Expected behavior
Expected that the new bos token is used instead of the old one. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26772/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26772/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26771 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26771/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26771/comments | https://api.github.com/repos/huggingface/transformers/issues/26771/events | https://github.com/huggingface/transformers/pull/26771 | 1,940,792,024 | PR_kwDOCUB6oc5crWYJ | 26,771 | [docstring] Fix docstring for CanineConfig | {
"login": "Sparty",
"id": 3923604,
"node_id": "MDQ6VXNlcjM5MjM2MDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/3923604?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Sparty",
"html_url": "https://github.com/Sparty",
"followers_url": "https://api.github.com/users/Sparty/followers",
"following_url": "https://api.github.com/users/Sparty/following{/other_user}",
"gists_url": "https://api.github.com/users/Sparty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Sparty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sparty/subscriptions",
"organizations_url": "https://api.github.com/users/Sparty/orgs",
"repos_url": "https://api.github.com/users/Sparty/repos",
"events_url": "https://api.github.com/users/Sparty/events{/privacy}",
"received_events_url": "https://api.github.com/users/Sparty/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Nice it works for you now @Sparty . One minor change and we are ready to go π ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26771). All of your documentation changes will be reflected on that endpoint."
] | 1,697 | 1,697 | 1,697 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes https://github.com/huggingface/transformers/issues/26638
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26771/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26771/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26771",
"html_url": "https://github.com/huggingface/transformers/pull/26771",
"diff_url": "https://github.com/huggingface/transformers/pull/26771.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26771.patch",
"merged_at": 1697443724000
} |
https://api.github.com/repos/huggingface/transformers/issues/26770 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26770/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26770/comments | https://api.github.com/repos/huggingface/transformers/issues/26770/events | https://github.com/huggingface/transformers/pull/26770 | 1,940,648,601 | PR_kwDOCUB6oc5cq2nI | 26,770 | Fix Falcon generation test | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,697 | 1,697 | 1,697 | MEMBER | null | One of the Falcon tests was broken by the shift to using in-library checkpoints. The reason is that the Falcon tokenizer now correctly doesn't return `token_type_ids`, since Falcon doesn't use that input. The test was discarding that key, but this is no longer necessary. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26770/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26770/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26770",
"html_url": "https://github.com/huggingface/transformers/pull/26770",
"diff_url": "https://github.com/huggingface/transformers/pull/26770.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26770.patch",
"merged_at": 1697206227000
} |
https://api.github.com/repos/huggingface/transformers/issues/26769 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26769/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26769/comments | https://api.github.com/repos/huggingface/transformers/issues/26769/events | https://github.com/huggingface/transformers/pull/26769 | 1,940,585,682 | PR_kwDOCUB6oc5cqodY | 26,769 | translation brazilian portuguese | {
"login": "alvarorichard",
"id": 88117897,
"node_id": "MDQ6VXNlcjg4MTE3ODk3",
"avatar_url": "https://avatars.githubusercontent.com/u/88117897?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alvarorichard",
"html_url": "https://github.com/alvarorichard",
"followers_url": "https://api.github.com/users/alvarorichard/followers",
"following_url": "https://api.github.com/users/alvarorichard/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarorichard/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alvarorichard/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarorichard/subscriptions",
"organizations_url": "https://api.github.com/users/alvarorichard/orgs",
"repos_url": "https://api.github.com/users/alvarorichard/repos",
"events_url": "https://api.github.com/users/alvarorichard/events{/privacy}",
"received_events_url": "https://api.github.com/users/alvarorichard/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I have already applied the suggested changes",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26769). All of your documentation changes will be reflected on that endpoint."
] | 1,697 | 1,697 | 1,697 | CONTRIBUTOR | null | Hello, I would like to add the Brazilian Portuguese translation of the README.md. I translated it as faithfully to the original and as comprehensibly as possible. I only did the translation, without adding anything else. I kindly ask you to consider this contribution and review it for approval.
@stevhliu and @MKhalusova | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26769/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26769/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26769",
"html_url": "https://github.com/huggingface/transformers/pull/26769",
"diff_url": "https://github.com/huggingface/transformers/pull/26769.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26769.patch",
"merged_at": 1697220828000
} |
https://api.github.com/repos/huggingface/transformers/issues/26768 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26768/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26768/comments | https://api.github.com/repos/huggingface/transformers/issues/26768/events | https://github.com/huggingface/transformers/issues/26768 | 1,940,576,360 | I_kwDOCUB6oc5zqtho | 26,768 | special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained. | {
"login": "twers1",
"id": 102418053,
"node_id": "U_kgDOBhrGhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/102418053?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/twers1",
"html_url": "https://github.com/twers1",
"followers_url": "https://api.github.com/users/twers1/followers",
"following_url": "https://api.github.com/users/twers1/following{/other_user}",
"gists_url": "https://api.github.com/users/twers1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/twers1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/twers1/subscriptions",
"organizations_url": "https://api.github.com/users/twers1/orgs",
"repos_url": "https://api.github.com/users/twers1/repos",
"events_url": "https://api.github.com/users/twers1/events{/privacy}",
"received_events_url": "https://api.github.com/users/twers1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Actually this should not be trigger as the tokens that are added are added at the beginning of the vocab. Thanks for reporting!",
"> Actually this should not be trigger as the tokens that are added are added at the beginning of the vocab. Thanks for reporting!\r\n\r\nhow can I get an answer from artificial intelligence then? he sends me this and doesn't finish the command \"python openai.py\"",
"It will be fixed by #26570 ! Otherwise it's just a warning should not impact your code "
] | 1,697 | 1,697 | 1,697 | NONE | null | 
I have the following Python code:
```
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
device = 'cpu'
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
model.to('cpu')
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
vegeterian_recipe_prompt = """### Instruction: Act as a gourmet chef.
I have a friend coming over who is a vegetarian.
I want to impress my friend with a special vegetarian dish.
What do you recommend?
Give me two options, along with the whole recipe for each.
### Answer:
"""
encoded_instruction = tokenizer(vegeterian_recipe_prompt, return_tensors="pt", add_special_tokens=True)
model_inputs = encoded_instruction.to(device)
generated_ids = model.generate(**model_inputs, max_new_tokens=500, do_sample=True, pad_token_id=tokenizer.eos_token_id)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26768/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26768/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26767 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26767/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26767/comments | https://api.github.com/repos/huggingface/transformers/issues/26767/events | https://github.com/huggingface/transformers/issues/26767 | 1,940,508,868 | I_kwDOCUB6oc5zqdDE | 26,767 | `QuantAct` in `IntSoftmax` receives integer value | {
"login": "mbekmyrz",
"id": 46756692,
"node_id": "MDQ6VXNlcjQ2NzU2Njky",
"avatar_url": "https://avatars.githubusercontent.com/u/46756692?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mbekmyrz",
"html_url": "https://github.com/mbekmyrz",
"followers_url": "https://api.github.com/users/mbekmyrz/followers",
"following_url": "https://api.github.com/users/mbekmyrz/following{/other_user}",
"gists_url": "https://api.github.com/users/mbekmyrz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mbekmyrz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mbekmyrz/subscriptions",
"organizations_url": "https://api.github.com/users/mbekmyrz/orgs",
"repos_url": "https://api.github.com/users/mbekmyrz/repos",
"events_url": "https://api.github.com/users/mbekmyrz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mbekmyrz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey can I contribute to this issue can you tell me what all techstack I need and how should I start",
"Sure! Feel free to open a PR for a fix! ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,697 | 1,700 | 1,700 | NONE | null | **Problem/Question:**
`IntSoftmax` class has `QuantAct`:
https://github.com/huggingface/transformers/blob/fc63914399b6f60512c720959f9182b02ae4a45c/src/transformers/models/ibert/quant_modules.py#L379-L380
and it is called in `IntSoftmax.forward()` by passing `exp_int`, which is an integer:
https://github.com/huggingface/transformers/blob/fc63914399b6f60512c720959f9182b02ae4a45c/src/transformers/models/ibert/quant_modules.py#L418
and then within `QuantAct.forward` it is passed to `FixedPointMul` where the integer value is rescaled again to find 'integer' value:
https://github.com/huggingface/transformers/blob/fc63914399b6f60512c720959f9182b02ae4a45c/src/transformers/models/ibert/quant_modules.py#L206-L213
https://github.com/huggingface/transformers/blob/fc63914399b6f60512c720959f9182b02ae4a45c/src/transformers/models/ibert/quant_modules.py#L787
However, the value passed here is already an integer. All the other modules in IBert that use `QuantAct` pass a real-valued `x`, so it is fine for `FixedPointMul` to divide by the scaling factor to find `z_integer`. But within `IntSoftmax`, the input `x` is already an integer.
Same error in the original source code: https://github.com/kssteven418/I-BERT/blob/1b09c759d6aeb71312df9c6ef74fa268a87c934e/fairseq/quantization/utils/quant_modules.py#L652C9-L652C72
**Solution:**
```python
exp, exp_scaling_factor = self.act(exp_int * exp_scaling_factor, exp_scaling_factor)
```
tagging @kssteven418 as the designer of the model:) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26767/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26767/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26766 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26766/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26766/comments | https://api.github.com/repos/huggingface/transformers/issues/26766/events | https://github.com/huggingface/transformers/issues/26766 | 1,940,453,692 | I_kwDOCUB6oc5zqPk8 | 26,766 | `use_default_system_prompt` broken | {
"login": "AjayP13",
"id": 5404177,
"node_id": "MDQ6VXNlcjU0MDQxNzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5404177?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AjayP13",
"html_url": "https://github.com/AjayP13",
"followers_url": "https://api.github.com/users/AjayP13/followers",
"following_url": "https://api.github.com/users/AjayP13/following{/other_user}",
"gists_url": "https://api.github.com/users/AjayP13/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AjayP13/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AjayP13/subscriptions",
"organizations_url": "https://api.github.com/users/AjayP13/orgs",
"repos_url": "https://api.github.com/users/AjayP13/repos",
"events_url": "https://api.github.com/users/AjayP13/events{/privacy}",
"received_events_url": "https://api.github.com/users/AjayP13/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@Rocketknight1 - This seems to stem from a commit pushed to `meta-llama/Llama-2-7b-chat-hf` on HuggingFace Hub: https://huggingface.co/meta-llama/Llama-2-7b-chat-hf/commit/af6df14e494ef16d69ec55e9a016e900a2dde1c8\r\n\r\nWas this intentional?",
"@ajayp13 Yes, this was intentional! After testing between us and Meta, people felt that the default system prompt caused LLaMA-2 to refuse too many requests, even totally valid ones, which made it much less useful than it could be.\r\n\r\nYou can try running it without the default prompt, or with a shorter system prompt of your own - either of those should work!",
"I see, thanks! Closing.",
"If anyone else sees this issue and wants to keep using the original prompt, here is the original system message. You can add this as the first message in your conversation with role `system` in order to keep using it:\r\n\r\n```\r\n\"\"\"You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your \\\r\nanswers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure\\\r\n that your responses are socially unbiased and positive in nature.\r\n\r\nIf a question does not make any sense, or is not factually coherent, explain why instead of answering something not \\\r\ncorrect. If you don't know the answer to a question, please don't share false information.\"\"\"\r\n```"
] | 1,697 | 1,697 | 1,697 | CONTRIBUTOR | null | ### System Info
- Python 3.11.4
- 4.34.0
### Who can help?
@younesbelkada @ArthurZucker
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
```bash
Python 3.11.4 (main, Jun 20 2023, 17:23:00) [Clang 14.0.3 (clang-1403.0.22.14.1)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import transformers
>>> transformers.__version__
'4.34.0'
>>> tokenizer = transformers.AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
>>> tokenizer.use_default_system_prompt = True
>>> tokenizer.apply_chat_template([{"role": "user", "content": "Hello"}], tokenize=False)
'<s>[INST] Hello [/INST]'
```
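As described in the comments above, the previous behaviour can be recovered by passing the original system prompt explicitly as the first message; a sketch (the prompt text is abbreviated here; the full wording is quoted in the comments):
```python
import transformers

tokenizer = transformers.AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")

system_prompt = (
    "You are a helpful, respectful and honest assistant. Always answer as helpfully "
    "as possible, while being safe. ..."  # abbreviated; see the comments for the full text
)
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "Hello"},
]
print(tokenizer.apply_chat_template(messages, tokenize=False))
```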
### Expected behavior
It should return the default LLaMa-2 system prompt (this was just working yesterday, something recently broke):
```bash
'[INST] <<SYS>>\nYou are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.\n\nIf a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.\n<</SYS>>\n\nHello [/INST]'
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26766/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26766/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26765 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26765/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26765/comments | https://api.github.com/repos/huggingface/transformers/issues/26765/events | https://github.com/huggingface/transformers/pull/26765 | 1,940,364,466 | PR_kwDOCUB6oc5cp3eB | 26,765 | Disable default system prompt for LLaMA | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Just opened #26766, I see this is intentional at request of Meta. @Rocketknight1 was any reason given for removing the system prompt? Should no system prompt be used? Can it be replaced with a different system prompt if one still needs to be used?",
"Hi @AjayP13, the default system prompt was generally felt to cause the model to be very conservative, and to refuse to answer lots of queries, even valid ones. You can try running LLaMA-2 either without a system prompt at all, or with any prompt of your choice - either of those is a valid option!"
] | 1,697 | 1,697 | 1,697 | MEMBER | null | Disable the system prompt by default in LLaMA, as requested by Meta.
Note that I've already pushed chat templates to the repos, so this change should mostly already be in effect! I'm just changing it in the library too for consistency. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26765/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26765/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26765",
"html_url": "https://github.com/huggingface/transformers/pull/26765",
"diff_url": "https://github.com/huggingface/transformers/pull/26765.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26765.patch",
"merged_at": 1697204918000
} |
https://api.github.com/repos/huggingface/transformers/issues/26764 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26764/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26764/comments | https://api.github.com/repos/huggingface/transformers/issues/26764/events | https://github.com/huggingface/transformers/pull/26764 | 1,940,263,281 | PR_kwDOCUB6oc5cpg6G | 26,764 | Skip `TrainerIntegrationFSDP::test_basic_run_with_cpu_offload` if `torch < 2.1` | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,697 | 1,697 | 1,697 | COLLABORATOR | null | # What does this PR do?
Skip `TrainerIntegrationFSDP::test_basic_run_with_cpu_offload` if `torch < 2.1` as this takes 4 hours on `torch 2.0`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26764/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26764/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26764",
"html_url": "https://github.com/huggingface/transformers/pull/26764",
"diff_url": "https://github.com/huggingface/transformers/pull/26764.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26764.patch",
"merged_at": 1697127729000
} |
https://api.github.com/repos/huggingface/transformers/issues/26763 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26763/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26763/comments | https://api.github.com/repos/huggingface/transformers/issues/26763/events | https://github.com/huggingface/transformers/pull/26763 | 1,940,091,241 | PR_kwDOCUB6oc5co7EJ | 26,763 | Don't close ClearML Task when training is complete | {
"login": "johnml1135",
"id": 13733556,
"node_id": "MDQ6VXNlcjEzNzMzNTU2",
"avatar_url": "https://avatars.githubusercontent.com/u/13733556?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/johnml1135",
"html_url": "https://github.com/johnml1135",
"followers_url": "https://api.github.com/users/johnml1135/followers",
"following_url": "https://api.github.com/users/johnml1135/following{/other_user}",
"gists_url": "https://api.github.com/users/johnml1135/gists{/gist_id}",
"starred_url": "https://api.github.com/users/johnml1135/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/johnml1135/subscriptions",
"organizations_url": "https://api.github.com/users/johnml1135/orgs",
"repos_url": "https://api.github.com/users/johnml1135/repos",
"events_url": "https://api.github.com/users/johnml1135/events{/privacy}",
"received_events_url": "https://api.github.com/users/johnml1135/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @skinan for first review - as integrations are maintained by the contributors who added them ",
"@johnml1135 Am I correct in understanding this PR can be closed as the issue was resolved in #26614 ? ",
"Yes - it should be resolved now.",
"@johnml1135 Thanks for confirming! "
] | 1,697 | 1,702 | 1,702 | NONE | null | This is to fix https://github.com/huggingface/transformers/issues/26762. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26763/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26763/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26763",
"html_url": "https://github.com/huggingface/transformers/pull/26763",
"diff_url": "https://github.com/huggingface/transformers/pull/26763.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26763.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26762 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26762/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26762/comments | https://api.github.com/repos/huggingface/transformers/issues/26762/events | https://github.com/huggingface/transformers/issues/26762 | 1,940,089,999 | I_kwDOCUB6oc5zo2yP | 26,762 | ClearML task closes when done training - unnecessary and causes issues | {
"login": "johnml1135",
"id": 13733556,
"node_id": "MDQ6VXNlcjEzNzMzNTU2",
"avatar_url": "https://avatars.githubusercontent.com/u/13733556?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/johnml1135",
"html_url": "https://github.com/johnml1135",
"followers_url": "https://api.github.com/users/johnml1135/followers",
"following_url": "https://api.github.com/users/johnml1135/following{/other_user}",
"gists_url": "https://api.github.com/users/johnml1135/gists{/gist_id}",
"starred_url": "https://api.github.com/users/johnml1135/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/johnml1135/subscriptions",
"organizations_url": "https://api.github.com/users/johnml1135/orgs",
"repos_url": "https://api.github.com/users/johnml1135/repos",
"events_url": "https://api.github.com/users/johnml1135/events{/privacy}",
"received_events_url": "https://api.github.com/users/johnml1135/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @johnml1135 ! I think that the task should indeed close if it was created by the callback class. If you don't want it to close, just initialize the task externally, i.e. call `Task.init` before creating the callback instance. In this case, the task will not close on when the training ends.",
"I do initialize the task well before I create the model to train:\r\n* https://github.com/sillsdev/machine.py/blob/main/machine/jobs/build_nmt_engine.py\r\n* https://github.com/sillsdev/machine.py/blob/main/machine/jobs/nmt_engine_build_job.py\r\n\r\nI don't know what's could more be done...",
"@johnml1135 Note that our fix has not been officially released yet. Have you installed `transformers` by cloning this repository and doing something like:\r\n```\r\ncd transformers\r\npython3 -m pip install -e .\r\n```\r\n?",
"I think I see - you are making a fix to resolve the issue as you specified above. Can you link to the commit/PR that implements the fix here?",
"@johnml1135 You can find the PR here: https://github.com/huggingface/transformers/pull/26614",
"Great - that should resolve our issue!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,697 | 1,700 | 1,700 | NONE | null | ### System Info
When a Python script ends, ClearML automatically closes the task anyway. After training, some people (myself included) may want to do more with the task, including running inference with the model. Therefore, always closing the task automatically when training completes, while it sounds convenient, is not needed and causes issues.
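For context, the workaround discussed in the comments above (creating the task yourself before the `Trainer` builds its ClearML callback, so the callback does not own it and will not close it) looks roughly like the sketch below. Project/task names and the dataset are placeholders, and this is only an illustration of the ordering, not code from the library:

```python
from clearml import Task
from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments

# Create the task externally, before the Trainer (and its ClearML callback) exists.
task = Task.init(project_name="my-project", task_name="my-training-run")

model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", report_to=["clearml"]),
    train_dataset=train_dataset,  # assumed to be defined elsewhere
)
trainer.train()

# The task should remain open here, so inference or extra logging can follow;
# close it explicitly when everything is done.
task.close()
```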
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
See code here:
https://github.com/huggingface/transformers/blob/0ebee8b93358b6ef0182398b8fcbd7afd64c0f97/src/transformers/integrations/integration_utils.py#L1488-L1493
### Expected behavior
The ClearML task does not close when training is complete. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26762/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26762/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26761 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26761/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26761/comments | https://api.github.com/repos/huggingface/transformers/issues/26761/events | https://github.com/huggingface/transformers/pull/26761 | 1,940,075,470 | PR_kwDOCUB6oc5co3rB | 26,761 | π¨π¨π¨ [`Quantization`] Store the original dtype in the config as a private attribute π¨π¨π¨ | {
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,697 | 1,697 | 1,697 | CONTRIBUTOR | null | # What does this PR do?
First step of an alternative design of https://github.com/huggingface/transformers/pull/26560
For quantized models, instead of introducing complex logic to retrieve the original weight dtype, I propose to simply add a private attribute `_quantization_original_dtype` to the config object.
The `to` method does not need to be touched here, as `to` cannot be called on quantized models (for GPTQ models you can call `to` for device placement only - **not** for dtype casting).
That way, we could adapt #26560 to simply check whether the config has the `_quantization_original_dtype` attribute, which is only the case for quantized models, and otherwise retrieve the dtype from the linear layer weights in the classic manner.
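A minimal sketch of the kind of check described above (not the exact code in this PR; the helper name is made up for illustration):

```python
import torch

def get_original_dtype(model):
    # Quantized models (per this proposal) carry the pre-quantization dtype on
    # the config as a private attribute; regular models fall back to inspecting
    # a linear layer's weight dtype.
    original_dtype = getattr(model.config, "_quantization_original_dtype", None)
    if original_dtype is not None:
        return original_dtype
    for module in model.modules():
        if isinstance(module, torch.nn.Linear):
            return module.weight.dtype
    return next(model.parameters()).dtype
```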
cc @LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26761/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26761/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26761",
"html_url": "https://github.com/huggingface/transformers/pull/26761",
"diff_url": "https://github.com/huggingface/transformers/pull/26761.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26761.patch",
"merged_at": 1697479013000
} |
https://api.github.com/repos/huggingface/transformers/issues/26760 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26760/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26760/comments | https://api.github.com/repos/huggingface/transformers/issues/26760/events | https://github.com/huggingface/transformers/pull/26760 | 1,940,002,044 | PR_kwDOCUB6oc5conxU | 26,760 | Fix `PerceiverModelIntegrationTest::test_inference_masked_lm` | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,697 | 1,697 | 1,697 | COLLABORATOR | null | # What does this PR do?
The PR #23909 changed the result of `vocab_size` of
```
tokenizer = PerceiverTokenizer.from_pretrained("deepmind/language-perceiver")
```
from `262` to `256`, but the logit has shape `[1, 2048, 262]`.
Let's use `len` here.
## To reproduce:
```
from transformers import PerceiverTokenizer
tokenizer = PerceiverTokenizer.from_pretrained("deepmind/language-perceiver")
# 256 on `2da88537` but `262` on one commit before (`835b0a05`)
print(tokenizer.vocab_size)
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26760/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26760/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26760",
"html_url": "https://github.com/huggingface/transformers/pull/26760",
"diff_url": "https://github.com/huggingface/transformers/pull/26760.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26760.patch",
"merged_at": 1697125386000
} |
https://api.github.com/repos/huggingface/transformers/issues/26759 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26759/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26759/comments | https://api.github.com/repos/huggingface/transformers/issues/26759/events | https://github.com/huggingface/transformers/issues/26759 | 1,939,799,021 | I_kwDOCUB6oc5znvvt | 26,759 | KeyError: 'mistral' | {
"login": "andysingal",
"id": 20493493,
"node_id": "MDQ6VXNlcjIwNDkzNDkz",
"avatar_url": "https://avatars.githubusercontent.com/u/20493493?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andysingal",
"html_url": "https://github.com/andysingal",
"followers_url": "https://api.github.com/users/andysingal/followers",
"following_url": "https://api.github.com/users/andysingal/following{/other_user}",
"gists_url": "https://api.github.com/users/andysingal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/andysingal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andysingal/subscriptions",
"organizations_url": "https://api.github.com/users/andysingal/orgs",
"repos_url": "https://api.github.com/users/andysingal/repos",
"events_url": "https://api.github.com/users/andysingal/events{/privacy}",
"received_events_url": "https://api.github.com/users/andysingal/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @andysingal \r\nThanks for the issue, can you share with us which transformers version are you using?\r\nCan you try to use the latest transformers version?\r\n\r\n```bash\r\npip install -U transformers\r\n```",
"I had the same error.\r\n`pip install -U transformers` \r\nsolved it. thanks\r\n",
"Thanks alot, I unchecked high ram and it worked. I already had the latest\r\nversion of Transformers. Thanks\r\n\r\nOn Thu, Oct 12, 2023 at 7:40β―PM Younes Belkada ***@***.***>\r\nwrote:\r\n\r\n> Hi @andysingal <https://github.com/andysingal>\r\n> Thanks for the issue, can you share with us which transformers version are\r\n> you using?\r\n> Can you try to use the latest transformers version?\r\n>\r\n> pip install -U transformers\r\n>\r\n> β\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/transformers/issues/26759#issuecomment-1759686346>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AE4LJNKRP3ZM743DBF6FJWTX6725HAVCNFSM6AAAAAA55PDUPWVHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMYTONJZGY4DMMZUGY>\r\n> .\r\n> You are receiving this because you were mentioned.Message ID:\r\n> ***@***.***>\r\n>\r\n",
"Hi team,\r\n\r\nIf i try transformer version = 4.35, then i get error of \r\n\r\nImportError: Using `load_in_8bit=True` requires Accelerate: `pip install accelerate` and the latest version of bitsandbytes `pip install -i https://test.pypi.org/simple/ bitsandbytes` or pip install bitsandbytes` \r\n\r\nBut if i downgrade to 4:30\r\nthen i get\r\nkey error :mistral\r\n\r\nCan anyone help me?",
"@Abhaycnvrg with transformers 4.35 are you running with the latest version of bitsandbytes and accelerate installed? ",
"yes. i did not give any version number (so it took the latest version of the bitsandbytes with python 3.11.5 and 3.11.1 )\r\n",
"@Abhaycnvrg Could you open a new issue, and provide full details of the error (full traceback) and your running environment (run `transformers-cli env` in the terminal and copy-paste the output)? ",
"in the other thread, jitender-cnvrg already gave",
"> Hi team,\r\n> \r\n> If i try transformer version = 4.35, then i get error of\r\n> \r\n> ImportError: Using `load_in_8bit=True` requires Accelerate: `pip install accelerate` and the latest version of bitsandbytes `pip install -i https://test.pypi.org/simple/ bitsandbytes` or pip install bitsandbytes`\r\n> \r\n> But if i downgrade to 4:30 then i get key error :mistral\r\n> \r\n> Can anyone help me?\r\n\r\nI have exactly the same issue, whether I use CPU or cuda.",
"I can confirm that I am seeing the same issue as mentioned above, downgraded to Transformers 4.30.0 and get the KeyError : mistral \r\n\r\nor the ImportError mentioned above with Transformers 4.35.1",
"Hey all. Just saying that you have the same issue without a reproducer and a traceback will not help anyone. \r\nFeel free to open a new issue with a reproducer that does not use an external package (in this case langchain) if the issue is to load something with `transformers`. If the issue is with `langchain` than open an issue on the `langchain` repo π€ ",
"[Abhaycnvrg](https://github.com/Abhaycnvrg) [smarthi](https://github.com/smarthi) those errors you are facing:\r\nTransformers 4.30.0 --> KeyError : mistral\r\nTransformers 4.35.2 --> ImportError: Using load_in_8bit=True requires Accelerate: pip install accelerate\r\n only happen if you execute your program on CPU, they don't appear once you run your program on GPU.\r\n \r\n That's how i got to solve it.\r\n Cheers.\r\n",
"> [Abhaycnvrg](https://github.com/Abhaycnvrg) [smarthi](https://github.com/smarthi) those errors you are facing: Transformers 4.30.0 --> KeyError : mistral Transformers 4.35.2 --> ImportError: Using load_in_8bit=True requires Accelerate: pip install accelerate only happen if you execute your program on CPU, they don't appear once you run your program on GPU.\r\n> \r\n> That's how i got to solve it. Cheers.\r\n\r\nI am getting both these errors for the different transformers version you mention. I'm trying to run `autotrain-advanced` locally on my Apple Silicon Macbook. Anyone knows a fix for that?",
"Hi team\r\nI got to solve it by using a very specific huggingface image.\r\nThe details are in this thread\r\nhttps://github.com/huggingface/transformers/issues/27376"
] | 1,697 | 1,702 | 1,697 | NONE | null | ### System Info
RTX 3090
### Who can help?
@younesbelkada @Arthur
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
import torch
from transformers import BitsAndBytesConfig
from langchain import HuggingFacePipeline
from langchain import PromptTemplate, LLMChain
quantization_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_compute_dtype=torch.float16,
bnb_4bit_quant_type="nf4",
bnb_4bit_use_double_quant=True,
)
model_id = "ehartford/samantha-mistral-instruct-7b"
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_4bit = AutoModelForCausalLM.from_pretrained( model_id, device_map="auto",quantization_config=quantization_config)
tokenizer = AutoTokenizer.from_pretrained(model_id)
pipeline = pipeline(
"text-generation",
model=model_4bit,
tokenizer=tokenizer,
use_cache=True,
device_map="auto",
max_length=500,
do_sample=True,
top_k=5,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
pad_token_id=tokenizer.eos_token_id,
)
llm = HuggingFacePipeline(pipeline=pipeline)
```
gives the error:
```
(…)ral-instruct-7b/resolve/main/config.json: 100%
628/628 [00:00<00:00, 58.6kB/s]
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
[<ipython-input-13-e2c6cff1d0cf>](https://localhost:8080/#) in <cell line: 16>()
14
15 from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
---> 16 model_4bit = AutoModelForCausalLM.from_pretrained( model_id, device_map="auto",quantization_config=quantization_config)
17 tokenizer = AutoTokenizer.from_pretrained(model_id)
18
2 frames
[/usr/local/lib/python3.10/dist-packages/transformers/models/auto/configuration_auto.py](https://localhost:8080/#) in __getitem__(self, key)
732
733
--> 734 class _LazyConfigMapping(OrderedDict):
735 """
736 A dictionary that lazily load its values when they are requested.
```
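For reference, the `mistral` model type only exists in `transformers` 4.34.0 and later, which is why several commenters above resolved this error by upgrading; a quick sanity check could look like:

```python
import transformers
from packaging import version

# Older versions raise KeyError: 'mistral' when resolving the architecture.
assert version.parse(transformers.__version__) >= version.parse("4.34.0")
```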
### Expected behavior
runs the model | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26759/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26759/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26758 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26758/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26758/comments | https://api.github.com/repos/huggingface/transformers/issues/26758/events | https://github.com/huggingface/transformers/issues/26758 | 1,939,722,738 | I_kwDOCUB6oc5zndHy | 26,758 | transformers DataCollatorWithPadding should have an option to specify the attributes which require padding. | {
"login": "varadhbhatnagar",
"id": 20443618,
"node_id": "MDQ6VXNlcjIwNDQzNjE4",
"avatar_url": "https://avatars.githubusercontent.com/u/20443618?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/varadhbhatnagar",
"html_url": "https://github.com/varadhbhatnagar",
"followers_url": "https://api.github.com/users/varadhbhatnagar/followers",
"following_url": "https://api.github.com/users/varadhbhatnagar/following{/other_user}",
"gists_url": "https://api.github.com/users/varadhbhatnagar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/varadhbhatnagar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/varadhbhatnagar/subscriptions",
"organizations_url": "https://api.github.com/users/varadhbhatnagar/orgs",
"repos_url": "https://api.github.com/users/varadhbhatnagar/repos",
"events_url": "https://api.github.com/users/varadhbhatnagar/events{/privacy}",
"received_events_url": "https://api.github.com/users/varadhbhatnagar/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,697 | 1,700 | 1,700 | NONE | null | ### Feature request
Currently there is no way to specify the attributes which require padding in transformers [DataCollatorWithPadding](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/data/data_collator.py#L215).
### Motivation
Two instances from our data may be of this format:
```
{
"input_ids": [1,2,3,4,5],
"attention_mask": [0,1,1,1,1],
"label": "apple"
}
{
"input_ids": [1,2,3,4,5,6,7,8],
"attention_mask": [0,1,1,1,1,1,1,1],
"label": "mango"
}
```
The user wants to implement [dynamic padding](https://huggingface.co/learn/nlp-course/chapter3/2?fw=pt#dynamic-padding) over these two data points, but currently there is no way to specify that the `label` attribute should not be passed through [`self.tokenizer.pad()`](https://github.com/huggingface/transformers/blob/b71f20a7c9f3716d30f6738501559acf863e2c5c/src/transformers/data/data_collator.py#L249).
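In the meantime, a rough workaround (not an official option; the subclass and field names below are made up) is to pull such attributes out before the padding call and re-attach them afterwards:

```python
from dataclasses import dataclass
from transformers import DataCollatorWithPadding

@dataclass
class DataCollatorWithPaddingAndExtras(DataCollatorWithPadding):
    # Hypothetical extra field: attributes listed here bypass tokenizer.pad()
    # and are returned as plain Python lists (e.g. the raw string labels).
    skip_keys: tuple = ("label",)

    def __call__(self, features):
        extras = [{k: f.pop(k) for k in self.skip_keys if k in f} for f in features]
        batch = super().__call__(features)
        for key in self.skip_keys:
            values = [e[key] for e in extras if key in e]
            if values:
                batch[key] = values
        return batch

# Usage sketch: collator = DataCollatorWithPaddingAndExtras(tokenizer=tokenizer)
```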
### Your contribution
I can submit a PR for this. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26758/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26758/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26757 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26757/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26757/comments | https://api.github.com/repos/huggingface/transformers/issues/26757/events | https://github.com/huggingface/transformers/issues/26757 | 1,939,698,438 | I_kwDOCUB6oc5znXMG | 26,757 | transformers DefaultDataCollator does not work with 'str' type data | {
"login": "varadhbhatnagar",
"id": 20443618,
"node_id": "MDQ6VXNlcjIwNDQzNjE4",
"avatar_url": "https://avatars.githubusercontent.com/u/20443618?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/varadhbhatnagar",
"html_url": "https://github.com/varadhbhatnagar",
"followers_url": "https://api.github.com/users/varadhbhatnagar/followers",
"following_url": "https://api.github.com/users/varadhbhatnagar/following{/other_user}",
"gists_url": "https://api.github.com/users/varadhbhatnagar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/varadhbhatnagar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/varadhbhatnagar/subscriptions",
"organizations_url": "https://api.github.com/users/varadhbhatnagar/orgs",
"repos_url": "https://api.github.com/users/varadhbhatnagar/repos",
"events_url": "https://api.github.com/users/varadhbhatnagar/events{/privacy}",
"received_events_url": "https://api.github.com/users/varadhbhatnagar/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hey! The data collator are meant to be overwritten for your custom usage π "
] | 1,697 | 1,700 | 1,700 | NONE | null | ### System Info
- `transformers` version: 4.33.3
- Platform: Linux-4.18.0-477.10.1.el8_8.x86_64-x86_64-with-glibc2.28
- Python version: 3.11.3
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.3.1
- Accelerate version: 0.21.0
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: MULTI_GPU
- mixed_precision: bf16
- use_cpu: False
- num_processes: 4
- machine_rank: 0
- num_machines: 1
- gpu_ids: all
- rdzv_backend: static
- same_network: True
- main_training_function: main
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- PyTorch version (GPU?): 2.0.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
https://colab.research.google.com/drive/19a3vfER_Onq19XffudQx4SNo_uuURDYc?usp=sharing
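Below is a minimal sketch of the mismatch described in the expected behaviour; the field names are illustrative and the exact behaviour may vary slightly across versions:

```python
from torch.utils.data import default_collate
from transformers import DefaultDataCollator

features = [
    {"input_ids": [1, 2, 3], "text": "apple"},
    {"input_ids": [4, 5, 6], "text": "mango"},
]

# PyTorch keeps the string field, returning it as a list of strings.
print(default_collate(features))

# transformers' DefaultDataCollator does not handle the `str` field the same
# way (it is typically skipped rather than collated).
print(DefaultDataCollator()(features))
```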
### Expected behavior
- The transformers [DefaultDataCollator](https://huggingface.co/docs/transformers/main_classes/data_collator#transformers.DefaultDataCollator) behaves differently compared to the PyTorch [default_collate()](https://pytorch.org/docs/stable/data.html#torch.utils.data.default_collate). The PyTorch `default_collate()` allows processing of other types such as `str`, which do not work with `DefaultDataCollator`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26757/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26757/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26756 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26756/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26756/comments | https://api.github.com/repos/huggingface/transformers/issues/26756/events | https://github.com/huggingface/transformers/pull/26756 | 1,939,670,099 | PR_kwDOCUB6oc5cneyo | 26,756 | chore: fix typos | {
"login": "afuetterer",
"id": 35225576,
"node_id": "MDQ6VXNlcjM1MjI1NTc2",
"avatar_url": "https://avatars.githubusercontent.com/u/35225576?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/afuetterer",
"html_url": "https://github.com/afuetterer",
"followers_url": "https://api.github.com/users/afuetterer/followers",
"following_url": "https://api.github.com/users/afuetterer/following{/other_user}",
"gists_url": "https://api.github.com/users/afuetterer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/afuetterer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/afuetterer/subscriptions",
"organizations_url": "https://api.github.com/users/afuetterer/orgs",
"repos_url": "https://api.github.com/users/afuetterer/repos",
"events_url": "https://api.github.com/users/afuetterer/events{/privacy}",
"received_events_url": "https://api.github.com/users/afuetterer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26756). All of your documentation changes will be reflected on that endpoint."
] | 1,697 | 1,697 | 1,697 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
Hi, this is my first PR to the project. It fixes some minor typos.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26756/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26756/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26756",
"html_url": "https://github.com/huggingface/transformers/pull/26756",
"diff_url": "https://github.com/huggingface/transformers/pull/26756.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26756.patch",
"merged_at": 1697126427000
} |
https://api.github.com/repos/huggingface/transformers/issues/26755 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26755/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26755/comments | https://api.github.com/repos/huggingface/transformers/issues/26755/events | https://github.com/huggingface/transformers/issues/26755 | 1,939,639,517 | I_kwDOCUB6oc5znIzd | 26,755 | [Bug] TypeError: forward() got an unexpected keyword argument 'padding_mask' | {
"login": "SingL3",
"id": 20473466,
"node_id": "MDQ6VXNlcjIwNDczNDY2",
"avatar_url": "https://avatars.githubusercontent.com/u/20473466?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SingL3",
"html_url": "https://github.com/SingL3",
"followers_url": "https://api.github.com/users/SingL3/followers",
"following_url": "https://api.github.com/users/SingL3/following{/other_user}",
"gists_url": "https://api.github.com/users/SingL3/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SingL3/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SingL3/subscriptions",
"organizations_url": "https://api.github.com/users/SingL3/orgs",
"repos_url": "https://api.github.com/users/SingL3/repos",
"events_url": "https://api.github.com/users/SingL3/events{/privacy}",
"received_events_url": "https://api.github.com/users/SingL3/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The issue is that the codebase you are using has [monkey patched in a different `forward` function in the `LlamaAttention` module](https://github.com/pacman100/DHS-LLM-Workshop/blob/main/chat_assistant/training/llama_flash_attn_monkey_patch.py) <-- see the link. The monkey patched `forward` was valid in 4.33 but no longer matches the function signature of `LlamaAttention.forward` in 4.34.0. I had the same issue. This is not a bug in transformers.",
"I see! Thank you! Closing now.",
" I have to use transformers==4.34.0. Any solution for this error?",
"> Any solution for this error?\r\n\r\n1) Remove the monkey patching\r\n2) Load your model with: `model = transformers.AutoModelForCausalLM.from_pretrained(name, use_flash_attention_2=True)`.\r\n",
"> > Any solution for this error?\r\n> \r\n> 1. Remove the monkey patching\r\n> 2. Load your model with: `model = transformers.AutoModelForCausalLM.from_pretrained(name, use_flash_attention_2=True)`.\r\n\r\n Why this should help? It doesn't change the model behavior?\r\nThanks",
"> > > Any solution for this error?\r\n> > \r\n> > \r\n> > \r\n> > 1. Remove the monkey patching\r\n> > 2. Load your model with: `model = transformers.AutoModelForCausalLM.from_pretrained(name, use_flash_attention_2=True)`.\r\n> \r\n> Why this should help? It doesn't change the model behavior? Thanks\r\n\r\nHugging Face has already integrated flash attention for llama. So there is no need to use monkey patch. Just load model with `use_flash_attention_2=True`"
] | 1,697 | 1,699 | 1,697 | NONE | null | ### System Info
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.34.0
- Platform: Linux-4.18.0-425.3.1.el8.x86_64-x86_64-with-glibc2.31
- Python version: 3.9.18
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.4.0
- Accelerate version: 0.23.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Follow https://huggingface.co/blog/ram-efficient-pytorch-fsdp and submit the job with [srun](https://gist.github.com/pacman100/1cb1f17b2f1b3139a63b764263e70b25) to train a Llama 2 model which was saved using transformers==4.33.0.
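For reference, the resolution discussed in the comments above is to drop the `llama_flash_attn_monkey_patch` and rely on the built-in Flash Attention 2 integration instead; a rough sketch (the checkpoint name is a placeholder):

```python
import torch
from transformers import AutoModelForCausalLM

# No monkey patch needed on transformers 4.34+: opt into the built-in kernel.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    torch_dtype=torch.bfloat16,
    use_flash_attention_2=True,
)
```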
### Expected behavior
Training completes successfully.
I think downgrading to 4.33.0 is a solution, but I wonder whether there is a better one. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26755/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26755/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26754 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26754/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26754/comments | https://api.github.com/repos/huggingface/transformers/issues/26754/events | https://github.com/huggingface/transformers/pull/26754 | 1,939,622,477 | PR_kwDOCUB6oc5cnUX9 | 26,754 | Fix `MistralIntegrationTest` OOM | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,697 | 1,697 | 1,697 | COLLABORATOR | null | # What does this PR do?
Fix `MistralIntegrationTest` OOM | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26754/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26754/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26754",
"html_url": "https://github.com/huggingface/transformers/pull/26754",
"diff_url": "https://github.com/huggingface/transformers/pull/26754.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26754.patch",
"merged_at": 1697106672000
} |
https://api.github.com/repos/huggingface/transformers/issues/26753 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26753/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26753/comments | https://api.github.com/repos/huggingface/transformers/issues/26753/events | https://github.com/huggingface/transformers/pull/26753 | 1,939,513,422 | PR_kwDOCUB6oc5cm8q5 | 26,753 | Create transformer.py | {
"login": "vibhorjoshi",
"id": 105739194,
"node_id": "U_kgDOBk1zug",
"avatar_url": "https://avatars.githubusercontent.com/u/105739194?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vibhorjoshi",
"html_url": "https://github.com/vibhorjoshi",
"followers_url": "https://api.github.com/users/vibhorjoshi/followers",
"following_url": "https://api.github.com/users/vibhorjoshi/following{/other_user}",
"gists_url": "https://api.github.com/users/vibhorjoshi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vibhorjoshi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vibhorjoshi/subscriptions",
"organizations_url": "https://api.github.com/users/vibhorjoshi/orgs",
"repos_url": "https://api.github.com/users/vibhorjoshi/repos",
"events_url": "https://api.github.com/users/vibhorjoshi/events{/privacy}",
"received_events_url": "https://api.github.com/users/vibhorjoshi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hi @vibhorjoshi - what is this PR for? ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,697 | 1,702 | 1,702 | NONE | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26753/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26753/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26753",
"html_url": "https://github.com/huggingface/transformers/pull/26753",
"diff_url": "https://github.com/huggingface/transformers/pull/26753.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26753.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26752 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26752/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26752/comments | https://api.github.com/repos/huggingface/transformers/issues/26752/events | https://github.com/huggingface/transformers/pull/26752 | 1,939,467,574 | PR_kwDOCUB6oc5cmyvX | 26,752 | Add a default decoder_attention_mask for EncoderDecoderModel during training | {
"login": "hackyon",
"id": 1557853,
"node_id": "MDQ6VXNlcjE1NTc4NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1557853?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hackyon",
"html_url": "https://github.com/hackyon",
"followers_url": "https://api.github.com/users/hackyon/followers",
"following_url": "https://api.github.com/users/hackyon/following{/other_user}",
"gists_url": "https://api.github.com/users/hackyon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hackyon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hackyon/subscriptions",
"organizations_url": "https://api.github.com/users/hackyon/orgs",
"repos_url": "https://api.github.com/users/hackyon/repos",
"events_url": "https://api.github.com/users/hackyon/events{/privacy}",
"received_events_url": "https://api.github.com/users/hackyon/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yup, training will be affected without this change if we followed the new recommended way of training EncoderDecoderModel (just passing in labels without specifying decoder_input_ids). I've updated the description.\r\n\r\nWe would show a warning. Although there is no crash, the training would be flawed since we would be attending to pad tokens in the label sequences (during decoding stage).",
"Thanks for this PR @hackyon!\r\n\r\n> We would show a warning. Although there is no crash, the training would be flawed since we would be attending to pad tokens in the label sequences (during decoding stage).\r\n\r\nCould you clarify when this warning would be given? Is this a warning if you _do_ specify both labels and decoder_input_ids?",
"> Thanks for this PR @hackyon!\r\n> \r\n> > We would show a warning. Although there is no crash, the training would be flawed since we would be attending to pad tokens in the label sequences (during decoding stage).\r\n> \r\n> Could you clarify when this warning would be given? Is this a warning if you _do_ specify both labels and decoder_input_ids?\r\n\r\nI'm referring to the \"we strongly recommend passing an attention mask\" warning that you brought up in #25271. This warning comes up when you specify just the labels, and leaving the decoder_input_ids/decoder_attention_mask to None.\r\n",
"Looks good to me too -- thank you for the contribution, @hackyon π€ "
] | 1,697 | 1,704 | 1,698 | CONTRIBUTOR | null | # What does this PR do?
Add a default decoder_attention_mask for EncoderDecoderModel during training. Since we are already creating the default decoder_input_ids from the labels, we should also create a default decoder_attention_mask to go with it.
Before this fix, the user was shown a warning message ("we strongly recommend passing an attention mask") when following the [new suggested](https://huggingface.co/docs/transformers/model_doc/encoder-decoder#training) [method of training](https://github.com/huggingface/transformers/blob/bef02fd6b9cde975c51607fb936050ef706ff6d8/src/transformers/models/encoder_decoder/modeling_encoder_decoder.py#L42-L47) for EncoderDecoderModel. Although there has been no report of the bug in the wild yet as it would be a silent bug, I suspect it will likely cause [this particular issue](https://huggingface.co/docs/transformers/troubleshooting#incorrect-output-when-padding-tokens-arent-masked) if the pad tokens in the default decoder_input_ids are not taken into account.
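Conceptually, the default mask can be derived from the same shifted labels that already produce the default `decoder_input_ids`; a rough sketch of the idea (not the exact code added in this PR):

```python
import torch

def default_decoder_attention_mask(decoder_input_ids, pad_token_id):
    # Attend to every non-pad position of the shifted labels; the first position
    # (the decoder start token) is always attended to, even if the model reuses
    # the pad id as its start id.
    mask = decoder_input_ids.ne(pad_token_id).long()
    mask[:, 0] = 1
    return mask
```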
Fixes #25271
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker @StevenSong
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26752/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26752/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26752",
"html_url": "https://github.com/huggingface/transformers/pull/26752",
"diff_url": "https://github.com/huggingface/transformers/pull/26752.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26752.patch",
"merged_at": 1698168376000
} |
https://api.github.com/repos/huggingface/transformers/issues/26751 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26751/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26751/comments | https://api.github.com/repos/huggingface/transformers/issues/26751/events | https://github.com/huggingface/transformers/pull/26751 | 1,939,440,508 | PR_kwDOCUB6oc5cms7f | 26,751 | Add many missing spaces in adjacent strings | {
"login": "tomaarsen",
"id": 37621491,
"node_id": "MDQ6VXNlcjM3NjIxNDkx",
"avatar_url": "https://avatars.githubusercontent.com/u/37621491?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tomaarsen",
"html_url": "https://github.com/tomaarsen",
"followers_url": "https://api.github.com/users/tomaarsen/followers",
"following_url": "https://api.github.com/users/tomaarsen/following{/other_user}",
"gists_url": "https://api.github.com/users/tomaarsen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tomaarsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tomaarsen/subscriptions",
"organizations_url": "https://api.github.com/users/tomaarsen/orgs",
"repos_url": "https://api.github.com/users/tomaarsen/repos",
"events_url": "https://api.github.com/users/tomaarsen/events{/privacy}",
"received_events_url": "https://api.github.com/users/tomaarsen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"CI error is superfluous, merging!",
"Awesome! Thanks for getting to this so quickly, these kinds of PRs can become a merge conflict mess if they're left for too long.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26751). All of your documentation changes will be reflected on that endpoint."
] | 1,697 | 1,697 | 1,697 | MEMBER | null | Hello!
# What does this PR do?
This PR resolves numerous occurrences of the following:
```python
raise ValueError(
"`training_args.block_size` needs to be a multiple of the global train/eval batch size."
f"Got {training_args.block_size}, {train_batch_size} and {eval_batch_size} respectively instead."
)
```
Which results in errors like:
```
`training_args.block_size` needs to be a multiple of the global train/eval batch size.Got 4, 6 and 12 respectively instead."
^^
```
## How did I go about it?
I created a simple regex: `(["'][^"'\n]*[^ n])(["']\n *f?r?["'][^ ])`, which matches full string literals that don't end with a space or `n` (due to `\n`), followed by a newline, some spaces, and then another string literal that also doesn't start with a space. I manually went over every place this pattern matched throughout the codebase, and replaced the match with `$1 $2` if it was indeed a real mistake.
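As a rough illustration (assuming Python's `re` module; the snippet is adapted from the example above), the pattern flags exactly the kind of adjacent string literals that are missing a separating space:

```python
import re

pattern = r"""(["'][^"'\n]*[^ n])(["']\n *f?r?["'][^ ])"""
snippet = '''raise ValueError(
    "needs to be a multiple of the global train/eval batch size."
    f"Got 4, 6 and 12 respectively instead."
)'''

match = re.search(pattern, snippet)
print(match is not None)  # True: the two literals would join without a space
```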
## Before submitting
- [x] This PR fixes a typo or improves the docs
## Who can review?
@ArthurZucker
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26751/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26751/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26751",
"html_url": "https://github.com/huggingface/transformers/pull/26751",
"diff_url": "https://github.com/huggingface/transformers/pull/26751.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26751.patch",
"merged_at": 1697099321000
} |
https://api.github.com/repos/huggingface/transformers/issues/26750 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26750/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26750/comments | https://api.github.com/repos/huggingface/transformers/issues/26750/events | https://github.com/huggingface/transformers/pull/26750 | 1,939,423,238 | PR_kwDOCUB6oc5cmpF3 | 26,750 | Fix `PersimmonIntegrationTest` OOM | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Finally get it to work completely by also using `torch.cuda.empty_cache` and `gc.collect`",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,697 | 1,697 | 1,697 | COLLABORATOR | null | # What does this PR do?
Fix `PersimmonIntegrationTest` OOM: just use 8-bit | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26750/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26750/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26750",
"html_url": "https://github.com/huggingface/transformers/pull/26750",
"diff_url": "https://github.com/huggingface/transformers/pull/26750.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26750.patch",
"merged_at": 1697102658000
} |
https://api.github.com/repos/huggingface/transformers/issues/26749 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26749/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26749/comments | https://api.github.com/repos/huggingface/transformers/issues/26749/events | https://github.com/huggingface/transformers/pull/26749 | 1,939,400,419 | PR_kwDOCUB6oc5cmkLl | 26,749 | [integration] Update Ray Tune integration for Ray 2.7 (continuation) | {
"login": "krfricke",
"id": 14904111,
"node_id": "MDQ6VXNlcjE0OTA0MTEx",
"avatar_url": "https://avatars.githubusercontent.com/u/14904111?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/krfricke",
"html_url": "https://github.com/krfricke",
"followers_url": "https://api.github.com/users/krfricke/followers",
"following_url": "https://api.github.com/users/krfricke/following{/other_user}",
"gists_url": "https://api.github.com/users/krfricke/gists{/gist_id}",
"starred_url": "https://api.github.com/users/krfricke/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/krfricke/subscriptions",
"organizations_url": "https://api.github.com/users/krfricke/orgs",
"repos_url": "https://api.github.com/users/krfricke/repos",
"events_url": "https://api.github.com/users/krfricke/events{/privacy}",
"received_events_url": "https://api.github.com/users/krfricke/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,697 | 1,699 | 1,699 | CONTRIBUTOR | null | # What does this PR do?
WIP addition to #26499 (I don't have write rights there and @justinvyu is currently out of office).
Opening here as draft PR to run CI - we can either switch to this PR if CI succeeds or merge my updates into #26499.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26749/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26749/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26749",
"html_url": "https://github.com/huggingface/transformers/pull/26749",
"diff_url": "https://github.com/huggingface/transformers/pull/26749.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26749.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26748 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26748/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26748/comments | https://api.github.com/repos/huggingface/transformers/issues/26748/events | https://github.com/huggingface/transformers/pull/26748 | 1,939,389,173 | PR_kwDOCUB6oc5cmhty | 26,748 | use isinstance instead of type comparison | {
"login": "kashif",
"id": 8100,
"node_id": "MDQ6VXNlcjgxMDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8100?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kashif",
"html_url": "https://github.com/kashif",
"followers_url": "https://api.github.com/users/kashif/followers",
"following_url": "https://api.github.com/users/kashif/following{/other_user}",
"gists_url": "https://api.github.com/users/kashif/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kashif/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kashif/subscriptions",
"organizations_url": "https://api.github.com/users/kashif/orgs",
"repos_url": "https://api.github.com/users/kashif/repos",
"events_url": "https://api.github.com/users/kashif/events{/privacy}",
"received_events_url": "https://api.github.com/users/kashif/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26748). All of your documentation changes will be reflected on that endpoint.",
"@amyeroberts thanks! so I relaxed the `ruff` version as with these changes the latest `ruff` runs without raising any issues... but yes i un-did that as per your request",
"fixed by #27144 "
] | 1,697 | 1,700 | 1,700 | CONTRIBUTOR | null | # What does this PR do?
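For context on the lint rule involved: E721 flags direct type comparisons such as `type(x) == SomeClass`. Below is a minimal, made-up illustration of the pattern the rule flags and the `isinstance`-based replacement the PR title refers to (the class names are invented, not taken from the transformers source):
```
# Hypothetical example: E721 discourages comparing types directly with `==`.
class Base:
    pass

class Child(Base):
    pass

obj = Child()

# Flagged by E721: exact type comparison, which also ignores subclasses.
if type(obj) == Base:
    print("obj is exactly Base")

# Preferred: isinstance accepts subclasses and reads more clearly.
if isinstance(obj, Base):
    print("obj is a Base (or a subclass)")
```
Note that the two checks are not always interchangeable: `isinstance` also matches subclasses, which is usually, but not always, the intended behavior.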
Fix for E721 errors which are annoying when doing `make style` with a newer `ruff` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26748/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26748/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26748",
"html_url": "https://github.com/huggingface/transformers/pull/26748",
"diff_url": "https://github.com/huggingface/transformers/pull/26748.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26748.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26747 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26747/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26747/comments | https://api.github.com/repos/huggingface/transformers/issues/26747/events | https://github.com/huggingface/transformers/pull/26747 | 1,939,311,828 | PR_kwDOCUB6oc5cmQ0N | 26,747 | Translating `en/internal` folder docs to Japanese 🇯🇵 | {
"login": "rajveer43",
"id": 64583161,
"node_id": "MDQ6VXNlcjY0NTgzMTYx",
"avatar_url": "https://avatars.githubusercontent.com/u/64583161?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rajveer43",
"html_url": "https://github.com/rajveer43",
"followers_url": "https://api.github.com/users/rajveer43/followers",
"following_url": "https://api.github.com/users/rajveer43/following{/other_user}",
"gists_url": "https://api.github.com/users/rajveer43/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rajveer43/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rajveer43/subscriptions",
"organizations_url": "https://api.github.com/users/rajveer43/orgs",
"repos_url": "https://api.github.com/users/rajveer43/repos",
"events_url": "https://api.github.com/users/rajveer43/events{/privacy}",
"received_events_url": "https://api.github.com/users/rajveer43/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
" @stevhliu and @MKhalusova",
"@stevhliu the docbuilder broke due to a misspelled file name.",
"@stevhliu it should pass the tests now. approve the workflow",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26747). All of your documentation changes will be reflected on that endpoint."
] | 1,697 | 1,697 | 1,697 | CONTRIBUTOR | null | # What does this PR do?
Add Japanese translation to `ja/internal`
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #26746
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26747/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26747/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26747",
"html_url": "https://github.com/huggingface/transformers/pull/26747",
"diff_url": "https://github.com/huggingface/transformers/pull/26747.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26747.patch",
"merged_at": 1697580081000
} |
https://api.github.com/repos/huggingface/transformers/issues/26746 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26746/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26746/comments | https://api.github.com/repos/huggingface/transformers/issues/26746/events | https://github.com/huggingface/transformers/issues/26746 | 1,939,268,145 | I_kwDOCUB6oc5zluIx | 26,746 | Translating `en/internal` folder docs to Japanese 🇯🇵 | {
"login": "rajveer43",
"id": 64583161,
"node_id": "MDQ6VXNlcjY0NTgzMTYx",
"avatar_url": "https://avatars.githubusercontent.com/u/64583161?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rajveer43",
"html_url": "https://github.com/rajveer43",
"followers_url": "https://api.github.com/users/rajveer43/followers",
"following_url": "https://api.github.com/users/rajveer43/following{/other_user}",
"gists_url": "https://api.github.com/users/rajveer43/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rajveer43/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rajveer43/subscriptions",
"organizations_url": "https://api.github.com/users/rajveer43/orgs",
"repos_url": "https://api.github.com/users/rajveer43/repos",
"events_url": "https://api.github.com/users/rajveer43/events{/privacy}",
"received_events_url": "https://api.github.com/users/rajveer43/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | closed | false | null | [] | [] | 1,697 | 1,697 | 1,697 | CONTRIBUTOR | null | <!--
Note: Please search to see if an issue already exists for the language you are trying to translate.
-->
Hi!
Let's bring the documentation to all the Japanese-speaking community!
Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list.
Some notes:
* Please translate using an informal tone (imagine you are talking with a friend about transformers 🤗).
* Please translate in a gender-neutral way.
* Add your translations to the folder called `ja` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source).
* Register your translation in `ja/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml).
* Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @stevhliu and @MKhalusova for review.
* 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/).
<!--
Keep on adding more as you go 🔥
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26746/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26746/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26745 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26745/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26745/comments | https://api.github.com/repos/huggingface/transformers/issues/26745/events | https://github.com/huggingface/transformers/issues/26745 | 1,939,141,886 | I_kwDOCUB6oc5zlPT- | 26,745 | Improving Typing | {
"login": "Siddhesh-Agarwal",
"id": 68057995,
"node_id": "MDQ6VXNlcjY4MDU3OTk1",
"avatar_url": "https://avatars.githubusercontent.com/u/68057995?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Siddhesh-Agarwal",
"html_url": "https://github.com/Siddhesh-Agarwal",
"followers_url": "https://api.github.com/users/Siddhesh-Agarwal/followers",
"following_url": "https://api.github.com/users/Siddhesh-Agarwal/following{/other_user}",
"gists_url": "https://api.github.com/users/Siddhesh-Agarwal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Siddhesh-Agarwal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Siddhesh-Agarwal/subscriptions",
"organizations_url": "https://api.github.com/users/Siddhesh-Agarwal/orgs",
"repos_url": "https://api.github.com/users/Siddhesh-Agarwal/repos",
"events_url": "https://api.github.com/users/Siddhesh-Agarwal/events{/privacy}",
"received_events_url": "https://api.github.com/users/Siddhesh-Agarwal/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"We use typings to make the code readable. We're not looking to have our code **_exactly_** typed if it has to come at the expense of readability. If the changes you have in mind adhere to this mindset, then yes, feel free to open a PR.",
"I totally get you, I'll be making some small changes here there (Hope to make things a little better)",
"Hey @LysandreJik, I was going through the code and came across this:\r\n\r\n```\r\nclass ClassInstantier(OrderedDict):\r\n def __getitem__(self, key):\r\n content = super().__getitem__(key)\r\n cls, kwargs = content if isinstance(content, tuple) else (content, {})\r\n return cls(**kwargs)\r\n```\r\n> [Source](https://github.com/huggingface/transformers/blob/5bfda28dd36b47912da1cdd0e2e83ab32c7408e4/src/transformers/activations.py#L206-L210)\r\n\r\nIt seems like transformers is only meant for [python>=3.8](https://github.com/huggingface/transformers/blob/5bfda28dd36b47912da1cdd0e2e83ab32c7408e4/setup.py#L149) and [Dictionary order is preserved since Python 3.7](https://docs.python.org/3/whatsnew/3.7.html). So, I am guessing we can replace the `OrdererDict` with the native `dict`.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,697 | 1,700 | 1,700 | NONE | null | ### Feature request
I was working on a project using Transformers and came across a few functions that had not been typed completely. Example:
1. [TrainingArguments](https://github.com/huggingface/transformers/blob/e1cec43415e72c9853288d4e9325b734d36dd617/src/transformers/training_args.py#L161)
2. [activations_tf.py](https://github.com/huggingface/transformers/blob/e1cec43415e72c9853288d4e9325b734d36dd617/src/transformers/activations_tf.py)
3. [convert_slow_tokenizer.py](https://github.com/huggingface/transformers/blob/e1cec43415e72c9853288d4e9325b734d36dd617/src/transformers/convert_slow_tokenizer.py)
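To make the request concrete, here is a hedged sketch of the kind of annotations meant by "typed completely"; the function below is a made-up example, not the actual transformers code referenced above:
```
from typing import Callable, Dict

# Hypothetical registry of activation functions keyed by name.
ACTIVATIONS: Dict[str, Callable[[float], float]] = {
    "identity": lambda x: x,
    "double": lambda x: 2 * x,
}

def get_activation(name: str) -> Callable[[float], float]:
    """Return the activation function registered under `name`.

    Fully annotated parameters and return types let editors surface
    signatures and catch misuse before runtime.
    """
    if name not in ACTIVATIONS:
        raise KeyError(f"Unknown activation: {name}")
    return ACTIVATIONS[name]
```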
### Motivation
I am a fan of thoroughly typed code since it makes my code editor's IntelliSense work better.
### Your contribution
Can I create a PR to solve this? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26745/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26745/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26744 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26744/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26744/comments | https://api.github.com/repos/huggingface/transformers/issues/26744/events | https://github.com/huggingface/transformers/issues/26744 | 1,939,108,872 | I_kwDOCUB6oc5zlHQI | 26,744 | ImportError: cannot import name 'is_npu_available' from 'accelerate.utils' | {
"login": "congchan",
"id": 18083731,
"node_id": "MDQ6VXNlcjE4MDgzNzMx",
"avatar_url": "https://avatars.githubusercontent.com/u/18083731?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/congchan",
"html_url": "https://github.com/congchan",
"followers_url": "https://api.github.com/users/congchan/followers",
"following_url": "https://api.github.com/users/congchan/following{/other_user}",
"gists_url": "https://api.github.com/users/congchan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/congchan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/congchan/subscriptions",
"organizations_url": "https://api.github.com/users/congchan/orgs",
"repos_url": "https://api.github.com/users/congchan/repos",
"events_url": "https://api.github.com/users/congchan/events{/privacy}",
"received_events_url": "https://api.github.com/users/congchan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, peft requires a higher accelerate version, please do `pip install accelerate -U`",
"> Hi, peft requires a higher accelerate version, please do `pip install accelerate -U`\r\n\r\nHi, I need this version because higher accelerate raise another error `ValueError: FSDP requires PyTorch >= 2.0.1`. \r\nYet for hardware compatibility reasons, I cannot easily change torch version right now. \r\nAnd I actually don't need peft at all. \r\nAny other solutions?",
"You can try fully uninstalling peft then from the ecosystem. ",
"Thanks"
] | 1,697 | 1,697 | 1,697 | NONE | null | ### System Info
transformers version 4.34
accelerate 0.20.3
Actually, I never call peft or use an NPU, so I am not sure why this import is necessary.
cc trainer: @muellerzr and @pacman100
```
Traceback (most recent call last):
Traceback (most recent call last):
Traceback (most recent call last):
Traceback (most recent call last):
File "/usr/local/conda/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1282, in _get_module
File "/usr/local/conda/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1282, in _get_module
File "/usr/local/conda/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1282, in _get_module
File "/usr/local/conda/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1282, in _get_module
return importlib.import_module("." + module_name, self.__name__)
return importlib.import_module("." + module_name, self.__name__)return importlib.import_module("." + module_name, self.__name__)
File "/usr/local/conda/lib/python3.9/importlib/__init__.py", line 127, in import_module
File "/usr/local/conda/lib/python3.9/importlib/__init__.py", line 127, in import_module
return importlib.import_module("." + module_name, self.__name__) File "/usr/local/conda/lib/python3.9/importlib/__init__.py", line 127, in import_module
File "/usr/local/conda/lib/python3.9/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
Traceback (most recent call last):
File "/usr/local/conda/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1282, in _get_module
File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
return importlib.import_module("." + module_name, self.__name__)
File "/usr/local/conda/lib/python3.9/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
File "<frozen importlib._bootstrap_external>", line 850, in exec_module
File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
File "/usr/local/conda/lib/python3.9/site-packages/transformers/trainer.py", line 200, in <module>
from peft import PeftModel
File "/usr/local/conda/lib/python3.9/site-packages/peft/__init__.py", line 22, in <module>
File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
from .auto import (
File "/usr/local/conda/lib/python3.9/site-packages/peft/auto.py", line 30, in <module>
from .config import PeftConfig
File "/usr/local/conda/lib/python3.9/site-packages/peft/config.py", line 24, in <module>
File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
from .utils import CONFIG_NAME, PeftType, TaskType
File "/usr/local/conda/lib/python3.9/site-packages/peft/utils/__init__.py", line 22, in <module>
from .other import (
File "/usr/local/conda/lib/python3.9/site-packages/peft/utils/other.py", line 24, in <module>
File "<frozen importlib._bootstrap_external>", line 850, in exec_module
from accelerate.utils import is_npu_available, is_xpu_available
ImportError: cannot import name 'is_npu_available' from 'accelerate.utils' (/usr/local/conda/lib/python3.9/site-packages/accelerate/utils/__init__.py)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/workdir/train/train.py", line 28, in <module>
from transformers import Trainer, TrainerCallback
File "<frozen importlib._bootstrap>", line 1055, in _handle_fromlist
File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
File "<frozen importlib._bootstrap_external>", line 850, in exec_module
Traceback (most recent call last):
File "/usr/local/conda/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1282, in _get_module
Traceback (most recent call last):
File "/usr/local/conda/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1282, in _get_module
Traceback (most recent call last):
File "/usr/local/conda/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1282, in _get_module
File "/usr/local/conda/lib/python3.9/site-packages/transformers/trainer.py", line 200, in <module>
from peft import PeftModel
File "/usr/local/conda/lib/python3.9/site-packages/peft/__init__.py", line 22, in <module>
return importlib.import_module("." + module_name, self.__name__)
from .auto import (
File "/usr/local/conda/lib/python3.9/importlib/__init__.py", line 127, in import_module
File "/usr/local/conda/lib/python3.9/site-packages/peft/auto.py", line 30, in <module>
from .config import PeftConfig
File "/usr/local/conda/lib/python3.9/site-packages/peft/config.py", line 24, in <module>
return importlib.import_module("." + module_name, self.__name__)
File "/usr/local/conda/lib/python3.9/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
from .utils import CONFIG_NAME, PeftType, TaskType
File "/usr/local/conda/lib/python3.9/site-packages/peft/utils/__init__.py", line 22, in <module>
return importlib.import_module("." + module_name, self.__name__)
File "/usr/local/conda/lib/python3.9/importlib/__init__.py", line 127, in import_module
from .other import (
File "/usr/local/conda/lib/python3.9/site-packages/peft/utils/other.py", line 24, in <module>
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
from accelerate.utils import is_npu_available, is_xpu_available
ImportError: cannot import name 'is_npu_available' from 'accelerate.utils' (/usr/local/conda/lib/python3.9/site-packages/accelerate/utils/__init__.py)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/workdir/train/train.py", line 28, in <module>
File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
from transformers import Trainer, TrainerCallback
File "<frozen importlib._bootstrap>", line 1055, in _handle_fromlist
File "/usr/local/conda/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1272, in __getattr__
module = self._get_module(self._class_to_module[name])
File "/usr/local/conda/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1284, in _get_module
File "/usr/local/conda/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1272, in __getattr__
raise RuntimeError(
RuntimeError: Failed to import transformers.trainer because of the following error (look up to see its traceback):
cannot import name 'is_npu_available' from 'accelerate.utils' (/usr/local/conda/lib/python3.9/site-packages/accelerate/utils/__init__.py)
module = self._get_module(self._class_to_module[name])
File "/usr/local/conda/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1284, in _get_module
File "/usr/local/conda/lib/python3.9/site-packages/transformers/trainer.py", line 200, in <module>
raise RuntimeError(
RuntimeError: Failed to import transformers.trainer because of the following error (look up to see its traceback):
cannot import name 'is_npu_available' from 'accelerate.utils' (/usr/local/conda/lib/python3.9/site-packages/accelerate/utils/__init__.py)
from peft import PeftModel
File "/usr/local/conda/lib/python3.9/site-packages/peft/__init__.py", line 22, in <module>
from .auto import (
File "/usr/local/conda/lib/python3.9/site-packages/peft/auto.py", line 30, in <module>
from .config import PeftConfig
File "/usr/local/conda/lib/python3.9/site-packages/peft/config.py", line 24, in <module>
from .utils import CONFIG_NAME, PeftType, TaskType
File "/usr/local/conda/lib/python3.9/site-packages/peft/utils/__init__.py", line 22, in <module>
from .other import (
File "/usr/local/conda/lib/python3.9/site-packages/peft/utils/other.py", line 24, in <module>
from accelerate.utils import is_npu_available, is_xpu_available
ImportError: cannot import name 'is_npu_available' from 'accelerate.utils' (/usr/local/conda/lib/python3.9/site-packages/accelerate/utils/__init__.py)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/workdir/train/train.py", line 28, in <module>
from transformers import Trainer, TrainerCallback
File "<frozen importlib._bootstrap>", line 1055, in _handle_fromlist
File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
File "<frozen importlib._bootstrap_external>", line 850, in exec_module
File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
File "/usr/local/conda/lib/python3.9/site-packages/transformers/trainer.py", line 200, in <module>
from peft import PeftModel
File "/usr/local/conda/lib/python3.9/site-packages/peft/__init__.py", line 22, in <module>
from .auto import (
File "/usr/local/conda/lib/python3.9/site-packages/peft/auto.py", line 30, in <module>
from .config import PeftConfig
File "/usr/local/conda/lib/python3.9/site-packages/peft/config.py", line 24, in <module>
from .utils import CONFIG_NAME, PeftType, TaskType
File "/usr/local/conda/lib/python3.9/site-packages/peft/utils/__init__.py", line 22, in <module>
from .other import (
File "/usr/local/conda/lib/python3.9/site-packages/peft/utils/other.py", line 24, in <module>
from accelerate.utils import is_npu_available, is_xpu_available
ImportError: cannot import name 'is_npu_available' from 'accelerate.utils' (/usr/local/conda/lib/python3.9/site-packages/accelerate/utils/__init__.py)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/workdir/train/train.py", line 28, in <module>
File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
from transformers import Trainer, TrainerCallback
File "<frozen importlib._bootstrap>", line 1055, in _handle_fromlist
File "/usr/local/conda/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1272, in __getattr__
module = self._get_module(self._class_to_module[name])
File "/usr/local/conda/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1284, in _get_module
File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
raise RuntimeError(
RuntimeError: Failed to import transformers.trainer because of the following error (look up to see its traceback):
cannot import name 'is_npu_available' from 'accelerate.utils' (/usr/local/conda/lib/python3.9/site-packages/accelerate/utils/__init__.py)
File "/usr/local/conda/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1272, in __getattr__
module = self._get_module(self._class_to_module[name])
File "/usr/local/conda/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1284, in _get_module
File "<frozen importlib._bootstrap_external>", line 850, in exec_module
raise RuntimeError(
RuntimeError: Failed to import transformers.trainer because of the following error (look up to see its traceback):
cannot import name 'is_npu_available' from 'accelerate.utils' (/usr/local/conda/lib/python3.9/site-packages/accelerate/utils/__init__.py)
File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
File "<frozen importlib._bootstrap_external>", line 850, in exec_module
File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
File "/usr/local/conda/lib/python3.9/site-packages/transformers/trainer.py", line 200, in <module>
from peft import PeftModel
File "/usr/local/conda/lib/python3.9/site-packages/peft/__init__.py", line 22, in <module>
from .auto import (
File "/usr/local/conda/lib/python3.9/site-packages/peft/auto.py", line 30, in <module>
from .config import PeftConfig
File "/usr/local/conda/lib/python3.9/site-packages/peft/config.py", line 24, in <module>
from .utils import CONFIG_NAME, PeftType, TaskType
File "/usr/local/conda/lib/python3.9/site-packages/peft/utils/__init__.py", line 22, in <module>
from .other import (
File "/usr/local/conda/lib/python3.9/site-packages/peft/utils/other.py", line 24, in <module>
from accelerate.utils import is_npu_available, is_xpu_available
ImportError: cannot import name 'is_npu_available' from 'accelerate.utils' (/usr/local/conda/lib/python3.9/site-packages/accelerate/utils/__init__.py)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/workdir/train/train.py", line 28, in <module>
File "/usr/local/conda/lib/python3.9/site-packages/transformers/trainer.py", line 200, in <module>
from transformers import Trainer, TrainerCallback
File "<frozen importlib._bootstrap>", line 1055, in _handle_fromlist
from peft import PeftModel
File "/usr/local/conda/lib/python3.9/site-packages/peft/__init__.py", line 22, in <module>
from .auto import (
File "/usr/local/conda/lib/python3.9/site-packages/peft/auto.py", line 30, in <module>
from .config import PeftConfig
File "/usr/local/conda/lib/python3.9/site-packages/peft/config.py", line 24, in <module>
from .utils import CONFIG_NAME, PeftType, TaskType
File "/usr/local/conda/lib/python3.9/site-packages/peft/utils/__init__.py", line 22, in <module>
from .other import (
File "/usr/local/conda/lib/python3.9/site-packages/peft/utils/other.py", line 24, in <module>
from accelerate.utils import is_npu_available, is_xpu_available
ImportError: cannot import name 'is_npu_available' from 'accelerate.utils' (/usr/local/conda/lib/python3.9/site-packages/accelerate/utils/__init__.py)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/workdir/train/train.py", line 28, in <module>
from transformers import Trainer, TrainerCallback
File "<frozen importlib._bootstrap>", line 1055, in _handle_fromlist
File "<frozen importlib._bootstrap_external>", line 850, in exec_module
File "/usr/local/conda/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1272, in __getattr__
module = self._get_module(self._class_to_module[name])
File "/usr/local/conda/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1284, in _get_module
File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
raise RuntimeError(
RuntimeError: Failed to import transformers.trainer because of the following error (look up to see its traceback):
cannot import name 'is_npu_available' from 'accelerate.utils' (/usr/local/conda/lib/python3.9/site-packages/accelerate/utils/__init__.py)
File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
File "/usr/local/conda/lib/python3.9/site-packages/transformers/trainer.py", line 200, in <module>
from peft import PeftModel
File "/usr/local/conda/lib/python3.9/site-packages/peft/__init__.py", line 22, in <module>
from .auto import (
File "/usr/local/conda/lib/python3.9/site-packages/peft/auto.py", line 30, in <module>
from .config import PeftConfig
File "/usr/local/conda/lib/python3.9/site-packages/peft/config.py", line 24, in <module>
from .utils import CONFIG_NAME, PeftType, TaskType
File "/usr/local/conda/lib/python3.9/site-packages/peft/utils/__init__.py", line 22, in <module>
from .other import (
File "/usr/local/conda/lib/python3.9/site-packages/peft/utils/other.py", line 24, in <module>
from accelerate.utils import is_npu_available, is_xpu_available
ImportError: cannot import name 'is_npu_available' from 'accelerate.utils' (/usr/local/conda/lib/python3.9/site-packages/accelerate/utils/__init__.py)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/workdir/train/train.py", line 28, in <module>
from transformers import Trainer, TrainerCallback
File "<frozen importlib._bootstrap>", line 1055, in _handle_fromlist
File "/usr/local/conda/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1272, in __getattr__
module = self._get_module(self._class_to_module[name])
File "/usr/local/conda/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1284, in _get_module
File "<frozen importlib._bootstrap_external>", line 850, in exec_module
raise RuntimeError(
RuntimeError: Failed to import transformers.trainer because of the following error (look up to see its traceback):
cannot import name 'is_npu_available' from 'accelerate.utils' (/usr/local/conda/lib/python3.9/site-packages/accelerate/utils/__init__.py)
File "/usr/local/conda/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1272, in __getattr__
module = self._get_module(self._class_to_module[name])
File "/usr/local/conda/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1284, in _get_module
File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
raise RuntimeError(
RuntimeError: Failed to import transformers.trainer because of the following error (look up to see its traceback):
cannot import name 'is_npu_available' from 'accelerate.utils' (/usr/local/conda/lib/python3.9/site-packages/accelerate/utils/__init__.py)
File "/usr/local/conda/lib/python3.9/site-packages/transformers/trainer.py", line 200, in <module>
from peft import PeftModel
File "/usr/local/conda/lib/python3.9/site-packages/peft/__init__.py", line 22, in <module>
from .auto import (
File "/usr/local/conda/lib/python3.9/site-packages/peft/auto.py", line 30, in <module>
from .config import PeftConfig
File "/usr/local/conda/lib/python3.9/site-packages/peft/config.py", line 24, in <module>
from .utils import CONFIG_NAME, PeftType, TaskType
File "/usr/local/conda/lib/python3.9/site-packages/peft/utils/__init__.py", line 22, in <module>
from .other import (
File "/usr/local/conda/lib/python3.9/site-packages/peft/utils/other.py", line 24, in <module>
from accelerate.utils import is_npu_available, is_xpu_available
ImportError: cannot import name 'is_npu_available' from 'accelerate.utils' (/usr/local/conda/lib/python3.9/site-packages/accelerate/utils/__init__.py)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/workdir/train/train.py", line 28, in <module>
from transformers import Trainer, TrainerCallback
File "<frozen importlib._bootstrap>", line 1055, in _handle_fromlist
File "/usr/local/conda/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1272, in __getattr__
module = self._get_module(self._class_to_module[name])
File "/usr/local/conda/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1284, in _get_module
raise RuntimeError(
RuntimeError: Failed to import transformers.trainer because of the following error (look up to see its traceback):
cannot import name 'is_npu_available' from 'accelerate.utils' (/usr/local/conda/lib/python3.9/site-packages/accelerate/utils/__init__.py)
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 3161) of binary: /usr/bin/python3
Traceback (most recent call last):
File "/usr/local/conda/lib/python3.9/runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/local/conda/lib/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/usr/local/conda/lib/python3.9/site-packages/torch/distributed/run.py", line 766, in <module>
main()
File "/usr/local/conda/lib/python3.9/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper
return f(*args, **kwargs)
File "/usr/local/conda/lib/python3.9/site-packages/torch/distributed/run.py", line 762, in main
run(args)
File "/usr/local/conda/lib/python3.9/site-packages/torch/distributed/run.py", line 753, in run
elastic_launch(
File "/usr/local/conda/lib/python3.9/site-packages/torch/distributed/launcher/api.py", line 132, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/usr/local/conda/lib/python3.9/site-packages/torch/distributed/launcher/api.py", line 246, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
train/train.py FAILED
------------------------------------------------------------
```
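A small diagnostic sketch (not part of the original report) that checks whether the installed accelerate exposes the symbols peft tries to import, without asserting any specific minimum version:
```
import accelerate
import accelerate.utils as accel_utils

# peft's utils module imports these two helpers from accelerate.utils (see the traceback above).
required = ("is_npu_available", "is_xpu_available")
missing = [name for name in required if not hasattr(accel_utils, name)]

if missing:
    print(f"accelerate {accelerate.__version__} is missing {missing}; "
          "either upgrade accelerate or uninstall peft so transformers.trainer does not import it")
else:
    print(f"accelerate {accelerate.__version__} looks compatible with the installed peft")
```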
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
A causal language modeling task with trainer and FSDP
### Expected behavior
Don't call `is_npu_available` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26744/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26744/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26743 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26743/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26743/comments | https://api.github.com/repos/huggingface/transformers/issues/26743/events | https://github.com/huggingface/transformers/issues/26743 | 1,939,014,104 | I_kwDOCUB6oc5zkwHY | 26,743 | Problem with eos_token_id not being used when add_eos_token=True is used | {
"login": "iliasmiraoui",
"id": 83364633,
"node_id": "MDQ6VXNlcjgzMzY0NjMz",
"avatar_url": "https://avatars.githubusercontent.com/u/83364633?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iliasmiraoui",
"html_url": "https://github.com/iliasmiraoui",
"followers_url": "https://api.github.com/users/iliasmiraoui/followers",
"following_url": "https://api.github.com/users/iliasmiraoui/following{/other_user}",
"gists_url": "https://api.github.com/users/iliasmiraoui/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iliasmiraoui/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iliasmiraoui/subscriptions",
"organizations_url": "https://api.github.com/users/iliasmiraoui/orgs",
"repos_url": "https://api.github.com/users/iliasmiraoui/repos",
"events_url": "https://api.github.com/users/iliasmiraoui/events{/privacy}",
"received_events_url": "https://api.github.com/users/iliasmiraoui/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"That is most probably because `normalized = True` for this added token. Thus the actual token that is known to the underlying fast tokenizer is `'β<|im_end|>'` which is why the `<|im_end|>` is not recognized and encoded as an unk token. ",
"I think it was fixed !!"
] | 1,697 | 1,697 | 1,697 | NONE | null | ### System Info
transformers==4.35.dev0
I have been playing around with the new Mistral models/finetunes. For training, I believe it is needed to add "add_eos_token=True" on the tokenizer thats being passed to the SFTrainer so that the model still predicts eos_token at the end of the sequence and it's better than including it in the "text" column. I didn't see a lot of docs on templates so want to check I am not inventing this right?
I have noticed that the behavior works well on the regular Mistral model but is completely off on some other models like the OpenOrca fine-tune. While the eos_token exists in the tokenizer and is correctly categorized for the prompt_template, the tokenizer always uses the unknown token instead when tokenizing and "add_eos_token=True" (see screenshot below) which is definitely not right and leads to problems with never-ending generation after finetunes.
<img width="889" alt="Screenshot 2023-10-11 at 9 28 22β―PM" src="https://github.com/huggingface/transformers/assets/83364633/b43a774f-b652-4654-ade3-92dc890660d4">
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
model_name = "Open-Orca/Mistral-7B-OpenOrca"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True,add_eos_token=True)
tokenizer("Test")
### Expected behavior
It should output `{'input_ids': [1, 3735, 32000], 'attention_mask': [1, 1, 1]} `instead of` {'input_ids': [1, 3735, 0], 'attention_mask': [1, 1, 1]}` since `tokenizer.eos_token_id` is 32000. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26743/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26743/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26742 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26742/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26742/comments | https://api.github.com/repos/huggingface/transformers/issues/26742/events | https://github.com/huggingface/transformers/issues/26742 | 1,938,793,759 | I_kwDOCUB6oc5zj6Uf | 26,742 | [New model] RT-DETR | {
"login": "rafaelpadilla",
"id": 31217453,
"node_id": "MDQ6VXNlcjMxMjE3NDUz",
"avatar_url": "https://avatars.githubusercontent.com/u/31217453?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rafaelpadilla",
"html_url": "https://github.com/rafaelpadilla",
"followers_url": "https://api.github.com/users/rafaelpadilla/followers",
"following_url": "https://api.github.com/users/rafaelpadilla/following{/other_user}",
"gists_url": "https://api.github.com/users/rafaelpadilla/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rafaelpadilla/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rafaelpadilla/subscriptions",
"organizations_url": "https://api.github.com/users/rafaelpadilla/orgs",
"repos_url": "https://api.github.com/users/rafaelpadilla/repos",
"events_url": "https://api.github.com/users/rafaelpadilla/events{/privacy}",
"received_events_url": "https://api.github.com/users/rafaelpadilla/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | open | false | null | [] | [
"I'll be working on porting this model."
] | 1,697 | 1,697 | null | CONTRIBUTOR | null | ### Model description
The paper "DETRs Beat YOLOs on Real-time Object Detection" proposes a Real-Time DEtection TRansformer (RT-Detr), a hybrid encoder and a transformer decoder for object detection tasks.
Repo has +380 stars and paper shows improvement in inference speed and AP in comparison with other models, outperforming YOLOv8 detectors in both speed and accuracy.
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
Paper: https://arxiv.org/abs/2304.08069
Code: https://github.com/lyuwenyu/RT-DETR/ | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26742/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26742/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26741 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26741/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26741/comments | https://api.github.com/repos/huggingface/transformers/issues/26741/events | https://github.com/huggingface/transformers/pull/26741 | 1,938,399,448 | PR_kwDOCUB6oc5cjIgm | 26,741 | Fix backward compatibility of Conversation | {
"login": "wdhorton",
"id": 13503072,
"node_id": "MDQ6VXNlcjEzNTAzMDcy",
"avatar_url": "https://avatars.githubusercontent.com/u/13503072?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wdhorton",
"html_url": "https://github.com/wdhorton",
"followers_url": "https://api.github.com/users/wdhorton/followers",
"following_url": "https://api.github.com/users/wdhorton/following{/other_user}",
"gists_url": "https://api.github.com/users/wdhorton/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wdhorton/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wdhorton/subscriptions",
"organizations_url": "https://api.github.com/users/wdhorton/orgs",
"repos_url": "https://api.github.com/users/wdhorton/repos",
"events_url": "https://api.github.com/users/wdhorton/events{/privacy}",
"received_events_url": "https://api.github.com/users/wdhorton/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26741). All of your documentation changes will be reflected on that endpoint."
] | 1,697 | 1,697 | 1,697 | CONTRIBUTOR | null |
# What does this PR do?
I ran into a case where an external library was depending on the `new_user_input` field of Conversation. https://github.com/SeldonIO/MLServer/blob/release/1.4.x/runtimes/huggingface/mlserver_huggingface/codecs/utils.py#L37
This field was deprecated as part of the refactor, but if `transformers` wants to maintain backwards compatibility for now (which is mentioned in a few comments) then there's a good argument for supporting it. Some comments referred to it as an "internal" property, but it didn't start with `_` as is Python convention, so I think it's reasonable that other libraries were referencing it directly.
It's not difficult to add it to the other supported backwards-compatible properties. In addition, the implementation of `past_user_inputs` didn't actually match the past behavior (it would contain the most recent message as well), so I updated that too.
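As a rough sketch of the kind of compatibility shim described here (the internals below are assumptions for illustration; the real `Conversation` class stores more state), the old attribute names can be exposed as read-only properties over the new message list:
```
class ConversationSketch:
    """Toy stand-in for the refactored Conversation, kept deliberately minimal."""

    def __init__(self):
        # New-style storage: a flat list of {"role": ..., "content": ...} dicts.
        self.messages = []

    def add_message(self, role, content):
        self.messages.append({"role": role, "content": content})

    @property
    def new_user_input(self):
        # Old API: the most recent user message.
        for message in reversed(self.messages):
            if message["role"] == "user":
                return message["content"]
        return None

    @property
    def past_user_inputs(self):
        # Old API: every user message *except* the most recent one.
        user_messages = [m["content"] for m in self.messages if m["role"] == "user"]
        return user_messages[:-1]
```
Keeping these as properties rather than stored fields means downstream code such as the MLServer codec linked above keeps working, while the single source of truth stays the new message list.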
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@Rocketknight1
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26741/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26741/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26741",
"html_url": "https://github.com/huggingface/transformers/pull/26741",
"diff_url": "https://github.com/huggingface/transformers/pull/26741.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26741.patch",
"merged_at": 1697109564000
} |
https://api.github.com/repos/huggingface/transformers/issues/26740 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26740/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26740/comments | https://api.github.com/repos/huggingface/transformers/issues/26740/events | https://github.com/huggingface/transformers/issues/26740 | 1,938,374,531 | I_kwDOCUB6oc5ziT-D | 26,740 | Cross lingual Summarization decode in wrong language | {
"login": "Mehrab-Hossain",
"id": 50637969,
"node_id": "MDQ6VXNlcjUwNjM3OTY5",
"avatar_url": "https://avatars.githubusercontent.com/u/50637969?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mehrab-Hossain",
"html_url": "https://github.com/Mehrab-Hossain",
"followers_url": "https://api.github.com/users/Mehrab-Hossain/followers",
"following_url": "https://api.github.com/users/Mehrab-Hossain/following{/other_user}",
"gists_url": "https://api.github.com/users/Mehrab-Hossain/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mehrab-Hossain/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mehrab-Hossain/subscriptions",
"organizations_url": "https://api.github.com/users/Mehrab-Hossain/orgs",
"repos_url": "https://api.github.com/users/Mehrab-Hossain/repos",
"events_url": "https://api.github.com/users/Mehrab-Hossain/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mehrab-Hossain/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hi @Mehrab-Hossain, thanks for raising an issue! \r\n\r\nThis is a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,697 | 1,702 | 1,702 | NONE | null | ### System Info
Colab : https://colab.research.google.com/drive/1C1EbkntltNtcVy7MJmRs1xo37pqmC6u7?usp=sharing
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I am trying to fine-tune the mT5-base model to generate Bengali summaries from English text, but during decoding the model always produces output in English rather than Bengali. How can I control, or give an instruction for, the target language?
Here is some of my code:
model_checkpoint1 = "google/mt5-small"
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint1, use_fast=False)
mt5_config = AutoConfig.from_pretrained(model_checkpoint1)
mt5_config.decoder_start_token_id = 250042
model = AutoModelForSeq2SeqLM.from_pretrained(model_checkpoint1, config=mt5_config)
custom_tokenizer_config = {
    "additional_special_tokens": ["β<extra_id_69>", "β<extra_id_57>", "β<extra_id_58>", "β<extra_id_0>"],
}
tokenizer.add_special_tokens(custom_tokenizer_config)
data preprocess
max_input_length = 1024
max_target_length = 128
prefix = "en to bn:"
prefix1 = "β<extra_id_57>"
def preprocess_function(examples):
    inputs = [prefix + doc for doc in examples["text"]]
    # Ensure the prefix is added to the "summary" field
    summaries = [prefix1 + summary for summary in examples["summary"]]
    print(summaries)
    model_inputs = tokenizer(inputs, max_length=max_input_length, truncation=True)
    # Set up the tokenizer for targets
    with tokenizer.as_target_tokenizer():
        labels = tokenizer(summaries, max_length=max_target_length, truncation=True)
    print(labels)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs
#compute metrics
import nltk
import numpy as np
nltk.download("punkt")
def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    decoded_preds = tokenizer.batch_decode(predictions, skip_special_tokens=True)
    # Replace -100 in the labels as we can't decode them.
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
    # Rouge expects a newline after each sentence
    decoded_preds = ["\n".join(nltk.sent_tokenize(pred.strip())) for pred in decoded_preds]
    decoded_labels = ["\n".join(nltk.sent_tokenize(label.strip())) for label in decoded_labels]
    print(decoded_preds)
    print(decoded_labels)
    result = metric.compute(predictions=decoded_preds, references=decoded_labels, use_stemmer=True)
    # Extract a few results
    result = {key: value.mid.fmeasure * 100 for key, value in result.items()}
    # Add mean generated length
    prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in predictions]
    result["gen_len"] = np.mean(prediction_lens)
    return {k: round(v, 4) for k, v in result.items()}
### Expected behavior

| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26740/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26740/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26739 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26739/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26739/comments | https://api.github.com/repos/huggingface/transformers/issues/26739/events | https://github.com/huggingface/transformers/pull/26739 | 1,938,270,075 | PR_kwDOCUB6oc5circL | 26,739 | fix resume_from_checkpoint bug | {
"login": "Jintao-Huang",
"id": 45290347,
"node_id": "MDQ6VXNlcjQ1MjkwMzQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/45290347?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Jintao-Huang",
"html_url": "https://github.com/Jintao-Huang",
"followers_url": "https://api.github.com/users/Jintao-Huang/followers",
"following_url": "https://api.github.com/users/Jintao-Huang/following{/other_user}",
"gists_url": "https://api.github.com/users/Jintao-Huang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Jintao-Huang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jintao-Huang/subscriptions",
"organizations_url": "https://api.github.com/users/Jintao-Huang/orgs",
"repos_url": "https://api.github.com/users/Jintao-Huang/repos",
"events_url": "https://api.github.com/users/Jintao-Huang/events{/privacy}",
"received_events_url": "https://api.github.com/users/Jintao-Huang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, it seems that citest has encountered a timeout error. It doesn't seem to be an error on my end. Can you help me take a look, please?",
"Hi, any chance you can rebuild/rebase from main? I think that will fix your failing tests. Thanks!",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26739). All of your documentation changes will be reflected on that endpoint."
] | 1,697 | 1,697 | 1,697 | CONTRIBUTOR | null | # What does this PR do?
Fixes a bug:
When `resume_from_checkpoint` is set to True, the trainer `state` is loaded from the checkpoint, including `self.state.best_model_checkpoint`. If, when saving a model during training, the best model is still the one that lives in the original `resume_from_checkpoint` directory, an exception is raised at `best_model_index = checkpoints_sorted.index(str(Path(self.state.best_model_checkpoint)))`:
ValueError: '/path/to/checkpoint-100' is not in list.
Fixes issue (https://github.com/huggingface/transformers/issues/22172).
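As a purely illustrative sketch (not the actual diff), the guard implied by this fix can be thought of as a standalone helper; the function name and signature below are invented for the example:

```
from pathlib import Path


def sort_checkpoints_keeping_best(checkpoints_sorted, best_model_checkpoint):
    """Bubble the best checkpoint toward the end of the list so rotation never deletes it.

    Guarded so that a best-checkpoint path inherited from a resumed run, which may not be
    among the current run's checkpoints, no longer triggers the ValueError above.
    """
    if best_model_checkpoint is None:
        return checkpoints_sorted
    best_model = str(Path(best_model_checkpoint))
    if best_model not in checkpoints_sorted:  # the kind of membership check described above
        return checkpoints_sorted
    best_model_index = checkpoints_sorted.index(best_model)
    for i in range(best_model_index, len(checkpoints_sorted) - 2):
        checkpoints_sorted[i], checkpoints_sorted[i + 1] = checkpoints_sorted[i + 1], checkpoints_sorted[i]
    return checkpoints_sorted


ckpts = ["out/checkpoint-10", "out/checkpoint-20"]
# Best checkpoint comes from a previous (resumed) run and is not in the list: no crash.
assert sort_checkpoints_keeping_best(ckpts, "/old/run/checkpoint-100") == ckpts
```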
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
- trainer: @muellerzr and @pacman100 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26739/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26739/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26739",
"html_url": "https://github.com/huggingface/transformers/pull/26739",
"diff_url": "https://github.com/huggingface/transformers/pull/26739.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26739.patch",
"merged_at": 1697462987000
} |
https://api.github.com/repos/huggingface/transformers/issues/26738 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26738/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26738/comments | https://api.github.com/repos/huggingface/transformers/issues/26738/events | https://github.com/huggingface/transformers/issues/26738 | 1,938,260,182 | I_kwDOCUB6oc5zh4DW | 26,738 | Each IBertLayer receives the same `hidden_states_scaling_factor` in `IBertEncoder.forward()` | {
"login": "mbekmyrz",
"id": 46756692,
"node_id": "MDQ6VXNlcjQ2NzU2Njky",
"avatar_url": "https://avatars.githubusercontent.com/u/46756692?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mbekmyrz",
"html_url": "https://github.com/mbekmyrz",
"followers_url": "https://api.github.com/users/mbekmyrz/followers",
"following_url": "https://api.github.com/users/mbekmyrz/following{/other_user}",
"gists_url": "https://api.github.com/users/mbekmyrz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mbekmyrz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mbekmyrz/subscriptions",
"organizations_url": "https://api.github.com/users/mbekmyrz/orgs",
"repos_url": "https://api.github.com/users/mbekmyrz/repos",
"events_url": "https://api.github.com/users/mbekmyrz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mbekmyrz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @kssteven418 as the designer of the model and porter of the code :)",
"thanks @LysandreJik for reply.\r\n\r\nhello @kssteven418, it seems original source code also passes the same scaling factor to each layer:\r\nhttps://github.com/kssteven418/I-BERT/blob/1b09c759d6aeb71312df9c6ef74fa268a87c934e/fairseq/modules/transformer_sentence_encoder.py#L304-L305\r\n\r\nI see that there is an `input_act` inside each `TransformerSentenceEncoderLayer` before applying `self_attn` which is different from huggingface implementation where it has this QuantAct at the end of the layer as `output_activation` in `IBertOutput`. Although, `input_act` in each layer still uses the same scaling factor to calculate quantized integer value: https://github.com/kssteven418/I-BERT/blob/1b09c759d6aeb71312df9c6ef74fa268a87c934e/fairseq/quantization/utils/quant_utils.py#L233\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,697 | 1,700 | 1,700 | NONE | null | **Problem/Question:**
Why does each layer receive the same `hidden_states_scaling_factor` in `IBertEncoder.forward()`? https://github.com/huggingface/transformers/blob/fc63914399b6f60512c720959f9182b02ae4a45c/src/transformers/models/ibert/modeling_ibert.py#L584-L590
In fact, `layer_output_scaling_factor` is not even passed as one of the outputs in `IBertLayer.forward()`: https://github.com/huggingface/transformers/blob/fc63914399b6f60512c720959f9182b02ae4a45c/src/transformers/models/ibert/modeling_ibert.py#L532-L537
**Solution:**
in IBertEncoder:
```
layer_outputs, hidden_states_scaling_factor = layer_module(
hidden_states,
hidden_states_scaling_factor,
attention_mask,
layer_head_mask,
output_attentions,
)
```
in IBertLayer:
```
return outputs, layer_output_scaling_factor
```
**Reasoning:**
Following the paper explanations in https://arxiv.org/abs/2101.01321, it seems that equations with quantized inputs should follow the below pattern:
$x_0$ - the real-valued input variable
$q_0$ - the integer-valued quantized input variable
$S_0$ - a scaling factor
$x_0 = S_0 q_0$ - according to the paper
$i$ - the layer index
for each layer: $x_i, S_i = Layer(x_{i-1}, S_{i-1})$
where inside a layer:
$q_{in} = x_{in} / S_{in}$ - get integer values of the layer inputs
$q_{out}, S_{out} = IntFunction(q_{in}, S_{in})$ - some function that performs integer arithmetic
$x_{out} = q_{out} * S_{out}$
and returns $x_{out}, S_{out}$
However, right now, according to the code in `modeling_ibert.py` (specifically `IBertLayer` and `IBertEncoder`), the equations look like this:
for each layer: $x_i = Layer(x_{i-1}, S_0)$ - no new scaling factor is taken; the same $S_0$ is passed to every layer
where inside a layer: $q_i = x_i / S_0$
This seems incorrect, because each layer receives the same scaling factor, namely the initial $S_0$.
Example with two layers, where each layer is just a Linear layer (or 'QuantLinear'):
given $x_0$, $S_0$ as inputs
Layer 1:
$x_1 = Layer(x_0, S_0)$
inside a layer:
$x_{out} = x_{in} * w_1$ - where $w_1$ denotes the real-valued weights; let $v$ and $F$ be the quantized weights and the weight scaling factor, respectively.
$x_{out} = q_{in} * S_{in} * v_1 * F_1$
$x_{out} = (q_{in} * v_1) * (S_{in} * F_1)$
$x_{out} = q_{out} * S_{out}$
$S_{out} = S_{in} * F_1$
layer outputs are:
$x_1 = (q_0 * v_1) * S_1$
$S_1 = S_0 * F_1$
Layer 2:
$x_2 = Layer(x_1, S_0)$
similarly layer outputs are:
$x_2 = (q_1 * v_2) * S_2$
$S_2 = S_0 * F_2$
Whereas $S_2$ should be $S_2 = S_1 * F_2$, i.e. the scaling factor should accumulate across layers.
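To make the argument concrete, here is a toy, purely illustrative sketch of the propagation being proposed; `int_linear` and `run_encoder` are invented names and not part of the I-BERT code:

```
# Toy illustration: each layer consumes the scaling factor produced by the
# previous layer, so S_i = S_{i-1} * F_i instead of always reusing S_0.
def int_linear(q_in, s_in, v, f):
    # integer-only linear-layer analogue: x_out = (q_in * v) * (s_in * f)
    return q_in * v, s_in * f


def run_encoder(q0, s0, quantized_weights, weight_scales):
    q, s = q0, s0
    for v, f in zip(quantized_weights, weight_scales):
        q, s = int_linear(q, s, v, f)  # scaling factor accumulates across layers
    return q, s


# Example: two layers with weight scale factors F_1 and F_2.
q, s = run_encoder(q0=10, s0=0.5, quantized_weights=[3, 2], weight_scales=[0.1, 0.2])
assert abs(s - 0.5 * 0.1 * 0.2) < 1e-12  # S_2 = S_0 * F_1 * F_2
```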
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26738/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26738/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26737 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26737/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26737/comments | https://api.github.com/repos/huggingface/transformers/issues/26737/events | https://github.com/huggingface/transformers/pull/26737 | 1,938,217,987 | PR_kwDOCUB6oc5cif29 | 26,737 | Fix doctest for `Blip2ForConditionalGeneration` | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,697 | 1,697 | 1,697 | COLLABORATOR | null | # What does this PR do?
`Blip2ForConditionalGeneration` doctest fails due to OOM. This PR restructures its docstring to make it pass doctesting. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26737/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26737/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26737",
"html_url": "https://github.com/huggingface/transformers/pull/26737",
"diff_url": "https://github.com/huggingface/transformers/pull/26737.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26737.patch",
"merged_at": 1697097668000
} |
https://api.github.com/repos/huggingface/transformers/issues/26736 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26736/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26736/comments | https://api.github.com/repos/huggingface/transformers/issues/26736/events | https://github.com/huggingface/transformers/issues/26736 | 1,938,066,494 | I_kwDOCUB6oc5zhIw- | 26,736 | update readme | {
"login": "Chanchal-D",
"id": 117251667,
"node_id": "U_kgDOBv0eUw",
"avatar_url": "https://avatars.githubusercontent.com/u/117251667?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Chanchal-D",
"html_url": "https://github.com/Chanchal-D",
"followers_url": "https://api.github.com/users/Chanchal-D/followers",
"following_url": "https://api.github.com/users/Chanchal-D/following{/other_user}",
"gists_url": "https://api.github.com/users/Chanchal-D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Chanchal-D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Chanchal-D/subscriptions",
"organizations_url": "https://api.github.com/users/Chanchal-D/orgs",
"repos_url": "https://api.github.com/users/Chanchal-D/repos",
"events_url": "https://api.github.com/users/Chanchal-D/events{/privacy}",
"received_events_url": "https://api.github.com/users/Chanchal-D/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Sorry you felt bored :(\r\nWhat would you remove?",
"There's a lot of extra theory that is of no use, I can make it short and interactive. Plus there are 200+ model architectures that are too long. only the key highlights should be mentioned.",
"For model architectures I get your point, the rest seems pretty much required no? What exactly would you remove?",
"yes, it is required, but could written in a short and concise way that is easy to understand.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,697 | 1,700 | 1,700 | NONE | null | ### Feature request
The existing README is too long. It should contain prerequisites and the license.
### Motivation
I felt bored when I found the readme too long.
### Your contribution
I will make the readme more concise and brief in an interactive way. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26736/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26736/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26735 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26735/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26735/comments | https://api.github.com/repos/huggingface/transformers/issues/26735/events | https://github.com/huggingface/transformers/pull/26735 | 1,937,765,406 | PR_kwDOCUB6oc5cg6iU | 26,735 | Update docker files to use `torch==2.1.0` | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,697 | 1,697 | 1,697 | COLLABORATOR | null | # What does this PR do?
Make daily CI use torch 2.1.0 🤗 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26735/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26735/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26735",
"html_url": "https://github.com/huggingface/transformers/pull/26735",
"diff_url": "https://github.com/huggingface/transformers/pull/26735.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26735.patch",
"merged_at": 1697034216000
} |
https://api.github.com/repos/huggingface/transformers/issues/26734 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26734/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26734/comments | https://api.github.com/repos/huggingface/transformers/issues/26734/events | https://github.com/huggingface/transformers/pull/26734 | 1,937,716,664 | PR_kwDOCUB6oc5cgvm_ | 26,734 | Revert #20715 | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"The import is missing from `src/transformers/utils/__init__.py` as well. Thanks for adding warnings!",
"put them back to `__init__` too",
"Merge as the failing test is irrelevant."
] | 1,697 | 1,697 | 1,697 | COLLABORATOR | null | # What does this PR do?
Fix #25948 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26734/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26734/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26734",
"html_url": "https://github.com/huggingface/transformers/pull/26734",
"diff_url": "https://github.com/huggingface/transformers/pull/26734.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26734.patch",
"merged_at": 1697035602000
} |
https://api.github.com/repos/huggingface/transformers/issues/26733 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26733/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26733/comments | https://api.github.com/repos/huggingface/transformers/issues/26733/events | https://github.com/huggingface/transformers/pull/26733 | 1,937,644,457 | PR_kwDOCUB6oc5cgfkC | 26,733 | Fix checkpoint path in `no_trainer` scripts | {
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,697 | 1,697 | 1,697 | CONTRIBUTOR | null | # What does this PR do?
This PR changes the print statement to print the right path in the `no_trainer` scripts when loading in a checkpoint
Fixes # (issue)
Solves https://github.com/huggingface/transformers/issues/25998
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26733/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26733/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26733",
"html_url": "https://github.com/huggingface/transformers/pull/26733",
"diff_url": "https://github.com/huggingface/transformers/pull/26733.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26733.patch",
"merged_at": 1697033787000
} |
https://api.github.com/repos/huggingface/transformers/issues/26732 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26732/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26732/comments | https://api.github.com/repos/huggingface/transformers/issues/26732/events | https://github.com/huggingface/transformers/issues/26732 | 1,937,561,069 | I_kwDOCUB6oc5zfNXt | 26,732 | Error while saving checkpoint during training | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hmmm we have very little visibility in the error due to your log of the error. Would it be possible to have it completely raise so as to have the traceback?\r\n\r\nAlso could you try installing from source to see if your problem is fixed? You can do so with `pip install git+https://github.com/huggingface/transformers`.\r\nThanks!",
"@LysandreJik Please find the full traceback below\r\n\r\n```\r\nTypeError Traceback (most recent call last)\r\nCell In[x], line 3\r\n 1 # Save the fine-tuned model\r\n----> 3 tokenizer.save_pretrained(\"tokenfile\")\r\n\r\nFile /3tb/share/anaconda3/envs/ak_env/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:2130, in PreTrainedTokenizerBase.save_pretrained(self, save_directory, legacy_format, filename_prefix, push_to_hub, **kwargs)\r\n 2128 write_dict = convert_added_tokens(self.special_tokens_map_extended, add_type_field=False)\r\n 2129 with open(special_tokens_map_file, \"w\", encoding=\"utf-8\") as f:\r\n-> 2130 out_str = json.dumps(write_dict, indent=2, sort_keys=True, ensure_ascii=False) + \"\\n\"\r\n 2131 f.write(out_str)\r\n 2132 logger.info(f\"Special tokens file saved in {special_tokens_map_file}\")\r\n\r\nFile /3tb/share/anaconda3/envs/ak_env/lib/python3.10/json/__init__.py:238, in dumps(obj, skipkeys, ensure_ascii, check_circular, allow_nan, cls, indent, separators, default, sort_keys, **kw)\r\n 232 if cls is None:\r\n 233 cls = JSONEncoder\r\n 234 return cls(\r\n 235 skipkeys=skipkeys, ensure_ascii=ensure_ascii,\r\n 236 check_circular=check_circular, allow_nan=allow_nan, indent=indent,\r\n 237 separators=separators, default=default, sort_keys=sort_keys,\r\n--> 238 **kw).encode(obj)\r\n\r\nFile /3tb/share/anaconda3/envs/ak_env/lib/python3.10/json/encoder.py:201, in JSONEncoder.encode(self, o)\r\n 199 chunks = self.iterencode(o, _one_shot=True)\r\n...\r\n 178 \"\"\"\r\n--> 179 raise TypeError(f'Object of type {o.__class__.__name__} '\r\n 180 f'is not JSON serializable')\r\n\r\nTypeError: Object of type property is not JSON serializable\r\n```\r\n\r\n**Temporary Fix** :wrench:\r\n\r\nIssue was happening when we save tokenizer while saving checkpoint. I was able to fix it by removing tokenizer parameter in trainer as below:\r\n\r\n```\r\ntrainer = SFTTrainer(\r\n model=model,\r\n train_dataset=dataset,\r\n peft_config=peft_config,\r\n dataset_text_field=\"text\",\r\n max_seq_length=800,\r\n //tokenizer=tokenizer,\r\n args=training_arguments,\r\n)\r\n```\r\n",
"cc @ArthurZucker ",
"Hey! I think this will be fixed by #26570! Will keep you updated",
"Hey @humza-sami could you try running your script with #26570? \r\ndoing something liek `gh pr checkout 26570` if you have installed from source should help ",
"Hi @ArthurZucker , I followed this:\r\n\r\n```\r\npip uninstall transformers\r\ngit clone https://github.com/huggingface/transformers.git\r\ncd transformers/\r\ngit fetch origin pull/26570/head:pull_26570\r\ngit checkout pull_26570\r\npip install .\r\n```\r\n\r\nStill when I save the tokenizer, error is same.\r\n\r\n```\r\nfrom transformers import (AutoModelForCausalLM,AutoTokenizer,TrainingArguments,BitsAndBytesConfig)\r\nMODEL_NAME = \"codellama/CodeLlama-7b-Instruct-hf\"\r\ntokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, add_special_tokens=False, add_eos_token=False, add_bos_token=False)\r\ntokenizer.pad_token = None\r\ntokenizer.save_pretrained(\"sample\")\r\n```\r\n\r\n**ERROR**\r\n```\r\n\r\nUsing pad_token, but it is not set yet.\r\nUsing pad_token, but it is not set yet.\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\nCell In[10], line 1\r\n----> 1 tokenizer.save_pretrained(\"sample\")\r\n\r\nFile /usr/lib/python3.8/site-packages/transformers/tokenization_utils_base.py:2435, in PreTrainedTokenizerBase.save_pretrained(self, save_directory, legacy_format, filename_prefix, push_to_hub, **kwargs)\r\n 2432 tokenizer_config.pop(\"special_tokens_map_file\", None)\r\n 2434 with open(tokenizer_config_file, \"w\", encoding=\"utf-8\") as f:\r\n-> 2435 out_str = json.dumps(tokenizer_config, indent=2, sort_keys=True, ensure_ascii=False) + \"\\n\"\r\n 2436 f.write(out_str)\r\n 2437 logger.info(f\"tokenizer config file saved in {tokenizer_config_file}\")\r\n\r\nFile /usr/lib/python3.8/json/__init__.py:234, in dumps(obj, skipkeys, ensure_ascii, check_circular, allow_nan, cls, indent, separators, default, sort_keys, **kw)\r\n 232 if cls is None:\r\n 233 cls = JSONEncoder\r\n--> 234 return cls(\r\n 235 skipkeys=skipkeys, ensure_ascii=ensure_ascii,\r\n 236 check_circular=check_circular, allow_nan=allow_nan, indent=indent,\r\n 237 separators=separators, default=default, sort_keys=sort_keys,\r\n 238 **kw).encode(obj)\r\n\r\nFile /usr/lib/python3.8/json/encoder.py:201, in JSONEncoder.encode(self, o)\r\n 199 chunks = self.iterencode(o, _one_shot=True)\r\n 200 if not isinstance(chunks, (list, tuple)):\r\n--> 201 chunks = list(chunks)\r\n 202 return ''.join(chunks)\r\n\r\nFile /usr/lib/python3.8/json/encoder.py:431, in _make_iterencode.<locals>._iterencode(o, _current_indent_level)\r\n 429 yield from _iterencode_list(o, _current_indent_level)\r\n 430 elif isinstance(o, dict):\r\n--> 431 yield from _iterencode_dict(o, _current_indent_level)\r\n 432 else:\r\n 433 if markers is not None:\r\n\r\nFile /usr/lib/python3.8/json/encoder.py:405, in _make_iterencode.<locals>._iterencode_dict(dct, _current_indent_level)\r\n 403 else:\r\n 404 chunks = _iterencode(value, _current_indent_level)\r\n--> 405 yield from chunks\r\n 406 if newline_indent is not None:\r\n 407 _current_indent_level -= 1\r\n\r\nFile /usr/lib/python3.8/json/encoder.py:438, in _make_iterencode.<locals>._iterencode(o, _current_indent_level)\r\n 436 raise ValueError(\"Circular reference detected\")\r\n 437 markers[markerid] = o\r\n--> 438 o = _default(o)\r\n 439 yield from _iterencode(o, _current_indent_level)\r\n 440 if markers is not None:\r\n\r\nFile /usr/lib/python3.8/json/encoder.py:179, in JSONEncoder.default(self, o)\r\n 160 def default(self, o):\r\n 161 \"\"\"Implement this method in a subclass such that it returns\r\n 162 a serializable object for ``o``, or calls the base implementation\r\n 163 (to raise a ``TypeError``).\r\n (...)\r\n 177 \r\n 178 
\"\"\"\r\n--> 179 raise TypeError(f'Object of type {o.__class__.__name__} '\r\n 180 f'is not JSON serializable')\r\n\r\nTypeError: Object of type method is not JSON serializable\r\n```",
"Bit strange, this worked for me",
"@ArthurZucker If possible can you share a test code snippet you are using which I can test with my code ?\r\nPlease see my simple code which is causing issue:\r\n```\r\nfrom transformers import AutoTokenizer\r\nMODEL_NAME = \"codellama/CodeLlama-7b-Instruct-hf\"\r\ntokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, add_special_tokens=False, add_eos_token=False, add_bos_token=False)\r\ntokenizer.pad_token = \"[PAD]\"\r\ntokenizer.save_pretrained(\"sample\")\r\n```\r\nIts giving me error. I am using latest 4.34.1v of transformers",
"Alright I can indee reproduce now, the `init_kwargs` include `add_special_tokens` which is a function as well as something we pass to the model usually to specify whether we want to add special tokens or not. It should not be saved as an init_kwargs / should be filtered out when we serialized. I'll push a fix soon",
"I'm still working on the PR π ",
"It's planned for this release! π€ One small test to fix and will be merged",
"Thanks for you patience @ghost (oups) now fixed"
] | 1,697 | 1,702 | 1,702 | NONE | null | ### System Info
- `transformers` version: 4.34.0
- Platform: Linux-5.4.0-162-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.18.0
- Safetensors version: 0.4.0
- Accelerate version: 0.23.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
I am training a CodeLlama model on a custom dataset. Training starts, but when it tries to save a checkpoint it raises the error below and stops training.
**ERROR:**
`2023-10-11 11:34:18,589 - ERROR - Error in Logs due to Object of type method is not JSON serializable
`
**CODE:**
```
import json
import torch
import pandas as pd
import datasets
from peft import LoraConfig,PeftModel
from transformers import (AutoModelForCausalLM,AutoTokenizer,TrainingArguments,BitsAndBytesConfig)
import transformers
from trl import SFTTrainer
import os
import logging
import sys
RANK = 16
LR = 1e-4
EPOCH = 10
BATCH = 11
output_dir = f"../results/10-10-2023/{RANK}_RANK--{LR}_LR--{EPOCH}_EPOCH--{BATCH}_BATCH/"
if not os.path.exists(output_dir):
# If the directory doesn't exist, create it
os.makedirs(output_dir)
print(f"Directory '{output_dir}' created.")
else:
print(f"Directory '{output_dir}' already exists.")
# Create a logger instance
logger = logging.getLogger()
logger.setLevel(logging.INFO)
# Create a formatter with the desired format
formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')
# Create a stream handler to output log messages to the console
stream_handler = logging.StreamHandler(sys.stdout)
stream_handler.setFormatter(formatter)
logger.addHandler(stream_handler)
# Create a file handler to log messages to a file
file_handler = logging.FileHandler(f'{output_dir}/trl-trainer-codellama.txt', encoding='utf-8') # Specify the file name here
file_handler.setFormatter(formatter)
logger.addHandler(file_handler)
console_handler = logging.StreamHandler(stream=sys.stdout)
# DEVICE = "cuda:0" if torch.cuda.is_available() else 'cpu'
MODEL_NAME = "./CodeLlama-7b-Instruct-HF"
# loading dataset
dataset = datasets.load_from_disk("../verilog-dataset/codellama_800L_74052E/")
# loading model
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME,use_safetensors=True,load_in_8bit=True,trust_remote_code=True,device_map='auto')
# loading tokenizer
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, add_special_tokens=False, add_eos_token=False, add_bos_token=False)
tokenizer.pad_token = "[PAD]"
# LORA Configuration
peft_config = LoraConfig(
lora_alpha=RANK*2,
lora_dropout=0.05,
r = RANK,
bias="none",
task_type = "CAUSAL_LM",
target_modules = ["q_proj", "v_proj","lm_head"]
)
training_arguments = TrainingArguments(
per_device_train_batch_size=BATCH,
gradient_accumulation_steps=2,
optim="paged_adamw_32bit",
learning_rate=LR,
fp16=True,
max_grad_norm=0.3,
num_train_epochs=EPOCH,
warmup_ratio=0.05,
logging_steps=5,
save_total_limit=100,
save_strategy="steps",
save_steps=2,
group_by_length=True,
output_dir=output_dir,
report_to="tensorboard",
save_safetensors=True,
lr_scheduler_type="cosine",
seed=42)
trainer = SFTTrainer(
model=model,
train_dataset=dataset,
peft_config=peft_config,
dataset_text_field="text",
max_seq_length=800,
tokenizer=tokenizer,
args=training_arguments,
)
try:
trainer.train()
except Exception as e:
logger.error(f"Error in Logs due to {e}")
```
### Expected behavior
I expect the model to continue training without stopping while checkpoints are being saved. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26732/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26732/timeline | completed | null | null |