url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | body | reactions | timeline_url | state_reason | draft | pull_request |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/27645 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27645/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27645/comments | https://api.github.com/repos/huggingface/transformers/issues/27645/events | https://github.com/huggingface/transformers/pull/27645 | 2,005,811,418 | PR_kwDOCUB6oc5gGj3r | 27,645 | [`DocString`] Support a revision in the docstring `add_code_sample_docstrings` to facilitate integrations | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for the review, but a PR in draft mode means it's a draft",
"oh no :sweat: missed the draft status somehow",
"No worries 🤗"
] | 1,700 | 1,700 | 1,700 | COLLABORATOR | null | # What does this PR do?
When PRs on the Hub are not merged yet, and sometimes for safety reasons, we can pin a revision in the docstring code samples that are added.
The CI seems to be enough to test this; dummy-testing the build with a Flax T5 model.
Example: https://moon-ci-docs.huggingface.co/docs/transformers/pr_27645/en/model_doc/albert#transformers.FlaxAlbertForMaskedLM.__call__.example | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27645/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27645/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27645",
"html_url": "https://github.com/huggingface/transformers/pull/27645",
"diff_url": "https://github.com/huggingface/transformers/pull/27645.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27645.patch",
"merged_at": 1700839806000
} |
https://api.github.com/repos/huggingface/transformers/issues/27644 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27644/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27644/comments | https://api.github.com/repos/huggingface/transformers/issues/27644/events | https://github.com/huggingface/transformers/issues/27644 | 2,005,627,044 | I_kwDOCUB6oc53i3Ck | 27,644 | Crash when running `examples/flax/question-answering` | {
"login": "DwarKapex",
"id": 11195921,
"node_id": "MDQ6VXNlcjExMTk1OTIx",
"avatar_url": "https://avatars.githubusercontent.com/u/11195921?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DwarKapex",
"html_url": "https://github.com/DwarKapex",
"followers_url": "https://api.github.com/users/DwarKapex/followers",
"following_url": "https://api.github.com/users/DwarKapex/following{/other_user}",
"gists_url": "https://api.github.com/users/DwarKapex/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DwarKapex/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DwarKapex/subscriptions",
"organizations_url": "https://api.github.com/users/DwarKapex/orgs",
"repos_url": "https://api.github.com/users/DwarKapex/repos",
"events_url": "https://api.github.com/users/DwarKapex/events{/privacy}",
"received_events_url": "https://api.github.com/users/DwarKapex/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Met the same issue with JAX v0.4.20."
] | 1,700 | 1,702 | 1,702 | NONE | null | ### System Info
- `transformers` version: 4.36.0.dev0
- Platform: Linux-5.15.0-88-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.0
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): 0.8.0 (gpu)
- Jax version: 0.4.21.dev20231121+g2efa5862a
- JaxLib version: 0.4.21.dev20231121
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@sanchit-gandhi @ArthurZucker @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Install the latest version of HF transformer:
```
$> git clone https://github.com/huggingface/transformers.git /opt/transformers
$> cd /opt/transformers
$> pip install -e .
```
2. Navigate to examples:
```
$> cd /opt/transformers/examples/flax/question-answering
```
3. Install requirements:
```
$> pip install -r requirements.txt
```
4. Run test:
```
$> python run_qa.py \
--model_name_or_path bert-base-uncased \
--dataset_name squad \
--do_train \
--do_eval \
--max_seq_length 384 \
--doc_stride 128 \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--per_device_train_batch_size 12 \
--output_dir ./bert-qa-squad \
--eval_steps 1000
```
5. Crash with the following error:
```
Traceback (most recent call last):
File "/opt/transformers/examples/flax/question-answering/run_qa.py", line 1095, in <module>
main()
File "/opt/transformers/examples/flax/question-answering/run_qa.py", line 900, in main
model = FlaxAutoModelForQuestionAnswering.from_pretrained(
File "/opt/transformers/src/transformers/models/auto/auto_factory.py", line 566, in from_pretrained
return model_class.from_pretrained(
File "/opt/transformers/src/transformers/modeling_flax_utils.py", line 902, in from_pretrained
model = cls(config, *model_args, _do_init=_do_init, **model_kwargs)
File "/opt/transformers/src/transformers/models/bert/modeling_flax_bert.py", line 786, in __init__
super().__init__(config, module, input_shape=input_shape, seed=seed, dtype=dtype, _do_init=_do_init)
File "/opt/transformers/src/transformers/modeling_flax_utils.py", line 219, in __init__
random_params = self.init_weights(self.key, input_shape)
File "/opt/transformers/src/transformers/models/bert/modeling_flax_bert.py", line 821, in init_weights
module_init_outputs = self.module.init(
File "/opt/transformers/src/transformers/models/bert/modeling_flax_bert.py", line 1572, in __call__
start_logits, end_logits = logits.split(self.config.num_labels, axis=-1)
AttributeError: 'ArrayImpl' object has no attribute 'split'. Did you mean: '_split'?
```
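The failure appears to come from recent JAX releases removing the `Array.split` method; below is a minimal sketch (not from `run_qa.py` — the shapes and values are illustrative assumptions) of the failing pattern and the module-level call that still works:
```python
# Illustrative only: shapes/values are assumptions, not taken from the script.
import jax.numpy as jnp

num_labels = 2                             # QA heads produce start/end logits
logits = jnp.zeros((1, 384, num_labels))   # (batch, seq_len, num_labels)

# Old pattern, fails on recent JAX because jax.Array no longer has a .split method:
# start_logits, end_logits = logits.split(num_labels, axis=-1)

# Equivalent module-level call with the same semantics:
start_logits, end_logits = jnp.split(logits, num_labels, axis=-1)
print(start_logits.shape, end_logits.shape)  # (1, 384, 1) (1, 384, 1)
```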
### Expected behavior
Expect no exception | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27644/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27644/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27643 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27643/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27643/comments | https://api.github.com/repos/huggingface/transformers/issues/27643/events | https://github.com/huggingface/transformers/pull/27643 | 2,005,474,441 | PR_kwDOCUB6oc5gFa96 | 27,643 | Simplify the implementation of jitter noise in moe models | {
"login": "jiangwangyi",
"id": 39762734,
"node_id": "MDQ6VXNlcjM5NzYyNzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/39762734?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiangwangyi",
"html_url": "https://github.com/jiangwangyi",
"followers_url": "https://api.github.com/users/jiangwangyi/followers",
"following_url": "https://api.github.com/users/jiangwangyi/following{/other_user}",
"gists_url": "https://api.github.com/users/jiangwangyi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiangwangyi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiangwangyi/subscriptions",
"organizations_url": "https://api.github.com/users/jiangwangyi/orgs",
"repos_url": "https://api.github.com/users/jiangwangyi/repos",
"events_url": "https://api.github.com/users/jiangwangyi/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiangwangyi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,700 | 1,700 | 1,700 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR simplifies the implementation of jitter noise in both gptsan_japanese and switch_transformers.
The former implementation is:
```Python
if self.jitter_noise > 0:
# Get the lower and upper bound of the uniform distribution
# Adapted from: https://stackoverflow.com/questions/44328530/how-to-get-a-uniform-distribution-in-a-range-r1-r2-in-pytorch
distrib_lower_bound = 1.0 - self.jitter_noise
distrib_upper_bound = 1.0 + self.jitter_noise
uniform_distrib = torch.rand(hidden_states.shape, device=hidden_states.device, dtype=self.dtype)
uniform_distrib = uniform_distrib * (distrib_lower_bound - distrib_upper_bound)
uniform_distrib = uniform_distrib + distrib_upper_bound
# Multiply the token inputs by the uniform distribution - adding some noise
hidden_states *= uniform_distrib
```
The simplified implementation is:
```Python
if self.jitter_noise > 0:
# Multiply the token inputs by the uniform distribution - adding some noise
hidden_states *= torch.empty_like(hidden_states).uniform_(1.0 - self.jitter_noise, 1.0 + self.jitter_noise)
```
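As a quick sanity check (not part of this PR), the one-liner samples from the same uniform range `[1 - jitter_noise, 1 + jitter_noise]` as the multi-step version:
```python
# Standalone check that the simplified call stays within the expected range.
import torch

jitter_noise = 0.1
hidden_states = torch.randn(8, 32)

noise = torch.empty_like(hidden_states).uniform_(1.0 - jitter_noise, 1.0 + jitter_noise)
assert noise.min() >= 1.0 - jitter_noise
assert noise.max() <= 1.0 + jitter_noise

hidden_states = hidden_states * noise  # same effect as `hidden_states *= noise` above
```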
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
cc @ArthurZucker @younesbelkada
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27643/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27643/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27643",
"html_url": "https://github.com/huggingface/transformers/pull/27643",
"diff_url": "https://github.com/huggingface/transformers/pull/27643.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27643.patch",
"merged_at": 1700650180000
} |
https://api.github.com/repos/huggingface/transformers/issues/27642 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27642/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27642/comments | https://api.github.com/repos/huggingface/transformers/issues/27642/events | https://github.com/huggingface/transformers/issues/27642 | 2,005,472,954 | I_kwDOCUB6oc53iRa6 | 27,642 | difference in tokenization - bert-base-multilingual-cased vs uncased | {
"login": "dsplog",
"id": 5105387,
"node_id": "MDQ6VXNlcjUxMDUzODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5105387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dsplog",
"html_url": "https://github.com/dsplog",
"followers_url": "https://api.github.com/users/dsplog/followers",
"following_url": "https://api.github.com/users/dsplog/following{/other_user}",
"gists_url": "https://api.github.com/users/dsplog/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dsplog/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dsplog/subscriptions",
"organizations_url": "https://api.github.com/users/dsplog/orgs",
"repos_url": "https://api.github.com/users/dsplog/repos",
"events_url": "https://api.github.com/users/dsplog/events{/privacy}",
"received_events_url": "https://api.github.com/users/dsplog/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey 🤗 \r\n1. If you want to know the actual tokenization, you should be using:\r\n```python\r\n>>> text = 'hello my dear'\r\n>>> \r\n>>> from transformers import BertTokenizer\r\n>>> bert_c_tok = BertTokenizer.from_pretrained('bert-base-multilingual-cased')\r\n>>> bert_c_tok.tokenize(text)\r\n['hell', '##o', 'my', 'dea', '##r']\r\n```\r\n2. These two tokenizers are different. Their pre-processing is different, thus the tokens they produce during training do not have the same distribution. This means for example that `Hello` and `hello` are the same for the uncased model, thus they are more frequent, and thus not split into different tokens! ",
"ah, thanks for the clarification. will do some reading to understand the pre-processing fine print. if you have any links to share, it will be helpful. "
] | 1,700 | 1,700 | 1,700 | NONE | null | ### System Info
- `transformers` version: 4.33.3
- Platform: Linux-5.15.0-88-generic-x86_64-with-glibc2.17
- Python version: 3.8.18
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.4.0
- Accelerate version: 0.23.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
>>> text = 'hello my dear'
>>>
>>> from transformers import BertTokenizer
>>> bert_c_tok = BertTokenizer.from_pretrained('bert-base-multilingual-cased')
>>>
>>> enc_c_tok = bert_c_tok.encode(text)
>>> print([bert_c_tok.decode([x]) for x in enc_c_tok])
['[CLS]', 'hell', '##o', 'my', 'dea', '##r', '[SEP]']
>>>
>>> bert_unc_tok = BertTokenizer.from_pretrained('bert-base-multilingual-uncased')
>>> enc_unc_tok = bert_unc_tok.encode(text)
>>> print([bert_unc_tok.decode([x]) for x in enc_unc_tok])
['[CLS]', 'hello', 'my', 'dear', '[SEP]']
```
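As an aside (not part of the original report), one quick way to see where the split comes from is to check vocabulary membership directly; the expected outputs in the comments below are assumptions to be verified locally:
```python
# Illustrative check: WordPiece falls back to sub-pieces ("hell" + "##o") when
# the exact, case-sensitive token is missing from the vocabulary.
from transformers import BertTokenizer

cased = BertTokenizer.from_pretrained('bert-base-multilingual-cased')
uncased = BertTokenizer.from_pretrained('bert-base-multilingual-uncased')

print('hello' in cased.vocab)    # expected: False (lowercase form absent from the cased vocab)
print('hello' in uncased.vocab)  # expected: True (uncased pre-processing lowercases everything)
print(cased.tokenize('hello'), uncased.tokenize('hello'))
```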
### Expected behavior
I was not expecting the BERT cased tokenizer to split "hello" into "hell" and "##o". | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27642/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27642/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27641 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27641/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27641/comments | https://api.github.com/repos/huggingface/transformers/issues/27641/events | https://github.com/huggingface/transformers/pull/27641 | 2,005,118,209 | PR_kwDOCUB6oc5gEONj | 27,641 | [docs] Quantization | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I just saw this today @stevhliu, but I think that the table of contents is not displayed correctly. It would be great to fix this since the doc is quite big. Thanks! Feel free to merge the PR after that! \r\n\r\n<img width=\"1287\" alt=\"Screenshot 2023-11-27 at 12 18 43 PM\" src=\"https://github.com/huggingface/transformers/assets/57196510/13e4248e-dd1b-43c2-874d-ed276e84244a\">\r\n",
"Thanks y'all, pinging @ArthurZucker for a final review! The PR preview is still building I think, but the side table of contents should be fixed now which was caused by starting with the wrong section level in `## Quantization`"
] | 1,700 | 1,701 | 1,701 | MEMBER | null | This PR resolves #27575 to clean up the Quantization API docs and create a new section for the existing content in the Performance and scalability section for more visibility. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27641/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27641/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27641",
"html_url": "https://github.com/huggingface/transformers/pull/27641",
"diff_url": "https://github.com/huggingface/transformers/pull/27641.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27641.patch",
"merged_at": 1701189707000
} |
https://api.github.com/repos/huggingface/transformers/issues/27640 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27640/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27640/comments | https://api.github.com/repos/huggingface/transformers/issues/27640/events | https://github.com/huggingface/transformers/issues/27640 | 2,004,924,110 | I_kwDOCUB6oc53gLbO | 27,640 | Allow passing 2D attention mask | {
"login": "UniverseFly",
"id": 46997596,
"node_id": "MDQ6VXNlcjQ2OTk3NTk2",
"avatar_url": "https://avatars.githubusercontent.com/u/46997596?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/UniverseFly",
"html_url": "https://github.com/UniverseFly",
"followers_url": "https://api.github.com/users/UniverseFly/followers",
"following_url": "https://api.github.com/users/UniverseFly/following{/other_user}",
"gists_url": "https://api.github.com/users/UniverseFly/gists{/gist_id}",
"starred_url": "https://api.github.com/users/UniverseFly/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/UniverseFly/subscriptions",
"organizations_url": "https://api.github.com/users/UniverseFly/orgs",
"repos_url": "https://api.github.com/users/UniverseFly/repos",
"events_url": "https://api.github.com/users/UniverseFly/events{/privacy}",
"received_events_url": "https://api.github.com/users/UniverseFly/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | [
"Hey, the model's forward already supports passing a 2d attention mask, it is just expanded to 4d because that is the format required by the attention implementation. \r\nWould you mind elaborating on what you cannot currently do? (Might be related to #27539?) ",
"> Hey, the model's forward already supports passing a 2d attention mask, it is just expanded to 4d because that is the format required by the attention implementation.\r\n> Would you mind elaborating on what you cannot currently do? (Might be related to #27539?)\r\n\r\nYeah, I might not have made it clear. The current \"2D\"s are `[batch_size, num_tokens]`. What I suggested was `[batch_size, num_tokens, num_tokens]` so we can have a matrix for each batch that explicitly defines what each token should attend to. https://github.com/huggingface/transformers/pull/27539 seems relevant",
"Just chiming in, here is some more context (also very interested in this feature). From what I understand, this is not trivial to implement in general...\r\n\r\nAs one current example, the [axolotl](https://github.com/OpenAccess-AI-Collective/axolotl/tree/main) finetuning harness implements efficient sample packing with correct block diagonal attention masking through [a series of monkey patches](https://github.com/OpenAccess-AI-Collective/axolotl/tree/main/src/axolotl/monkeypatch) for the underlying huggingface model definitions for a few of the very popular models like llama and mistral. Though I have not looked through the code in detail, I believe it leverages the fact that the flash attention api supports the masking required to implement this scheme.\r\n\r\nIt is relevant for efficient finetuning (the reason it's incorporated into axolotl), and general wisdom (and whispers from inside large corps) suggests that this type of block diagonal masking is better for large scale training code.\r\n\r\n(https://github.com/huggingface/transformers/pull/27539 is relevant, but it looks like the focus may be on the beam search/speculative decoding use case, not this slightly more general use case. Also here's a relevant hf forum post https://discuss.huggingface.co/t/the-correct-attention-mask-for-examples-packing/52909/2)"
] | 1,700 | 1,701 | null | NONE | null | ### Feature request
Allow passing a 2D attention mask in `model.forward`.
### Motivation
With this feature, it would be much easier to avoid cross-context contamination during pretraining and supervised finetuning when packing the sequences together for more efficient training.
Here is an example use case discussed in https://github.com/huggingface/trl/issues/805:

### Your contribution
Upon investigation into the source code, I found the current logic of initializing attention masks is mostly a fixed code snippet encoded in each model:
```python
if getattr(self.config, "_flash_attn_2_enabled", False):
# 2d mask is passed through the layers
attention_mask = attention_mask if (attention_mask is not None and 0 in attention_mask) else None
else:
# 4d mask is passed through the layers
attention_mask = _prepare_4d_causal_attention_mask(
attention_mask, (batch_size, seq_length), inputs_embeds, past_key_values_length
)
```
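For illustration only (not from the issue), below is a minimal sketch of the kind of per-example mask this request is about: a `[batch_size, num_tokens, num_tokens]` boolean mask that keeps attention causal within each packed segment and blocks it across segments:
```python
# Hypothetical block-diagonal causal mask for two sequences of lengths 3 and 2
# packed into a single row of length 5.
import torch

seq_lengths = [3, 2]
total_len = sum(seq_lengths)
mask = torch.zeros(1, total_len, total_len, dtype=torch.bool)  # [batch, tokens, tokens]

offset = 0
for length in seq_lengths:
    causal_block = torch.tril(torch.ones(length, length, dtype=torch.bool))
    mask[0, offset:offset + length, offset:offset + length] = causal_block
    offset += length

print(mask.int())  # 1s only inside each segment's lower triangle
```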
Enabling this behavior may require hacking into each model. I should be able to handle part of them and submit a draft PR. But before that, I want to know whether this feature request is reasonable. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27640/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27640/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27639 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27639/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27639/comments | https://api.github.com/repos/huggingface/transformers/issues/27639/events | https://github.com/huggingface/transformers/pull/27639 | 2,004,901,114 | PR_kwDOCUB6oc5gDeqc | 27,639 | Add `TimedTextStreamer` for measuring per-token latency using `generate()` | {
"login": "danielkorat",
"id": 32893314,
"node_id": "MDQ6VXNlcjMyODkzMzE0",
"avatar_url": "https://avatars.githubusercontent.com/u/32893314?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/danielkorat",
"html_url": "https://github.com/danielkorat",
"followers_url": "https://api.github.com/users/danielkorat/followers",
"following_url": "https://api.github.com/users/danielkorat/following{/other_user}",
"gists_url": "https://api.github.com/users/danielkorat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/danielkorat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/danielkorat/subscriptions",
"organizations_url": "https://api.github.com/users/danielkorat/orgs",
"repos_url": "https://api.github.com/users/danielkorat/repos",
"events_url": "https://api.github.com/users/danielkorat/events{/privacy}",
"received_events_url": "https://api.github.com/users/danielkorat/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @danielkorat! Thank you for opening the PR!\r\n\r\nI haven't seen other requests to measure per-token time, so I'm assuming a limited number of users would benefit from it. Moreover, I have in my list of TODOs for `generate()` to refactor how streaming works: instead of printing/emitting tokens, `generate()` would become an iterator -- from which per-token measurements would become trivial.\r\n\r\nFor the reasons listed above, I'm deciding not to merge this PR (limited benefits, short/medium time of existence, plus maintenance burden).\r\n\r\nIf other users feel strongly about this PR, please react to this comment: happy to reconsider when we reach 10 reactions 🤗 ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,700 | 1,704 | 1,704 | CONTRIBUTOR | null | # What does this PR do?
Adds a `TimedTextStreamer` for measuring per-token latency using `generate()`.
The idea behind this implementation is a non-intrusive method to measure per-token generation latency, which does not modify code in `generation/utils.py`, making it compatible with future versions of `utils.py` as well.
By measuring per-token latency, one can deduce the 1st-token latency vs. the 2nd-token latency (i.e., all tokens after the 1st), an important distinction when benchmarking decoder-only models.
## Usage
```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer, TimedTextStreamer
>>> tok = AutoTokenizer.from_pretrained("gpt2")
>>> model = AutoModelForCausalLM.from_pretrained("gpt2")
>>> inputs = tok(["An increasing sequence: one, two, three, four, five, six, seven,"], return_tensors="pt")
>>> # Option 1: Do not print the generated text (more accurate measurement)
>>> streamer = TimedTextStreamer()
>>> _ = model.generate(**inputs, streamer=streamer, max_new_tokens=5)
>>> streamer.get_token_times()
[146.61671698559076, 91.94262302480638, 16.620049951598048, 14.585152035579085, 14.050466008484364]
>>> # Option 2: Print the generated text as well
>>> streamer = TimedTextStreamer(tokenizer=tok)
>>> _ = model.generate(**inputs, streamer=streamer, max_new_tokens=5)
eight, nine, ten
>>> streamer.get_token_times()
[162.81271493062377, 18.371607991866767, 15.906393993645906, 14.754525036551058, 14.49775299988687]
```
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
cc @gante @younesbelkada
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27639/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 3,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27639/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27639",
"html_url": "https://github.com/huggingface/transformers/pull/27639",
"diff_url": "https://github.com/huggingface/transformers/pull/27639.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27639.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27638 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27638/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27638/comments | https://api.github.com/repos/huggingface/transformers/issues/27638/events | https://github.com/huggingface/transformers/pull/27638 | 2,004,795,258 | PR_kwDOCUB6oc5gDHWH | 27,638 | translate internal folder files to chinese | {
"login": "jiaqiw09",
"id": 60021713,
"node_id": "MDQ6VXNlcjYwMDIxNzEz",
"avatar_url": "https://avatars.githubusercontent.com/u/60021713?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiaqiw09",
"html_url": "https://github.com/jiaqiw09",
"followers_url": "https://api.github.com/users/jiaqiw09/followers",
"following_url": "https://api.github.com/users/jiaqiw09/following{/other_user}",
"gists_url": "https://api.github.com/users/jiaqiw09/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiaqiw09/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiaqiw09/subscriptions",
"organizations_url": "https://api.github.com/users/jiaqiw09/orgs",
"repos_url": "https://api.github.com/users/jiaqiw09/repos",
"events_url": "https://api.github.com/users/jiaqiw09/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiaqiw09/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@stevhliu\r\n\r\nHi, here is another PR. I will fix the merge conflict later.\r\n\r\nAnd for the file `trainer_utils.md`, I think the subtitle \"Distributed Evaluation\" is repeated. Would you mind providing a useful subtitle? I can update both the en file and the zh file in this PR.",
"@stevhliu \r\n\r\nHi, thanks for your comments and reviews. I just updated based on these reviews, and I also updated `trainer_utils` in the `en` folder. \r\n\r\nI have been a little busy over the past few weeks, but now I think I will be back for this work again.\r\n\r\nBest",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27638). All of your documentation changes will be reflected on that endpoint.",
"Thanks again, and no worries about being busy. Feel free to take the time you need 🤗 "
] | 1,700 | 1,701 | 1,701 | CONTRIBUTOR | null | # What does this PR do?
Part of #26803
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? _not necessary_
## Who can review?
@stevhliu | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27638/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27638/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27638",
"html_url": "https://github.com/huggingface/transformers/pull/27638",
"diff_url": "https://github.com/huggingface/transformers/pull/27638.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27638.patch",
"merged_at": 1701713068000
} |
https://api.github.com/repos/huggingface/transformers/issues/27637 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27637/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27637/comments | https://api.github.com/repos/huggingface/transformers/issues/27637/events | https://github.com/huggingface/transformers/pull/27637 | 2,004,763,823 | PR_kwDOCUB6oc5gDAdF | 27,637 | Added test cases for rembert refering to albert and reformer test_tok⦠| {
"login": "nileshkokane01",
"id": 8201108,
"node_id": "MDQ6VXNlcjgyMDExMDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/8201108?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nileshkokane01",
"html_url": "https://github.com/nileshkokane01",
"followers_url": "https://api.github.com/users/nileshkokane01/followers",
"following_url": "https://api.github.com/users/nileshkokane01/following{/other_user}",
"gists_url": "https://api.github.com/users/nileshkokane01/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nileshkokane01/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nileshkokane01/subscriptions",
"organizations_url": "https://api.github.com/users/nileshkokane01/orgs",
"repos_url": "https://api.github.com/users/nileshkokane01/repos",
"events_url": "https://api.github.com/users/nileshkokane01/events{/privacy}",
"received_events_url": "https://api.github.com/users/nileshkokane01/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I do not know why tests_tf, tests_torch, run_tests and tests_jax is failing. \r\nCan anyone let me know? I can work on it to resolve.",
"> make sure the mask_token is initialized the same way as an added token for both the fast and the slow tokenizers\r\n\r\nYou are right. [This](https://github.com/huggingface/transformers/blob/main/src/transformers/models/rembert/tokenization_rembert_fast.py#L120) is absent in slow token for rembert.\r\n\r\nCommenting that line I am not getting any failed cases. \r\n\r\nHow to tackle such test cases for which the mask token are initialized differently?\r\nShould the constructor be passed with a appropriate mask_token?\r\n\r\r\n\r\n\r\n` self.assertDictEqual(EXPECTED_ADDED_TOKENS_DECODER, tokenizer_fasadded_tokens_decoder)\r\nE AssertionError: {0: A[454 chars]trip=False, single_word=False, noalized=Fals[123 chars]rue)} != {0: A[454 chars]trip=True, single_word=False, normalized=False22 chars]rue)}\r\nE Diff is 870 characters long. Set self.maxDiff to None to see it.`",
"I overrided test_added_tokens_serialization to clear the following error. \r\nDo not know if its a right way. \r\n\r\n\r\n`\r\n AssertionError: {0: A[454 chars]trip=False, single_word=False, normalized=Fals[123 chars]rue)} != {0: A[454 chars]trip=True, single_word=False, normalized=False[122 chars]rue)}E Diff is 870 characters long. Set self.maxDiff to None to see it.\r\n\r\ntests\\test_tokenization_common.py:4128: AssertionError\r\n\r\n\r\n`\r\n\r\n",
"It's usually a bit tricky!",
"Hi @ArthurZucker ,\r\n\r\nThanks for your suggestion. \r\n\r\nI dig in more deeper and found out that [this](https://github.com/huggingface/transformers/blob/main/tests/test_tokenization_common.py#L4084) line for slow tokens is having AddedToken for [MASK] with lstrip= False. \r\n\r\nHowever, [ this](https://github.com/huggingface/transformers/blob/main/tests/test_tokenization_common.py#L4123) will give the instance of Fast Token with lstrip = True as it invokes the constructor RemBertTokenizerFast where lstrip = True explicitly. \r\n\r\nThe above issue can be fixed and all test Pass- by setting [this](https://github.com/huggingface/transformers/blob/main/src/transformers/models/rembert/tokenization_rembert_fast.py#L120) **lstrip= False** and that requires change in file outside test_tokenization_rembert.py , however , the comment above that line gives the reason as to why we have lstrip = True. \r\n\r\n> Mask token behave like a normal word, i.e. include the space before it\r\n\r\nDo let me know, I can send the changes accordingly.",
"Hey! No, I'm pretty sure this should be fixed by setting the `mask_token` using :\r\n\r\n```python\r\n # Mask token behave like a normal word, i.e. include the space before it\r\n mask_token = AddedToken(mask_token, lstrip=True, rstrip=False) if isinstance(mask_token, str) else mask_token\r\n```\r\nin the slow tokenizer. Did you try that? ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27637). All of your documentation changes will be reflected on that endpoint.",
"That isn't fixing the issue because it requires mask_token to be string.\r\n\r\nWhen you call [from_pretrained](https://github.com/huggingface/transformers/blob/main/tests/test_tokenization_common.py#L4083) all the 'added_token_decoder' gets converted to AddedToken instance from [here](https://github.com/huggingface/transformers/blob/main/src/transformers/tokenization_utils_base.py#L2183) based on the config file .\r\n\r\nTherefore, the slow token will have lstrip as False, despite the line added as it checks for instance as string.",
"No. mask_token does not need to be a string it has to be an AddedToken, otherwise it's casted to addedToken when adding it to the vocab if it's not part of the vocab. Anyway what I am saying is that this test works for the ~100 or so tokenizers, to skip it we need a good reason like a very edgy case or whatever, otherwise it means that loading the tokenizers from slow to fast, from the hub etc does not produce correct results. ",
"π€ ",
"Hi,\r\n\r\n> No. mask_token does not need to be a string it has to be an AddedToken\r\n\r\nIf this is the case, the following line will fix the issue. I just verified it. \r\n\r\n` # Mask token behave like a normal word, i.e. include the space before it\r\n mask_token = AddedToken('[MASK]', lstrip=True, rstrip=False, normalized=False) `\r\n\r\nIf the above change to slow token is okay, I'll send a patch, otherwise we have to rely on 'finding' a strong case to skip the the slow->fast test.",
"Yeah the above change should be alright mostly if all tests pass! ",
"I have made the necessary changes. Can you have a look ? ",
"can you Pl check the latest commit.",
"@ArthurZucker \r\n\r\nThanks for your support! "
] | 1,700 | 1,707 | 1,701 | CONTRIBUTOR | null |
# What does this PR do?
It addresses #16627 to create test cases for RemBert.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@ArthurZucker @SaulLu
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ArthurZucker
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27637/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27637/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27637",
"html_url": "https://github.com/huggingface/transformers/pull/27637",
"diff_url": "https://github.com/huggingface/transformers/pull/27637.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27637.patch",
"merged_at": 1701693418000
} |
https://api.github.com/repos/huggingface/transformers/issues/27636 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27636/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27636/comments | https://api.github.com/repos/huggingface/transformers/issues/27636/events | https://github.com/huggingface/transformers/pull/27636 | 2,004,741,340 | PR_kwDOCUB6oc5gC7li | 27,636 | Reflect RoCm support in the documentation | {
"login": "fxmarty",
"id": 9808326,
"node_id": "MDQ6VXNlcjk4MDgzMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxmarty",
"html_url": "https://github.com/fxmarty",
"followers_url": "https://api.github.com/users/fxmarty/followers",
"following_url": "https://api.github.com/users/fxmarty/following{/other_user}",
"gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions",
"organizations_url": "https://api.github.com/users/fxmarty/orgs",
"repos_url": "https://api.github.com/users/fxmarty/repos",
"events_url": "https://api.github.com/users/fxmarty/events{/privacy}",
"received_events_url": "https://api.github.com/users/fxmarty/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"We may want to update https://huggingface.co/docs/transformers/perf_hardware as well, explaining `rocm-smi --showtopoweight` and `rocm-smi --shownodesbw` output.",
"@LysandreJik @ArthurZucker @amyeroberts WDYT?",
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks @ArthurZucker that would indeed be a great addition showcasing the variety of hardware supported by Transformers (or extensions of transformers / other community libraries). I'll leave it for an other PR!"
] | 1,700 | 1,700 | 1,700 | COLLABORATOR | null | As per title.
We will need https://github.com/huggingface/optimum/pull/1546 to be merged first, and an Optimum release. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27636/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27636/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27636",
"html_url": "https://github.com/huggingface/transformers/pull/27636",
"diff_url": "https://github.com/huggingface/transformers/pull/27636.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27636.patch",
"merged_at": 1700841557000
} |
https://api.github.com/repos/huggingface/transformers/issues/27635 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27635/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27635/comments | https://api.github.com/repos/huggingface/transformers/issues/27635/events | https://github.com/huggingface/transformers/pull/27635 | 2,004,708,527 | PR_kwDOCUB6oc5gC0Zv | 27,635 | Explicitely specify `use_cache=True` in Flash Attention tests | {
"login": "fxmarty",
"id": 9808326,
"node_id": "MDQ6VXNlcjk4MDgzMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxmarty",
"html_url": "https://github.com/fxmarty",
"followers_url": "https://api.github.com/users/fxmarty/followers",
"following_url": "https://api.github.com/users/fxmarty/following{/other_user}",
"gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions",
"organizations_url": "https://api.github.com/users/fxmarty/orgs",
"repos_url": "https://api.github.com/users/fxmarty/repos",
"events_url": "https://api.github.com/users/fxmarty/events{/privacy}",
"received_events_url": "https://api.github.com/users/fxmarty/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,700 | 1,700 | 1,700 | COLLABORATOR | null | Fixes https://github.com/huggingface/transformers/pull/27625#issuecomment-1821029483 making things more readable | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27635/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27635/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27635",
"html_url": "https://github.com/huggingface/transformers/pull/27635",
"diff_url": "https://github.com/huggingface/transformers/pull/27635.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27635.patch",
"merged_at": 1700585590000
} |
https://api.github.com/repos/huggingface/transformers/issues/27634 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27634/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27634/comments | https://api.github.com/repos/huggingface/transformers/issues/27634/events | https://github.com/huggingface/transformers/pull/27634 | 2,004,683,236 | PR_kwDOCUB6oc5gCuxs | 27,634 | Update chat template warnings/guides | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,700 | 1,701 | 1,701 | MEMBER | null | The default ChatML template doesn't add a BOS token, but some models expect it. This PR makes a small update to the docs and warning messages to make users more aware of the potential issue here. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27634/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27634/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27634",
"html_url": "https://github.com/huggingface/transformers/pull/27634",
"diff_url": "https://github.com/huggingface/transformers/pull/27634.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27634.patch",
"merged_at": 1701110410000
} |
https://api.github.com/repos/huggingface/transformers/issues/27633 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27633/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27633/comments | https://api.github.com/repos/huggingface/transformers/issues/27633/events | https://github.com/huggingface/transformers/pull/27633 | 2,004,576,314 | PR_kwDOCUB6oc5gCWoL | 27,633 | Add deepspeed test to amd scheduled CI | {
"login": "echarlaix",
"id": 80481427,
"node_id": "MDQ6VXNlcjgwNDgxNDI3",
"avatar_url": "https://avatars.githubusercontent.com/u/80481427?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/echarlaix",
"html_url": "https://github.com/echarlaix",
"followers_url": "https://api.github.com/users/echarlaix/followers",
"following_url": "https://api.github.com/users/echarlaix/following{/other_user}",
"gists_url": "https://api.github.com/users/echarlaix/gists{/gist_id}",
"starred_url": "https://api.github.com/users/echarlaix/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/echarlaix/subscriptions",
"organizations_url": "https://api.github.com/users/echarlaix/orgs",
"repos_url": "https://api.github.com/users/echarlaix/repos",
"events_url": "https://api.github.com/users/echarlaix/events{/privacy}",
"received_events_url": "https://api.github.com/users/echarlaix/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27633). All of your documentation changes will be reflected on that endpoint.",
"There is a PR merged before this one\r\n\r\nhttps://github.com/huggingface/transformers/pull/27743\r\n\r\nDon't hesitate if you need my help on resolving the conflict :-)",
"Running the tests, all tests passing on A100 pass as well on MI250. Two failing ones (that seem unrelated to deepspeed: https://github.com/huggingface/transformers/blob/e0d2e695827594c6a95a68612f32c65b2686985e/tests/extended/test_trainer_ext.py#L125, for both MI250/A100) are:\r\n\r\n```\r\nFAILED tests/extended/test_trainer_ext.py::TestTrainerExt::test_trainer_log_level_replica_1_low - AssertionError: 8 != 2\r\nFAILED tests/extended/test_trainer_ext.py::TestTrainerExt::test_trainer_log_level_replica_2_high - AssertionError: 7 != 1\r\n```\r\n\r\nwhich I'll leave to an other PR.\r\n\r\nOne issue with the image `rocm/pytorch:rocm5.6_ubuntu20.04_py3.8_pytorch_2.0.1` is that it has `numba==0.49.0` (that is imported [here](https://github.com/huggingface/transformers/blob/e0d2e695827594c6a95a68612f32c65b2686985e/tests/models/wav2vec2/test_modeling_flax_wav2vec2.py#L67) through librosa) installed, which plays not nicely with transformers dependency resolution as it uses outdated `np.long`.\r\n\r\nI have the warning `cpu_adam cuda is missing or is incompatible with installed torch, only cpu ops can be compiled!` as well with `rocm/pytorch:rocm5.6_ubuntu20.04_py3.8_pytorch_2.0.1` (for example `pytest tests/ -k \"test_can_resume_training_errors_zero2_fp16\" -s -vvvvv`), so I moved to use `rocm/dev-ubuntu-22.04:5.6` instead. Issue open in DeepSpeed about it: https://github.com/microsoft/DeepSpeed/issues/4768",
"@ydshieh Should be in good shape. There are many jobs though, but I guess it is the way transformers CI is designed? https://github.com/huggingface/transformers/actions/runs/7089122062/job/19293182925",
"We should not trigger that many jobs in a PR. We usually just removed unrelated jobs, although this is a bit time consuming.\r\n\r\nI don't mind keep it running this way (for this time).",
"Well, it probably will affect tomorrow's scheduled CI.",
"Let's cancel it. I will show you how we usually do the experimentation tomorrow.",
"Sorry!",
"Hi @fxmarty Let me know if you have any question or need help regarding my above comments.",
"Thanks @fxmarty @ydshieh for the updates while I was sick, let me push the new image manually, just launching all the tests locally to verify everything is working with the updated image before pushing",
"Hi @echarlaix You don't need to run all the tests. Just make sure \r\n\r\n- the image could build\r\n- the deepspeed build step works \r\n- the workflow contain no bug - i.e. it could be trigger and run on GH actions\r\n- the deepspeed tests could be launched (we don't really care how many failing tests it has for now)\r\n ",
"There is 11 failing [tests](https://github.com/huggingface/transformers/actions/runs/7114222566/job/19367866854?pr=27633) for AMD vs 7 for the current [CI](https://github.com/huggingface/transformers/actions/runs/7095377457), these 4 tests are bf16 variant of already failing tests.\r\n\r\n\r\nFailing test current CI :\r\n\r\n```\r\ntests/deepspeed/test_deepspeed.py::TestDeepSpeedWithLauncher::test_do_eval_no_train\r\ntests/deepspeed/test_deepspeed.py::TestDeepSpeedWithLauncher::test_fp32_non_distributed_zero2_fp16\r\ntests/deepspeed/test_deepspeed.py::TestDeepSpeedWithLauncher::test_fp32_non_distributed_zero3_fp16\r\ntests/deepspeed/test_deepspeed.py::TestDeepSpeedWithLauncher::test_resume_train_not_from_ds_checkpoint_zero2_fp16\r\ntests/deepspeed/test_deepspeed.py::TestDeepSpeedWithLauncher::test_resume_train_not_from_ds_checkpoint_zero3_fp16\r\ntests/deepspeed/test_model_zoo.py::TestDeepSpeedModelZoo::test_zero_to_fp32_zero3_qa_led\r\ntests/deepspeed/test_model_zoo.py::TestDeepSpeedModelZoo::test_zero_to_fp32_zero3_trans_fsmt\r\n```\r\n\r\nFailing tests for AMD :\r\n```\r\ntests/deepspeed/test_deepspeed.py::TestDeepSpeedWithLauncher::test_do_eval_no_train\r\ntests/deepspeed/test_deepspeed.py::TestDeepSpeedWithLauncher::test_fp32_non_distributed_zero2_bf16\r\ntests/deepspeed/test_deepspeed.py::TestDeepSpeedWithLauncher::test_fp32_non_distributed_zero2_fp16\r\ntests/deepspeed/test_deepspeed.py::TestDeepSpeedWithLauncher::test_fp32_non_distributed_zero3_bf16\r\ntests/deepspeed/test_deepspeed.py::TestDeepSpeedWithLauncher::test_fp32_non_distributed_zero3_fp16\r\ntests/deepspeed/test_deepspeed.py::TestDeepSpeedWithLauncher::test_resume_train_not_from_ds_checkpoint_zero2_bf16\r\ntests/deepspeed/test_deepspeed.py::TestDeepSpeedWithLauncher::test_resume_train_not_from_ds_checkpoint_zero2_fp16\r\ntests/deepspeed/test_deepspeed.py::TestDeepSpeedWithLauncher::test_resume_train_not_from_ds_checkpoint_zero3_bf16\r\ntests/deepspeed/test_deepspeed.py::TestDeepSpeedWithLauncher::test_resume_train_not_from_ds_checkpoint_zero3_fp16\r\ntests/deepspeed/test_model_zoo.py::TestDeepSpeedModelZoo::test_zero_to_fp32_zero3_trans_t5_v1\r\ntests/deepspeed/test_model_zoo.py::TestDeepSpeedModelZoo::test_zero_to_fp32_zero3_trans_fsmt\r\n```\r\n\r\n\r\n\r\n\r\n",
"I will merge after a fix #27951 being merged."
] | 1,700 | 1,702 | 1,702 | COLLABORATOR | null | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27633/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27633/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27633",
"html_url": "https://github.com/huggingface/transformers/pull/27633",
"diff_url": "https://github.com/huggingface/transformers/pull/27633.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27633.patch",
"merged_at": 1702308817000
} |
https://api.github.com/repos/huggingface/transformers/issues/27632 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27632/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27632/comments | https://api.github.com/repos/huggingface/transformers/issues/27632/events | https://github.com/huggingface/transformers/issues/27632 | 2,004,313,660 | I_kwDOCUB6oc53d2Y8 | 27,632 | Deployment in notebook(Kaggle) Transformers can't Integration DeepSpeed | {
"login": "ohand007",
"id": 66542185,
"node_id": "MDQ6VXNlcjY2NTQyMTg1",
"avatar_url": "https://avatars.githubusercontent.com/u/66542185?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ohand007",
"html_url": "https://github.com/ohand007",
"followers_url": "https://api.github.com/users/ohand007/followers",
"following_url": "https://api.github.com/users/ohand007/following{/other_user}",
"gists_url": "https://api.github.com/users/ohand007/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ohand007/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ohand007/subscriptions",
"organizations_url": "https://api.github.com/users/ohand007/orgs",
"repos_url": "https://api.github.com/users/ohand007/repos",
"events_url": "https://api.github.com/users/ohand007/events{/privacy}",
"received_events_url": "https://api.github.com/users/ohand007/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "pacman100",
"id": 13534540,
"node_id": "MDQ6VXNlcjEzNTM0NTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pacman100",
"html_url": "https://github.com/pacman100",
"followers_url": "https://api.github.com/users/pacman100/followers",
"following_url": "https://api.github.com/users/pacman100/following{/other_user}",
"gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pacman100/subscriptions",
"organizations_url": "https://api.github.com/users/pacman100/orgs",
"repos_url": "https://api.github.com/users/pacman100/repos",
"events_url": "https://api.github.com/users/pacman100/events{/privacy}",
"received_events_url": "https://api.github.com/users/pacman100/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "pacman100",
"id": 13534540,
"node_id": "MDQ6VXNlcjEzNTM0NTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pacman100",
"html_url": "https://github.com/pacman100",
"followers_url": "https://api.github.com/users/pacman100/followers",
"following_url": "https://api.github.com/users/pacman100/following{/other_user}",
"gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pacman100/subscriptions",
"organizations_url": "https://api.github.com/users/pacman100/orgs",
"repos_url": "https://api.github.com/users/pacman100/repos",
"events_url": "https://api.github.com/users/pacman100/events{/privacy}",
"received_events_url": "https://api.github.com/users/pacman100/received_events",
"type": "User",
"site_admin": false
}
] | [
"cc @pacman100 and @muellerzr ",
"99% sure you can't use DeepSpeed in a notebook. @pacman100 ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,700 | 1,706 | 1,706 | NONE | null | ### System Info
transformers==4.35.0
deepspeed==0.12.2
pytorch==2.0.0
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [x] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
import transformers
from transformers import (
Seq2SeqTrainingArguments
)
Seq2SeqTrainingArguments(do_train=True,do_eval=True,
output_dir='/kaggle/working/',
per_device_train_batch_size=8,
per_device_eval_batch_size=8,overwrite_output_dir=True,predict_with_generate=True,
save_total_limit=6,num_train_epochs = 1,
deepspeed='/workspace/transformerTest/transformers/examples/pytorch/translation/ds_config_zero31.json',)
```
Here is my error:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[109], line 3
1 import os
----> 3 Seq2SeqTrainingArguments(do_train=True,do_eval=True,
4 output_dir='/kaggle/working/',
5 per_device_train_batch_size=8,
6 per_device_eval_batch_size=8,overwrite_output_dir=True,predict_with_generate=True,
7 save_total_limit=6,num_train_epochs = 1,
8 deepspeed='/workspace/transformerTest/transformers/examples/pytorch/translation/ds_config_zero31.json',)
File <string>:122, in __init__(self, output_dir, overwrite_output_dir, do_train, do_eval, do_predict, evaluation_strategy, prediction_loss_only, per_device_train_batch_size, per_device_eval_batch_size, per_gpu_train_batch_size, per_gpu_eval_batch_size, gradient_accumulation_steps, eval_accumulation_steps, eval_delay, learning_rate, weight_decay, adam_beta1, adam_beta2, adam_epsilon, max_grad_norm, num_train_epochs, max_steps, lr_scheduler_type, warmup_ratio, warmup_steps, log_level, log_level_replica, log_on_each_node, logging_dir, logging_strategy, logging_first_step, logging_steps, logging_nan_inf_filter, save_strategy, save_steps, save_total_limit, save_safetensors, save_on_each_node, no_cuda, use_cpu, use_mps_device, seed, data_seed, jit_mode_eval, use_ipex, bf16, fp16, fp16_opt_level, half_precision_backend, bf16_full_eval, fp16_full_eval, tf32, local_rank, ddp_backend, tpu_num_cores, tpu_metrics_debug, debug, dataloader_drop_last, eval_steps, dataloader_num_workers, past_index, run_name, disable_tqdm, remove_unused_columns, label_names, load_best_model_at_end, metric_for_best_model, greater_is_better, ignore_data_skip, fsdp, fsdp_min_num_params, fsdp_config, fsdp_transformer_layer_cls_to_wrap, deepspeed, label_smoothing_factor, optim, optim_args, adafactor, group_by_length, length_column_name, report_to, ddp_find_unused_parameters, ddp_bucket_cap_mb, ddp_broadcast_buffers, dataloader_pin_memory, skip_memory_metrics, use_legacy_prediction_loop, push_to_hub, resume_from_checkpoint, hub_model_id, hub_strategy, hub_token, hub_private_repo, hub_always_push, gradient_checkpointing, gradient_checkpointing_kwargs, include_inputs_for_metrics, fp16_backend, push_to_hub_model_id, push_to_hub_organization, push_to_hub_token, mp_parameters, auto_find_batch_size, full_determinism, torchdynamo, ray_scope, ddp_timeout, torch_compile, torch_compile_backend, torch_compile_mode, dispatch_batches, split_batches, include_tokens_per_second, neftune_noise_alpha, sortish_sampler, predict_with_generate, generation_max_length, generation_num_beams, generation_config)
File /opt/conda/lib/python3.10/site-packages/transformers/training_args.py:1679, in TrainingArguments.__post_init__(self)
1675 from transformers.integrations.deepspeed import HfTrainerDeepSpeedConfig
1677 # will be used later by the Trainer
1678 # note: leave self.deepspeed unmodified in case a user relies on it not to be modified)
-> 1679 self.hf_deepspeed_config = HfTrainerDeepSpeedConfig(self.deepspeed)
1680 self.hf_deepspeed_config.trainer_config_process(self)
1682 # Accelerate DeepSpeed Plugin
File /opt/conda/lib/python3.10/site-packages/transformers/integrations/deepspeed.py:88, in HfTrainerDeepSpeedConfig.__init__(self, config_file_or_dict)
87 def __init__(self, config_file_or_dict):
---> 88 super().__init__(config_file_or_dict)
89 self._dtype = None
90 self.mismatches = []
File /opt/conda/lib/python3.10/site-packages/transformers/integrations/deepspeed.py:78, in HfDeepSpeedConfig.__init__(self, config_file_or_dict)
76 dep_version_check("accelerate")
77 dep_version_check("deepspeed")
---> 78 super().__init__(config_file_or_dict)
TypeError: object.__init__() takes exactly one argument (the instance to initialize)
```

### Expected behavior
The code does not report errors. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27632/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27632/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27631 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27631/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27631/comments | https://api.github.com/repos/huggingface/transformers/issues/27631/events | https://github.com/huggingface/transformers/issues/27631 | 2,004,233,116 | I_kwDOCUB6oc53diuc | 27,631 | Add the learning rate (in exponential representation, like "9.197948717948718e-08") to the model training output | {
"login": "blademoon",
"id": 17560478,
"node_id": "MDQ6VXNlcjE3NTYwNDc4",
"avatar_url": "https://avatars.githubusercontent.com/u/17560478?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/blademoon",
"html_url": "https://github.com/blademoon",
"followers_url": "https://api.github.com/users/blademoon/followers",
"following_url": "https://api.github.com/users/blademoon/following{/other_user}",
"gists_url": "https://api.github.com/users/blademoon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/blademoon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/blademoon/subscriptions",
"organizations_url": "https://api.github.com/users/blademoon/orgs",
"repos_url": "https://api.github.com/users/blademoon/repos",
"events_url": "https://api.github.com/users/blademoon/events{/privacy}",
"received_events_url": "https://api.github.com/users/blademoon/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | {
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hey! I think the way you log it is nice, just needs the log to be in scientific format! ",
"@ArthurZucker I already try convert `float` to scientific notation by using:\r\n```python\r\n...\r\n logs[\"learning_rate\"] = \"{:.2e}\".format(self._get_learning_rate())\r\n...\r\n```\r\n\r\nIt dosent work. Tensorboard `add_scalar()` din't support this.",
"@ArthurZucker Anyway, it would be great if this was implemented in the library itself. Then there would be no need to make \"crutches\" to output this information.",
"feel free to open a PR and ping @muellerzr ! ",
"@ArthurZucker Maybe you know someone on the HuggingFace team who could help solve the \"Rate\" field width issue in the output?",
"@muellerzr Testing in progress...)))\r\n\r\n<img width=\"605\" alt=\"image\" src=\"https://github.com/huggingface/transformers/assets/17560478/478bb935-2474-4c91-96a6-5ae470fdf396\">\r\n",
"@muellerzr \r\n\r\nTested it 3 times.\r\nFirst time - just installed the library.\r\nSecond time - restarted the kernel in notepad. Repeated the test.\r\nThird time - restarted WSL2, to be more sure that the problem is reproduced after a reboot.\r\n\r\nThe result in all three cases:\r\n\r\n<img width=\"560\" alt=\"image\" src=\"https://github.com/huggingface/transformers/assets/17560478/356d13d1-69f3-4b91-8415-96b5fbc1db9e\">\r\n\r\n\r\nThe code used in the tests*:\r\n```python\r\ntraining_args = Seq2SeqTrainingArguments(\r\n output_dir = result_model_name, # ΠΈΠ·ΠΌΠ΅Π½ΠΈΡΠ΅ ΠΈΠΌΡ ΡΠ΅ΠΏΠΎΠ·ΠΈΡΠΎΡΠΈΡ ΠΏΠΎ ΡΠ²ΠΎΠ΅ΠΌΡ ΡΡΠΌΠΎΡΡΠ΅Π½ΠΈΡ\r\n per_device_train_batch_size=train_batch_size,\r\n gradient_accumulation_steps=accumulation_steps, # ΡΠ²Π΅Π»ΠΈΡΠΈΠ²Π°Π΅ΡΡΡ Π² 2x ΡΠ°Π·Π° ΠΏΡΠΈ ΠΊΠ°ΠΆΠ΄ΠΎΠΌ ΡΠΌΠ΅Π½ΡΡΠ΅Π½ΠΈΠΈ ΡΠ°Π·ΠΌΠ΅ΡΠ° Π±Π°ΡΡΠ° Π² 2x ΡΠ°Π·Π° \r\n learning_rate=lr,\r\n lr_scheduler_type=lr_scheduler, # ΠΠ»Π°Π½ΠΈΡΠΎΠ²ΡΠΈΠΊ Π΄Π»Ρ learning_rate\r\n warmup_steps=250,\r\n max_steps=training_steps_count,\r\n gradient_checkpointing=True,\r\n fp16=True,\r\n evaluation_strategy=\"steps\",\r\n optim = \"adamw_torch\",\r\n per_device_eval_batch_size=eval_batch_size, # Π‘Π°ΠΌΡΠΉ ΠΏΡΠΎΡΡΠΎΠΉ ΡΠΏΠΎΡΠΎΠ± Π·Π°Π΄Π°ΡΡ ΡΠ°Π²Π½ΡΠΌ ΠΏΠ°ΡΠ°ΠΌΠ΅ΡΡΡ per_device_train_batch_size\r\n predict_with_generate=True,\r\n generation_max_length=225,\r\n save_steps=500,\r\n eval_steps=500,\r\n logging_steps=25,\r\n report_to=[\"tensorboard\"],\r\n load_best_model_at_end=True,\r\n metric_for_best_model=selected_metric,\r\n greater_is_better=False,\r\n push_to_hub=True,\r\n hub_private_repo=True\r\n)\r\n\r\n\r\n# class MyTrainer(Seq2SeqTrainer):\r\n# def log(self, logs: Dict[str, float]) -> None:\r\n# logs[\"learning_rate\"] = self._get_learning_rate() * 1_000_000 # Π£ΠΌΠ½ΠΎΠΆΠ΅Π½ΠΈΠ΅ Π½Π΅ΠΎΠ±Ρ
ΠΎΠ΄ΠΈΠΌΠΎ Π΄Π»Ρ ΠΈΡΠΏΡΠ°Π²Π»Π΅Π½ΠΈΡ ΠΎΡΠΎΠ±ΡΠ°ΠΆΠ΅Π½ΠΈΡ. \r\n# print(logs[\"learning_rate\"])\r\n# super().log(logs)\r\n\r\n\r\n\r\n# trainer = MyTrainer(\r\n# args=training_args,\r\n# model=model,\r\n# train_dataset=common_voice[\"train\"],\r\n# eval_dataset=common_voice[\"test\"],\r\n# data_collator=data_collator,\r\n# compute_metrics=compute_metrics,\r\n# tokenizer=processor.feature_extractor,\r\n# )\r\n\r\n\r\ntrainer = Seq2SeqTrainer(\r\n args=training_args,\r\n model=model,\r\n train_dataset=common_voice[\"train\"],\r\n eval_dataset=common_voice[\"test\"],\r\n data_collator=data_collator,\r\n compute_metrics=compute_metrics,\r\n tokenizer=processor.feature_extractor,\r\n)\r\n```\r\n(*) - Naturally, the commented lines were not executed.",
"@muellerzr The learning rate is duplicated in scientific notation and decimalization for some unknown reason.",
"Iβm still actively working on it, but your excitement is appreciated :)",
"@muellerzr If additional testing is required. Tag me. Thanks again for your responsiveness.",
"@ArthurZucker @muellerzr Good afternoon. I think you might be interested to know. Thanks to the feature you are developing, I was able to fine-tune Whisper small to WER = 12.6%.\r\n\r\nhttps://huggingface.co/artyomboyko/whisper-small-fine_tuned-ru-v2\r\nhttps://huggingface.co/spaces/artyomboyko/Whisper-medium-ru-v2"
] | 1,700 | 1,702 | null | NONE | null | ### Feature request
Good day. Could you add the current learning rate to the output of the training process?
Currently, if we train a model with `Trainer`, we get the following output (without the learning rate):
<img width="302" alt="image" src="https://github.com/huggingface/transformers/assets/17560478/239c3ce1-c7dc-4d79-9664-52ac4e3ea605">
I'm using this code to add information about the current learning rate value to the output:
```python
from transformers import Trainer, Seq2SeqTrainer
from typing import Any, Dict, List, Union
class MyTrainer(Seq2SeqTrainer):
    def log(self, logs: Dict[str, float]) -> None:
        logs["learning_rate"] = self._get_learning_rate()
        print(logs["learning_rate"])
        super().log(logs)
trainer = MyTrainer(
args=training_args,
model=model,
train_dataset=common_voice["train"],
eval_dataset=common_voice["test"],
data_collator=data_collator,
compute_metrics=compute_metrics,
tokenizer=processor.feature_extractor,
)
trainer.train()
```
This causes the learning rate values to appear in the tabular output during model training, as here:
<img width="536" alt="image" src="https://github.com/huggingface/transformers/assets/17560478/defa1a31-15a5-4994-905e-7afdccee76de">
But the latest version of the library doesn't have enough field width to display this parameter correctly.
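A minimal interim sketch (my own variation on the override above, assuming the goal is a readable console value while keeping the raw float for loggers such as TensorBoard's `add_scalar`):
```python
from typing import Dict
from transformers import Seq2SeqTrainer
class MyTrainer(Seq2SeqTrainer):
    def log(self, logs: Dict[str, float]) -> None:
        lr = self._get_learning_rate()
        logs["learning_rate"] = lr            # keep the float for TensorBoard
        print(f"learning_rate: {lr:.3e}")     # readable scientific notation in the console
        super().log(logs)
```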
### Motivation
Currently, when using the library, the learning rate is not displayed. It can only be viewed on the hub. But if the library is used locally without internet access, there is no way to track the current learning rate during fine-tuning/training of the model.
### Your contribution
I gave a succinct example of the code I use to output the learning rate value in the "Feature request" field above. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27631/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27631/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27630 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27630/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27630/comments | https://api.github.com/repos/huggingface/transformers/issues/27630/events | https://github.com/huggingface/transformers/issues/27630 | 2,004,224,309 | I_kwDOCUB6oc53dgk1 | 27,630 | Error while trying to resume from checkpoint | {
"login": "sstoia",
"id": 129397487,
"node_id": "U_kgDOB7Zy7w",
"avatar_url": "https://avatars.githubusercontent.com/u/129397487?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sstoia",
"html_url": "https://github.com/sstoia",
"followers_url": "https://api.github.com/users/sstoia/followers",
"following_url": "https://api.github.com/users/sstoia/following{/other_user}",
"gists_url": "https://api.github.com/users/sstoia/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sstoia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sstoia/subscriptions",
"organizations_url": "https://api.github.com/users/sstoia/orgs",
"repos_url": "https://api.github.com/users/sstoia/repos",
"events_url": "https://api.github.com/users/sstoia/events{/privacy}",
"received_events_url": "https://api.github.com/users/sstoia/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
] | [
"cc @muellerzr ",
"gently pinging @muellerzr or @SunMarc if he has time look into it! "
] | 1,700 | 1,707 | null | NONE | null | ### System Info
- `transformers` version: 4.28.1
- Platform: Linux-3.10.0-1160.95.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.9.16
- Huggingface_hub version: 0.19.3
- Safetensors version: 0.4.0
- PyTorch version (GPU?): 1.13.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@muellerzr @pacman100
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Some weeks ago I started pre-training a Longformer model for a specific task. The dataset is quite large, so I had to load it with _streaming_, as it is impossible to load the whole dataset at once. Because of this, the evaluation strategy is based on _steps_ instead of _epochs_.
I set the _max_steps_ parameter of _TrainingArguments_ to _5e+5_, which is equivalent to 1.05 epochs of the dataset. The problem appears when using the _resume_from_checkpoint_ parameter of _Trainer_ with the path to the latest checkpoint saved by the Trainer.
````python
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_dataset,
eval_dataset=eval_dataset,
data_collator = data_collator,
preprocess_logits_for_metrics=preprocess_logits_for_metrics,
compute_metrics=compute_metrics,
callbacks = [EarlyStoppingCallback(PATIENCE)]
)
trainer.train(resume_from_checkpoint = '<path_to_checkpoint>')
````
The error obtained is the following:
````
0%| | 0/500000.0 [00:00<?, ?it/s]Traceback (most recent call last):
File "/mnt/beegfs/sstoia/proyectos/experian/run_mlm_restart_from_checkpoint.py", line 154, in <module>
trainer.train(resume_from_checkpoint = '/mnt/beegfs/sstoia/proyectos/experian/exp_longformer-base_4096/checkpoint-484000')
File "/mnt/beegfs/sstoia/.conda/envs/poesia/lib/python3.9/site-packages/transformers/trainer.py", line 1662, in train
return inner_training_loop(
File "/mnt/beegfs/sstoia/.conda/envs/poesia/lib/python3.9/site-packages/transformers/trainer.py", line 1851, in _inner_training_loop
for epoch in range(epochs_trained):
TypeError: 'float' object cannot be interpreted as an integer
````
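A possible workaround until this is fixed (my assumption: the float literal `5e+5` is what turns `epochs_trained` into a float inside the resume loop) is to pass `max_steps` as a plain integer:
````python
from transformers import TrainingArguments
# Hypothetical adjustment to the arguments above: 5e+5 is a Python float,
# so writing it as an int keeps the epoch counter an int when resuming.
training_args = TrainingArguments(
    output_dir="<output_dir>",
    max_steps=int(5e5),  # or simply 500_000
)
````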
### Expected behavior
The trainer should load the checkpoint and continue the pre-training from it. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27630/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27630/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27629 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27629/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27629/comments | https://api.github.com/repos/huggingface/transformers/issues/27629/events | https://github.com/huggingface/transformers/issues/27629 | 2,004,135,688 | I_kwDOCUB6oc53dK8I | 27,629 | Whisper: Language detection not working after setting a language once | {
"login": "brunjo",
"id": 1618488,
"node_id": "MDQ6VXNlcjE2MTg0ODg=",
"avatar_url": "https://avatars.githubusercontent.com/u/1618488?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/brunjo",
"html_url": "https://github.com/brunjo",
"followers_url": "https://api.github.com/users/brunjo/followers",
"following_url": "https://api.github.com/users/brunjo/following{/other_user}",
"gists_url": "https://api.github.com/users/brunjo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/brunjo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/brunjo/subscriptions",
"organizations_url": "https://api.github.com/users/brunjo/orgs",
"repos_url": "https://api.github.com/users/brunjo/repos",
"events_url": "https://api.github.com/users/brunjo/events{/privacy}",
"received_events_url": "https://api.github.com/users/brunjo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @sanchit-gandhi and @ylacombe ",
"I've found a workaround: Running `del pipe.model.generation_config.language` before calling the `pipe` method resets the language parameter and fixes the issue."
] | 1,700 | 1,701 | 1,701 | NONE | null | ### System Info
- `transformers` version: 4.36.0.dev0
- Platform: Linux-6.2.0-1018-gcp-x86_64-with-glibc2.31
- Python version: 3.11.6
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.0
- Accelerate version: 0.24.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.1+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes, T4 GPU
- Using distributed or parallel set-up in script?: No
### Who can help?
@sanchit-gandhi
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
When calling the Whisper pipeline with a language (e.g. `generate_kwargs={"language": "french"}`), that language stays set for all future pipeline calls, even when it is explicitly set to `None` (i.e. `generate_kwargs={"language": None}`).
```python
# Set up `pipe` as described at https://huggingface.co/openai/whisper-large-v3#usage
# This works as expected:
pipe("brownfox.mp3", generate_kwargs={"language": None})
# > {'text': ' The quick brown fox jumps over the lazy dog.'}
pipe("brownfox.mp3", generate_kwargs={"language": "french"})
# > {'text': ' Le foie vert rapide tombe sur le chien loup.'}
# Here the language should be auto-detected as English but I'm getting a response in French
pipe("brownfox.mp3", generate_kwargs={"language": None})
# > {'text': ' Le foie vert rapide tombe sur le chien loup.'}
```
Audio used in this example: [brownfox.mp3](https://output.lemonfox.ai/brownfox.mp3)
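A stopgap that reportedly works (see the workaround comment above; the `hasattr` guard is my own addition) is to clear the stored language before the next auto-detection call:
```python
# Reset the language that a previous call pinned on the generation config,
# then let Whisper auto-detect again.
if hasattr(pipe.model.generation_config, "language"):
    del pipe.model.generation_config.language
pipe("brownfox.mp3", generate_kwargs={"language": None})
# expected: the English transcription is auto-detected again
```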
### Expected behavior
I would expect the language to be auto-detected when `language` is not set even when `language` was set in a previous call. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27629/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27629/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27628 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27628/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27628/comments | https://api.github.com/repos/huggingface/transformers/issues/27628/events | https://github.com/huggingface/transformers/pull/27628 | 2,004,094,694 | PR_kwDOCUB6oc5gAsd3 | 27,628 | update Openai API call method | {
"login": "Strive-for-excellence",
"id": 26090323,
"node_id": "MDQ6VXNlcjI2MDkwMzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/26090323?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Strive-for-excellence",
"html_url": "https://github.com/Strive-for-excellence",
"followers_url": "https://api.github.com/users/Strive-for-excellence/followers",
"following_url": "https://api.github.com/users/Strive-for-excellence/following{/other_user}",
"gists_url": "https://api.github.com/users/Strive-for-excellence/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Strive-for-excellence/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Strive-for-excellence/subscriptions",
"organizations_url": "https://api.github.com/users/Strive-for-excellence/orgs",
"repos_url": "https://api.github.com/users/Strive-for-excellence/repos",
"events_url": "https://api.github.com/users/Strive-for-excellence/events{/privacy}",
"received_events_url": "https://api.github.com/users/Strive-for-excellence/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This is also same problem in the AzureOpenAiAgent.\r\n```\r\nclass AzureOpenAiAgent(Agent):\r\n.....\r\n def _chat_generate(self, prompt, stop):\r\n result = openai.ChatCompletion.create(\r\n engine=self.deployment_id,\r\n messages=[{\"role\": \"user\", \"content\": prompt}],\r\n temperature=0,\r\n stop=stop,\r\n )\r\n return result[\"choices\"][0][\"message\"][\"content\"]\r\n\r\n```\r\nHowever, since I don't have access to the AzureOpenAi API, I'm unable to test this code. Can someone do this job?"
] | 1,700 | 1,700 | 1,700 | CONTRIBUTOR | null | # What does this PR do?
Fixes #27623
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@ArthurZucker
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27628/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27628/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27628",
"html_url": "https://github.com/huggingface/transformers/pull/27628",
"diff_url": "https://github.com/huggingface/transformers/pull/27628.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27628.patch",
"merged_at": 1700670507000
} |
https://api.github.com/repos/huggingface/transformers/issues/27627 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27627/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27627/comments | https://api.github.com/repos/huggingface/transformers/issues/27627/events | https://github.com/huggingface/transformers/issues/27627 | 2,004,024,924 | I_kwDOCUB6oc53cv5c | 27,627 | Unable to register own custom tokenizer | {
"login": "Ranitbag007",
"id": 133197492,
"node_id": "U_kgDOB_ButA",
"avatar_url": "https://avatars.githubusercontent.com/u/133197492?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ranitbag007",
"html_url": "https://github.com/Ranitbag007",
"followers_url": "https://api.github.com/users/Ranitbag007/followers",
"following_url": "https://api.github.com/users/Ranitbag007/following{/other_user}",
"gists_url": "https://api.github.com/users/Ranitbag007/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ranitbag007/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ranitbag007/subscriptions",
"organizations_url": "https://api.github.com/users/Ranitbag007/orgs",
"repos_url": "https://api.github.com/users/Ranitbag007/repos",
"events_url": "https://api.github.com/users/Ranitbag007/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ranitbag007/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | open | false | null | [] | [
"Hey π€ thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discuss.huggingface.co/) instead? I'm sure the community will be of help!\r\n\r\nThanks!"
] | 1,700 | 1,700 | null | NONE | null | ### Model description
@sgugger I have tried to register my own tokenization model, based on SentencePiece, using `CustomAITokenizer.register_for_auto_class("AutoTokenizer")`, but I failed to do so.
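For reference, a minimal sketch of the registration flow this is based on (assumptions: the tokenizer class lives in its own `.py` module that is uploaded with the repo, and consumers load it with `trust_remote_code=True`):
```python
# tokenization_customai.py (hypothetical module name, for illustration only)
from transformers import PreTrainedTokenizer
class CustomAITokenizer(PreTrainedTokenizer):
    pass  # SentencePiece-backed implementation omitted in this sketch
# Register the class so AutoTokenizer can resolve it, then save/push it to the Hub
# together with this module file.
CustomAITokenizer.register_for_auto_class("AutoTokenizer")
# Consumers would then load it with custom code enabled:
# from transformers import AutoTokenizer
# tok = AutoTokenizer.from_pretrained("RANITBAG/CustomAItokenizer", trust_remote_code=True)
```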
### Open source status
- [X] The model implementation is available
- [ ] The model weights are available
### Provide useful links for the implementation
This is the repo link: https://huggingface.co/RANITBAG/CustomAItokenizer/tree/main | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27627/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27627/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27626 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27626/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27626/comments | https://api.github.com/repos/huggingface/transformers/issues/27626/events | https://github.com/huggingface/transformers/issues/27626 | 2,003,985,273 | I_kwDOCUB6oc53cmN5 | 27,626 | 8-Bit LlamaModel returns different outputs for the same input when in different batches | {
"login": "maximek3",
"id": 37376714,
"node_id": "MDQ6VXNlcjM3Mzc2NzE0",
"avatar_url": "https://avatars.githubusercontent.com/u/37376714?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/maximek3",
"html_url": "https://github.com/maximek3",
"followers_url": "https://api.github.com/users/maximek3/followers",
"following_url": "https://api.github.com/users/maximek3/following{/other_user}",
"gists_url": "https://api.github.com/users/maximek3/gists{/gist_id}",
"starred_url": "https://api.github.com/users/maximek3/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/maximek3/subscriptions",
"organizations_url": "https://api.github.com/users/maximek3/orgs",
"repos_url": "https://api.github.com/users/maximek3/repos",
"events_url": "https://api.github.com/users/maximek3/events{/privacy}",
"received_events_url": "https://api.github.com/users/maximek3/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @maximek3 \r\nI think those small instabilities are expected, can you test out with `torch.allclose()` with different values of `rtol` and `atol` ? cc @SunMarc @Titus-von-Koeller in case I am missing anything",
"Hi @younesbelkada, thanks for your quick response.\r\n\r\n`torch.allclose()` returns `False` even with the following parameters: `torch.allclose(o_01[\"logits\"][0], o_02[\"logits\"][0], atol=1e-01, rtol=1e-01)`.\r\n\r\nJust looking at the values myself, I get differences such as `o_01[\"logits\"][0][0,2] = -0.2825` and `o_02[\"logits\"][0][0,2] = -0.1675`.",
"Attention is bi-directional, so I'm not surprised that the logits depend on the hidden states no? ",
"Hi @ArthurZucker, thanks for the reply.\r\n\r\nIs it expected that the output for a given datapoint is influenced by the other datapoints in the same batch?\r\n\r\nMaybe a clearer way to illustrate what seems unexpected is given below:\r\n\r\n```ruby\r\nfrom transformers import LlamaForCausalLM, LlamaTokenizer\r\n\r\n# load a 7B Llama model in 8bit\r\nmodel = LlamaForCausalLM.from_pretrained(\r\n \"huggyllama/llama-7b\", load_in_8bit=True\r\n)\r\nmodel.eval()\r\n\r\ntokenizer = LlamaTokenizer.from_pretrained(\"huggyllama/llama-7b\")\r\ntokenizer.pad_token_id = 0 \r\n\r\nprompt1 = \"Hey, are you conscious? Can you talk to me?\"\r\nprompt2 = \"Give me a recipe for porridge.\"\r\nprompt3 = \"How is the weather today?\"\r\n\r\nbatch_12 = tokenizer.batch_encode_plus([prompt1, prompt2], padding=True, return_tensors=\"pt\")\r\nbatch_13 = tokenizer.batch_encode_plus([prompt1, prompt3], padding=True, return_tensors=\"pt\")\r\n\r\n# Generate\r\ngenerate_ids_12 = model.generate(batch_12.input_ids, max_length=50)\r\ngenerate_ids_13 = model.generate(batch_13.input_ids, max_length=50)\r\n\r\nresponse_1_a = tokenizer.batch_decode(generate_ids_12, skip_special_tokens=True)[0]\r\nresponse_1_b = tokenizer.batch_decode(generate_ids_13, skip_special_tokens=True)[0]\r\n\r\nprint(response_1_a)\r\nprint(response_1_b)\r\n``` \r\n\r\nWhen loading with `load_in_8bit=True`, `response_1_a` and `response_1_b` are different. They are the response to the same prompt, it's just the other prompt(s) in the batch that change. This does not happen when we load the model with `load_in_8bit=False`.",
"1. You are right in this case, same prompt but different *batch* is not related to bi-directional attention\r\n2. Quantized results are not always deterministic so yes would expect some differences\r\n3. You are padding the input, probably on the right which changes what the models sees. An awesome explanation of what is happening is available [here](https://github.com/huggingface/transformers/issues/25420#issuecomment-1775317535)",
"Thanks for the pointers.\r\n\r\nI did encounter the issues with padding earlier as well. However, I also ran experiments where the prompts were all the exact same length (i.e. no padding), and I still got significant variance in Natural Language Generation metrics for the output when I was shuffling the test set or changing the batch size (both leading to different batch constellations). Are those expected shortcomings when using quantisation?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,700 | 1,703 | 1,703 | NONE | null | ### System Info
- `transformers` version: 4.36.0.dev0
- Platform: Linux-4.18.0-372.58.1.el8_6.x86_64-x86_64-with-glibc2.28
- Python version: 3.10.6
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.0
- Accelerate version: 0.21.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.1+cu121 (True)
- Tensorflow version (GPU?): 2.9.3 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@SunMarc @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Steps to reproduce the behavior:
Run the following script:
```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
# load a 7B Llama model in 8bit
model = LlamaForCausalLM.from_pretrained(
"huggyllama/llama-7b", load_in_8bit=True
)
model.eval()
# create two batches that share the same first datapoint
b_01 = {}
b_02 = {}
b_01["input_ids"] = torch.randint(10, 1000, (2, 30))
b_02["input_ids"] = torch.randint(10, 1000, (2, 30))
b_01['input_ids'][0] = b_02['input_ids'][0]
# get the model outputs
with torch.no_grad():
    o_01 = model(b_01["input_ids"])
    o_02 = model(b_02["input_ids"])
# this returns "tensor(True)" and "tensor(False)"
print(torch.all(b_01["input_ids"][0] == b_02["input_ids"][0]))
print(torch.all(o_01["logits"][0] == o_02["logits"][0]))
```
### Expected behavior
We'd expect:
```
tensor(True)
tensor(True)
```
But we get:
```
tensor(True)
tensor(False)
```
That is, the model outputs for that same datapoint (`b_01["input_ids"][0]` and `b_02["input_ids"][0]`) are different.
When we load the model with `load_in_8bit=False`, we get the expected behavior, i.e., the model output is consistent for the same datapoint. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27626/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27626/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27625 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27625/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27625/comments | https://api.github.com/repos/huggingface/transformers/issues/27625/events | https://github.com/huggingface/transformers/pull/27625 | 2,003,909,504 | PR_kwDOCUB6oc5gADzy | 27,625 | Fix flash attention bugs with Mistral and Falcon | {
"login": "fxmarty",
"id": 9808326,
"node_id": "MDQ6VXNlcjk4MDgzMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxmarty",
"html_url": "https://github.com/fxmarty",
"followers_url": "https://api.github.com/users/fxmarty/followers",
"following_url": "https://api.github.com/users/fxmarty/following{/other_user}",
"gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions",
"organizations_url": "https://api.github.com/users/fxmarty/orgs",
"repos_url": "https://api.github.com/users/fxmarty/repos",
"events_url": "https://api.github.com/users/fxmarty/events{/privacy}",
"received_events_url": "https://api.github.com/users/fxmarty/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @amyeroberts thank you for the review! I added by mistake `test_flash_attn_2_generate_use_cache` in `test_modeling_llama.py` while it was meant to be added in `test_modeling_mistral.py`, hence the confusion, apology!",
"@fxmarty Thanks for clarifying. Tbh, I'm still a bit confused with the tests - it's not clear to me how this explicitly tests for the cache as `use_cache` isn't set anywhere π
",
"@amyeroberts good catch indeed... I just checked, we are going here that sets `use_cache=True`: https://github.com/huggingface/transformers/blob/f93c1e9eceafde40b1d33fbb03834de97556706c/src/transformers/generation/utils.py#L1602\r\n\r\nand uses https://github.com/huggingface/transformers/blob/f93c1e9eceafde40b1d33fbb03834de97556706c/src/transformers/generation/configuration_utils.py#L266",
"@fxmarty OK - thanks for explaining! As a follow up, could you add `use_cache=True` explicitly into the tests? This way it's clearer for anyone who sees the code and isn't subject to silently not being tested anymore if the configs or config handling changes ",
"For sure @amyeroberts I will ping you there. Sorry I should have waited before merging..",
"@fxmarty No worries! It doesn't affect the functionality of this PR so it's fine to be done separately :) "
] | 1,700 | 1,700 | 1,700 | COLLABORATOR | null | This PR fixes some important bugs in the Mistral and Falcon integration.
https://github.com/huggingface/transformers/pull/26933 broke flash attention for Falcon due to the modification of the layout.
The following tests were not passing:
```
FAILED tests/models/mistral/test_modeling_mistral.py::MistralModelTest::test_flash_attn_2_generate_padding_right - AssertionError: ValueError not raised
FAILED tests/models/mistral/test_modeling_mistral.py::MistralModelTest::test_flash_attn_2_inference_padding_right - AssertionError: ValueError not raised
FAILED tests/models/falcon/test_modeling_falcon.py::FalconModelTest::test_flash_attn_2_generate_left_padding - RuntimeError: CUDA error: device-side assert triggered
FAILED tests/models/falcon/test_modeling_falcon.py::FalconModelTest::test_flash_attn_2_generate_padding_right - RuntimeError: CUDA error: device-side assert triggered
FAILED tests/models/falcon/test_modeling_falcon.py::FalconModelTest::test_flash_attn_2_generate_use_cache - RuntimeError: CUDA error: device-side assert triggered
FAILED tests/models/falcon/test_modeling_falcon.py::FalconModelTest::test_flash_attn_2_inference - RuntimeError: CUDA error: device-side assert triggered
FAILED tests/models/falcon/test_modeling_falcon.py::FalconModelTest::test_flash_attn_2_inference_padding_right - RuntimeError: CUDA error: device-side assert triggered
```
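As background for the shape issue described in the next paragraph, a small illustration (toy tensors only, not the actual patch) of the layout the FA2 kernels expect:
```python
import torch
# Flash-attention kernels take [batch, seq_len, num_heads, head_dim]; states built as
# [batch, num_heads, seq_len, head_dim] therefore need a transpose before the call.
batch, num_heads, seq_len, head_dim = 2, 8, 16, 64
query_states = torch.randn(batch, num_heads, seq_len, head_dim, dtype=torch.float16)
query_states = query_states.transpose(1, 2)  # -> [batch, seq_len, num_heads, head_dim]
print(query_states.shape)
```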
On top of that, Falcon with FA2 is not really usable on `main` due to an error in the shape (currently `[batch_size, num_head, seqlen, head_dim]` instead of the required `[batch_size, seqlen, num_head, head_dim]`). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27625/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27625/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27625",
"html_url": "https://github.com/huggingface/transformers/pull/27625",
"diff_url": "https://github.com/huggingface/transformers/pull/27625.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27625.patch",
"merged_at": 1700576444000
} |
https://api.github.com/repos/huggingface/transformers/issues/27624 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27624/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27624/comments | https://api.github.com/repos/huggingface/transformers/issues/27624/events | https://github.com/huggingface/transformers/pull/27624 | 2,003,882,145 | PR_kwDOCUB6oc5f_9zL | 27,624 | Fix `max_steps` documentation regarding the end-of-training condition | {
"login": "qgallouedec",
"id": 45557362,
"node_id": "MDQ6VXNlcjQ1NTU3MzYy",
"avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qgallouedec",
"html_url": "https://github.com/qgallouedec",
"followers_url": "https://api.github.com/users/qgallouedec/followers",
"following_url": "https://api.github.com/users/qgallouedec/following{/other_user}",
"gists_url": "https://api.github.com/users/qgallouedec/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qgallouedec/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qgallouedec/subscriptions",
"organizations_url": "https://api.github.com/users/qgallouedec/orgs",
"repos_url": "https://api.github.com/users/qgallouedec/repos",
"events_url": "https://api.github.com/users/qgallouedec/events{/privacy}",
"received_events_url": "https://api.github.com/users/qgallouedec/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,700 | 1,701 | 1,700 | CONTRIBUTOR | null | # What does this PR do?
Fixes #26635
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@muellerzr @pacman100 @stevhliu @MKhalusova
Quick explanation: the documentation currently states that
> max_steps (`int`, *optional*, defaults to -1):
> If set to a positive number, the total number of training steps to perform. Overrides `num_train_epochs`.
> In case of using a finite iterable dataset the training may stop before reaching the set number of steps
> when all data is exhausted.
But when you use a finite iterable dataset, the training doesn't stop as expected:
```python
import torch
from datasets import Dataset
from torch import nn
from transformers import Trainer, TrainingArguments
class MyModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(2, 2)
    def forward(self, a, return_loss=True):
        output = self.linear(a)
        return {"loss": output.sum().abs()}
data = torch.tensor([[i, i] for i in range(10)], dtype=torch.float32) # [[0., 0.], [1., 1.], [2., 2.], ...]
dataset = Dataset.from_dict({"a": data}).to_iterable_dataset() # finite iterable dataset
args = TrainingArguments(output_dir=".", per_device_train_batch_size=1, max_steps=20)
trainer = Trainer(model=MyModule(), args=args, train_dataset=dataset)
trainer.train()
```
It trains for 20 steps, looping through the dataset twice, instead of "stopping before reaching the set number of steps when all data is exhausted".
It's because the logic in the trainer is the following:
```python
global_step = 0
for epoch in range(num_train_epochs):
    for step, inputs in enumerate(epoch_iterator):
        global_step += 1
        if global_step >= max_steps:
            break
    if global_step >= max_steps:
        break
```
Therefore, the doc should be updated.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27624/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27624/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27624",
"html_url": "https://github.com/huggingface/transformers/pull/27624",
"diff_url": "https://github.com/huggingface/transformers/pull/27624.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27624.patch",
"merged_at": 1700651411000
} |
https://api.github.com/repos/huggingface/transformers/issues/27623 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27623/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27623/comments | https://api.github.com/repos/huggingface/transformers/issues/27623/events | https://github.com/huggingface/transformers/issues/27623 | 2,003,818,000 | I_kwDOCUB6oc53b9YQ | 27,623 | Issue with transformers==4.29.0 and openai==1.3.4 Integration | {
"login": "Strive-for-excellence",
"id": 26090323,
"node_id": "MDQ6VXNlcjI2MDkwMzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/26090323?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Strive-for-excellence",
"html_url": "https://github.com/Strive-for-excellence",
"followers_url": "https://api.github.com/users/Strive-for-excellence/followers",
"following_url": "https://api.github.com/users/Strive-for-excellence/following{/other_user}",
"gists_url": "https://api.github.com/users/Strive-for-excellence/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Strive-for-excellence/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Strive-for-excellence/subscriptions",
"organizations_url": "https://api.github.com/users/Strive-for-excellence/orgs",
"repos_url": "https://api.github.com/users/Strive-for-excellence/repos",
"events_url": "https://api.github.com/users/Strive-for-excellence/events{/privacy}",
"received_events_url": "https://api.github.com/users/Strive-for-excellence/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey! Thanks for reporting π€ would you like to open a PR for a fix? "
] | 1,700 | 1,700 | 1,700 | CONTRIBUTOR | null | ### System Info
transformers==4.29.0
openai==1.3.4
When running the code provided at the following link with openai==1.3.4: https://huggingface.co/docs/transformers/transformers_agents, an error occurs:
```
APIRemovedInV1 Traceback (most recent call last)
[<ipython-input-6-4578d52c5ccf>](https://localhost:8080/#) in <cell line: 1>()
----> 1 boat = agent.run("Generate an image of a boat in the water")
2 boat
5 frames
[/usr/local/lib/python3.10/dist-packages/transformers/tools/agents.py](https://localhost:8080/#) in run(self, task, return_code, remote, **kwargs)
312 """
313 prompt = self.format_prompt(task)
--> 314 result = self.generate_one(prompt, stop=["Task:"])
315 explanation, code = clean_code_for_run(result)
316
[/usr/local/lib/python3.10/dist-packages/transformers/tools/agents.py](https://localhost:8080/#) in generate_one(self, prompt, stop)
407 return self._chat_generate(prompt, stop)
408 else:
--> 409 return self._completion_generate([prompt], stop)[0]
410
411 def _chat_generate(self, prompt, stop):
[/usr/local/lib/python3.10/dist-packages/transformers/tools/agents.py](https://localhost:8080/#) in _completion_generate(self, prompts, stop)
419
420 def _completion_generate(self, prompts, stop):
--> 421 result = openai.Completion.create(
422 model=self.model,
423 prompt=prompts,
[/usr/local/lib/python3.10/dist-packages/openai/_utils/_proxy.py](https://localhost:8080/#) in __getattr__(self, attr)
20
21 def __getattr__(self, attr: str) -> object:
---> 22 return getattr(self.__get_proxied__(), attr)
23
24 @override
[/usr/local/lib/python3.10/dist-packages/openai/_utils/_proxy.py](https://localhost:8080/#) in __get_proxied__(self)
41 def __get_proxied__(self) -> T:
42 if not self.should_cache:
---> 43 return self.__load__()
44
45 proxied = self.__proxied
[/usr/local/lib/python3.10/dist-packages/openai/lib/_old_api.py](https://localhost:8080/#) in __load__(self)
31 @override
32 def __load__(self) -> None:
---> 33 raise APIRemovedInV1(symbol=self._symbol)
34
35
APIRemovedInV1:
You tried to access openai.Completion, but this is no longer supported in openai>=1.0.0 - see the README at https://github.com/openai/openai-python for the API.
You can run `openai migrate` to automatically upgrade your codebase to use the 1.0.0 interface.
Alternatively, you can pin your installation to the old version, e.g. `pip install openai==0.28`
A detailed migration guide is available here: https://github.com/openai/openai-python/discussions/742
```
The error traceback indicates an issue within the file 'transformers/tools/agents.py':
```
def _chat_generate(self, prompt, stop):
    result = openai.ChatCompletion.create(
        model=self.model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
        stop=stop,
    )
    return result["choices"][0]["message"]["content"]
```
To resolve this issue, you can update the code in 'transformers/tools/agents.py' as follows:
```
def _chat_generate(self, prompt, stop):
    result = openai.chat.completions.create(
        model=self.model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
        stop=stop,
    )
    return result.choices[0].message.content
```
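The traceback above actually fails inside `_completion_generate`, which still calls the removed `openai.Completion` endpoint, so the same kind of migration would presumably be needed there as well. A rough, untested sketch (the extra arguments such as `max_tokens` mirror what I believe the current implementation passes and may differ):
```python
def _completion_generate(self, prompts, stop):
    result = openai.completions.create(
        model=self.model,
        prompt=prompts,
        temperature=0,
        stop=stop,
        max_tokens=200,
    )
    return [choice.text for choice in result.choices]
```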
### Who can help?
@Narsil @stevhliu
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
from transformers import tools
from transformers.tools import OpenAiAgent
API_KEY = '**'
agent = OpenAiAgent(model='gpt-4',api_key=API_KEY)
boat = agent.run("Generate an image of a boat in the water")
boat
### Expected behavior
Updating the code would be a better solution. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27623/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27623/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27622 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27622/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27622/comments | https://api.github.com/repos/huggingface/transformers/issues/27622/events | https://github.com/huggingface/transformers/pull/27622 | 2,003,687,080 | PR_kwDOCUB6oc5f_TUu | 27,622 | [Fuyu and Persimmon] Add FA2 and fused kernels | {
"login": "jzhang38",
"id": 42993249,
"node_id": "MDQ6VXNlcjQyOTkzMjQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/42993249?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jzhang38",
"html_url": "https://github.com/jzhang38",
"followers_url": "https://api.github.com/users/jzhang38/followers",
"following_url": "https://api.github.com/users/jzhang38/following{/other_user}",
"gists_url": "https://api.github.com/users/jzhang38/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jzhang38/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jzhang38/subscriptions",
"organizations_url": "https://api.github.com/users/jzhang38/orgs",
"repos_url": "https://api.github.com/users/jzhang38/repos",
"events_url": "https://api.github.com/users/jzhang38/events{/privacy}",
"received_events_url": "https://api.github.com/users/jzhang38/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@jzhang38 \r\n\r\nI'm close to finishing the FA2 implementation for Persimmon -- see #27052. Also am developing a separate package for loading cuda kernels dynamically that would allow for incorporating fused kernels (including FA's fused ops / layers). ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,700 | 1,703 | 1,703 | NONE | null | # What does this PR do?
- [ ] Add FA2 to Fuyu and Persimmon (just like it has been done for Llama through `use_flash_attention_2=True`; see the usage sketch below)
- [ ] Add other [fused kernels from the FA2 repo](https://github.com/Dao-AILab/flash-attention/blob/2c3baba4a63c4007c8a132c5380edc9430f88a22/training/README.md?plain=1#L66)
- [ ] Add a working finetuning script.
1 and 2 are what's implemented in [OtterHD](https://huggingface.co/papers/2311.04219). I am not sure if you want to have 2 (fused kernels of rotary embed and layernorm) added into transformers main because that means adding CUDA kernel dependencies (see [here](https://github.com/Dao-AILab/flash-attention/blob/2c3baba4a63c4007c8a132c5380edc9430f88a22/training/README.md?plain=1#L66))
3 is related to https://github.com/huggingface/transformers/pull/26997
The goal is to accelerate the fine-tuning process for Fuyu.
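For item 1, the intended user-facing API would presumably mirror what Llama already exposes. A sketch of the target usage once this lands (not functional on main yet; the checkpoints below are just examples and the flag is assumed to carry over unchanged):
```python
import torch
from transformers import FuyuForCausalLM, PersimmonForCausalLM

# Assumed to reuse the same flag that Llama supports today.
fuyu = FuyuForCausalLM.from_pretrained(
    "adept/fuyu-8b",
    torch_dtype=torch.bfloat16,
    use_flash_attention_2=True,
)
persimmon = PersimmonForCausalLM.from_pretrained(
    "adept/persimmon-8b-base",
    torch_dtype=torch.bfloat16,
    use_flash_attention_2=True,
)
```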
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@ArthurZucker @molbap
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27622/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27622/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27622",
"html_url": "https://github.com/huggingface/transformers/pull/27622",
"diff_url": "https://github.com/huggingface/transformers/pull/27622.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27622.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27621 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27621/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27621/comments | https://api.github.com/repos/huggingface/transformers/issues/27621/events | https://github.com/huggingface/transformers/pull/27621 | 2,003,683,699 | PR_kwDOCUB6oc5f_Slj | 27,621 | [ConvNext] Improve backbone | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,700 | 1,700 | 1,700 | CONTRIBUTOR | null | # What does this PR do?
This PR makes sure that ConvNextBackbone is implemented in the same way as the other backbones in the library, i.e. with no hardcoded `return_dict=True`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27621/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27621/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27621",
"html_url": "https://github.com/huggingface/transformers/pull/27621",
"diff_url": "https://github.com/huggingface/transformers/pull/27621.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27621.patch",
"merged_at": 1700561682000
} |
https://api.github.com/repos/huggingface/transformers/issues/27620 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27620/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27620/comments | https://api.github.com/repos/huggingface/transformers/issues/27620/events | https://github.com/huggingface/transformers/issues/27620 | 2,003,557,732 | I_kwDOCUB6oc53a91k | 27,620 | How can check hugging face pretrained model has metadata attribute? | {
"login": "JinSeoung-Oh",
"id": 78573459,
"node_id": "MDQ6VXNlcjc4NTczNDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/78573459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JinSeoung-Oh",
"html_url": "https://github.com/JinSeoung-Oh",
"followers_url": "https://api.github.com/users/JinSeoung-Oh/followers",
"following_url": "https://api.github.com/users/JinSeoung-Oh/following{/other_user}",
"gists_url": "https://api.github.com/users/JinSeoung-Oh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JinSeoung-Oh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JinSeoung-Oh/subscriptions",
"organizations_url": "https://api.github.com/users/JinSeoung-Oh/orgs",
"repos_url": "https://api.github.com/users/JinSeoung-Oh/repos",
"events_url": "https://api.github.com/users/JinSeoung-Oh/events{/privacy}",
"received_events_url": "https://api.github.com/users/JinSeoung-Oh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey! π€ thanks for the kind words!\r\nCould you share the full traceback and how you called `transformers`. \r\nI am not sure about the integration with Llama-index and I am this unfamiliar with the Metadata attribute",
"It's likely this is something outside `transformers`, and need to open an issue in those libraries. But yes, we can only tell with a full log.",
"@ArthurZucker @ydshieh \r\nThanks!\r\nAnd @ydshieh you are right.\r\nI just misunderstood Llama-index bot' reply\r\nThis is korean(9:45 AM), and I just solved this problem 5 min ago.\r\nI have to load hugging face model using HuggingFaceLM class in Llama-index not transformers.\r\nSorry for bothering..\r\nThanks!"
] | 1,700 | 1,700 | 1,700 | NONE | null | Hi, first of all, I want to say thanks for you guys' work and contributions.
My question is how I can check whether a Hugging Face model has a metadata attribute.
These days, I'm building GraphRAG with LlamaIndex and NebulaGraph for Korean.
The baseline for this work has already been completed.
But the Korean LLM's performance is not good, so I tried the other models listed on the OpenKo-LLM Leaderboard.
But all the models I tried failed.
The error message is 'AttributeError: (LlamaModel / LlamaForCausalLM / MistralModel / MistralForCausalLM) has no attribute 'metadata''.
My guess is that maybe none of the xxxxForCausalLM classes have a 'metadata' attribute.
So, I just want to know how to check this.
I already asked LlamaIndex and NebulaGraph about it, but I cannot solve this problem.
According to the LlamaIndex bot (dosu-bot), there is no reason why llama-index cannot be compatible with Hugging Face models.
It said I have to check that the metadata attribute is properly defined for the model I plan to use.
How can I check it? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27620/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27620/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27619 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27619/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27619/comments | https://api.github.com/repos/huggingface/transformers/issues/27619/events | https://github.com/huggingface/transformers/issues/27619 | 2,003,408,505 | I_kwDOCUB6oc53aZZ5 | 27,619 | runtime error clip_vision_bert TypeError: __init__() got an unexpected keyword argument '_do_init' | {
"login": "guanhdrmq",
"id": 81207745,
"node_id": "MDQ6VXNlcjgxMjA3NzQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/81207745?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/guanhdrmq",
"html_url": "https://github.com/guanhdrmq",
"followers_url": "https://api.github.com/users/guanhdrmq/followers",
"following_url": "https://api.github.com/users/guanhdrmq/following{/other_user}",
"gists_url": "https://api.github.com/users/guanhdrmq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/guanhdrmq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/guanhdrmq/subscriptions",
"organizations_url": "https://api.github.com/users/guanhdrmq/orgs",
"repos_url": "https://api.github.com/users/guanhdrmq/repos",
"events_url": "https://api.github.com/users/guanhdrmq/events{/privacy}",
"received_events_url": "https://api.github.com/users/guanhdrmq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey! Can you update the version of transformers you are using ? \r\nHey! Seems like an error with `multilingual_vqa` calling from pretrained. Would recommend you to upgrade to a more recent version but also isolate this to transformers or `multilingual_vqa` and post the issue there! ",
"Hi! @ArthurZucker. Thanks for your answering. Both of your solutions already done and upgrade to transformers 4.35.2 but does not work. This issue is posted : https://github.com/gchhablani/multilingual-vqa/issues/2.\r\n\r\nIt seems like huggingface scripts problem here modellin_flax_utils.py",
"The traceback points to `\"D:\\multimodal_robustness\\multilingual_vqa\\models\\flax_clip_vision_bert\\modeling_clip_vision_bert.py` which does not exist in transformers! π ",
"Hi @ArthurZucker Haha I know that. Can hugging face team help solve this issue? because people have the same issue but adding _do_init=False does not work. ihttps://github.com/huggingface/transformers/issues/12513\r\nThe author indeed used huggingface framework. I also contact him on git https://github.com/gchhablani but no answers. \r\n\r\nThis author arises another problem on huggingface visualbert lower accuracy in VQA validation dataset.\r\n\r\nSincerely hope hugginface team can help fix this error? Thank you very much.",
"Sorry but I have 0 knowledge on this library, and not a lot of bandwidth π You should be able to open a PR implementing a fix (probably just removing the `_do_init` kwarg from the call to `cls`)",
"Answered here: https://github.com/gchhablani/multilingual-vqa/issues/2#issuecomment-1843424353",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,700 | 1,704 | 1,704 | NONE | null | ### System Info
transformer 4.25.1
python 3.9
torch 1.13
windows 10 or ubuntu 20
### Who can help?
@sanchit-gandhi
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
from transformers import CLIPProcessor, BertTokenizerFast
from multilingual_vqa.models.flax_clip_vision_bert.modeling_clip_vision_bert import (
FlaxCLIPVisionBertForSequenceClassification,
)
clip_processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
tokenizer = BertTokenizerFast.from_pretrained("bert-base-multilingual-uncased")
dataset = VQADataset(
questions=questions[:10],
annotations=annotations[:10],
image_processor=clip_processor,
text_processor=tokenizer,
)
model = FlaxCLIPVisionBertForSequenceClassification.from_pretrained(
"./pretrained/clip-vision-bert-vqa-ft-6k",
num_labels=len(config.id2label),
id2label=config.id2label,
label2id=config.label2id,
)
print(model)
model.to(device)
model.eval()
test_dataloader = DataLoader(dataset, collate_fn=collate_fn, batch_size=1, shuffle=False)
for batch in tqdm(test_dataloader):
    batch = {k: v.to(device) for k, v in batch.items()}
    outputs = model(**batch)
    preds = outputs.logits[0]
    sorted_indices = np.argsort(preds)[::-1]  # Get reverse sorted scores
    top_5_indices = sorted_indices[:5]
    top_5_tokens = list(map(model.config.id2label.get, top_5_indices))
    top_5_scores = preds[top_5_indices]
    print(dict(zip(top_5_tokens, top_5_scores)))
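(Not needed to reproduce the error, but for reference: a minimal, self-contained illustration of the mechanism and of the kind of fix suggested in the discussion, i.e. accepting `_do_init` in the model's `__init__` instead of letting it leak into the Flax module. All class names below are placeholders, not the real `multilingual_vqa` classes.)
```python
class FakeModule:
    def __init__(self, config=None, dtype=None):  # does NOT accept _do_init
        self.config, self.dtype = config, dtype

class Base:
    def __init__(self, config, module, _do_init=True):
        self.config, self.module, self._do_init = config, module, _do_init

    @classmethod
    def from_pretrained(cls, config, _do_init=True, **kwargs):
        # transformers' FlaxPreTrainedModel.from_pretrained forwards _do_init like this
        return cls(config, _do_init=_do_init, **kwargs)

class Broken(Base):
    def __init__(self, config, dtype=None, **kwargs):
        module = FakeModule(config=config, dtype=dtype, **kwargs)  # _do_init leaks into the module -> TypeError
        super().__init__(config, module)

class Fixed(Base):
    def __init__(self, config, dtype=None, _do_init=True, **kwargs):
        module = FakeModule(config=config, dtype=dtype, **kwargs)  # kwargs no longer contains _do_init
        super().__init__(config, module, _do_init=_do_init)

Fixed.from_pretrained({"num_labels": 2})      # works
# Broken.from_pretrained({"num_labels": 2})   # TypeError: __init__() got an unexpected keyword argument '_do_init'
```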
### Expected behavior
Fix the `_do_init` issue so that clip_vision_bert can be used for VQA inference on the validation dataset. Currently it fails with:
Traceback (most recent call last):
  File "/media/wayne/data/clip_vision_bert/clip_vision_bert.py", line 210, in <module>
    model = FlaxCLIPVisionBertForSequenceClassification.from_pretrained(
  File "/media/wayne/data/clip_vision_bert/multilingual_vqa/models/flax_clip_vision_bert/modeling_clip_vision_bert.py", line 991, in from_pretrained
    return super().from_pretrained(*args, **kwargs)
  File "/home/wayne/anaconda3/envs/clip_vision_bert/lib/python3.8/site-packages/transformers/modeling_flax_utils.py", line 797, in from_pretrained
    model = cls(config, *model_args, _do_init=_do_init, **model_kwargs)
  File "/media/wayne/data/clip_vision_bert/multilingual_vqa/models/flax_clip_vision_bert/modeling_clip_vision_bert.py", line 857, in __init__
    module = self.module_class(config=config, dtype=dtype, **kwargs)
TypeError: __init__() got an unexpected keyword argument '_do_init' | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27619/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27619/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27618 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27618/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27618/comments | https://api.github.com/repos/huggingface/transformers/issues/27618/events | https://github.com/huggingface/transformers/issues/27618 | 2,003,365,582 | I_kwDOCUB6oc53aO7O | 27,618 | I get γKeyError: 'label'γ when I run transformers/examples/pytorch/text-classification/run_glue.py | {
"login": "WeiChunyu-star",
"id": 54472778,
"node_id": "MDQ6VXNlcjU0NDcyNzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/54472778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/WeiChunyu-star",
"html_url": "https://github.com/WeiChunyu-star",
"followers_url": "https://api.github.com/users/WeiChunyu-star/followers",
"following_url": "https://api.github.com/users/WeiChunyu-star/following{/other_user}",
"gists_url": "https://api.github.com/users/WeiChunyu-star/gists{/gist_id}",
"starred_url": "https://api.github.com/users/WeiChunyu-star/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WeiChunyu-star/subscriptions",
"organizations_url": "https://api.github.com/users/WeiChunyu-star/orgs",
"repos_url": "https://api.github.com/users/WeiChunyu-star/repos",
"events_url": "https://api.github.com/users/WeiChunyu-star/events{/privacy}",
"received_events_url": "https://api.github.com/users/WeiChunyu-star/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,700 | 1,700 | 1,700 | NONE | null | ### System Info
transformers: 4.36.0.dev0
accelerate: 0.24.1
platform: x86 + V100
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. git clone https://github.com/huggingface/transformers
2. pip install ./transformers
3. pip install -r transformers/examples/pytorch/text-classification/requirements.txt
4. python transformers/examples/pytorch/text-classification/run_glue.py \
--model_name_or_path ./bert-base-cased \
--task_name mrpc \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--learning_rate 2e-5 \
--num_train_epochs 3 \
--output_dir ./mrpc_out
### Expected behavior
Using custom data configuration mrpc-f18c3b066d103ab0
11/21/2023 11:02:43 - INFO - datasets.builder - Using custom data configuration mrpc-f18c3b066d103ab0
Loading Dataset Infos from /root/miniconda3/lib/python3.8/site-packages/datasets/packaged_modules/text
11/21/2023 11:02:43 - INFO - datasets.info - Loading Dataset Infos from /root/miniconda3/lib/python3.8/site-packages/datasets/packaged_modules/text
Overwrite dataset info from restored data version if exists.
11/21/2023 11:02:43 - INFO - datasets.builder - Overwrite dataset info from restored data version if exists.
Loading Dataset info from /root/.cache/huggingface/datasets/glue/mrpc-f18c3b066d103ab0/0.0.0/c4a140d10f020282918b5dd1b8a49f0104729c6177f60a6b49ec2a365ec69f34
11/21/2023 11:02:43 - INFO - datasets.info - Loading Dataset info from /root/.cache/huggingface/datasets/glue/mrpc-f18c3b066d103ab0/0.0.0/c4a140d10f020282918b5dd1b8a49f0104729c6177f60a6b49ec2a365ec69f34
Found cached dataset glue (/root/.cache/huggingface/datasets/glue/mrpc-f18c3b066d103ab0/0.0.0/c4a140d10f020282918b5dd1b8a49f0104729c6177f60a6b49ec2a365ec69f34)
11/21/2023 11:02:43 - INFO - datasets.builder - Found cached dataset glue (/root/.cache/huggingface/datasets/glue/mrpc-f18c3b066d103ab0/0.0.0/c4a140d10f020282918b5dd1b8a49f0104729c6177f60a6b49ec2a365ec69f34)
Loading Dataset info from /root/.cache/huggingface/datasets/glue/mrpc-f18c3b066d103ab0/0.0.0/c4a140d10f020282918b5dd1b8a49f0104729c6177f60a6b49ec2a365ec69f34
11/21/2023 11:02:43 - INFO - datasets.info - Loading Dataset info from /root/.cache/huggingface/datasets/glue/mrpc-f18c3b066d103ab0/0.0.0/c4a140d10f020282918b5dd1b8a49f0104729c6177f60a6b49ec2a365ec69f34
Traceback (most recent call last):
File "transformers/examples/pytorch/text-classification/run_glue.py", line 652, in <module>
main()
File "transformers/examples/pytorch/text-classification/run_glue.py", line 364, in main
label_list = raw_datasets["train"].features["label"].names
KeyError: 'label' | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27618/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27618/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27617 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27617/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27617/comments | https://api.github.com/repos/huggingface/transformers/issues/27617/events | https://github.com/huggingface/transformers/pull/27617 | 2,003,364,444 | PR_kwDOCUB6oc5f-NnG | 27,617 | remove the deprecated method `init_git_repo` | {
"login": "statelesshz",
"id": 28150734,
"node_id": "MDQ6VXNlcjI4MTUwNzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/28150734?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/statelesshz",
"html_url": "https://github.com/statelesshz",
"followers_url": "https://api.github.com/users/statelesshz/followers",
"following_url": "https://api.github.com/users/statelesshz/following{/other_user}",
"gists_url": "https://api.github.com/users/statelesshz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/statelesshz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/statelesshz/subscriptions",
"organizations_url": "https://api.github.com/users/statelesshz/orgs",
"repos_url": "https://api.github.com/users/statelesshz/repos",
"events_url": "https://api.github.com/users/statelesshz/events{/privacy}",
"received_events_url": "https://api.github.com/users/statelesshz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,700 | 1,700 | 1,700 | CONTRIBUTOR | null | # What does this PR do?
As the docstring says, `init_git_repo` is deprecated and will be removed in v4.34.0 of Transformers.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
cc @muellerzr
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27617/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27617/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27617",
"html_url": "https://github.com/huggingface/transformers/pull/27617",
"diff_url": "https://github.com/huggingface/transformers/pull/27617.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27617.patch",
"merged_at": 1700582975000
} |
https://api.github.com/repos/huggingface/transformers/issues/27616 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27616/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27616/comments | https://api.github.com/repos/huggingface/transformers/issues/27616/events | https://github.com/huggingface/transformers/pull/27616 | 2,003,268,459 | PR_kwDOCUB6oc5f95nY | 27,616 | Code refactor for nested conditional statements | {
"login": "YeonwooSung",
"id": 30489717,
"node_id": "MDQ6VXNlcjMwNDg5NzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/30489717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YeonwooSung",
"html_url": "https://github.com/YeonwooSung",
"followers_url": "https://api.github.com/users/YeonwooSung/followers",
"following_url": "https://api.github.com/users/YeonwooSung/following{/other_user}",
"gists_url": "https://api.github.com/users/YeonwooSung/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YeonwooSung/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YeonwooSung/subscriptions",
"organizations_url": "https://api.github.com/users/YeonwooSung/orgs",
"repos_url": "https://api.github.com/users/YeonwooSung/repos",
"events_url": "https://api.github.com/users/YeonwooSung/events{/privacy}",
"received_events_url": "https://api.github.com/users/YeonwooSung/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> Hey! Would recommend you to install `ruff==1.5` to make sure some of the quality check pass. Not against refactoring as long as the CI are all green!\r\n\r\nThanks. Will definitely try this!",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27616). All of your documentation changes will be reflected on that endpoint.",
"> hey! this doesn't seem to improve readability so won't accept it sorry!\r\n\r\nFair enough! Thanks. Will close this pull request, since this does not improve the readability"
] | 1,700 | 1,700 | 1,700 | CONTRIBUTOR | null | # What does this PR do?
Simple code refactoring for "src/transformers/hf_argparser.py" and "src/transformers/image_utils.py".
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [v] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [v] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27616/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27616/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27616",
"html_url": "https://github.com/huggingface/transformers/pull/27616",
"diff_url": "https://github.com/huggingface/transformers/pull/27616.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27616.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27615 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27615/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27615/comments | https://api.github.com/repos/huggingface/transformers/issues/27615/events | https://github.com/huggingface/transformers/issues/27615 | 2,003,244,755 | I_kwDOCUB6oc53ZxbT | 27,615 | How to get the number of trainable parameters for a hf model | {
"login": "mathmax12",
"id": 32367611,
"node_id": "MDQ6VXNlcjMyMzY3NjEx",
"avatar_url": "https://avatars.githubusercontent.com/u/32367611?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mathmax12",
"html_url": "https://github.com/mathmax12",
"followers_url": "https://api.github.com/users/mathmax12/followers",
"following_url": "https://api.github.com/users/mathmax12/following{/other_user}",
"gists_url": "https://api.github.com/users/mathmax12/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mathmax12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mathmax12/subscriptions",
"organizations_url": "https://api.github.com/users/mathmax12/orgs",
"repos_url": "https://api.github.com/users/mathmax12/repos",
"events_url": "https://api.github.com/users/mathmax12/events{/privacy}",
"received_events_url": "https://api.github.com/users/mathmax12/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey, I would use something like this:\r\n```python \r\ndef count_trainable_parameters(model):\r\n model_parameters = filter(lambda p: p.requires_grad, model.parameters())\r\n params = sum([np.prod(p.size()) for p in model_parameters])\r\n return params\r\n```\r\n\r\nnote that we try to keep the github issues for bugs/feature requests.\r\nThis kind of question should be asked on the [forum](https://discuss.huggingface.co/) instead\r\nThanks!"
] | 1,700 | 1,700 | 1,700 | NONE | null | ### Feature request
```python
peft_parameters = LoraConfig(
lora_alpha=16,
lora_dropout=0.1,
r=8,
bias="none",
task_type="CAUSAL_LM"
)
train_params = TrainingArguments(
output_dir="./results_modified",
num_train_epochs=1,
per_device_train_batch_size=4,
gradient_accumulation_steps=1,
optim="paged_adamw_32bit",
save_steps=25,
logging_steps=25,
learning_rate=2e-4,
weight_decay=0.001,
fp16=False,
bf16=False,
max_grad_norm=0.3,
max_steps=-1,
warmup_ratio=0.03,
group_by_length=True,
lr_scheduler_type="constant",
report_to="tensorboard"
)
fine_tuning = SFTTrainer(
model=base_model,
train_dataset=training_data,
peft_config=peft_parameters,
dataset_text_field="text",
tokenizer=llama_tokenizer,
args=train_params
)
fine_tuning.train()
```
I am using the above code for model training with LoRA. I wonder, after applying LoRA, how I can check the number of trainable parameters of the model before and after.
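Something like the sketch below is what I am looking for (it reuses `base_model` and `peft_parameters` from the snippet above and is meant to be run separately from the training script; `print_trainable_parameters` is the helper that PEFT-wrapped models expose):
```python
from peft import get_peft_model

def count_parameters(model):
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    total = sum(p.numel() for p in model.parameters())
    return trainable, total

print("before LoRA:", count_parameters(base_model))
# Roughly what SFTTrainer does internally when `peft_config` is passed, as far as I understand.
lora_model = get_peft_model(base_model, peft_parameters)
print("after LoRA: ", count_parameters(lora_model))
lora_model.print_trainable_parameters()  # built-in PEFT helper, also prints the trainable percentage
```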
### Motivation
Understand the training process well
### Your contribution
I'd love to | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27615/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27615/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27614 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27614/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27614/comments | https://api.github.com/repos/huggingface/transformers/issues/27614/events | https://github.com/huggingface/transformers/issues/27614 | 2,003,237,457 | I_kwDOCUB6oc53ZvpR | 27,614 | Nougat-small VisionEncoderDecoderModel failed when max_new_tokens > 3584, Index out of range in self | {
"login": "c3-YuelingWu",
"id": 103061109,
"node_id": "U_kgDOBiSWdQ",
"avatar_url": "https://avatars.githubusercontent.com/u/103061109?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/c3-YuelingWu",
"html_url": "https://github.com/c3-YuelingWu",
"followers_url": "https://api.github.com/users/c3-YuelingWu/followers",
"following_url": "https://api.github.com/users/c3-YuelingWu/following{/other_user}",
"gists_url": "https://api.github.com/users/c3-YuelingWu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/c3-YuelingWu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/c3-YuelingWu/subscriptions",
"organizations_url": "https://api.github.com/users/c3-YuelingWu/orgs",
"repos_url": "https://api.github.com/users/c3-YuelingWu/repos",
"events_url": "https://api.github.com/users/c3-YuelingWu/events{/privacy}",
"received_events_url": "https://api.github.com/users/c3-YuelingWu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3081136536,
"node_id": "MDU6TGFiZWwzMDgxMTM2NTM2",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Difficult%20Issue",
"name": "Good Difficult Issue",
"color": "684CC7",
"default": false,
"description": ""
}
] | open | false | null | [] | [
"cc @molbap and @NielsRogge who ported this model! ",
"Hi @c3-YuelingWu ! In `nougat-base` tokenizer, `model_max_length` and `max_length` are both at 4096. In `nougat-small`, they are at 3584. However it's indeed anormal that nougat-small is generating 3584 tokens when nougat-base is generating only 6 with the same blank image. I reproduced your problem, looking into it, weird that the end of sequence token is not outputted.\r\n\r\nCould be an OOV error or something related to pos embed. @NielsRogge in `tokenization_nougat_fast.py` `PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES` is not defined for `nougat-small` but only for `nougat-base`, at 3584. Can it be related?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Any update on this?",
"I don't think either had the time to tackle this, marking as good difficult issue π€ "
] | 1,700 | 1,707 | null | NONE | null | ### System Info
# System Info
- transformers == 4.35.0
- torch == 2.0.1+cu117
- Error shows on both cpu and cuda.
# Task
Use nougat to parse text from large documents.
# Issue
NougatTokenizerFast's `model_max_length` is 3584. Setting max_new_tokens=3585 (or larger) in model.generate works well on most pages, but fails on pages whose output should be '[MISSING_PAGE_POST]'.
Once Nougat fails, parsing all the following pages also fails unless the kernel is restarted.
# Error
IndexError: index out of range in self
# What I Tried
- Setting `nougat_model.config.decoder.pad_token_id = nougat_image_processor.tokenizer.eos_token_id` doesn't work.
- I suspect one special token is generated outside of the vocabulary, which prevents end-of-sequence detection.
- Nougat-base can generate an empty string with max_new_tokens=3585 on the same page, without errors.
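A possible workaround (untested sketch, using the same variables as the reproduction snippet below; the config attribute name is my assumption) would be to cap generation by the decoder's positional-embedding size, so the position ids can never index past the learned embedding table:
```python
decoder_max_positions = nougat_model.config.decoder.max_position_embeddings  # attribute name assumed
outputs = nougat_model.generate(
    pixel_values.to(device),
    min_length=1,
    max_new_tokens=min(3585, decoder_max_positions - 2),  # leave room for the decoder start token
    bad_words_ids=[[nougat_image_processor.tokenizer.unk_token_id]],
)
```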
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
**Sample Document**

**Code**
```
nougat_image_processor = NougatProcessor.from_pretrained("facebook/nougat-small")
nougat_model = VisionEncoderDecoderModel.from_pretrained("facebook/nougat-small")
pixel_values = nougat_image_processor(image, return_tensors="pt").pixel_values
device = 'cpu'#"cuda" if torch.cuda.is_available() else "cpu"
if device == 'cuda':
    pixel_values = pixel_values.cuda()
    nougat_model = nougat_model.cuda()
outputs = nougat_model.generate(
    pixel_values.to(device),
    min_length=1,
    max_new_tokens=3585,
    bad_words_ids=[[nougat_image_processor.tokenizer.unk_token_id]],
)
generated = nougat_image_processor.batch_decode(outputs[0], skip_special_tokens=True)[0]
generated = nougat_image_processor.post_process_generation(generated, fix_markdown=False)
print(generated)
```
### Expected behavior
**Expected Output**
'[MISSING_PAGE_POST]' or ' '.
**Observed Output**
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
/tmp/ipykernel_11659/2341160804.py in <module>
6 nougat_model = nougat_model.cuda()
7
----> 8 outputs = nougat_model.generate(
9 pixel_values.to(device),
10 min_length=1,
~/.conda/envs/py-chunker-client-ipython/lib/python3.9/site-packages/torch/utils/_contextlib.py in decorate_context(*args, **kwargs)
113 def decorate_context(*args, **kwargs):
114 with ctx_factory():
--> 115 return func(*args, **kwargs)
116
117 return decorate_context
~/.conda/envs/py-chunker-client-ipython/lib/python3.9/site-packages/transformers/generation/utils.py in generate(self, inputs, generation_config, logits_processor, stopping_criteria, prefix_allowed_tokens_fn, synced_gpus, assistant_model, streamer, negative_prompt_ids, negative_prompt_attention_mask, **kwargs)
1671 if generation_mode == GenerationMode.GREEDY_SEARCH:
1672 # 11. run greedy search
-> 1673 return self.greedy_search(
1674 input_ids,
1675 logits_processor=logits_processor,
~/.conda/envs/py-chunker-client-ipython/lib/python3.9/site-packages/transformers/generation/utils.py in greedy_search(self, input_ids, logits_processor, stopping_criteria, max_length, pad_token_id, eos_token_id, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, synced_gpus, streamer, **model_kwargs)
2519
2520 # forward pass to get next token
-> 2521 outputs = self(
2522 **model_inputs,
2523 return_dict=True,
~/.conda/envs/py-chunker-client-ipython/lib/python3.9/site-packages/torch/nn/modules/module.py in _call_impl(self, *args, **kwargs)
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
~/.conda/envs/py-chunker-client-ipython/lib/python3.9/site-packages/transformers/models/vision_encoder_decoder/modeling_vision_encoder_decoder.py in forward(self, pixel_values, decoder_input_ids, decoder_attention_mask, encoder_outputs, past_key_values, decoder_inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict, **kwargs)
602
603 # Decode
--> 604 decoder_outputs = self.decoder(
605 input_ids=decoder_input_ids,
606 attention_mask=decoder_attention_mask,
~/.conda/envs/py-chunker-client-ipython/lib/python3.9/site-packages/torch/nn/modules/module.py in _call_impl(self, *args, **kwargs)
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
~/.conda/envs/py-chunker-client-ipython/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py in forward(self, input_ids, attention_mask, encoder_hidden_states, encoder_attention_mask, head_mask, cross_attn_head_mask, past_key_values, inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict)
2046
2047 # decoder outputs consists of (dec_features, layer_state, dec_hidden, dec_attn)
-> 2048 outputs = self.model.decoder(
2049 input_ids=input_ids,
2050 attention_mask=attention_mask,
~/.conda/envs/py-chunker-client-ipython/lib/python3.9/site-packages/torch/nn/modules/module.py in _call_impl(self, *args, **kwargs)
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
~/.conda/envs/py-chunker-client-ipython/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py in forward(self, input_ids, attention_mask, encoder_hidden_states, encoder_attention_mask, head_mask, cross_attn_head_mask, past_key_values, inputs_embeds, use_cache, output_attentions, output_hidden_states, return_dict)
1236
1237 # embed positions
-> 1238 positions = self.embed_positions(input, past_key_values_length)
1239
1240 hidden_states = inputs_embeds + positions.to(inputs_embeds.device)
~/.conda/envs/py-chunker-client-ipython/lib/python3.9/site-packages/torch/nn/modules/module.py in _call_impl(self, *args, **kwargs)
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
~/.conda/envs/py-chunker-client-ipython/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py in forward(self, input_ids, past_key_values_length)
120 ).expand(bsz, -1)
121
--> 122 return super().forward(positions + self.offset)
123
124
~/.conda/envs/py-chunker-client-ipython/lib/python3.9/site-packages/torch/nn/modules/sparse.py in forward(self, input)
160
161 def forward(self, input: Tensor) -> Tensor:
--> 162 return F.embedding(
163 input, self.weight, self.padding_idx, self.max_norm,
164 self.norm_type, self.scale_grad_by_freq, self.sparse)
~/.conda/envs/py-chunker-client-ipython/lib/python3.9/site-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
2208 # remove once script supports set_grad_enabled
2209 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 2210 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
2211
2212
IndexError: index out of range in self | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27614/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27614/timeline | reopened | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27613 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27613/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27613/comments | https://api.github.com/repos/huggingface/transformers/issues/27613/events | https://github.com/huggingface/transformers/issues/27613 | 2,002,837,466 | I_kwDOCUB6oc53YN_a | 27,613 | RuntimeError "Some tensors share memory" occurred when saving checkpoints during LoRA | {
"login": "Yuta555",
"id": 59324565,
"node_id": "MDQ6VXNlcjU5MzI0NTY1",
"avatar_url": "https://avatars.githubusercontent.com/u/59324565?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Yuta555",
"html_url": "https://github.com/Yuta555",
"followers_url": "https://api.github.com/users/Yuta555/followers",
"following_url": "https://api.github.com/users/Yuta555/following{/other_user}",
"gists_url": "https://api.github.com/users/Yuta555/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Yuta555/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Yuta555/subscriptions",
"organizations_url": "https://api.github.com/users/Yuta555/orgs",
"repos_url": "https://api.github.com/users/Yuta555/repos",
"events_url": "https://api.github.com/users/Yuta555/events{/privacy}",
"received_events_url": "https://api.github.com/users/Yuta555/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @LysandreJik, should we do what we did similar for accelerate where we just warned instead? ",
"What we did in accelerate: https://github.com/huggingface/accelerate/pull/2136 cc @SunMarc ",
"cc @Narsil also",
"I've also hit this, and it seems like a backwards incompatible change from `4.33.2`. I appreciate that the error message links to a [page](https://huggingface.co/docs/safetensors/torch_shared_tensors) with clear instructions. But the instructions say to replace `save_file` with `save_model` which would need to be done [deep in the Trainer class](https://github.com/huggingface/transformers/blob/df5c5c62ae253055336f5bb0828ca8e3e15ab6bd/src/transformers/trainer.py#L2893) for those of us using `Trainer`.\r\n\r\nLooking at the code I see there's a workaround to set `save_safetensors=False`. Seems to work, but then we lose whatever benefits safetensors are supposed to give us. (TF compatibility - I'm sure somebody cares, but nobody I know.)\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hey! Could you try again with main, I think the compatibility has been taken care of! ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,700 | 1,707 | 1,707 | NONE | null | ### System Info
- `transformers` version: 4.35.0
- Platform: Linux-5.10.0-26-cloud-amd64-x86_64-with-glibc2.31
- Python version: 3.10.12
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.4.0
- Accelerate version: 0.24.1
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: MULTI_GPU
- mixed_precision: fp16
- use_cpu: False
- debug: False
- num_processes: 4
- machine_rank: 0
- num_machines: 1
- gpu_ids: all
- rdzv_backend: static
- same_network: True
- main_training_function: main
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help?
@muellerzr @pacman
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I'm fine-tuning Llama 2 7B (ForSequenceClassification) with LoRA as shown below. I get an error when I set target_modules to all linear layers for LoRA, while I don't get the error when targeting only q and v.
What causes this error and how can I avoid it? If following the error message is the way to avoid the problem, how can I use `save_model` in the Trainer setting?
Error:
```
RuntimeError:
Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'base_model.model.score.lora_A.weight', 'base_model.model.score.modules_to_save.lora_A.weight'}, {'base_model.model.score.modules_to_save.lora_B.weight', 'base_model.model.score.lora_B.weight'}].
A potential way to correctly save your model is to use `save_model`.
More information at https://huggingface.co/docs/safetensors/torch_shared_tensors
```
Code:
```
# Build a model and set configuration for LoRA
model = AutoModelForSequenceClassification.from_pretrained(
model_name_or_path,
torch_dtype=torch.bfloat16,
num_labels=len(label2id),
id2label=id2label,
label2id=label2id,
device_map="auto",
use_flash_attention_2=True,
)
if getattr(model.config, "pad_token_id") is None:
    model.config.pad_token_id = model.config.eos_token_id
# Pick up all linear layers
model_modules = str(model.modules)
pattern = r'\((\w+)\): Linear'
linear_layer_names = re.findall(pattern, model_modules)
names = []
# Print the names of the Linear layers
for name in linear_layer_names:
    names.append(name)
target_modules = list(set(names))
print(f"target modules for LoRA: {target_modules}")
config = LoraConfig(
task_type="SEQ_CLS",
inference_mode=False,
r=16,
lora_alpha=16,
lora_dropout=0.1,
target_modules=target_modules,
)
model = get_peft_model(model, config)
# Set Trainer Arguments
training_args = TrainingArguments(
output_dir=OUTPUT_DIR,
per_device_train_batch_size=1,
per_device_eval_batch_size=1,
gradient_accumulation_steps=8, # increase by 2x for every 2x decrease in batch size
#num_train_epochs=NUM_EPOCHS,
learning_rate=LR,
bf16=True,
warmup_steps=WARMUP_STEPS,
logging_steps=5, # For check
save_steps=5, # For check
save_safetensors=True,
evaluation_strategy="steps",
max_steps=9, # For check
remove_unused_columns=False,
label_names=["labels"],
group_by_length=True,
lr_scheduler_type=LR_SCHEDULER,
ddp_find_unused_parameters=False,
report_to="wandb",
load_best_model_at_end=True,
metric_for_best_model="accuracy",
greater_is_better=True,
)
data_collator = DataCollatorWithPadding(tokenizer, padding="longest")
#metrics = evaluate.load("accuracy")
acc = evaluate.load('accuracy')
# Added f1, precision and recall as reference
# Set average method as "macro" since dataset is imbalanced but consider each class equally important
f1 = evaluate.load('f1', average='macro')
pre = evaluate.load('precision', average='macro')
rec = evaluate.load('recall', average='macro')
def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    #return metrics.compute(predictions=predictions, references=labels)
    results = {}
    results.update(acc.compute(predictions=predictions, references=labels))
    results.update(f1.compute(predictions=predictions, references=labels, average="macro"))
    results.update(pre.compute(predictions=predictions, references=labels, average="macro"))
    results.update(rec.compute(predictions=predictions, references=labels, average="macro"))
    return results
trainer = Trainer(
args=training_args,
model=model,
train_dataset=train_data["train"],
eval_dataset=train_data["test"].train_test_split(test_size=0.05)['test'],
data_collator=data_collator,
tokenizer=tokenizer,
compute_metrics=compute_metrics,
callbacks=[],
)
trainer.train()
```
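For reference, here is an unverified sketch of how I imagine the clash could be sidestepped: with `task_type="SEQ_CLS"` PEFT already wraps the `score` head via `modules_to_save`, so also listing `score` in `target_modules` produces the shared tensors reported above. Dropping it from the targets, or disabling safetensors serialization, might avoid the error:
```
# Hypothetical adjustment: keep the classification head out of the LoRA targets
target_modules = [name for name in set(names) if name != "score"]

# Alternative (also unverified): keep the setup as-is but skip the safetensors
# shared-tensor check by falling back to torch.save-based checkpoints
training_args = TrainingArguments(
    output_dir=OUTPUT_DIR,
    save_safetensors=False,
    # ... other arguments unchanged ...
)
```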
### Expected behavior
Save checkpoints successfully without the error above. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27613/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27613/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27612 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27612/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27612/comments | https://api.github.com/repos/huggingface/transformers/issues/27612/events | https://github.com/huggingface/transformers/pull/27612 | 2,002,694,649 | PR_kwDOCUB6oc5f7764 | 27,612 | Generate: Update docs regarding reusing `past_key_values` in `generate` | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"The right-handed TOC is now fully functional!\r\n\r\n<img width=\"239\" alt=\"Screenshot 2023-11-20 at 18 31 56\" src=\"https://github.com/huggingface/transformers/assets/12240844/cb7da28e-adf2-4bc6-a71c-09e8343b984d\">\r\n"
] | 1,700 | 1,700 | 1,700 | MEMBER | null | # What does this PR do?
Due to a [recent PR](https://github.com/huggingface/transformers/pull/25086), which enables `generate` to return `past_key_values`, we can now reuse `past_key_values` to speed up multi-round conversations with LLMs. This was already described in the [LLM optimization docs](https://huggingface.co/docs/transformers/llm_tutorial_optimization), even though there was no corresponding code back when it was written.
This PR updates the doc with a code example of how to return and reuse `past_key_values`, polishing a few nits encountered along the way.
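As a rough sketch of the pattern (the model name and prompts below are placeholders, and the exact snippet in the doc may differ):
```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf")

# Round 1: ask generate to return the cache alongside the generated sequences
inputs = tokenizer("User: Hi, who are you?\nAssistant:", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=32, return_dict_in_generate=True)

# Round 2: pass the full conversation so far plus the new turn, and hand the cache back
new_turn = tokenizer("\nUser: Tell me a joke.\nAssistant:", return_tensors="pt")
input_ids = torch.cat([out.sequences, new_turn.input_ids], dim=-1)
out_2 = model.generate(
    input_ids,
    past_key_values=out.past_key_values,  # reused instead of recomputed
    max_new_tokens=32,
    return_dict_in_generate=True,
)
```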
Related issue: https://github.com/huggingface/transformers/issues/27546 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27612/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27612/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27612",
"html_url": "https://github.com/huggingface/transformers/pull/27612",
"diff_url": "https://github.com/huggingface/transformers/pull/27612.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27612.patch",
"merged_at": 1700563695000
} |
https://api.github.com/repos/huggingface/transformers/issues/27611 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27611/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27611/comments | https://api.github.com/repos/huggingface/transformers/issues/27611/events | https://github.com/huggingface/transformers/pull/27611 | 2,002,675,861 | PR_kwDOCUB6oc5f734U | 27,611 | Flash Attention 2 support for RoCm | {
"login": "fxmarty",
"id": 9808326,
"node_id": "MDQ6VXNlcjk4MDgzMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxmarty",
"html_url": "https://github.com/fxmarty",
"followers_url": "https://api.github.com/users/fxmarty/followers",
"following_url": "https://api.github.com/users/fxmarty/following{/other_user}",
"gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions",
"organizations_url": "https://api.github.com/users/fxmarty/orgs",
"repos_url": "https://api.github.com/users/fxmarty/repos",
"events_url": "https://api.github.com/users/fxmarty/events{/privacy}",
"received_events_url": "https://api.github.com/users/fxmarty/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27611). All of your documentation changes will be reflected on that endpoint.",
"@LysandreJik @ArthurZucker @amyeroberts @younesbelkada WDYT?",
"Reviewing now\r\n"
] | 1,700 | 1,701 | 1,701 | COLLABORATOR | null | As per title.
https://github.com/ROCmSoftwarePlatform/flash-attention has been bumped to `2.0.4` recently, and we expect it to be bumped to `2.1` soon. Meanwhile, this PR adds support for FA2 on RoCm devices. It can be simplified once the FA2 RoCm version is bumped to `2.1`.
Tests that pass on A100 also pass on MI210 (some are flaky on A100).
"url": "https://api.github.com/repos/huggingface/transformers/issues/27611/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/27611/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27611",
"html_url": "https://github.com/huggingface/transformers/pull/27611",
"diff_url": "https://github.com/huggingface/transformers/pull/27611.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27611.patch",
"merged_at": 1701694338000
} |
https://api.github.com/repos/huggingface/transformers/issues/27610 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27610/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27610/comments | https://api.github.com/repos/huggingface/transformers/issues/27610/events | https://github.com/huggingface/transformers/pull/27610 | 2,002,672,467 | PR_kwDOCUB6oc5f73Ii | 27,610 | [`core` / `gradient_checkpointing`] add support for old GC method | {
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"As discussed offline, let's just display a warning for now, merging! "
] | 1,700 | 1,700 | 1,700 | CONTRIBUTOR | null | # What does this PR do?
Fixes https://github.com/huggingface/transformers/issues/27596
Models on the Hub still rely on the previous `_set_gradient_checkpointing` logic that has been removed and refactored in https://github.com/huggingface/transformers/pull/27073
While maintainers of models with code on the Hub should be aware that overwriting a private method is not good practice, declaring `_set_gradient_checkpointing` was previously the only way to enable GC on these models, so the pattern is widespread. Therefore I see this PR as a potential solution to an issue that is likely to affect many users.
I propose to keep supporting it for a few minor releases by displaying a warning that explains to users that this behaviour is going to be deprecated in the future.
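For context, the legacy override that many Hub models still ship looks roughly like the sketch below (class names are invented for illustration):
```
from transformers import PreTrainedModel


class MyCustomPreTrainedModel(PreTrainedModel):
    supports_gradient_checkpointing = True

    # Old-style hook (pre-#27073): called per sub-module to flip the flag
    def _set_gradient_checkpointing(self, module, value=False):
        if isinstance(module, MyCustomEncoder):  # hypothetical encoder class
            module.gradient_checkpointing = value
```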
cc @ArthurZucker
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27610/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27610/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27610",
"html_url": "https://github.com/huggingface/transformers/pull/27610",
"diff_url": "https://github.com/huggingface/transformers/pull/27610.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27610.patch",
"merged_at": 1700561011000
} |
https://api.github.com/repos/huggingface/transformers/issues/27609 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27609/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27609/comments | https://api.github.com/repos/huggingface/transformers/issues/27609/events | https://github.com/huggingface/transformers/issues/27609 | 2,002,526,716 | I_kwDOCUB6oc53XCH8 | 27,609 | Extend Chat Template Tokenization for Training/Finetuning | {
"login": "siddk",
"id": 2498509,
"node_id": "MDQ6VXNlcjI0OTg1MDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2498509?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/siddk",
"html_url": "https://github.com/siddk",
"followers_url": "https://api.github.com/users/siddk/followers",
"following_url": "https://api.github.com/users/siddk/following{/other_user}",
"gists_url": "https://api.github.com/users/siddk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/siddk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/siddk/subscriptions",
"organizations_url": "https://api.github.com/users/siddk/orgs",
"repos_url": "https://api.github.com/users/siddk/repos",
"events_url": "https://api.github.com/users/siddk/events{/privacy}",
"received_events_url": "https://api.github.com/users/siddk/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | [
"FYI @Rocketknight1 \r\n\r\nThis would also need a support for chat templates in tokenizers IMO",
"Hey @siddk, this definitely seems like a good suggestion, and mirrors suggestions I got from e.g. @philschmid!\r\n\r\nThe first step is relatively easy - we could just check the first element of the input to figure out if it's a single conversation or a list of them, and the same with the second, although we might have to consider backward compatibility.\r\n\r\nThe third is tricky, though - I definitely understand why it's important, but given that the templates can be arbitrary, I'm not sure how we can do that automatically for any template!",
"For the third step, I think we need to define the **assistant_start_prefix** & **assistant_stop**. Only chat_template is not enough to detect what is the content of Assistant in the prompt. If we know the assistant_start_prefix & assistant_stop, we will **unmask** all tokens inside: (assistant_prefix, assistant_stop). \r\nFor example, Assume that assistant_prefix=\"\\nAssistant:\\n\" and assistant_stop=\"</stop>\"\r\nprompt = \"...\\nAssistant:\\nHi, I am here to help you</stop>\" --> unmask tokens: \"I am here to help you</stop>\" and mask all other tokens with -100",
"Lost track of this over the holidays, bumping it and putting it back on my list to deal with soon"
] | 1,700 | 1,706 | null | CONTRIBUTOR | null | ### Feature request
Extend `tokenizer.apply_chat_template` with functionality for training/finetuning, returning `attention_masks` and (optional) `labels` (for ignoring "System" and "User" messages during loss computation).
I think this requires the following steps:
- Adding support for taking in a batch of conversations (e.g., `List[Conversation := List[Dict[str, str]]`)
- Invoking the native `tokenizer.__call__()` after applying the template to each example (passing through padding, truncation, any other parameters).
- **Important**: Adding an optional output for `labels` -- a "masked" version of the returned `input_ids` with tokens corresponding to the System/User roles set to be ignored for loss computation (e.g., set to `IGNORE_INDEX = -100`).
### Motivation
The new `tokenizer.apply_chat_template` feature is great, and resolves a lot of ambiguity when it comes to formatting inputs for chat-based LLMs.
However, right now it's geared for inference-time usage, only taking a single "conversation" and outputting the `input_ids` (tokens) after applying the chat template.
When finetuning models on chat-based data, it would be really nice to unify the `apply_chat_template` API with the `tokenizer.__call__()` API, returning `attention_masks` and (optionally) `labels` (with "System" and "User" role text automatically ignored for loss computation).
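Purely as an illustration of the requested behaviour (none of the `labels` handling below exists in the current API), the training-side call could look roughly like:
```
IGNORE_INDEX = -100

# Hypothetical: a batch of conversations in, a tokenizer-style BatchEncoding out
batch = tokenizer.apply_chat_template(
    conversations,  # List[List[Dict[str, str]]]
    padding=True,
    truncation=True,
    return_tensors="pt",
)

# Requested extra output: input_ids copied into labels, with every token that belongs
# to a system/user turn replaced by IGNORE_INDEX so it is skipped by the loss
labels = batch["input_ids"].clone()
labels[non_assistant_token_mask] = IGNORE_INDEX  # mask would come from the template logic
```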
### Your contribution
I can try building a proof-of-concept for a "standard" workflow and Draft PR; I think there'd need to be a few discussions about the actual implementation details though! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27609/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 2
} | https://api.github.com/repos/huggingface/transformers/issues/27609/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27608 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27608/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27608/comments | https://api.github.com/repos/huggingface/transformers/issues/27608/events | https://github.com/huggingface/transformers/pull/27608 | 2,002,486,268 | PR_kwDOCUB6oc5f7Ob1 | 27,608 | dvclive callback: warn instead of fail when logging non-scalars | {
"login": "dberenbaum",
"id": 2308172,
"node_id": "MDQ6VXNlcjIzMDgxNzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2308172?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dberenbaum",
"html_url": "https://github.com/dberenbaum",
"followers_url": "https://api.github.com/users/dberenbaum/followers",
"following_url": "https://api.github.com/users/dberenbaum/following{/other_user}",
"gists_url": "https://api.github.com/users/dberenbaum/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dberenbaum/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dberenbaum/subscriptions",
"organizations_url": "https://api.github.com/users/dberenbaum/orgs",
"repos_url": "https://api.github.com/users/dberenbaum/repos",
"events_url": "https://api.github.com/users/dberenbaum/events{/privacy}",
"received_events_url": "https://api.github.com/users/dberenbaum/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@muellerz This makes the tests pass, but I'm not sure if it's intended that the test here logs the learning rate as a list rather than as a scalar (which will fail under several of the existing loggers, but only with a warning like in this PR). \r\n\r\nhttps://github.com/huggingface/transformers/blob/e4280d650c579a87f645d1f4a4535feb27c49804/tests/trainer/test_trainer.py#L675\r\n\r\n`self.lr_scheduler._last_lr` is a [list](https://github.com/pytorch/pytorch/blob/140c54e6ccc5e97f1b7f1e0fcd3d8c6af7dd2ab2/torch/optim/lr_scheduler.py#L167). Should a scalar value be extracted like `self.lr_scheduler._last_lr[0]`? That's the value being tested later as `[\"learning_rate\"][0]`:\r\n\r\nhttps://github.com/huggingface/transformers/blob/f31af3927f4091f5fb8126c77a0addebd4c1fe94/tests/trainer/test_trainer.py#L699-L712\r\n\r\nEverywhere else in the codebase, it looks like a scalar is extracted:\r\n\r\nhttps://github.com/huggingface/transformers/blob/f31af3927f4091f5fb8126c77a0addebd4c1fe94/src/transformers/trainer_pt_utils.py#L847-L867\r\n\r\nhttps://github.com/huggingface/transformers/blob/f31af3927f4091f5fb8126c77a0addebd4c1fe94/examples/legacy/pytorch-lightning/run_glue.py#L46",
"In the future it's @muellerzr @dberenbaum, don't want to be pinging random people :) \r\n\r\nYes, let's go with `[0]` as the one being extracted/the scalar. ",
"@muellerzr Apologies to you and the other person who was pinged here Zach! Added the change to the test in the last commit. The current test failures look unrelated."
] | 1,700 | 1,700 | 1,700 | CONTRIBUTOR | null | # What does this PR do?
Fixes https://github.com/huggingface/transformers/pull/27352#issuecomment-1819131456. This will warn instead of fail when trying to log non-scalars as metrics.
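Conceptually the change amounts to something like the sketch below (the helper name is made up and the message wording is approximate):
```
import logging

logger = logging.getLogger(__name__)


def _log_scalars_only(live, logs):
    # Sketch of the intended callback behaviour: scalars go to DVCLive, anything else warns
    for key, value in logs.items():
        if isinstance(value, (int, float)):
            live.log_metric(key, value)
        else:
            logger.warning(
                f'Trainer is attempting to log a value of "{value}" of type {type(value)} for key '
                f'"{key}" as a metric. DVCLive only supports scalar metrics, so this attribute is dropped.'
            )
```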
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@muellerz Could you please take a look? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27608/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27608/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27608",
"html_url": "https://github.com/huggingface/transformers/pull/27608",
"diff_url": "https://github.com/huggingface/transformers/pull/27608.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27608.patch",
"merged_at": 1700555391000
} |
https://api.github.com/repos/huggingface/transformers/issues/27607 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27607/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27607/comments | https://api.github.com/repos/huggingface/transformers/issues/27607/events | https://github.com/huggingface/transformers/pull/27607 | 2,002,449,277 | PR_kwDOCUB6oc5f7GZ3 | 27,607 | Deprecate `TransfoXL` | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,700 | 1,700 | 1,700 | COLLABORATOR | null | # What does this PR do?
Deprecate `TransfoXL` as discussed. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27607/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/27607/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27607",
"html_url": "https://github.com/huggingface/transformers/pull/27607",
"diff_url": "https://github.com/huggingface/transformers/pull/27607.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27607.patch",
"merged_at": 1700822882000
} |
https://api.github.com/repos/huggingface/transformers/issues/27606 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27606/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27606/comments | https://api.github.com/repos/huggingface/transformers/issues/27606/events | https://github.com/huggingface/transformers/pull/27606 | 2,002,404,263 | PR_kwDOCUB6oc5f68is | 27,606 | Align backbone stage selection with out_indices & out_features | {
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27606). All of your documentation changes will be reflected on that endpoint.",
"> Hey! Could you elaborate on the motivation behind the out of order or duplicated (seems to be a choice rather than a bug fix for me no?)\r\n\r\n@ArthurZucker Sure! It's both - a choice and a current bug. The choice is whether we allow passing in different orders and duplicates and the bug is whether this is reflected. At the moment I can pass in duplicates, out-of-order etc. but it won't be reflected in the returned stages. Another option is for input verification where we raise an error if the user chooses `out_features` or `out_indices` which have these properties. I could implement that instead? It might be a bit more defensive \r\n",
"@ArthurZucker Sorry to flip-flop. I've thought a bit more and concluded that not allowing repetitions & different orders when setting `out_features` and `out_indices` would be better: \r\n* It's possible to enable more flexible arguments later in the future but not the other way around - this wouldn't be backward compatible\r\n* Adding checks is backwards compatible: new errors might be raised with existing inputs but these would start flagging unexpected behaviour\r\n* Having multiple or out-of-order arguments is something the user can handle on their side after receiving the outputs\r\n\r\nI'm going to update the PR to add these checks + relevant tests instead ",
"@ArthurZucker There's isn't any proper documentation for the backbones atm - this is being added in #27456. I've added notes about the restrictions in the docstrings"
] | 1,700 | 1,703 | 1,703 | COLLABORATOR | null | # What does this PR do?
This PR adds a set of input verification checks for the `out_features` and `out_indices` arguments of backbones, making sure that any accepted values align with the returned model outputs.
## More details
`out_features` and `out_indices` are used to specify which stages' feature maps are returned by the Backbone classes.
The following can currently be passed in `out_features`:
* Out-of-order stages: `["stage5", "stage2", "stage4"]`
* Double stages: `["stage3", "stage3"]`
However, this will not be reflected in the returned feature maps on a forward pass e.g. [here for ResNet](https://github.com/huggingface/transformers/blob/4151fbb49c42bd22f8bf18b1773e09aa84846bdd/src/transformers/models/resnet/modeling_resnet.py#L499). The feature maps are selected by iterating over the stage_names (the ordered list of all stages in the backbone) and returning those whose name is in `out_features`, so they are always in stage order and each stage is only selected once.
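In other words, the selection effectively behaves like this simplified sketch of the pattern used in the backbone classes:
```
# stage_names is the ordered list of all stages; hidden_states holds one tensor per stage
feature_maps = tuple(
    hidden_state
    for stage, hidden_state in zip(stage_names, hidden_states)
    if stage in out_features
)
# => always stage-ordered and de-duplicated, regardless of how out_features was written
```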
There is also a misalignment between the TimmBackbone and transformers backbones - as timm will automatically take the set of indices (removing duplicates) whereas transformers will keep them in the `out_indices` attribute.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27606/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27606/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27606",
"html_url": "https://github.com/huggingface/transformers/pull/27606",
"diff_url": "https://github.com/huggingface/transformers/pull/27606.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27606.patch",
"merged_at": 1703097197000
} |
https://api.github.com/repos/huggingface/transformers/issues/27605 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27605/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27605/comments | https://api.github.com/repos/huggingface/transformers/issues/27605/events | https://github.com/huggingface/transformers/issues/27605 | 2,002,369,380 | I_kwDOCUB6oc53Wbtk | 27,605 | Wav2Vec2ForCTC architecture models load incorrectly with torch 2.1 and later | {
"login": "eindenbom",
"id": 45334274,
"node_id": "MDQ6VXNlcjQ1MzM0Mjc0",
"avatar_url": "https://avatars.githubusercontent.com/u/45334274?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eindenbom",
"html_url": "https://github.com/eindenbom",
"followers_url": "https://api.github.com/users/eindenbom/followers",
"following_url": "https://api.github.com/users/eindenbom/following{/other_user}",
"gists_url": "https://api.github.com/users/eindenbom/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eindenbom/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eindenbom/subscriptions",
"organizations_url": "https://api.github.com/users/eindenbom/orgs",
"repos_url": "https://api.github.com/users/eindenbom/repos",
"events_url": "https://api.github.com/users/eindenbom/events{/privacy}",
"received_events_url": "https://api.github.com/users/eindenbom/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"rip. Not sure if there's a way to fix this without updating transformers. Have you filed a bug there?",
"This is transformers repo and PR breaking backward compatibility has landed here. Where should I file a bug then?\r\n\r\nThe major problem is that transformers itself lack ANY provision for backward compatibility hooks in model loading.",
"Oh oops, this is the right spot haha, my bad!",
"cc @sanchit-gandhi ",
"Duplicate of https://github.com/huggingface/transformers/issues/26796 - let's discuss on that thread to keep track of the issue! (note that the weights are loaded correctly, it's just the warning that needs to be updated)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,700 | 1,704 | 1,704 | NONE | null | ### System Info
- `transformers` version: 4.35.2
- Platform: Windows-10-10.0.22621-SP0
- Python version: 3.8.0
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.0
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.1 (False)
- Tensorflow version (GPU?): 2.12.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@sanchit-gandhi, @ezyang
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
In python console run:
```
from transformers import AutoModelForCTC
model = AutoModelForCTC.from_pretrained("facebook/wav2vec2-base-100h")
```
Output:
```
Some weights of the model checkpoint at facebook/wav2vec2-base-100h were not used when initializing Wav2Vec2ForCTC: ['wav2vec2.encoder.pos_conv_embed.conv.weight_v', 'wav2vec2.mask_time_emb_vector', 'wav2vec2.encoder.pos_conv_embed.conv.weight_g']
- This IS expected if you are initializing Wav2Vec2ForCTC from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing Wav2Vec2ForCTC from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of Wav2Vec2ForCTC were not initialized from the model checkpoint at facebook/wav2vec2-base-100h and are newly initialized: ['wav2vec2.encoder.pos_conv_embed.conv.parametrizations.weight.original1', 'wav2vec2.masked_spec_embed', 'wav2vec2.encoder.pos_conv_embed.conv.parametrizations.weight.original0']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```
Root cause of the problem: PR #24030. The new parametrization changes weight names from `weight_g` => `parametrizations.weight.original0` and `weight_v` => `parametrizations.weight.original1`.
Although the new parametrization code does contain a backward-compatibility load hook, it is not used by the transformers loading mechanics.
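To illustrate the renaming (an unofficial sketch, not a recommended fix), the old checkpoint keys can be mapped onto the new parametrized names like this:
```
import torch
from transformers import Wav2Vec2Config, Wav2Vec2ForCTC

model = Wav2Vec2ForCTC(Wav2Vec2Config())  # illustration only; the config should match the checkpoint
state_dict = torch.load("pytorch_model.bin", map_location="cpu")
renamed = {
    key.replace("weight_g", "parametrizations.weight.original0")
       .replace("weight_v", "parametrizations.weight.original1"): value
    for key, value in state_dict.items()
}
model.load_state_dict(renamed, strict=False)
```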
### Expected behavior
All weights are correctly loaded from checkpoint. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27605/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27605/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27604 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27604/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27604/comments | https://api.github.com/repos/huggingface/transformers/issues/27604/events | https://github.com/huggingface/transformers/issues/27604 | 2,002,335,062 | I_kwDOCUB6oc53WTVW | 27,604 | Whisper language not returned when return_timestamps="word" and return_language=True | {
"login": "Oscaarjs",
"id": 37636054,
"node_id": "MDQ6VXNlcjM3NjM2MDU0",
"avatar_url": "https://avatars.githubusercontent.com/u/37636054?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Oscaarjs",
"html_url": "https://github.com/Oscaarjs",
"followers_url": "https://api.github.com/users/Oscaarjs/followers",
"following_url": "https://api.github.com/users/Oscaarjs/following{/other_user}",
"gists_url": "https://api.github.com/users/Oscaarjs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Oscaarjs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Oscaarjs/subscriptions",
"organizations_url": "https://api.github.com/users/Oscaarjs/orgs",
"repos_url": "https://api.github.com/users/Oscaarjs/repos",
"events_url": "https://api.github.com/users/Oscaarjs/events{/privacy}",
"received_events_url": "https://api.github.com/users/Oscaarjs/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This more a feature request than a bug, and is probably related to the fact that the language is not detected for the small chunk but for the full 30sec chunk of audio. + repeating the language does not seem to make sense for me, but we should also de-limit the start and end of detected language, which would bloat the code. I'll think about a potential solution π \r\njust FYI @Narsil who worked ont his refactor a while ago \r\n",
"Thank you for the reply @ArthurZucker! I agree it would be weird to repeat the language, especially for each \"word chunk\". Is there a reason \"language\" has been placed on a chunk-level if it's not computed on the same? Why is the language not shown on the \"top\" level, similar to how \"text\" is shown? Best regards",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,700 | 1,703 | 1,703 | NONE | null | ### System Info
- `transformers` version: 4.35.2
- Platform: Linux-5.15.120+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.19.3
- Safetensors version: 0.4.0
- Accelerate version: 0.24.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0+cu118 (False)
- Tensorflow version (GPU?): 2.14.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.7.5 (cpu)
- Jax version: 0.4.20
- JaxLib version: 0.4.20
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@sanchit-gandhi
@Narsil
@ArthurZucker
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Steps to reproduce can be seen here:
https://colab.research.google.com/drive/1RplLaZdD2DBy5NDDz9pSnk53jfEFM-Cm?usp=sharing
Essentially it works as expected when return_timestamps=True, but not when return_timestamps="word".
### Expected behavior
When setting:
return_timestamps="word"
return_language=True
I expect the language to be returned as well.
As shown in the linked colab, it does work as expected when:
return_timestamps=True
return_language=True
"url": "https://api.github.com/repos/huggingface/transformers/issues/27604/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27604/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27603 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27603/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27603/comments | https://api.github.com/repos/huggingface/transformers/issues/27603/events | https://github.com/huggingface/transformers/pull/27603 | 2,002,299,664 | PR_kwDOCUB6oc5f6lrR | 27,603 | Report to none for reduce_lr_on_plateu | {
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,700 | 1,700 | 1,700 | CONTRIBUTOR | null | # What does this PR do?
Modifies the test to explicitly use `report_to="none"` rather than the default "all": if trackers are installed, the default can lead to issues (https://github.com/huggingface/transformers/pull/27352#issuecomment-1819131456), and this is a currently failing test in Accelerate.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27603/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27603/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27603",
"html_url": "https://github.com/huggingface/transformers/pull/27603",
"diff_url": "https://github.com/huggingface/transformers/pull/27603.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27603.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27602 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27602/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27602/comments | https://api.github.com/repos/huggingface/transformers/issues/27602/events | https://github.com/huggingface/transformers/issues/27602 | 2,002,280,958 | I_kwDOCUB6oc53WGH- | 27,602 | Converting facebook-encodec to onnx fails with KeyError: 'encodec' | {
"login": "kalradivyanshu",
"id": 12642750,
"node_id": "MDQ6VXNlcjEyNjQyNzUw",
"avatar_url": "https://avatars.githubusercontent.com/u/12642750?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kalradivyanshu",
"html_url": "https://github.com/kalradivyanshu",
"followers_url": "https://api.github.com/users/kalradivyanshu/followers",
"following_url": "https://api.github.com/users/kalradivyanshu/following{/other_user}",
"gists_url": "https://api.github.com/users/kalradivyanshu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kalradivyanshu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kalradivyanshu/subscriptions",
"organizations_url": "https://api.github.com/users/kalradivyanshu/orgs",
"repos_url": "https://api.github.com/users/kalradivyanshu/repos",
"events_url": "https://api.github.com/users/kalradivyanshu/events{/privacy}",
"received_events_url": "https://api.github.com/users/kalradivyanshu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2392046359,
"node_id": "MDU6TGFiZWwyMzkyMDQ2MzU5",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Second%20Issue",
"name": "Good Second Issue",
"color": "dd935a",
"default": false,
"description": "Issues that are more difficult to do than \"Good First\" issues - give it a try if you want!"
},
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | closed | false | null | [] | [
"Hey! Thanks for this, I'll add this as a feature request as the error I am getting is the following:\r\n```python \r\nFramework not requested. Using torch to export to ONNX.\r\n/Users/arthurzucker/.pyenv/versions/py310/lib/python3.10/site-packages/torch/nn/utils/weight_norm.py:30: UserWarning: torch.nn.utils.weight_norm is deprecated in favor of torch.nn.utils.parametrizations.weight_norm.\r\n warnings.warn(\"torch.nn.utils.weight_norm is deprecated in favor of torch.nn.utils.parametrizations.weight_norm.\")\r\nTraceback (most recent call last):\r\n File \"/Users/arthurzucker/.pyenv/versions/3.10.13/lib/python3.10/runpy.py\", line 196, in _run_module_as_main\r\n return _run_code(code, main_globals, None,\r\n File \"/Users/arthurzucker/.pyenv/versions/3.10.13/lib/python3.10/runpy.py\", line 86, in _run_code\r\n exec(code, run_globals)\r\n File \"/Users/arthurzucker/Work/transformers/src/transformers/onnx/__main__.py\", line 242, in <module>\r\n main()\r\n File \"/Users/arthurzucker/Work/transformers/src/transformers/onnx/__main__.py\", line 234, in main\r\n export_with_transformers(args)\r\n File \"/Users/arthurzucker/Work/transformers/src/transformers/onnx/__main__.py\", line 79, in export_with_transformers\r\n model_kind, model_onnx_config = FeaturesManager.check_supported_model_or_raise(model, feature=args.feature)\r\n File \"/Users/arthurzucker/Work/transformers/src/transformers/onnx/features.py\", line 728, in check_supported_model_or_raise\r\n model_features = FeaturesManager.get_supported_features_for_model_type(model_type, model_name=model_name)\r\n File \"/Users/arthurzucker/Work/transformers/src/transformers/onnx/features.py\", line 575, in get_supported_features_for_model_type\r\n raise KeyError(\r\nKeyError: \"encodec is not supported yet. Only ['albert', 'bart', 'beit', 'bert', 'big-bird', 'bigbird-pegasus', 'blenderbot', 'blenderbot-small', 'bloom', 'camembert', 'clip', 'codegen', 'convbert', 'convnext', 'data2vec-text', 'data2vec-vision', 'deberta', 'deberta-v2', 'deit', 'detr', 'distilbert', 'electra', 'flaubert', 'gpt2', 'gptj', 'gpt-neo', 'groupvit', 'ibert', 'imagegpt', 'layoutlm', 'layoutlmv3', 'levit', 'longt5', 'longformer', 'marian', 'mbart', 'mobilebert', 'mobilenet-v1', 'mobilenet-v2', 'mobilevit', 'mt5', 'm2m-100', 'owlvit', 'perceiver', 'poolformer', 'rembert', 'resnet', 'roberta', 'roformer', 'segformer', 'squeezebert', 'swin', 't5', 'vision-encoder-decoder', 'vit', 'whisper', 'xlm', 'xlm-roberta', 'yolos'] are supported. If you want to support encodec please propose a PR or open up an issue.\"\r\n```\r\nwhich is expected. \r\n",
"I can work on this @ArthurZucker, can you guide me on how i can get started?",
"Hey! π€ I think @fxmarty will be of better help than me.",
"@kalradivyanshu closing as duplicate of https://github.com/huggingface/optimum/issues/1545\r\n\r\ntransformers.onnx is deprecated in favor of optimum.exporters.onnx, for reference: https://huggingface.co/docs/transformers/serialization\r\n\r\nThere is an open PR for the ONNX support for encodec: https://github.com/huggingface/optimum/pull/1620"
] | 1,700 | 1,704 | 1,704 | NONE | null | ### System Info
- `transformers` version: 4.36.0.dev0
- Platform: Linux-5.15.133.1-microsoft-standard-WSL2-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.0
- Accelerate version: 0.24.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.0+cu117 (True)
- Tensorflow version (GPU?): 2.12.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@sanchit-gandhi
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Download all files in https://huggingface.co/facebook/encodec_24khz/tree/main to a folder `encodec24khz`
2. run `python -m transformers.onnx --model=encodec24khz onnx/`
### Expected behavior
The encodec model should be exported to ONNX as described here: https://huggingface.co/docs/transformers/v4.17.0/en/serialization#exporting-a-model-to-onnx.
But it fails and gives the following error:
```
β― python -m transformers.onnx --model=encodec24khz onnx/
Local PyTorch model found.
Framework not requested. Using torch to export to ONNX.
Traceback (most recent call last):
File "/usr/lib/python3.9/runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/user/.local/lib/python3.9/site-packages/transformers/onnx/__main__.py", line 240, in <module>
main()
File "/home/user/.local/lib/python3.9/site-packages/transformers/onnx/__main__.py", line 232, in main
export_with_transformers(args)
File "/home/user/.local/lib/python3.9/site-packages/transformers/onnx/__main__.py", line 75, in export_with_transformers
model = FeaturesManager.get_model_from_feature(
File "/home/user/.local/lib/python3.9/site-packages/transformers/onnx/features.py", line 701, in get_model_from_feature
model = model_class.from_pretrained(model, cache_dir=cache_dir)
File "/home/user/.local/lib/python3.9/site-packages/transformers/models/auto/auto_factory.py", line 441, in from_pretrained
config, kwargs = AutoConfig.from_pretrained(
File "/home/user/.local/lib/python3.9/site-packages/transformers/models/auto/configuration_auto.py", line 937, in from_pretrained
config_class = CONFIG_MAPPING[config_dict["model_type"]]
File "/home/user/.local/lib/python3.9/site-packages/transformers/models/auto/configuration_auto.py", line 643, in __getitem__
raise KeyError(key)
KeyError: 'encodec'
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27602/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27602/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27601 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27601/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27601/comments | https://api.github.com/repos/huggingface/transformers/issues/27601/events | https://github.com/huggingface/transformers/issues/27601 | 2,002,273,073 | I_kwDOCUB6oc53WEMx | 27,601 | Getting equivalent results between Transformer's resize and tf.image.resize | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | [
"cc @amyeroberts "
] | 1,700 | 1,700 | null | CONTRIBUTOR | null | ### Feature request
For the SigLIP model (#26522), I'd like to get equivalent results between tf.image.resize and the resize method available in Transformers.
Here's what I tried:
```
from PIL import Image
import requests
import tensorflow as tf
import numpy as np
def resize(image, size, method="bilinear", antialias=False):
"""Resizes image to a given size."""
# Note: use TF-2 version of tf.image.resize as the version in TF-1 is
# buggy: https://github.com/tensorflow/tensorflow/issues/6720.
# In particular it was not equivariant with rotation and lead to the network
# to learn a shortcut in self-supervised rotation task, if rotation was
# applied after resize.
dtype = image.dtype
tf_dtype = tf.type_spec_from_value(image).dtype
image = tf.image.resize(image, size, method=method, antialias=antialias)
return tf.cast(tf.clip_by_value(image, tf_dtype.min, tf_dtype.max), dtype)
# load image
url = 'https://cdn.openai.com/multimodal-neurons/assets/apple/apple-ipod.jpg'
image = Image.open(requests.get(url, stream=True).raw)
# get original pixel values
original_pixel_values = resize(np.array(image), size=(224,224))
# get our pixel values
from transformers.image_transforms import resize
pixel_values = resize(np.array(image), size=(224,224), resample=Image.Resampling.BILINEAR)
# verify results
np.testing.assert_array_equal(original_pixel_values, pixel_values)
```
This currently fails with:
```
AssertionError:
Arrays are not equal
Mismatched elements: 87370 / 150528 (58%)
Max absolute difference: 255
Max relative difference: 255.
x: array([[[127, 101, 59],
[136, 112, 72],
[129, 109, 72],...
y: array([[[131, 105, 63],
[138, 114, 74],
[126, 108, 70],...
```
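One avenue that might be worth checking (unverified): TF2's bilinear resize uses half-pixel centres and no antialiasing, which is closer to `torch.nn.functional.interpolate(..., antialias=False)` than to PIL's `BILINEAR` filter used by the Transformers resize. A sketch reusing the `image` loaded above:
```
import numpy as np
import torch
import torch.nn.functional as F

# Convert HWC uint8 -> NCHW float, resize without antialiasing, then cast back like the
# reference code does (clip to the dtype range, then truncate to uint8)
img = torch.from_numpy(np.array(image)).permute(2, 0, 1).unsqueeze(0).float()
resized = F.interpolate(img, size=(224, 224), mode="bilinear", align_corners=False, antialias=False)
candidate = resized.squeeze(0).permute(1, 2, 0).clamp(0, 255).numpy().astype(np.uint8)
```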
### Motivation
Would be great to have equivalent results such that logits match with the original implementation.
### Your contribution
I provide a notebook [here](https://colab.research.google.com/drive/1nP8f07qd3jWBRgCnE29cagseRtqVMADa?usp=sharing) for testing. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27601/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27601/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27600 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27600/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27600/comments | https://api.github.com/repos/huggingface/transformers/issues/27600/events | https://github.com/huggingface/transformers/issues/27600 | 2,002,192,230 | I_kwDOCUB6oc53Vwdm | 27,600 | How to get input sentence embedding from Llama or Llama2? | {
"login": "waterluck",
"id": 111731547,
"node_id": "U_kgDOBqjjWw",
"avatar_url": "https://avatars.githubusercontent.com/u/111731547?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/waterluck",
"html_url": "https://github.com/waterluck",
"followers_url": "https://api.github.com/users/waterluck/followers",
"following_url": "https://api.github.com/users/waterluck/following{/other_user}",
"gists_url": "https://api.github.com/users/waterluck/gists{/gist_id}",
"starred_url": "https://api.github.com/users/waterluck/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/waterluck/subscriptions",
"organizations_url": "https://api.github.com/users/waterluck/orgs",
"repos_url": "https://api.github.com/users/waterluck/repos",
"events_url": "https://api.github.com/users/waterluck/events{/privacy}",
"received_events_url": "https://api.github.com/users/waterluck/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @waterluck π \r\n\r\nFollowing our [issues guidelines](https://github.com/huggingface/transformers/blob/main/ISSUES.md), we reserve GitHub issues for bugs in the repository and/or feature requests. For any other matters, we'd like to invite you to use our [forum](https://discuss.huggingface.co/) or our [discord](https://discord.com/invite/hugging-face-879548962464493619) π€ If you still believe there is a bug in the code, check [this guide](https://huggingface.co/course/chapter8/5?fw=pt).\r\n\r\nSince this is your first issue with us, I'm going to share a few pointers:\r\n1. To get the best embeddings, models trained to embed the whole sentence should be used, not standard generative LLMs. See our [sentence similarity task page](https://huggingface.co/tasks/sentence-similarity) and check out the `sentence-transformers` repo\r\n2. If you want to use an LLM, I'd say to average the hidden states for all the tokens, not simply for the last token (as you have in your commented code)",
"@gante Well noted the issue guidelines, I'll follow them next time! \r\nAlso great thanks for your kindness in sharing which embedding should be used in by using LLMs. That's helped a lot! "
] | 1,700 | 1,700 | 1,700 | NONE | null | I'm trying to get the embedding of a sentence that I input. I checked some common practices for doing this, but I'm not sure I'm doing it right. Could someone help? @gante Thanks in advance if you can help. My code is below:
```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

model = LlamaForCausalLM.from_pretrained(
    args.pretrained_name_or_path,
    torch_dtype=torch.float16,
    device_map=device,
)
tokenizer = LlamaTokenizer.from_pretrained(args.pretrained_name_or_path, fast_tokenizer=True)
model.to(device)
model.eval()
tokenizer.pad_token_id = 0
tokenizer.padding_side = "left"

embeddings = []  # missing in the original snippet; collects the batch embeddings
for i in range(0, len(sentences), batch_size):
    batch_sentences = sentences[i: i + batch_size]
    inputs = tokenizer(batch_sentences, padding=True, truncation=False, return_tensors='pt')
    inputs = inputs.to(device)
    with torch.no_grad():
        outputs = model(**inputs, output_hidden_states=True)
    hidden_states = outputs.hidden_states[-1]
    sentence_embeddings = hidden_states[:, -1, :]  # here using the **last token's** last-layer hidden state as the sentence embedding,
    # or sentence_embeddings = outputs.hidden_states[-1].mean(dim=1)  # here use the average sentence embedding.
    # I'm not sure which one is better.
    embeddings.append(sentence_embeddings.cpu())
embeddings = torch.cat(embeddings, dim=0)
```
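A sketch of the attention-mask-aware mean pooling suggested in the comments above (it continues from the snippet: `hidden_states` is the last layer's hidden states and `inputs["attention_mask"]` marks the non-padding tokens; purely illustrative rather than a recommended recipe):
```python
# Masked mean pooling: average only over real tokens, ignoring padding positions.
mask = inputs["attention_mask"].unsqueeze(-1).to(hidden_states.dtype)  # (batch, seq_len, 1)
summed = (hidden_states * mask).sum(dim=1)                              # (batch, hidden)
counts = mask.sum(dim=1).clamp(min=1)                                   # (batch, 1)
sentence_embeddings = summed / counts                                   # mean over non-padding tokens
```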
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27600/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27600/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27599 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27599/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27599/comments | https://api.github.com/repos/huggingface/transformers/issues/27599/events | https://github.com/huggingface/transformers/pull/27599 | 2,001,912,637 | PR_kwDOCUB6oc5f5Quc | 27,599 | Enable safetensors conversion from PyTorch to other frameworks without the torch requirement | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27599). All of your documentation changes will be reflected on that endpoint.",
"Should be good for a second look @sanchit-gandhi if you have time for it.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Would still like a review @sanchit-gandhi if you have the time for it :ok_man: "
] | 1,700 | 1,706 | 1,706 | MEMBER | null | This removes the need to have `torch` installed to proceed to a safetensors (serialized from PyTorch) checkpoint converted into Flax.
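A minimal usage sketch of what this unlocks (the checkpoint name is illustrative and stands in for any repo that only ships a PyTorch-serialized safetensors file; `torch` does not need to be installed):
```python
# Hypothetical repo id; the only weights file on the Hub is a PyTorch-serialized model.safetensors.
from transformers import FlaxAutoModel

model = FlaxAutoModel.from_pretrained("some-org/safetensors-only-checkpoint")
```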
Any model that has a safetensors file on the Hub can therefore be loaded in Flax, as sketched above. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27599/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27599/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27599",
"html_url": "https://github.com/huggingface/transformers/pull/27599",
"diff_url": "https://github.com/huggingface/transformers/pull/27599.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27599.patch",
"merged_at": 1706002103000
} |
https://api.github.com/repos/huggingface/transformers/issues/27598 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27598/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27598/comments | https://api.github.com/repos/huggingface/transformers/issues/27598/events | https://github.com/huggingface/transformers/issues/27598 | 2,001,588,573 | I_kwDOCUB6oc53TdFd | 27,598 | Hyperparameter search error with Ray tune | {
"login": "Shamik-07",
"id": 39588365,
"node_id": "MDQ6VXNlcjM5NTg4MzY1",
"avatar_url": "https://avatars.githubusercontent.com/u/39588365?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Shamik-07",
"html_url": "https://github.com/Shamik-07",
"followers_url": "https://api.github.com/users/Shamik-07/followers",
"following_url": "https://api.github.com/users/Shamik-07/following{/other_user}",
"gists_url": "https://api.github.com/users/Shamik-07/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Shamik-07/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Shamik-07/subscriptions",
"organizations_url": "https://api.github.com/users/Shamik-07/orgs",
"repos_url": "https://api.github.com/users/Shamik-07/repos",
"events_url": "https://api.github.com/users/Shamik-07/events{/privacy}",
"received_events_url": "https://api.github.com/users/Shamik-07/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, this PR https://github.com/huggingface/transformers/pull/26499 might fix this issue. Could you please try it out and let us know?",
"I tried running the notebook with the PR, however, i found a different error now:\r\n```py\r\n2023-11-20 16:02:53,411\tINFO worker.py:1673 -- Started a local Ray instance.\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n[/usr/local/lib/python3.10/dist-packages/ray/_private/worker.py](https://localhost:8080/#) in put_object(self, value, object_ref, owner_address)\r\n 702 try:\r\n--> 703 serialized_value = self.get_serialization_context().serialize(value)\r\n 704 except TypeError as e:\r\n\r\n18 frames\r\n[/usr/local/lib/python3.10/dist-packages/ray/_private/serialization.py](https://localhost:8080/#) in serialize(self, value)\r\n 493 else:\r\n--> 494 return self._serialize_to_msgpack(value)\r\n\r\n[/usr/local/lib/python3.10/dist-packages/ray/_private/serialization.py](https://localhost:8080/#) in _serialize_to_msgpack(self, value)\r\n 471 metadata = ray_constants.OBJECT_METADATA_TYPE_PYTHON\r\n--> 472 pickle5_serialized_object = self._serialize_to_pickle5(\r\n 473 metadata, python_objects\r\n\r\n[/usr/local/lib/python3.10/dist-packages/ray/_private/serialization.py](https://localhost:8080/#) in _serialize_to_pickle5(self, metadata, value)\r\n 424 self.get_and_clear_contained_object_refs()\r\n--> 425 raise e\r\n 426 finally:\r\n\r\n[/usr/local/lib/python3.10/dist-packages/ray/_private/serialization.py](https://localhost:8080/#) in _serialize_to_pickle5(self, metadata, value)\r\n 419 self.set_in_band_serialization()\r\n--> 420 inband = pickle.dumps(\r\n 421 value, protocol=5, buffer_callback=writer.buffer_callback\r\n\r\n[/usr/local/lib/python3.10/dist-packages/ray/cloudpickle/cloudpickle_fast.py](https://localhost:8080/#) in dumps(obj, protocol, buffer_callback)\r\n 87 cp = CloudPickler(file, protocol=protocol, buffer_callback=buffer_callback)\r\n---> 88 cp.dump(obj)\r\n 89 return file.getvalue()\r\n\r\n[/usr/local/lib/python3.10/dist-packages/ray/cloudpickle/cloudpickle_fast.py](https://localhost:8080/#) in dump(self, obj)\r\n 732 try:\r\n--> 733 return Pickler.dump(self, obj)\r\n 734 except RuntimeError as e:\r\n\r\nTypeError: cannot pickle '_thread.lock' object\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTypeError Traceback (most recent call last)\r\n[<ipython-input-38-12c3f54763db>](https://localhost:8080/#) in <cell line: 1>()\r\n----> 1 best_run = trainer.hyperparameter_search(n_trials=10, direction=\"maximize\")\r\n\r\n[/content/transformers/src/transformers/trainer.py](https://localhost:8080/#) in hyperparameter_search(self, hp_space, compute_objective, n_trials, direction, backend, hp_name, **kwargs)\r\n 2548 self.compute_objective = default_compute_objective if compute_objective is None else compute_objective\r\n 2549 \r\n-> 2550 best_run = backend_obj.run(self, n_trials, direction, **kwargs)\r\n 2551 \r\n 2552 self.hp_search_backend = None\r\n\r\n[/content/transformers/src/transformers/hyperparameter_search.py](https://localhost:8080/#) in run(self, trainer, n_trials, direction, **kwargs)\r\n 85 \r\n 86 def run(self, trainer, n_trials: int, direction: str, **kwargs):\r\n---> 87 return run_hp_search_ray(trainer, n_trials, direction, **kwargs)\r\n 88 \r\n 89 def default_hp_space(self, trial):\r\n\r\n[/content/transformers/src/transformers/integrations/integration_utils.py](https://localhost:8080/#) in run_hp_search_ray(trainer, n_trials, direction, **kwargs)\r\n 352 dynamic_modules_import_trainable.__mixins__ = trainable.__mixins__\r\n 353 \r\n--> 354 analysis = 
ray.tune.run(\r\n 355 dynamic_modules_import_trainable,\r\n 356 config=trainer.hp_space(None),\r\n\r\n[/usr/local/lib/python3.10/dist-packages/ray/tune/tune.py](https://localhost:8080/#) in run(run_or_experiment, name, metric, mode, stop, time_budget_s, config, resources_per_trial, num_samples, storage_path, storage_filesystem, search_alg, scheduler, checkpoint_config, verbose, progress_reporter, log_to_file, trial_name_creator, trial_dirname_creator, sync_config, export_formats, max_failures, fail_fast, restore, server_port, resume, reuse_actors, raise_on_failed_trial, callbacks, max_concurrent_trials, keep_checkpoints_num, checkpoint_score_attr, checkpoint_freq, checkpoint_at_end, chdir_to_trial_dir, local_dir, _experiment_checkpoint_dir, _remote, _remote_string_queue, _entrypoint)\r\n 509 }\r\n 510 \r\n--> 511 _ray_auto_init(entrypoint=error_message_map[\"entrypoint\"])\r\n 512 \r\n 513 if _remote is None:\r\n\r\n[/usr/local/lib/python3.10/dist-packages/ray/tune/tune.py](https://localhost:8080/#) in _ray_auto_init(entrypoint)\r\n 217 logger.info(\"'TUNE_DISABLE_AUTO_INIT=1' detected.\")\r\n 218 elif not ray.is_initialized():\r\n--> 219 ray.init()\r\n 220 logger.info(\r\n 221 \"Initializing Ray automatically. \"\r\n\r\n[/usr/local/lib/python3.10/dist-packages/ray/_private/client_mode_hook.py](https://localhost:8080/#) in wrapper(*args, **kwargs)\r\n 101 if func.__name__ != \"init\" or is_client_mode_enabled_by_default:\r\n 102 return getattr(ray, func.__name__)(*args, **kwargs)\r\n--> 103 return func(*args, **kwargs)\r\n 104 \r\n 105 return wrapper\r\n\r\n[/usr/local/lib/python3.10/dist-packages/ray/_private/worker.py](https://localhost:8080/#) in init(address, num_cpus, num_gpus, resources, labels, object_store_memory, local_mode, ignore_reinit_error, include_dashboard, dashboard_host, dashboard_port, job_config, configure_logging, logging_level, logging_format, log_to_driver, namespace, runtime_env, storage, **kwargs)\r\n 1700 \r\n 1701 for hook in _post_init_hooks:\r\n-> 1702 hook()\r\n 1703 \r\n 1704 node_id = global_worker.core_worker.get_current_node_id()\r\n\r\n[/usr/local/lib/python3.10/dist-packages/ray/tune/registry.py](https://localhost:8080/#) in flush(self)\r\n 306 self.references[k] = v\r\n 307 else:\r\n--> 308 self.references[k] = ray.put(v)\r\n 309 self.to_flush.clear()\r\n\r\n[/usr/local/lib/python3.10/dist-packages/ray/_private/auto_init_hook.py](https://localhost:8080/#) in auto_init_wrapper(*args, **kwargs)\r\n 22 def auto_init_wrapper(*args, **kwargs):\r\n 23 auto_init_ray()\r\n---> 24 return fn(*args, **kwargs)\r\n 25 \r\n 26 return auto_init_wrapper\r\n\r\n[/usr/local/lib/python3.10/dist-packages/ray/_private/client_mode_hook.py](https://localhost:8080/#) in wrapper(*args, **kwargs)\r\n 101 if func.__name__ != \"init\" or is_client_mode_enabled_by_default:\r\n 102 return getattr(ray, func.__name__)(*args, **kwargs)\r\n--> 103 return func(*args, **kwargs)\r\n 104 \r\n 105 return wrapper\r\n\r\n[/usr/local/lib/python3.10/dist-packages/ray/_private/worker.py](https://localhost:8080/#) in put(value, _owner)\r\n 2634 with profiling.profile(\"ray.put\"):\r\n 2635 try:\r\n-> 2636 object_ref = worker.put_object(value, owner_address=serialize_owner_address)\r\n 2637 except ObjectStoreFullError:\r\n 2638 logger.info(\r\n\r\n[/usr/local/lib/python3.10/dist-packages/ray/_private/worker.py](https://localhost:8080/#) in put_object(self, value, object_ref, owner_address)\r\n 710 f\"{sio.getvalue()}\"\r\n 711 )\r\n--> 712 raise TypeError(msg) from e\r\n 713 # This *must* be the 
first place that we construct this python\r\n 714 # ObjectRef because an entry with 0 local references is created when\r\n\r\nTypeError: Could not serialize the put value <transformers.trainer.Trainer object at 0x7e90dd830340>:\r\n================================================================================\r\nChecking Serializability of <transformers.trainer.Trainer object at 0x7e90dd830340>\r\n================================================================================\r\n!!! FAIL serialization: cannot pickle '_thread.lock' object\r\n Serializing 'compute_metrics' <function compute_metrics at 0x7e90dd9123b0>...\r\n !!! FAIL serialization: cannot pickle '_thread.lock' object\r\n Detected 3 global variables. Checking serializability...\r\n Serializing 'task' cola...\r\n Serializing 'np' <module 'numpy' from '/usr/local/lib/python3.10/dist-packages/numpy/__init__.py'>...\r\n Serializing 'metric' Metric(name: \"glue\", features: {'predictions': Value(dtype='int64', id=None), 'references': Value(dtype='int64', id=None)}, usage: \"\"\"\r\nCompute GLUE evaluation metric associated to each GLUE dataset.\r\nArgs:\r\n predictions: list of predictions to score.\r\n Each translation should be tokenized into a list of tokens.\r\n references: list of lists of references for each translation.\r\n Each reference should be tokenized into a list of tokens.\r\nReturns: depending on the GLUE subset, one or several of:\r\n \"accuracy\": Accuracy\r\n \"f1\": F1 score\r\n \"pearson\": Pearson Correlation\r\n \"spearmanr\": Spearman Correlation\r\n \"matthews_correlation\": Matthew Correlation\r\nExamples:\r\n\r\n >>> glue_metric = datasets.load_metric('glue', 'sst2') # 'sst2' or any of [\"mnli\", \"mnli_mismatched\", \"mnli_matched\", \"qnli\", \"rte\", \"wnli\", \"hans\"]\r\n >>> references = [0, 1]\r\n >>> predictions = [0, 1]\r\n >>> results = glue_metric.compute(predictions=predictions, references=references)\r\n >>> print(results)\r\n {'accuracy': 1.0}\r\n\r\n >>> glue_metric = datasets.load_metric('glue', 'mrpc') # 'mrpc' or 'qqp'\r\n >>> references = [0, 1]\r\n >>> predictions = [0, 1]\r\n >>> results = glue_metric.compute(predictions=predictions, references=references)\r\n >>> print(results)\r\n {'accuracy': 1.0, 'f1': 1.0}\r\n\r\n >>> glue_metric = datasets.load_metric('glue', 'stsb')\r\n >>> references = [0., 1., 2., 3., 4., 5.]\r\n >>> predictions = [0., 1., 2., 3., 4., 5.]\r\n >>> results = glue_metric.compute(predictions=predictions, references=references)\r\n >>> print({\"pearson\": round(results[\"pearson\"], 2), \"spearmanr\": round(results[\"spearmanr\"], 2)})\r\n {'pearson': 1.0, 'spearmanr': 1.0}\r\n\r\n >>> glue_metric = datasets.load_metric('glue', 'cola')\r\n >>> references = [0, 1]\r\n >>> predictions = [0, 1]\r\n >>> results = glue_metric.compute(predictions=predictions, references=references)\r\n >>> print(results)\r\n {'matthews_correlation': 1.0}\r\n\"\"\", stored examples: 0)...\r\n !!! 
FAIL serialization: cannot pickle '_thread.lock' object\r\n Serializing '_build_data_dir' <bound method Metric._build_data_dir of Metric(name: \"glue\", features: {'predictions': Value(dtype='int64', id=None), 'references': Value(dtype='int64', id=None)}, usage: \"\"\"\r\nCompute GLUE evaluation metric associated to each GLUE dataset.\r\nArgs:\r\n predictions: list of predictions to score.\r\n Each translation should be tokenized into a list of tokens.\r\n references: list of lists of references for each translation.\r\n Each reference should be tokenized into a list of tokens.\r\nReturns: depending on the GLUE subset, one or several of:\r\n \"accuracy\": Accuracy\r\n \"f1\": F1 score\r\n \"pearson\": Pearson Correlation\r\n \"spearmanr\": Spearman Correlation\r\n \"matthews_correlation\": Matthew Correlation\r\nExamples:\r\n\r\n >>> glue_metric = datasets.load_metric('glue', 'sst2') # 'sst2' or any of [\"mnli\", \"mnli_mismatched\", \"mnli_matched\", \"qnli\", \"rte\", \"wnli\", \"hans\"]\r\n >>> references = [0, 1]\r\n >>> predictions = [0, 1]\r\n >>> results = glue_metric.compute(predictions=predictions, references=references)\r\n >>> print(results)\r\n {'accuracy': 1.0}\r\n\r\n >>> glue_metric = datasets.load_metric('glue', 'mrpc') # 'mrpc' or 'qqp'\r\n >>> references = [0, 1]\r\n >>> predictions = [0, 1]\r\n >>> results = glue_metric.compute(predictions=predictions, references=references)\r\n >>> print(results)\r\n {'accuracy': 1.0, 'f1': 1.0}\r\n\r\n >>> glue_metric = datasets.load_metric('glue', 'stsb')\r\n >>> references = [0., 1., 2., 3., 4., 5.]\r\n >>> predictions = [0., 1., 2., 3., 4., 5.]\r\n >>> results = glue_metric.compute(predictions=predictions, references=references)\r\n >>> print({\"pearson\": round(results[\"pearson\"], 2), \"spearmanr\": round(results[\"spearmanr\"], 2)})\r\n {'pearson': 1.0, 'spearmanr': 1.0}\r\n\r\n >>> glue_metric = datasets.load_metric('glue', 'cola')\r\n >>> references = [0, 1]\r\n >>> predictions = [0, 1]\r\n >>> results = glue_metric.compute(predictions=predictions, references=references)\r\n >>> print(results)\r\n {'matthews_correlation': 1.0}\r\n\"\"\", stored examples: 0)>...\r\n !!! FAIL serialization: cannot pickle '_thread.lock' object\r\n Serializing '_add_sm_patterns_to_gitignore' <bound method Trainer._add_sm_patterns_to_gitignore of <transformers.trainer.Trainer object at 0x7e90dd830340>>...\r\n !!! FAIL serialization: cannot pickle '_thread.lock' object\r\n Serializing '__func__' <function Trainer._add_sm_patterns_to_gitignore at 0x7e90dd95d7e0>...\r\n WARNING: Did not find non-serializable object in <bound method Trainer._add_sm_patterns_to_gitignore of <transformers.trainer.Trainer object at 0x7e90dd830340>>. 
This may be an oversight.\r\n================================================================================\r\nVariable: \r\n\r\n\tFailTuple(_build_data_dir [obj=<bound method Metric._build_data_dir of Metric(name: \"glue\", features: {'predictions': Value(dtype='int64', id=None), 'references': Value(dtype='int64', id=None)}, usage: \"\"\"\r\nCompute GLUE evaluation metric associated to each GLUE dataset.\r\nArgs:\r\n predictions: list of predictions to score.\r\n Each translation should be tokenized into a list of tokens.\r\n references: list of lists of references for each translation.\r\n Each reference should be tokenized into a list of tokens.\r\nReturns: depending on the GLUE subset, one or several of:\r\n \"accuracy\": Accuracy\r\n \"f1\": F1 score\r\n \"pearson\": Pearson Correlation\r\n \"spearmanr\": Spearman Correlation\r\n \"matthews_correlation\": Matthew Correlation\r\nExamples:\r\n\r\n >>> glue_metric = datasets.load_metric('glue', 'sst2') # 'sst2' or any of [\"mnli\", \"mnli_mismatched\", \"mnli_matched\", \"qnli\", \"rte\", \"wnli\", \"hans\"]\r\n >>> references = [0, 1]\r\n >>> predictions = [0, 1]\r\n >>> results = glue_metric.compute(predictions=predictions, references=references)\r\n >>> print(results)\r\n {'accuracy': 1.0}\r\n\r\n >>> glue_metric = datasets.load_metric('glue', 'mrpc') # 'mrpc' or 'qqp'\r\n >>> references = [0, 1]\r\n >>> predictions = [0, 1]\r\n >>> results = glue_metric.compute(predictions=predictions, references=references)\r\n >>> print(results)\r\n {'accuracy': 1.0, 'f1': 1.0}\r\n\r\n >>> glue_metric = datasets.load_metric('glue', 'stsb')\r\n >>> references = [0., 1., 2., 3., 4., 5.]\r\n >>> predictions = [0., 1., 2., 3., 4., 5.]\r\n >>> results = glue_metric.compute(predictions=predictions, references=references)\r\n >>> print({\"pearson\": round(results[\"pearson\"], 2), \"spearmanr\": round(results[\"spearmanr\"], 2)})\r\n {'pearson': 1.0, 'spearmanr': 1.0}\r\n\r\n >>> glue_metric = datasets.load_metric('glue', 'cola')\r\n >>> references = [0, 1]\r\n >>> predictions = [0, 1]\r\n >>> results = glue_metric.compute(predictions=predictions, references=references)\r\n >>> print(results)\r\n {'matthews_correlation': 1.0}\r\n\"\"\", stored examples: 0)>, parent=Metric(name: \"glue\", features: {'predictions': Value(dtype='int64', id=None), 'references': Value(dtype='int64', id=None)}, usage: \"\"\"\r\nCompute GLUE evaluation metric associated to each GLUE dataset.\r\nArgs:\r\n predictions: list of predictions to score.\r\n Each translation should be tokenized into a list of tokens.\r\n references: list of lists of references for each translation.\r\n Each reference should be tokenized into a list of tokens.\r\nReturns: depending on the GLUE subset, one or several of:\r\n \"accuracy\": Accuracy\r\n \"f1\": F1 score\r\n \"pearson\": Pearson Correlation\r\n \"spearmanr\": Spearman Correlation\r\n \"matthews_correlation\": Matthew Correlation\r\nExamples:\r\n\r\n >>> glue_metric = datasets.load_metric('glue', 'sst2') # 'sst2' or any of [\"mnli\", \"mnli_mismatched\", \"mnli_matched\", \"qnli\", \"rte\", \"wnli\", \"hans\"]\r\n >>> references = [0, 1]\r\n >>> predictions = [0, 1]\r\n >>> results = glue_metric.compute(predictions=predictions, references=references)\r\n >>> print(results)\r\n {'accuracy': 1.0}\r\n\r\n >>> glue_metric = datasets.load_metric('glue', 'mrpc') # 'mrpc' or 'qqp'\r\n >>> references = [0, 1]\r\n >>> predictions = [0, 1]\r\n >>> results = glue_metric.compute(predictions=predictions, references=references)\r\n >>> print(results)\r\n 
{'accuracy': 1.0, 'f1': 1.0}\r\n\r\n >>> glue_metric = datasets.load_metric('glue', 'stsb')\r\n >>> references = [0., 1., 2., 3., 4., 5.]\r\n >>> predictions = [0., 1., 2., 3., 4., 5.]\r\n >>> results = glue_metric.compute(predictions=predictions, references=references)\r\n >>> print({\"pearson\": round(results[\"pearson\"], 2), \"spearmanr\": round(results[\"spearmanr\"], 2)})\r\n {'pearson': 1.0, 'spearmanr': 1.0}\r\n\r\n >>> glue_metric = datasets.load_metric('glue', 'cola')\r\n >>> references = [0, 1]\r\n >>> predictions = [0, 1]\r\n >>> results = glue_metric.compute(predictions=predictions, references=references)\r\n >>> print(results)\r\n {'matthews_correlation': 1.0}\r\n\"\"\", stored examples: 0)])\r\n\r\nwas found to be non-serializable. There may be multiple other undetected variables that were non-serializable. \r\nConsider either removing the instantiation/imports of these variables or moving the instantiation into the scope of the function/class. \r\n================================================================================\r\nCheck https://docs.ray.io/en/master/ray-core/objects/serialization.html#troubleshooting for more information.\r\nIf you have any suggestions on how to improve this error message, please reach out to the Ray developers on github.com/ray-project/ray/issues/\r\n================================================================================\r\n```\r\n\r\nThe deprecation error has been fixed.",
"What's the above error related to ? ",
"Hey @Shamik-07, the Ray Tune integration serializes the HuggingFace Trainer along with your remote function. In this case, a non-serializable `metric` gets pickled along with the trainer via the `compute_metrics` parameter.\r\n\r\nTo fix it:\r\n\r\n```diff\r\ndef compute_metrics(eval_pred):\r\n predictions, labels = eval_pred\r\n if task != \"stsb\":\r\n predictions = np.argmax(predictions, axis=1)\r\n else:\r\n predictions = predictions[:, 0]\r\n+ metric = load_metric('glue', actual_task) # load the metric inside the method, instead of implicitly pickling it\r\n return metric.compute(predictions=predictions, references=labels)\r\n```",
"Thank you very much for the explanation @justinvyu :)",
"Closing this as this has been fixed by #26499 thanks to @justinvyu "
] | 1,700 | 1,701 | 1,701 | NONE | null | ### System Info
Hello,
The version of Ray is 2.8.0 and the version of Transformers is 4.35.2.
I am trying to run the hyperparameter search for this notebook with Ray Tune: [notebooks/examples/text_classification.ipynb at main · huggingface/notebooks (github.com)](https://github.com/huggingface/notebooks/blob/main/examples/text_classification.ipynb)
and I get the following error:
```py
---------------------------------------------------------------------------
DeprecationWarning Traceback (most recent call last)
<ipython-input-33-12c3f54763db> in <cell line: 1>()
----> 1 best_run = trainer.hyperparameter_search(n_trials=10, direction="maximize")
3 frames
/usr/local/lib/python3.10/dist-packages/ray/tune/trainable/util.py in with_parameters(trainable, **kwargs)
313 )
314
--> 315 raise DeprecationWarning(_CHECKPOINT_DIR_ARG_DEPRECATION_MSG)
316
317 def inner(config):
DeprecationWarning: Accepting a `checkpoint_dir` argument in your training function is deprecated.
Please use `ray.train.get_checkpoint()` to access your checkpoint as a
`ray.train.Checkpoint` object instead. See below for an example:
Before
------
from ray import tune
def train_fn(config, checkpoint_dir=None):
if checkpoint_dir:
torch.load(os.path.join(checkpoint_dir, "checkpoint.pt"))
...
tuner = tune.Tuner(train_fn)
tuner.fit()
After
-----
from ray import train, tune
def train_fn(config):
checkpoint: train.Checkpoint = train.get_checkpoint()
if checkpoint:
with checkpoint.as_directory() as checkpoint_dir:
torch.load(os.path.join(checkpoint_dir, "checkpoint.pt"))
...
tuner = tune.Tuner(train_fn)
tuner.fit()
```
### Who can help?
@muellerzr / @pacman100
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Running the hyperparameter search with Ray Tune, as shown in the sketch below.
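The failing call, reduced to a sketch (it assumes a `Trainer` constructed with `model_init`, as in the linked notebook; the Ray Tune backend is the one appearing in the traceback above):
```python
# Minimal reproduction of the call that raises the DeprecationWarning.
best_run = trainer.hyperparameter_search(n_trials=10, direction="maximize", backend="ray")
```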
### Expected behavior
Hyperparameter trials with ray tune | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27598/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27598/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27597 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27597/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27597/comments | https://api.github.com/repos/huggingface/transformers/issues/27597/events | https://github.com/huggingface/transformers/pull/27597 | 2,001,579,142 | PR_kwDOCUB6oc5f4J9Y | 27,597 | Paged Attention Based the Latest Cache Design | {
"login": "liangan1",
"id": 46986936,
"node_id": "MDQ6VXNlcjQ2OTg2OTM2",
"avatar_url": "https://avatars.githubusercontent.com/u/46986936?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/liangan1",
"html_url": "https://github.com/liangan1",
"followers_url": "https://api.github.com/users/liangan1/followers",
"following_url": "https://api.github.com/users/liangan1/following{/other_user}",
"gists_url": "https://api.github.com/users/liangan1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/liangan1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/liangan1/subscriptions",
"organizations_url": "https://api.github.com/users/liangan1/orgs",
"repos_url": "https://api.github.com/users/liangan1/repos",
"events_url": "https://api.github.com/users/liangan1/events{/privacy}",
"received_events_url": "https://api.github.com/users/liangan1/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | open | false | null | [] | [
"This should wait until the cache refactor is finished cc @gante: https://github.com/huggingface/transformers/pull/27407",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"pls help to reopen this PR, we will refine it based on latest transformers",
"@liangan1 reopened :D",
"> @liangan1 reopened :D\r\n\r\nThanks. I will refresh this PR ASAP.",
"@gante I have rebased this PR and validated the functionality with llama model with both greedy&beam search, pls help to review. _update_ function should be a good start point to understand it. ",
"Thanks @gante I will refine it according to your comments."
] | 1,700 | 1,708 | null | NONE | null | # What does this PR do?
Based on the latest cache design in [#26681](https://github.com/huggingface/transformers/pull/26681), this PR implements the paged-attention KV cache proposed in this [paper](https://arxiv.org/pdf/2309.06180.pdf); a rough sketch of the idea follows.
Fixes [#27303](https://github.com/huggingface/transformers/issues/27303)
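As an implementation-agnostic illustration (names and shapes below are invented for illustration and are not this PR's API): the KV cache is carved into fixed-size physical blocks, and a per-sequence block table maps logical token positions to physical blocks, so cache memory can be allocated on demand and shared across sequences.
```python
# Conceptual sketch of paged KV-cache indexing; not the API added in this PR.
import torch

block_size, num_blocks = 16, 1024
num_heads, head_dim = 32, 128

# One physical pool of KV blocks shared by all sequences.
key_pool = torch.empty(num_blocks, block_size, num_heads, head_dim)
value_pool = torch.empty(num_blocks, block_size, num_heads, head_dim)

# Per-sequence block table: logical block index -> physical block index.
block_tables = {0: [17, 3], 1: [42]}  # e.g. sequence 0 currently spans two physical blocks

def logical_to_physical(seq_id: int, position: int) -> tuple:
    """Map a token position within a sequence to (physical block, offset in block)."""
    logical_block, offset = divmod(position, block_size)
    return block_tables[seq_id][logical_block], offset
```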
# Who can review ?
@tomaarsen
@gante
@patrickvonplaten
@jgong5
@jianan-gu
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27597/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27597/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27597",
"html_url": "https://github.com/huggingface/transformers/pull/27597",
"diff_url": "https://github.com/huggingface/transformers/pull/27597.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27597.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27596 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27596/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27596/comments | https://api.github.com/repos/huggingface/transformers/issues/27596/events | https://github.com/huggingface/transformers/issues/27596 | 2,001,267,031 | I_kwDOCUB6oc53SOlX | 27,596 | An error occurred when using the model.gradient_checkpointing_enable() feature. | {
"login": "CaC033",
"id": 34328295,
"node_id": "MDQ6VXNlcjM0MzI4Mjk1",
"avatar_url": "https://avatars.githubusercontent.com/u/34328295?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CaC033",
"html_url": "https://github.com/CaC033",
"followers_url": "https://api.github.com/users/CaC033/followers",
"following_url": "https://api.github.com/users/CaC033/following{/other_user}",
"gists_url": "https://api.github.com/users/CaC033/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CaC033/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CaC033/subscriptions",
"organizations_url": "https://api.github.com/users/CaC033/orgs",
"repos_url": "https://api.github.com/users/CaC033/repos",
"events_url": "https://api.github.com/users/CaC033/events{/privacy}",
"received_events_url": "https://api.github.com/users/CaC033/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"the same issues with transformers version: 4.35.2 when load baichuan2-13b-chat model .",
"cc @younesbelkada who worked on this recently π ",
"Hi @CaC033 @rangehow \r\n\r\nhttps://github.com/huggingface/transformers/pull/27610 should fix the issue. However, note that with respect to the new refactor of gradient checkpointing, the models that use code on the Hub should not define a `_set_gradient_checkpointing` method (as it is done for baichuan models), as modules that support GC are automatically inferred thanks to the `gradient_checkpointing` attribute. \r\nA long-term fix would be to remove that method as done in https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat/discussions/27 as currently you cannot pass checkpointing arguments such as `use_reentrant`",
"Hi everyone, it should be now resolved on transformers main, again, bear in mind that you need to remove the `_set_gradient_checkpointing` method to avoid these issues in the future as the support for old GC will be removed "
] | 1,700 | 1,700 | 1,700 | NONE | null | ### System Info
- `transformers` version: 4.35.2
- Platform: Linux-4.19.91-014.kangaroo.alios7.x86_64-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.0
- Accelerate version: 0.24.1
- Accelerate config: not found
- PyTorch version (GPU?): 1.14.0a0+410ce96 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
# `args`, `config`, `logger`, `tokenizer`, `raw_datasets`, `preprocess`, and
# `training_args` are defined earlier in the training script and kept as-is here.
from transformers import AutoModelForCausalLM, Trainer, default_data_collator

model = AutoModelForCausalLM.from_pretrained(
    args.load,
    from_tf=False,
    config=config,
    revision='main',
    use_auth_token=None,
    low_cpu_mem_usage=False,
    ignore_mismatched_sizes=True,
    trust_remote_code=True,
    local_files_only=True
)
if args.enable_gradient_checkpointing:
    model.gradient_checkpointing_enable()

n_params = model.num_parameters()
logger.info(f"Training model with {n_params * 1e-9:.2f}B model")

embedding_size = model.get_input_embeddings().weight.shape[0]
if len(tokenizer) > embedding_size:
    model.resize_token_embeddings(len(tokenizer))

def tokenize_function(examples):
    sources = examples['instruction']
    targets = examples['content']
    data_dict = preprocess(sources, targets, tokenizer)
    return data_dict

with training_args.main_process_first(desc="dataset map tokenization"):
    lm_datasets = raw_datasets.map(
        tokenize_function,
        batched=True,
        num_proc=64
    )

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=lm_datasets["train"],
    eval_dataset=lm_datasets["validation"],
    tokenizer=tokenizer,
    data_collator=default_data_collator,
    neftune_noise_alpha=0.1,
)
trainer.train()
```
### Error:
```
Traceback (most recent call last):
  File "/mnt/workspace/peipao/jichunengli/test_qwen_hf/ds_train_huggingface_llama.py", line 322, in <module>
  File "/mnt/workspace/peipao/jichunengli/test_qwen_hf/ds_train_huggingface_llama.py", line 288, in main
    model.gradient_checkpointing_enable()
  File "/usr/local/lib/python3.8/dist-packages/transformers/modeling_utils.py", line 1872, in gradient_checkpointing_enable
    self._set_gradient_checkpointing(enable=True, gradient_checkpointing_func=gradient_checkpointing_func)
TypeError: _set_gradient_checkpointing() got an unexpected keyword argument 'enable'
```
### I checked the source code of `_set_gradient_checkpointing` and found that its parameters do include `enable`:
```python
def gradient_checkpointing_enable(self, gradient_checkpointing_kwargs=None):
    """
    Activates gradient checkpointing for the current model.

    Note that in other frameworks this feature can be referred to as "activation checkpointing" or "checkpoint
    activations".

    We pass the `__call__` method of the modules instead of `forward` because `__call__` attaches all the hooks of
    the module. https://discuss.pytorch.org/t/any-different-between-model-input-and-model-forward-input/3690/2

    Args:
        gradient_checkpointing_kwargs (dict, *optional*):
            Additional keyword arguments passed along to the `torch.utils.checkpoint.checkpoint` function.
    """
    if not self.supports_gradient_checkpointing:
        raise ValueError(f"{self.__class__.__name__} does not support gradient checkpointing.")

    if gradient_checkpointing_kwargs is None:
        gradient_checkpointing_kwargs = {}

    gradient_checkpointing_func = functools.partial(checkpoint, **gradient_checkpointing_kwargs)

    self._set_gradient_checkpointing(enable=True, gradient_checkpointing_func=gradient_checkpointing_func)

    if getattr(self, "_hf_peft_config_loaded", False):
        # When using PEFT + gradient checkpointing + Trainer we need to make sure the input has requires_grad=True
        # we do it also on PEFT: https://github.com/huggingface/peft/blob/85013987aa82aa1af3da1236b6902556ce3e483e/src/peft/peft_model.py#L334
        # When training with PEFT, only LoRA layers will have requires grad set to True, but the output of frozen layers need to propagate
        # the gradients to make sure the gradient flows.
        self.enable_input_require_grads()

def _set_gradient_checkpointing(self, enable: bool = True, gradient_checkpointing_func: Callable = checkpoint):
    is_gradient_checkpointing_set = False

    # Apply it on the top-level module in case the top-level modules supports it
    # for example, LongT5Stack inherits from `PreTrainedModel`.
    if hasattr(self, "gradient_checkpointing"):
        self._gradient_checkpointing_func = gradient_checkpointing_func
        self.gradient_checkpointing = enable
        is_gradient_checkpointing_set = True

    for module in self.modules():
        if hasattr(module, "gradient_checkpointing"):
            module._gradient_checkpointing_func = gradient_checkpointing_func
            module.gradient_checkpointing = enable
            is_gradient_checkpointing_set = True

    if not is_gradient_checkpointing_set:
        raise ValueError(
            f"{self.__class__.__name__} is not compatible with gradient checkpointing. Make sure all the architecture support it by setting a boolean attribute"
            " `gradient_checkpointing` to modules of the model that uses checkpointing."
        )
```
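For context, a sketch of why remote-code models such as Baichuan hit this: they typically still override `_set_gradient_checkpointing` with the pre-4.35 signature, which the new call site above cannot satisfy (the layer class below is hypothetical; the actual Baichuan file may differ in detail):
```python
# Old-style override commonly found in trust_remote_code models (pre-4.35 pattern):
def _set_gradient_checkpointing(self, module, value=False):
    if isinstance(module, MyDecoderLayer):  # hypothetical layer class
        module.gradient_checkpointing = value

# The new core call site then fails:
#   self._set_gradient_checkpointing(enable=True, gradient_checkpointing_func=...)
#   TypeError: _set_gradient_checkpointing() got an unexpected keyword argument 'enable'
```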
### Expected behavior
`model.gradient_checkpointing_enable()` should succeed for this model (loaded with `trust_remote_code=True`) instead of raising the `TypeError` above. Please fix this bug. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27596/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27596/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27595 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27595/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27595/comments | https://api.github.com/repos/huggingface/transformers/issues/27595/events | https://github.com/huggingface/transformers/pull/27595 | 2,001,184,462 | PR_kwDOCUB6oc5f20VU | 27,595 | Fixed passing scheduler-specific kwargs via TrainingArguments lr_scheduler_kwargs | {
"login": "CharbelAD",
"id": 45701489,
"node_id": "MDQ6VXNlcjQ1NzAxNDg5",
"avatar_url": "https://avatars.githubusercontent.com/u/45701489?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CharbelAD",
"html_url": "https://github.com/CharbelAD",
"followers_url": "https://api.github.com/users/CharbelAD/followers",
"following_url": "https://api.github.com/users/CharbelAD/following{/other_user}",
"gists_url": "https://api.github.com/users/CharbelAD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CharbelAD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CharbelAD/subscriptions",
"organizations_url": "https://api.github.com/users/CharbelAD/orgs",
"repos_url": "https://api.github.com/users/CharbelAD/repos",
"events_url": "https://api.github.com/users/CharbelAD/events{/privacy}",
"received_events_url": "https://api.github.com/users/CharbelAD/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> Thanks LGTM! Would mind integrating the small snippet to the testing suite? π\r\n\r\nSure thing! However, I notice that the `reduce_lr_on_plateau` scheduler is not set up to accept kwargs (see line 362 in `src/transformers/optimization.py`).\r\n\r\nShould I change that and allow arguments to be passed to this scheduler and test for it or should I instead write the test for another scheduler (that is already set up to accept kwargs) to not expand the scope of this PR? It seems useful to me to allow passing args to it, however, it is different in its behavior to all the other schedulers so far implemented.\r\n\r\nSorry if this is a bothersome question :)",
"Yep let's not expend the scope of the pr and just use another scheduler! π feel free to open another PR afterwards",
"Done. Please let me know if any other changes should be made :) @ArthurZucker "
] | 1,700 | 1,701 | 1,701 | CONTRIBUTOR | null | # What does this PR do?
This PR fixes the `lr_scheduler_kwargs` argument of `TrainingArguments` being passed incorrectly to `get_scheduler()` in `src/transformers/optimization.py`, which raised a `TypeError`.
Minimal code to reproduce the issue (code snippets taken from `test_trainer.py`):
```
import numpy as np
import torch
from torch import nn
from transformers import TrainingArguments, Trainer
class RegressionDataset:
def __init__(self, a=2, b=3, length=64, seed=42, label_names=None):
np.random.seed(seed)
self.label_names = ["labels"] if label_names is None else label_names
self.length = length
self.x = np.random.normal(size=(length,)).astype(np.float32)
self.ys = [a * self.x + b + np.random.normal(scale=0.1, size=(length,)) for _ in self.label_names]
self.ys = [y.astype(np.float32) for y in self.ys]
def __len__(self):
return self.length
def __getitem__(self, i):
result = {name: y[i] for name, y in zip(self.label_names, self.ys)}
result["input_x"] = self.x[i]
return result
class RegressionModel(nn.Module):
def __init__(self, a=0, b=0, double_output=False):
super().__init__()
self.a = nn.Parameter(torch.tensor(a).float())
self.b = nn.Parameter(torch.tensor(b).float())
self.double_output = double_output
self.config = None
def forward(self, input_x, labels=None, **kwargs):
y = input_x * self.a + self.b
if labels is None:
return (y, y) if self.double_output else (y,)
loss = nn.functional.mse_loss(y, labels)
return (loss, y, y) if self.double_output else (loss, y)
train_dataset = RegressionDataset(length=64)
eval_dataset = RegressionDataset(length=64)
args = TrainingArguments(
"./regression",
lr_scheduler_type="reduce_lr_on_plateau",
lr_scheduler_kwargs={'factor':0.5, 'verbose':True},
evaluation_strategy="epoch",
metric_for_best_model="eval_loss",
num_train_epochs=10,
learning_rate=0.2,
)
model = RegressionModel()
trainer = Trainer(model, args, train_dataset=train_dataset, eval_dataset=eval_dataset)
trainer.train()
```
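For reference, once this fix is in, scheduler-specific kwargs reach schedulers that already accept extra arguments; a sketch reusing the imports above (the values are illustrative):
```python
# Sketch: "cosine_with_restarts" forwards `num_cycles` to its schedule function.
args = TrainingArguments(
    "./regression",
    lr_scheduler_type="cosine_with_restarts",
    lr_scheduler_kwargs={"num_cycles": 3},
    num_train_epochs=10,
    learning_rate=0.2,
)
```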
Please let me know if any modification (such as extra tests) is needed, and I will follow up on it.
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@muellerzr and @pacman100 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27595/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27595/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27595",
"html_url": "https://github.com/huggingface/transformers/pull/27595",
"diff_url": "https://github.com/huggingface/transformers/pull/27595.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27595.patch",
"merged_at": 1701156825000
} |
https://api.github.com/repos/huggingface/transformers/issues/27594 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27594/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27594/comments | https://api.github.com/repos/huggingface/transformers/issues/27594/events | https://github.com/huggingface/transformers/issues/27594 | 2,001,021,392 | I_kwDOCUB6oc53RSnQ | 27,594 | prompt_ids are included in distil-whisper response when using ASR pipeline() | {
"login": "benniekiss",
"id": 63211101,
"node_id": "MDQ6VXNlcjYzMjExMTAx",
"avatar_url": "https://avatars.githubusercontent.com/u/63211101?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/benniekiss",
"html_url": "https://github.com/benniekiss",
"followers_url": "https://api.github.com/users/benniekiss/followers",
"following_url": "https://api.github.com/users/benniekiss/following{/other_user}",
"gists_url": "https://api.github.com/users/benniekiss/gists{/gist_id}",
"starred_url": "https://api.github.com/users/benniekiss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/benniekiss/subscriptions",
"organizations_url": "https://api.github.com/users/benniekiss/orgs",
"repos_url": "https://api.github.com/users/benniekiss/repos",
"events_url": "https://api.github.com/users/benniekiss/events{/privacy}",
"received_events_url": "https://api.github.com/users/benniekiss/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"I believe this part of the code, [tokenization_whisper.py#L853](https://github.com/huggingface/transformers/blob/dc68a39c8111217683bf49a4912d0c9018bab33d/src/transformers/models/whisper/tokenization_whisper.py#L853), would have to be changed to include an argument containing the prompt and an argument to remove the prompt, as well as an `if...else` block to strip the prompt from the response.",
"Correcting the ping to cc @sanchit-gandhi. \r\nThe addition of the `tokenizer_kwargs` to the call is planned but no-one is working on it at the moment. You cannot do this yet indeed. ",
"I think the `prompt_ids` need to be stripped from the `token_ids` before we compute the longest common sub-sequence in the chunking algorithm. Otherwise, we try to match the generated token ids on the right with the prompt input ids on the left (inevitably going to mis-match!). Opened a PR for this here: #27836",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"should I close this issue since there is an open PR, or should I keep the issue open until the PR is merged?",
"Let's keep it open until the PR is merged!",
"Just a quick update -- I haven't tested this PR with an updated main yet because currently, prompt_ids are no longer returned with long-form transcription; however, they are still returned with short-form transcription.\r\ntransformers v 4.38.0dev0"
] | 1,700 | 1,706 | null | NONE | null | ### System Info
- `transformers` version: 4.36.0.dev0
- Platform: macOS-14.1.1-arm64-arm-64bit
- Python version: 3.11.6
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.0
- Accelerate version: 0.24.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@Narsil @sanchit-gandhi
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
```
import torch
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline
from datasets import load_dataset
device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32
model_id = "distil-whisper/distil-large-v2"
model = AutoModelForSpeechSeq2Seq.from_pretrained(
model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
model.to(device)
processor = AutoProcessor.from_pretrained(model_id)
pipe = pipeline(
"automatic-speech-recognition",
model=model,
tokenizer=processor.tokenizer,
feature_extractor=processor.feature_extractor,
max_new_tokens=128,
chunk_length_s=15,
batch_size=16,
torch_dtype=torch_dtype,
device=device,
## MODIFIED:
decoder_kwargs={
"skip_special_tokens": True
}
)
## MODIFIED:
whisper_prompt = "This is a sample prompt before a transcription."
prompt_ids = pipe.tokenizer.get_prompt_ids(whisper_prompt, return_tensors='pt')
dataset = load_dataset("distil-whisper/librispeech_long", "clean", split="validation")
sample = dataset[0]["audio"]
result = pipe(
sample,
## MODIFIED:
generate_kwargs={
"prompt_ids": prompt_ids
}
)
print(result["text"])
```
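Until the pipeline handles this, one stopgap is to strip the echoed prompt from the returned text (a sketch that continues from the snippet above; it assumes the prompt is echoed verbatim at the start of the transcription):
```python
# Workaround sketch: drop the echoed prompt from the pipeline output.
text = result["text"].strip()
prompt_text = whisper_prompt.strip()
if text.startswith(prompt_text):
    text = text[len(prompt_text):].lstrip()
print(text)
```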
### Expected behavior
I would expect that the prompt tokens would be stripped from the response given by `pipeline()`.
It is possible to strip the tokens by using `skip_special_tokens=True` with `decode()`, however, it is not possible to pass this variable via `decoder_kwargs` to `pipeline()`, unless I am mistaken. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27594/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27594/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27593 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27593/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27593/comments | https://api.github.com/repos/huggingface/transformers/issues/27593/events | https://github.com/huggingface/transformers/pull/27593 | 2,000,988,045 | PR_kwDOCUB6oc5f2K3v | 27,593 | [JAX] Replace uses of jax.devices("cpu") with jax.local_devices(backend="cpu") | {
"login": "hvaara",
"id": 1535968,
"node_id": "MDQ6VXNlcjE1MzU5Njg=",
"avatar_url": "https://avatars.githubusercontent.com/u/1535968?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hvaara",
"html_url": "https://github.com/hvaara",
"followers_url": "https://api.github.com/users/hvaara/followers",
"following_url": "https://api.github.com/users/hvaara/following{/other_user}",
"gists_url": "https://api.github.com/users/hvaara/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hvaara/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hvaara/subscriptions",
"organizations_url": "https://api.github.com/users/hvaara/orgs",
"repos_url": "https://api.github.com/users/hvaara/repos",
"events_url": "https://api.github.com/users/hvaara/events{/privacy}",
"received_events_url": "https://api.github.com/users/hvaara/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"/cc @sanchit-gandhi ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27593). All of your documentation changes will be reflected on that endpoint."
] | 1,700 | 1,701 | 1,701 | CONTRIBUTOR | null | An upcoming change to JAX will include non-local (addressable) CPU devices in jax.devices() when JAX is used multicontroller-style, where there are multiple Python processes.
This change preserves the current behavior by replacing uses of jax.devices("cpu"), which previously only returned local devices, with jax.local_devices("cpu"), which will return local devices both now and in the future.
This change is always safe (i.e., it should always preserve the previous behavior), but it may sometimes be unnecessary if code is never used in a multicontroller setting.
For a similar PR in `diffusers` see huggingface/diffusers#5864. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27593/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27593/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27593",
"html_url": "https://github.com/huggingface/transformers/pull/27593",
"diff_url": "https://github.com/huggingface/transformers/pull/27593.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27593.patch",
"merged_at": 1701671789000
} |
https://api.github.com/repos/huggingface/transformers/issues/27592 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27592/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27592/comments | https://api.github.com/repos/huggingface/transformers/issues/27592/events | https://github.com/huggingface/transformers/issues/27592 | 2,000,981,685 | I_kwDOCUB6oc53RI61 | 27,592 | How to always use initial prompt in Whisper? | {
"login": "GanymedeNil",
"id": 9687786,
"node_id": "MDQ6VXNlcjk2ODc3ODY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9687786?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/GanymedeNil",
"html_url": "https://github.com/GanymedeNil",
"followers_url": "https://api.github.com/users/GanymedeNil/followers",
"following_url": "https://api.github.com/users/GanymedeNil/following{/other_user}",
"gists_url": "https://api.github.com/users/GanymedeNil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/GanymedeNil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GanymedeNil/subscriptions",
"organizations_url": "https://api.github.com/users/GanymedeNil/orgs",
"repos_url": "https://api.github.com/users/GanymedeNil/repos",
"events_url": "https://api.github.com/users/GanymedeNil/events{/privacy}",
"received_events_url": "https://api.github.com/users/GanymedeNil/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey π€ thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discuss.huggingface.co/) instead? I'm sure the community will be of help!\r\n\r\nThanks!",
"> Hey π€ thanks for opening an issue! We try to keep the github issues for bugs/feature requests. Could you ask your question on the [forum](https://discuss.huggingface.co/) instead? I'm sure the community will be of help!\r\n> \r\n> Thanks!\r\n\r\nI will go to the forum to ask questions. "
] | 1,700 | 1,700 | 1,700 | NONE | null | I checked this PR (#22496) but still can't figure out how to always use the initial prompt. Is it possible to provide a use case? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27592/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27592/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27591 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27591/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27591/comments | https://api.github.com/repos/huggingface/transformers/issues/27591/events | https://github.com/huggingface/transformers/issues/27591 | 2,000,910,575 | I_kwDOCUB6oc53Q3jv | 27,591 | Seq2Seq trainer cannot generate tokens with deepspeed and Zero3 | {
"login": "Lingy12",
"id": 54443474,
"node_id": "MDQ6VXNlcjU0NDQzNDc0",
"avatar_url": "https://avatars.githubusercontent.com/u/54443474?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Lingy12",
"html_url": "https://github.com/Lingy12",
"followers_url": "https://api.github.com/users/Lingy12/followers",
"following_url": "https://api.github.com/users/Lingy12/following{/other_user}",
"gists_url": "https://api.github.com/users/Lingy12/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Lingy12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lingy12/subscriptions",
"organizations_url": "https://api.github.com/users/Lingy12/orgs",
"repos_url": "https://api.github.com/users/Lingy12/repos",
"events_url": "https://api.github.com/users/Lingy12/events{/privacy}",
"received_events_url": "https://api.github.com/users/Lingy12/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I tried to upgrade the transformers and deepspeed version, I think the training loop stuck when evaluate is triggered. ",
"Hi @Lingy12 π \r\n\r\nI see that the exception is `TypeError: 'NoneType' object is not subscriptable`. Can you confirm:\r\n1. The type of `past_key_values` before the exception?\r\n2. If `past_key_values` is `None`, whether the model's `config.use_cache` is `True`? ",
"Hi @gante.\r\n\r\nRefering to this \r\nhttps://github.com/huggingface/transformers/blob/8eb9e29d8dc8b8bd98b4dd48317d1d596ec548f3/src/transformers/models/llama/modeling_llama.py#L845\r\n\r\nThe type of it should be List[torch.floatTensor]. \r\n\r\nI am using the default llama2 config. Hence config.use_cache should be true.\r\n",
"Hey @Lingy12 π \r\n\r\nI'm aware of what the answers to the questions I asked above should be :) However, since you didn't share a script to reproduce the issue, the best I can do is to ask you to double-check a few things for us.",
"Hi @gante \r\n\r\nSure. I try to log the past key type with \r\n`logger.info(\"past key Type \" + str(type(past_key_values)))` \r\n\r\nAnd here is the output. \r\n\r\n<img width=\"341\" alt=\"image\" src=\"https://github.com/huggingface/transformers/assets/54443474/fb98bc07-4174-4e3a-80c2-80488995eb01\">\r\n\r\n",
"If you store your model before the first evaluation round, can you generate with it?\r\n\r\nIf yes π then it is a distributed-related issue, and we will need a short script to reproduce the issue\r\nIf no π I may be able to provide further pointers depending on the error. A short script for reproducibility would help.",
"Yes. The model can generate if it's a saved checkpoint. It only have problem when using model parallel.\r\n",
"Let me try to put up a sample script here. \r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hi,\r\nI am facing the same issue, \r\n\r\n`[/usr/local/lib/python3.10/dist-packages/transformers/models/llama/modeling_llama.py](https://localhost:8080/#) in prepare_inputs_for_generation(self, input_ids, past_key_values, attention_mask, inputs_embeds, **kwargs)\r\n 1082 ):\r\n 1083 if past_key_values is not None:\r\n-> 1084 past_length = past_key_values[0][0].shape[2]`\r\n\r\nIs there a fix/what can be done to solve this problem? \r\nThanks.\r\n",
"Hey @Nitish5499 if you want help we are going to need a reproducer! "
] | 1,700 | 1,707 | 1,703 | NONE | null | ### System Info
- `transformers` version: 4.31.0
- Platform: Linux-5.4.0-54-generic-x86_64-with-glibc2.27
- Python version: 3.10.11
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.0
- Accelerate version: 0.23.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.1+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes 4 x A100 40GB
- Using distributed or parallel set-up in script?: Yes
### Who can help?
@ArthurZucker @pacman100 @gante
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I am sorry that I cannot share the code directly here since it contains some confidential information; however, I think this is a general issue rather than something use-case dependent. Please let me know if anything is unclear so that I can give more detail.
I am using **torch.distributed.launch** to launch fine-tuning of a Llama2-7b model. I use Seq2SeqTrainer and try to get the generated tokens, but it does not work when I set **generation_max_length: Optional[int] = field(default=1024)**. DeepSpeed is used here (a minimal sketch of the trainer setup is included further below); the DeepSpeed config is:
```
{
"bf16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 100,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1e-10
},
"zero_optimization": {
"stage": 3,
"allgather_partitions": true,
"allgather_bucket_size": 1e8,
"overlap_comm": true,
"reduce_scatter": true,
"reduce_bucket_size": 1e8,
"contiguous_gradients": true
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"steps_per_print": 2000,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
```
The error I get is
```
lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 633, in forward
past_key_values_length = past_key_values[0][0].shape[2]
TypeError: 'NoneType' object is not subscriptable
```
I have tried some workarounds:
1. Setting **generation_max_length=20**: the code gives a warning, and it seems that nothing is generated after the input.
2. Customizing the predict_step function of Seq2SeqTrainer and adding **if self.state.is_world_process_zero**: the code then hangs in the generate function. I think it's due to a synchronization issue between processes.
Please let me know if more detail is required.
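For context, here is a minimal sketch of the kind of setup described above (the model/dataset variables and file name are placeholders, not my actual code; it assumes the ZeRO-3 config above is saved as `ds_config_zero3.json`):
```python
from transformers import Seq2SeqTrainer, Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="out",
    deepspeed="ds_config_zero3.json",  # the ZeRO-3 config shown above
    predict_with_generate=True,
    generation_max_length=1024,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    evaluation_strategy="epoch",
    bf16=True,
)

# model, tokenizer, train_ds and eval_ds are placeholders for the actual fine-tuning objects.
trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    tokenizer=tokenizer,
)
trainer.train()
```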
### Expected behavior
Seq2SeqTrainer should be able to generate tokens. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27591/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27591/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27590 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27590/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27590/comments | https://api.github.com/repos/huggingface/transformers/issues/27590/events | https://github.com/huggingface/transformers/pull/27590 | 2,000,890,567 | PR_kwDOCUB6oc5f13CZ | 27,590 | Add `convert_hf_to_openai.py` script to Whisper documentation resources | {
"login": "zuazo",
"id": 1878434,
"node_id": "MDQ6VXNlcjE4Nzg0MzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1878434?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zuazo",
"html_url": "https://github.com/zuazo",
"followers_url": "https://api.github.com/users/zuazo/followers",
"following_url": "https://api.github.com/users/zuazo/following{/other_user}",
"gists_url": "https://api.github.com/users/zuazo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zuazo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zuazo/subscriptions",
"organizations_url": "https://api.github.com/users/zuazo/orgs",
"repos_url": "https://api.github.com/users/zuazo/repos",
"events_url": "https://api.github.com/users/zuazo/events{/privacy}",
"received_events_url": "https://api.github.com/users/zuazo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,700 | 1,700 | 1,700 | CONTRIBUTOR | null | # What does this PR do?
This PR updates the Whisper model documentation by adding a link to the [`convert_hf_to_openai.py`](https://github.com/zuazo-forks/transformers/blob/convert_hf_to_openai/src/transformers/models/whisper/convert_hf_to_openai.py) script in the Resources section. The script converts Whisper models from the Hugging Face format back to the original OpenAI format.
A brief example is included to demonstrate how to use the script. This addition will help users who prefer working with the original OpenAI implementation or require specific features from it.
Of course, any feedback is welcome! And sorry for the delay!
Fixes #26854
## Before submitting
- [x] This PR fixes a typo or improves the docs.
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Yes, see here: https://github.com/huggingface/transformers/pull/26854#issuecomment-1765861491
- [x] Did you make sure to update the documentation with your changes?
- [x] Did you write any new necessary tests? Yes, there are doctests inside the script code.
## Who can review?
Possible candidates:
- `convert_openai_to_hf.py` script creator: @ArthurZucker
- Speech models: @sanchit-gandhi | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27590/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27590/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27590",
"html_url": "https://github.com/huggingface/transformers/pull/27590",
"diff_url": "https://github.com/huggingface/transformers/pull/27590.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27590.patch",
"merged_at": 1700464120000
} |
https://api.github.com/repos/huggingface/transformers/issues/27589 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27589/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27589/comments | https://api.github.com/repos/huggingface/transformers/issues/27589/events | https://github.com/huggingface/transformers/pull/27589 | 2,000,871,677 | PR_kwDOCUB6oc5f1zT9 | 27,589 | Fix idx2sym not loaded from pretrained vocab file in Transformer XL | {
"login": "jtang98",
"id": 44188317,
"node_id": "MDQ6VXNlcjQ0MTg4MzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/44188317?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jtang98",
"html_url": "https://github.com/jtang98",
"followers_url": "https://api.github.com/users/jtang98/followers",
"following_url": "https://api.github.com/users/jtang98/following{/other_user}",
"gists_url": "https://api.github.com/users/jtang98/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jtang98/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jtang98/subscriptions",
"organizations_url": "https://api.github.com/users/jtang98/orgs",
"repos_url": "https://api.github.com/users/jtang98/repos",
"events_url": "https://api.github.com/users/jtang98/events{/privacy}",
"received_events_url": "https://api.github.com/users/jtang98/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,700 | 1,700 | 1,700 | CONTRIBUTOR | null | # What does this PR do?
Fixes [27584](https://github.com/huggingface/transformers/issues/27584).
When loading the vocab file from a pretrained tokenizer for Transformer XL, the idx2sym key contained in the pickled vocabulary file isn't loaded, because it is discarded when an empty list already exists as an attribute.
The solution is to take it into account explicitly, just like sym2idx.
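To make the failure mode concrete, here is a small, simplified illustration of the pattern (not the actual transformers code): a loader that skips keys already present as attributes drops idx2sym, because `__init__` created it as an empty list, so it has to be whitelisted explicitly.
```python
class ToyVocab:
    def __init__(self):
        self.sym2idx = {}
        self.idx2sym = []

    def load_state(self, state, explicit_keys=("sym2idx", "idx2sym")):
        for key, value in state.items():
            # Without "idx2sym" in explicit_keys, the key is skipped because the attribute
            # already exists (as an empty list), and decoding later fails with an IndexError.
            if key not in self.__dict__ or key in explicit_keys:
                self.__dict__[key] = value


pickled_state = {"sym2idx": {"<eos>": 0, "hello": 1}, "idx2sym": ["<eos>", "hello"]}
vocab = ToyVocab()
vocab.load_state(pickled_state)
print(vocab.idx2sym[1])  # -> "hello"
```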
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@ArthurZucker @younesbelkada | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27589/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27589/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27589",
"html_url": "https://github.com/huggingface/transformers/pull/27589",
"diff_url": "https://github.com/huggingface/transformers/pull/27589.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27589.patch",
"merged_at": 1700463378000
} |
https://api.github.com/repos/huggingface/transformers/issues/27588 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27588/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27588/comments | https://api.github.com/repos/huggingface/transformers/issues/27588/events | https://github.com/huggingface/transformers/pull/27588 | 2,000,786,315 | PR_kwDOCUB6oc5f1ij6 | 27,588 | translation main-class files to chinese | {
"login": "jiaqiw09",
"id": 60021713,
"node_id": "MDQ6VXNlcjYwMDIxNzEz",
"avatar_url": "https://avatars.githubusercontent.com/u/60021713?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiaqiw09",
"html_url": "https://github.com/jiaqiw09",
"followers_url": "https://api.github.com/users/jiaqiw09/followers",
"following_url": "https://api.github.com/users/jiaqiw09/following{/other_user}",
"gists_url": "https://api.github.com/users/jiaqiw09/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiaqiw09/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiaqiw09/subscriptions",
"organizations_url": "https://api.github.com/users/jiaqiw09/orgs",
"repos_url": "https://api.github.com/users/jiaqiw09/repos",
"events_url": "https://api.github.com/users/jiaqiw09/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiaqiw09/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@stevhliu @statelesshz \r\n\r\nHi, here is remaining translation work of main-class folder.\r\n\r\nFor these translation, I just keep many title or sub-title in origin format as they are just class attributes.\r\n\r\nYou can double check of these files.\r\n\r\nBest",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27588). All of your documentation changes will be reflected on that endpoint.",
"@stevhliu \r\n\r\nI just update the review. Sorry that I forget to change these two line during checking.\r\n\r\nBest"
] | 1,700 | 1,701 | 1,701 | CONTRIBUTOR | null | # What does this PR do?
Part of #26803
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? _not necessary_
## Who can review?
@stevhliu | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27588/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27588/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27588",
"html_url": "https://github.com/huggingface/transformers/pull/27588",
"diff_url": "https://github.com/huggingface/transformers/pull/27588.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27588.patch",
"merged_at": 1701117397000
} |
https://api.github.com/repos/huggingface/transformers/issues/27587 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27587/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27587/comments | https://api.github.com/repos/huggingface/transformers/issues/27587/events | https://github.com/huggingface/transformers/pull/27587 | 2,000,771,170 | PR_kwDOCUB6oc5f1fil | 27,587 | Fuyu Multi-image interleaved processor | {
"login": "cliangyu",
"id": 45140242,
"node_id": "MDQ6VXNlcjQ1MTQwMjQy",
"avatar_url": "https://avatars.githubusercontent.com/u/45140242?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cliangyu",
"html_url": "https://github.com/cliangyu",
"followers_url": "https://api.github.com/users/cliangyu/followers",
"following_url": "https://api.github.com/users/cliangyu/following{/other_user}",
"gists_url": "https://api.github.com/users/cliangyu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cliangyu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cliangyu/subscriptions",
"organizations_url": "https://api.github.com/users/cliangyu/orgs",
"repos_url": "https://api.github.com/users/cliangyu/repos",
"events_url": "https://api.github.com/users/cliangyu/events{/privacy}",
"received_events_url": "https://api.github.com/users/cliangyu/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Hey! Feel free to ping @molbap once this is ready for review and cis are green or me if you need help with the CIs! ",
"Hi Seungyoun,\r\n\r\nThanks for working on this. I suggest integrating the changes with mine\r\nfirst.\r\n\r\nThank you!\r\n\r\nBest regards,\r\nLiangyu Chen\r\nMMLab, School of Computer Science and Engineering, Nanyang Technological\r\nUniversity\r\nP: 65-82811955 | E: ***@***.***\r\nhttps://cliangyu.com/\r\n\r\n\r\n\r\nOn Sat, 2 Dec 2023 at 19:41, Seungyoun, Shin ***@***.***>\r\nwrote:\r\n\r\n> Hi @cliangyu <https://github.com/cliangyu>,\r\n>\r\n> I've developed enhancements for multi-device support in PR #27587\r\n> <https://github.com/huggingface/transformers/pull/27587>, building upon\r\n> your work. Before proceeding with a new PR, I'd like to discuss integrating\r\n> these changes with yours. I can submit a PR to your fork or detail the\r\n> changes here for your review. Please let me know your preferred approach.\r\n>\r\n> β\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/transformers/pull/27587#issuecomment-1837127873>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AKYMSERPXSTHWHLG37KE6HDYHMHU3AVCNFSM6AAAAAA7RTLUBOVHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMYTQMZXGEZDOOBXGM>\r\n> .\r\n> You are receiving this because you authored the thread.Message ID:\r\n> ***@***.***>\r\n>\r\n"
] | 1,700 | 1,706 | null | NONE | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fuyu Multi-image interleaved processor. Test example:
```python
from transformers import FuyuProcessor, FuyuForCausalLM
from PIL import Image
import requests
import torch
# load model and processor
model_id = "adept/fuyu-8b"
processor = FuyuProcessor.from_pretrained(model_id)
model = FuyuForCausalLM.from_pretrained(model_id, device_map="cuda:0", torch_dtype=torch.bfloat16)
def convert(list_of_dicts):# Convert to a dictionary of lists
dict_of_lists = {}
for d in list_of_dicts:
for key, value in d.items():
if key not in dict_of_lists:
dict_of_lists[key] = []
dict_of_lists[key].append(value)
return dict_of_lists
text_prompt1 = "|IMAGESTART| Generate a coco-style caption. |IMAGESTART| Be reminded that the caption should be longer than 2000 words but shorter than 1 million words. \n"
url1 = "https://huggingface.co/adept/fuyu-8b/resolve/main/bus.png"
image1 = Image.open(requests.get(url1, stream=True).raw)
text_prompt2 = "What doesn this chart describe?\n"
url2 = "https://huggingface.co/adept/fuyu-8b/resolve/main/chart.png"
image2 = Image.open(requests.get(url2, stream=True).raw)
test_examples = [
# {"text": "|IMAGESTART| Generate a coco-style caption. |IMAGESTART| Be reminded that the caption should be longer than 2000 words but shorter than 1 million words. \n", "images": image1}, # should assert error
{"text": text_prompt1, "images": [image1, image2]}, # normal
{"text": text_prompt2, "images": [image2 for i in range(40)]}, # should add indicator
{"text": "|IMAGESTART||IMAGESTART| Generate a coco-style caption. Be reminded that the caption should be longer than 2000 words but shorter than 1 million words. \n", "images": [image1, image2]}, # normal
{"text": " Generate a coco-style caption. Be reminded that the caption should be longer than 2000 words but shorter than 1 million words. \n|IMAGESTART||IMAGESTART|", "images": [image1, image2]}, # normal
# {"text": " Generate a coco-style caption. Be reminded that the caption should be longer than 2000 words but shorter than 1 million words.", "images": None}, # no image, we had error with this case
{"text": None, "images": [image1]}, # no text
]
inputs_to_model = processor(**convert(test_examples), return_tensors="pt", truncation=True).to("cuda:0")
```
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
@amyeroberts, @ArthurZucker and @younesbelkada | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27587/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27587/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27587",
"html_url": "https://github.com/huggingface/transformers/pull/27587",
"diff_url": "https://github.com/huggingface/transformers/pull/27587.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27587.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27586 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27586/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27586/comments | https://api.github.com/repos/huggingface/transformers/issues/27586/events | https://github.com/huggingface/transformers/issues/27586 | 2,000,734,164 | I_kwDOCUB6oc53QMfU | 27,586 | Outputs are not consistent when using DeBERTa for inference | {
"login": "polarispw",
"id": 78252964,
"node_id": "MDQ6VXNlcjc4MjUyOTY0",
"avatar_url": "https://avatars.githubusercontent.com/u/78252964?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polarispw",
"html_url": "https://github.com/polarispw",
"followers_url": "https://api.github.com/users/polarispw/followers",
"following_url": "https://api.github.com/users/polarispw/following{/other_user}",
"gists_url": "https://api.github.com/users/polarispw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polarispw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polarispw/subscriptions",
"organizations_url": "https://api.github.com/users/polarispw/orgs",
"repos_url": "https://api.github.com/users/polarispw/repos",
"events_url": "https://api.github.com/users/polarispw/events{/privacy}",
"received_events_url": "https://api.github.com/users/polarispw/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, the mismatch seems to be related to the classification layer, namely `cls.predictions.transform.dense.weight`, whose weights are randomly initialized at each loading from the pretrained model. I assume it is expected since it is a pretrained model, hence this layer should depend on fine-tuning.",
"#22790 It seems that there is a problem with the connection between Microsoft and Huggingface?\r\nIs there any alternative methods like downgrading the version of transformers?",
"When running your code snippet, the `_load_pretrained_model()` method expects (but doesn't find)\r\n`{'lm_predictions.lm_head.dense.weight', 'lm_predictions.lm_head.bias', 'lm_predictions.lm_head.LayerNorm.bias', 'lm_predictions.lm_head.LayerNorm.weight', 'deberta.embeddings.position_embeddings.weight', 'lm_predictions.lm_head.dense.bias'}` weights in the state dict, whereas `{'deberta.embeddings.position_embeddings.weight', 'lm_predictions.lm_head.dense.weight', 'lm_predictions.lm_head.bias', 'lm_predictions.lm_head.dense.bias', 'lm_predictions.lm_head.LayerNorm.weight', 'lm_predictions.lm_head.LayerNorm.bias'}` are in the state dict but never used. I think this is the cause of the issue, and it might as well for [#22790](https://github.com/huggingface/transformers/issues/22790).\r\n\r\nI'd still need to investigate, but at least, it explains why those weights are randomly initialized even if we start from a LM pretrained model.",
"Yep, the porting of the model probably went wrong at some point and we have to refactor that part of the code. It's long due and I don't have quick fix for you appart from just using other more recent and better models! π
",
"Is this issue open for contribution?",
"If you have time you can take over #22105, but it should be a 3rd contribution or more, needs to be BC and properly done. I should be able to tackle this this month! ",
"@ArthurZucker this would be my first contribution to hf. However, I am familiar with the project and down to help out with #22105. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,700 | 1,703 | 1,703 | NONE | null | ### System Info
- `transformers` version: 4.34.1
- Platform: Windows-10-10.0.22621-SP0
- Python version: 3.8.18
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.4.0
- Accelerate version: 0.24.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0+cu121 (True)
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I am running a simple mask-filling task on DeBERTa and find that the output logits of the same input sentence vary every run (reloading the model from the disk).
I have set `eval()` and `manual_seed()`, but they do not help, and the outputs are so different that they do not look like they are caused by random seeds alone.
Even the [official script](https://huggingface.co/docs/transformers/model_doc/deberta#transformers.DebertaForMaskedLM) shows the same problem. By the way, it works fine when holding the model in memory and feeding it the same input twice.
```
from transformers import AutoTokenizer, DebertaForMaskedLM
import torch
tokenizer = AutoTokenizer.from_pretrained('microsoft/deberta-base', cache_dir='model_cache')
model = DebertaForMaskedLM.from_pretrained('microsoft/deberta-base', cache_dir='model_cache').eval()
inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
with torch.no_grad():
logits = model(**inputs).logits
# retrieve index of [MASK]
mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_token_id = logits[0, mask_token_index].argmax(axis=-1)
tokenizer.decode(predicted_token_id)
labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"]
# mask labels of non-[MASK] tokens
labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)
outputs = model(**inputs, labels=labels)
print(round(outputs.loss.item(), 2))
# Run twice without reloading models work fine
# with torch.no_grad():
# logits = model(**inputs).logits
#
# # retrieve index of [MASK]
# mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
#
# predicted_token_id = logits[0, mask_token_index].argmax(axis=-1)
# tokenizer.decode(predicted_token_id)
#
# labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"]
# # mask labels of non-[MASK] tokens
# labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)
#
# outputs = model(**inputs, labels=labels)
# print(round(outputs.loss.item(), 2))
```
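In addition, a small check (using `from_pretrained`'s `output_loading_info` flag) shows which weights are missing from the checkpoint and therefore randomly initialized on every load, which I suspect is where the variation comes from:
```
from transformers import DebertaForMaskedLM

model, loading_info = DebertaForMaskedLM.from_pretrained(
    'microsoft/deberta-base', cache_dir='model_cache', output_loading_info=True
)
# Keys listed here are freshly initialized at each load, so outputs can differ between runs.
print(loading_info["missing_keys"])
```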
### Expected behavior
By feeding the same input, I should get the same outputs. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27586/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27586/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27585 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27585/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27585/comments | https://api.github.com/repos/huggingface/transformers/issues/27585/events | https://github.com/huggingface/transformers/pull/27585 | 2,000,719,255 | PR_kwDOCUB6oc5f1VN8 | 27,585 | update d_kv'annotation in mt5'configuration | {
"login": "callanwu",
"id": 63695429,
"node_id": "MDQ6VXNlcjYzNjk1NDI5",
"avatar_url": "https://avatars.githubusercontent.com/u/63695429?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/callanwu",
"html_url": "https://github.com/callanwu",
"followers_url": "https://api.github.com/users/callanwu/followers",
"following_url": "https://api.github.com/users/callanwu/following{/other_user}",
"gists_url": "https://api.github.com/users/callanwu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/callanwu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/callanwu/subscriptions",
"organizations_url": "https://api.github.com/users/callanwu/orgs",
"repos_url": "https://api.github.com/users/callanwu/repos",
"events_url": "https://api.github.com/users/callanwu/events{/privacy}",
"received_events_url": "https://api.github.com/users/callanwu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thank you for your contribution! Please make sure that the quality checks pass - here's the [contributing guide](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md). It looks like you need to run `make style`.",
"> Thank you for your contribution! Please make sure that the quality checks pass - here's the [contributing guide](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md). It looks like you need to run `make style`.\r\n\r\n@MKhalusova Hi, thx for your reminder. I have passed all checks!",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27585). All of your documentation changes will be reflected on that endpoint.",
"Add reference:\r\nhttps://huggingface.co/google/mt5-small/blob/main/config.json\r\nmodules output of a loaded mt5-small\r\n```txt\r\nT5Block(\r\n (layer): ModuleList(\r\n (0): T5LayerSelfAttention(\r\n (SelfAttention): T5Attention(\r\n (q): Linear(in_features=512, out_features=384, bias=False)\r\n (k): Linear(in_features=512, out_features=384, bias=False)\r\n (v): Linear(in_features=512, out_features=384, bias=False)\r\n (o): Linear(in_features=384, out_features=512, bias=False)\r\n )\r\n (layer_norm): T5LayerNorm()\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n (1): T5LayerFF(\r\n (DenseReluDense): T5DenseGatedGeluDense(\r\n (wi_0): Linear(in_features=512, out_features=1024, bias=False)\r\n (wi_1): Linear(in_features=512, out_features=1024, bias=False)\r\n (wo): Linear(in_features=1024, out_features=512, bias=False)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n (layer_norm): T5LayerNorm()\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n )\r\n)\r\n```",
"@MKhalusova @patrickvonplaten Can it be merged now? Are there any further improvements needed:)",
"Thanks @callanwu π€ "
] | 1,700 | 1,700 | 1,700 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@stevhliu and @MKhalusova
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27585/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27585/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27585",
"html_url": "https://github.com/huggingface/transformers/pull/27585",
"diff_url": "https://github.com/huggingface/transformers/pull/27585.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27585.patch",
"merged_at": 1700726996000
} |
https://api.github.com/repos/huggingface/transformers/issues/27584 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27584/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27584/comments | https://api.github.com/repos/huggingface/transformers/issues/27584/events | https://github.com/huggingface/transformers/issues/27584 | 2,000,666,059 | I_kwDOCUB6oc53P73L | 27,584 | tokenizer.decode throwing an error for TransfoXLTokenizer | {
"login": "dsplog",
"id": 5105387,
"node_id": "MDQ6VXNlcjUxMDUzODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5105387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dsplog",
"html_url": "https://github.com/dsplog",
"followers_url": "https://api.github.com/users/dsplog/followers",
"following_url": "https://api.github.com/users/dsplog/following{/other_user}",
"gists_url": "https://api.github.com/users/dsplog/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dsplog/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dsplog/subscriptions",
"organizations_url": "https://api.github.com/users/dsplog/orgs",
"repos_url": "https://api.github.com/users/dsplog/repos",
"events_url": "https://api.github.com/users/dsplog/events{/privacy}",
"received_events_url": "https://api.github.com/users/dsplog/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for reporting π @jtang98 fixed this π₯ "
] | 1,700 | 1,700 | 1,700 | NONE | null | ### System Info
- `transformers` version: 4.34.0
- Platform: Linux-5.15.0-88-generic-x86_64-with-glibc2.17
- Python version: 3.8.18
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.4.0
- Accelerate version: 0.23.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from transformers import TransfoXLTokenizer
tokenizer = TransfoXLTokenizer.from_pretrained("transfo-xl-wt103")
enc = tokenizer.encode("Hello, my dog is cute")
enc
[14049, 2, 617, 3225, 23, 16072]
tokenizer.decode(enc)
Traceback (most recent call last):
File "", line 1, in
File "/home/home/.local/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 3738, in decode
return self._decode(
File "/home/home/.local/lib/python3.8/site-packages/transformers/tokenization_utils.py", line 1001, in _decode
filtered_tokens = self.convert_ids_to_tokens(token_ids, skip_special_tokens=skip_special_tokens)
File "/home/home/.local/lib/python3.8/site-packages/transformers/tokenization_utils.py", line 982, in convert_ids_to_tokens
tokens.append(self._convert_id_to_token(index))
File "/home/home/.local/lib/python3.8/site-packages/transformers/models/transfo_xl/tokenization_transfo_xl.py", line 451, in _convert_id_to_token
return self.idx2sym[idx]
IndexError: list index out of range
```
### Expected behavior
`tokenizer.decode(enc)` should not throw an error
Additional note:
tokenizer.sym2idx is defined, but tokenizer.idx2sym is an empty list.
This issue is not present up to transformers version 4.33.3; it appears from v4.34.0 through v4.35.2. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27584/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27584/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27583 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27583/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27583/comments | https://api.github.com/repos/huggingface/transformers/issues/27583/events | https://github.com/huggingface/transformers/issues/27583 | 2,000,660,651 | I_kwDOCUB6oc53P6ir | 27,583 | Whisper is not learning a new tokenizer, even when i make test and train dataset the same | {
"login": "P-Sood",
"id": 55671093,
"node_id": "MDQ6VXNlcjU1NjcxMDkz",
"avatar_url": "https://avatars.githubusercontent.com/u/55671093?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/P-Sood",
"html_url": "https://github.com/P-Sood",
"followers_url": "https://api.github.com/users/P-Sood/followers",
"following_url": "https://api.github.com/users/P-Sood/following{/other_user}",
"gists_url": "https://api.github.com/users/P-Sood/gists{/gist_id}",
"starred_url": "https://api.github.com/users/P-Sood/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/P-Sood/subscriptions",
"organizations_url": "https://api.github.com/users/P-Sood/orgs",
"repos_url": "https://api.github.com/users/P-Sood/repos",
"events_url": "https://api.github.com/users/P-Sood/events{/privacy}",
"received_events_url": "https://api.github.com/users/P-Sood/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey π€ thanks a lot for opening an issue and using transformers! \r\n\r\nWe try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discuss.huggingface.co/) instead? I'm sure the community will be of help!\r\n\r\nOtherwise you should follow the tutorial ressources on how to train a whisper model see:\r\n- https://github.com/huggingface/distil-whisper\r\n- https://discuss.huggingface.co/t/fine-tuning-whisper-on-my-own-dataset-with-a-customized-tokenizer/25903\r\n\r\nThanks!",
"Hello @ArthurZucker I shall post it on the huggingface forums as you request. \r\n\r\nI saw that second post with training on the custom tokenizer. However, the fix they used was to switch it back to the regular pretrained tokenizer and just train for longer. So that doesn't seem like it would have too much effect on me. \r\n\r\nThe other issue I looked at [here](https://github.com/huggingface/transformers/issues/25503) was on the huggingface bugs page so I decided to post it here as well.\r\n\r\nThey also had a similar issue, but they needed help to get the model to train, and had no information on the results after the code was correct. Maybe I should leave a comment at the author of that issue, seeing if he got it work.\r\n\r\nAnyways, thanks for the info, ill post it on the forums.",
"I am not sure why you need to train a new tokenizer but I don't recommend it. You are completely losing the mapping from input_ids and tokens, thus the preptrained model is rendered useless. You should add tokens to the tokenizers rather than train a new one from scratch if you want to leverage the pretrained checkpoint",
"Do you know ahead of time what the kind of jargon is? You could first try Whisper prompting by putting your 'jargon' as the prompt:\r\n```python\r\nprocessor = WhisperProcessor.from_pretrained(\"openai/whisper-tiny\")\r\nmodel = WhisperForConditionalGeneration.from_pretrained(\"openai/whisper-tiny\")\r\ninput_features = processor(input_speech, return_tensors=\"pt\").input_features\r\n\r\n# --- Without prompt ---\r\nprompt_ids = processor.get_prompt_ids(\"Leighton\")\r\noutput_without_prompt = model.generate(input_features)\r\nprint(processor.decode(output_without_prompt[0]))\r\n# \"<|startoftranscript|><|en|><|transcribe|><|notimestamps|> He has grave doubts whether Sir Frederick Layton's work is really Greek after all and can discover in it but little of Rocky Ithaca.<|endoftext|>\"\r\n\r\n# --- With prompt ---\r\nprompt_ids = processor.get_prompt_ids(\"Leighton\")\r\noutput_with_prompt = model.generate(input_features, prompt_ids=prompt_ids)\r\nprint(processor.decode(output_with_prompt[0]))\r\n# \"<|startofprev|> Leighton<|startoftranscript|><|en|><|transcribe|><|notimestamps|> He has grave doubts whether Sir Frederick Leighton's work is really Greek after all and can discover in it but little of Rocky Ithaca.<|endoftext|>\"\r\n```\r\n\r\nYour next best method would be fine-tuning using the original tokenizer on your dataset, using as much data as possible: https://huggingface.co/blog/fine-tune-whisper\r\n\r\nIf you're in a low-data regime, freezing the encoder is recommended. Call this line before you do `trainer.train()`:\r\n```\r\nmodel.freeze_encoder()\r\n```\r\n\r\nAfter that, see this issue for recommendations for custom vocabulary: https://discuss.huggingface.co/t/adding-custom-vocabularies-on-whisper/29311?u=nbroad. Note that this will require **more** data than standard fine-tuning, so you should be completely sure standard fine-tuning with the original tokenizer doesn't work before trying this. Also note that as @ArthurZucker mentioned, it is not recommended to completely reset the tokenizer, but rather append the new vocabulary to the tokenizer.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,700 | 1,704 | 1,704 | NONE | null | ### System Info
- `transformers` version: 4.35.2
- Platform: Linux-5.15.120+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.19.3
- Safetensors version: 0.4.0
- Accelerate version: 0.24.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0+cu118 (False)
- Tensorflow version (GPU?): 2.14.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.7.5 (cpu)
- Jax version: 0.4.20
- JaxLib version: 0.4.20
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@sanchit-gandhi
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Hello, I want to take the audio at my workplace and transform it into a transcription; however, base Whisper doesn't seem that good at it. So I have been wanting to create my own tokenizer that can understand jargon (things similar to acronyms) and output that jargon better. Below I have shown my steps:
1) Creating Tokenizer
2) Preprocessing data pipeline
3) Model init, and configuration
4) Model outputs
I run this using the Hugging Face Trainer with the generate option. Is it my data size? I have scoured online to try and find some sort of solution, but everything just says it works. I am at my wit's end and would appreciate any help on getting this tokenizer to learn my jargon.
Thank you in advance :)
## Creating the tokenizer
```python
from tokenizers import Tokenizer, models, pre_tokenizers, decoders, trainers
from transformers import WhisperTokenizer
import json  # needed for json.load below
# Initialize a tokenizer
tokenizer = Tokenizer(models.BPE())
# Pre-tokenizer responsible for converting the text to a stream of characters
tokenizer.pre_tokenizer = pre_tokenizers.Whitespace()#ByteLevel(add_prefix_space=False)
# Decoder responsible for converting the tokens back to a string
tokenizer.decoder = decoders.ByteLevel()
# Trainer responsible for training the BPE model
tokenizer.trainers = trainers.BpeTrainer(vocab_size=1000, min_frequency=2 , special_tokens=spec_tok)
# Training the tokenizer
tokenizer.train(["file.txt"])
# Save the tokenizer
tokenizer.save("NewWhisperTokenizer.json")
f = open('NewWhisperTokenizer.json')
# returns JSON object as
# a dictionary
data = json.load(f)
with open("vocab.json", "w") as outfile:
json.dump(data['model']['vocab'], outfile)
with open("merges.txt", "w") as outfile:
json.dump(data['model']['merges'], outfile)
tokenizer = WhisperTokenizer("vocab.json", "merges.txt" , errors = "replace", unk_token = "<|endoftext|>", bos_token = "<|endoftext|>", eos_token = "<|endoftext|>", pad_token = "<|endoftext|>")
tokenizer.add_special_tokens(WhisperTokenizer.from_pretrained("openai/whisper-tiny").special_tokens_map_extended)
tokenizer.save_pretrained("new_tok")
```
`len(tokenizer) == 193`
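(For comparison, a minimal sketch of the lighter-weight alternative of extending the pretrained tokenizer instead of training one from scratch; the jargon strings are placeholders:)
```python
from transformers import WhisperTokenizer, WhisperForConditionalGeneration

base_tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-tiny")
base_tokenizer.add_tokens(["ACRONYM1", "ACRONYM2"])  # placeholder jargon terms
base_model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")
base_model.resize_token_embeddings(len(base_tokenizer))
```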
## Preprocessing steps
```python
def prepare_dataset(batch):
audio = batch["audio"]
batch["input_features"] = feature_extractor(audio["array"], sampling_rate=audio["sampling_rate"]).input_features[0]
temp_labels = tokenizer(batch["phonetic_detail"]["utterance"]).input_ids
batch["label"] = [label for sentence_labels in temp_labels for label in sentence_labels]
return batch
@dataclass
class DataCollatorSpeechSeq2SeqWithPadding:
tokenizer: Any
feature_extractor: Any
def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]:
input_features = [{"input_features": feature["input_features"]} for feature in features]
batch = self.feature_extractor.pad(input_features, return_tensors="pt")
label_features = [{"input_ids": feature["label"]} for feature in features]
labels_batch = self.tokenizer.pad(label_features, return_tensors="pt")
labels = labels_batch["input_ids"].masked_fill(labels_batch.attention_mask.ne(1), -100)
if (labels[:, 0] == self.tokenizer.bos_token_id).all().cpu().item():
labels = labels[:, 1:]
batch["labels"] = labels
return batch
data_collator = DataCollatorSpeechSeq2SeqWithPadding(tokenizer , feature_extractor)
```
`len(train_dataset) == 4000`
`len(test_dataset) == 1000`
## Model Config
```python
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
voc = tokenizer.get_vocab()
model_Gen = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")
model_Gen = model_Gen.to(device)
model_Gen.resize_token_embeddings(len(tokenizer))
model_Gen.config.pad_token_id = tokenizer.pad_token_id
model_Gen.config.decoder_start_token_id = voc['<|startoftranscript|>']
model_Gen.config.eos_token_id = tokenizer.eos_token_id
model_Gen.config.bos_token_id = tokenizer.bos_token_id
model_Gen.config.suppress_tokens = []
model_Gen.config.forced_decoder_ids = None
model_Gen.config.begin_suppress_tokens = [
tokenizer.pad_token_id
]
model_Gen.generation_config.pad_token_id = tokenizer.pad_token_id
model_Gen.generation_config.decoder_start_token_id = voc['<|startoftranscript|>']
model_Gen.generation_config.eos_token_id = tokenizer.eos_token_id
model_Gen.generation_config.bos_token_id = tokenizer.bos_token_id
model_Gen.generation_config.suppress_tokens = []
model_Gen.generation_config.forced_decoder_ids = None
model_Gen.generation_config.begin_suppress_tokens = [
tokenizer.pad_token_id
]
model_Gen.generation_config.no_timestamps_token_id = voc['<|notimestamps|>']
```
## Huggingface Trainer
Here I have made the train and eval datasets the same 30 examples to see if it would completely overfit, but even with train and test set to be the same, it is not overfitting at all.
```python
training_args = Seq2SeqTrainingArguments(
output_dir='training_output',
logging_dir='./logs',
group_by_length=True,
per_device_train_batch_size=1,
gradient_accumulation_steps=1, # increase by 2x for every 2x decrease in batch size
per_device_eval_batch_size=1,
num_train_epochs=8,
gradient_checkpointing=True,
lr_scheduler_type = "cosine_with_restarts",
save_strategy='epoch',
evaluation_strategy='epoch',
logging_strategy='epoch',
learning_rate=1e-2,
weight_decay=0.005,
# warmup_steps=36,
save_total_limit=4,
push_to_hub=False,
predict_with_generate=True,
generation_max_length=225,
load_best_model_at_end=True,
greater_is_better=False,
generation_num_beams = 4,
# fp16 = True,
report_to="wandb", # Turn this off for pdb debug
)
trainer = CustomTrainer(
compute_metrics=compute_metrics,
args=training_args,
model=model_Gen,
data_collator=data_collator,
tokenizer=processor.feature_extractor,
train_dataset=new_test['test'] ,
eval_dataset=new_test['test'],
)
trainer.evaluate()
```
## Outputs after second epoch
```python
tokenizer.batch_decode(pred.predictions , skip_special_tokens = True)
['', '', 'uwuwuwuwuwuwuwuwuwuwuwuwuwuwuwuwuwuwuwuwuwuwuwuwuwuwuwuwuwuwuwuwuwuwuwuwuwuwuwuwuw', 'k', '', 'k', 'kkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkk',
'awawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawaw', 'awawawaw', '', '', '', 'jjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjj', '', 'jjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjj', 'uweuwuwuwuwuwuwuwuwuwuwuwuwuwuwuwuwuwuwuwuwuwuwuwuwuwuwuwuwuwuwuwuwuwuwuwuwuwuwuwuwuwuwuwuwuwuwuwuwuwuwuwuwuwuwuwuwuwuwuwuwuw', '',
'axaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxaxax', '',
'kuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhkuhk',
'eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee',
'eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee',
'awawawaw',
'eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee',
'awawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawawaw',
'',
'jjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjj']
```
### Expected behavior
More understandable text descriptions | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27583/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27583/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27582 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27582/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27582/comments | https://api.github.com/repos/huggingface/transformers/issues/27582/events | https://github.com/huggingface/transformers/pull/27582 | 2,000,574,051 | PR_kwDOCUB6oc5f05RS | 27,582 | Add options to load CLIP Transformer models | {
"login": "amitkumarj441",
"id": 14039450,
"node_id": "MDQ6VXNlcjE0MDM5NDUw",
"avatar_url": "https://avatars.githubusercontent.com/u/14039450?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amitkumarj441",
"html_url": "https://github.com/amitkumarj441",
"followers_url": "https://api.github.com/users/amitkumarj441/followers",
"following_url": "https://api.github.com/users/amitkumarj441/following{/other_user}",
"gists_url": "https://api.github.com/users/amitkumarj441/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amitkumarj441/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amitkumarj441/subscriptions",
"organizations_url": "https://api.github.com/users/amitkumarj441/orgs",
"repos_url": "https://api.github.com/users/amitkumarj441/repos",
"events_url": "https://api.github.com/users/amitkumarj441/events{/privacy}",
"received_events_url": "https://api.github.com/users/amitkumarj441/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> Hey! Could you share the motivations behind this? π€\r\n\r\n@ArthurZucker I needed to have an independent feature extractor using CLIP text model for generating (query) embeddings, though, [CLIPTextModelWithProjection](https://github.com/huggingface/transformers/blob/50726f9ea7afc6113da617f8f4ca1ab264a5e28a/src/transformers/models/clip/modeling_clip.py#L1181C7-L1181C34) and similarly for `CLIP_vision` model has `AutoModelFor...` instance. So, to standardise this process around Transformer, we need options to avail these CLIP text/image models separately.",
"This is a custom usage and would recommend you to build on top of transformers. It's not really standard as you are gonna have missing keys and unexpected keys for both. ",
"> This is a custom usage and would recommend you to build on top of transformers. It's not really standard as you are gonna have missing keys and unexpected keys for both.\r\n\r\nYes, it is, though, the tweaking options to adopt independent CLIP models (text/image) should lie within `modeling_clip.py` for flexibility.",
"Sorry but no, if you only want the vision model or only want the text model you should use the CLIPVisionModel and the CLIPTextModel rather than add this π ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,700 | 1,704 | 1,704 | NONE | null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Flexible options around loading CLIP transformer models
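For context, the text and vision towers can already be loaded standalone without an `AutoModel` mapping (a sketch; the checkpoint is illustrative):
```python
from transformers import CLIPTextModelWithProjection, CLIPVisionModelWithProjection

text_model = CLIPTextModelWithProjection.from_pretrained("openai/clip-vit-base-patch32")
vision_model = CLIPVisionModelWithProjection.from_pretrained("openai/clip-vit-base-patch32")
```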
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
--> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27582/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27582/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27582",
"html_url": "https://github.com/huggingface/transformers/pull/27582",
"diff_url": "https://github.com/huggingface/transformers/pull/27582.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27582.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27581 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27581/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27581/comments | https://api.github.com/repos/huggingface/transformers/issues/27581/events | https://github.com/huggingface/transformers/pull/27581 | 2,000,536,606 | PR_kwDOCUB6oc5f0xsJ | 27,581 | [Time series] Add patchtst | {
"login": "kashif",
"id": 8100,
"node_id": "MDQ6VXNlcjgxMDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8100?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kashif",
"html_url": "https://github.com/kashif",
"followers_url": "https://api.github.com/users/kashif/followers",
"following_url": "https://api.github.com/users/kashif/following{/other_user}",
"gists_url": "https://api.github.com/users/kashif/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kashif/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kashif/subscriptions",
"organizations_url": "https://api.github.com/users/kashif/orgs",
"repos_url": "https://api.github.com/users/kashif/repos",
"events_url": "https://api.github.com/users/kashif/events{/privacy}",
"received_events_url": "https://api.github.com/users/kashif/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @amyeroberts , can you review the latest commits? We have made all the changes based on your requests. There is a deadline from our side, so we would appreciate your help to make it faster.",
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @amyeroberts, we have resolved all of your requests. Can you review the latest update? All the tests (checking the slow tests pass), the returns when return_dict=False is passed and seed parameter is removed. Thank you!",
"@namctin We recently updated our formatting libraries - using ruff instead of black. To make the quality checks pass, you'll need to: \r\n* Uninstall black: `pip uninstall black`\r\n* Update any necessary formatting settings: `pip install -e .[quality]` \r\n* Re-run formatting: `make fixup` \r\n* Push any changes made",
"For the documentation tests - it seems the tests are failing because the checkpoint being pointed to doesn't exist. Is this deliberate? ",
"I have created some on my end, let me update the docs to double check",
"@amyeroberts All the tests have passed. Can you take a look and merge if there is no other requests? Thank you!",
"@amyeroberts We have either resolved or commented on all of your requests. Can you please review?",
"Thank you @amyeroberts for your careful review which makes the codes much cleaner."
] | 1,700 | 1,701 | 1,701 | CONTRIBUTOR | null | # What does this PR do?
Adds the PatchTST model.
Re-opened from the closed #25927. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27581/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27581/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27581",
"html_url": "https://github.com/huggingface/transformers/pull/27581",
"diff_url": "https://github.com/huggingface/transformers/pull/27581.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27581.patch",
"merged_at": 1701261399000
} |
https://api.github.com/repos/huggingface/transformers/issues/27580 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27580/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27580/comments | https://api.github.com/repos/huggingface/transformers/issues/27580/events | https://github.com/huggingface/transformers/issues/27580 | 2,000,397,110 | I_kwDOCUB6oc53O6M2 | 27,580 | ImportError: tokenizers>=0.11.1,!=0.11.3,<0.14 is required for a normal functioning of this module, but found tokenizers==0.15.0. Try: pip install transformers -U or pip install -e '.[dev]' if you're working with git main Press any key to continue . . . | {
"login": "NiuDaVinci",
"id": 39637556,
"node_id": "MDQ6VXNlcjM5NjM3NTU2",
"avatar_url": "https://avatars.githubusercontent.com/u/39637556?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NiuDaVinci",
"html_url": "https://github.com/NiuDaVinci",
"followers_url": "https://api.github.com/users/NiuDaVinci/followers",
"following_url": "https://api.github.com/users/NiuDaVinci/following{/other_user}",
"gists_url": "https://api.github.com/users/NiuDaVinci/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NiuDaVinci/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NiuDaVinci/subscriptions",
"organizations_url": "https://api.github.com/users/NiuDaVinci/orgs",
"repos_url": "https://api.github.com/users/NiuDaVinci/repos",
"events_url": "https://api.github.com/users/NiuDaVinci/events{/privacy}",
"received_events_url": "https://api.github.com/users/NiuDaVinci/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey! Sorry I'm not really sure I can help you with this. Do you have a code snippet of what you were using? (Other wise this should fix it: `pip install --upgrade transformers`",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,700 | 1,703 | 1,703 | NONE | null | ### System Info
[DxDiag.txt](https://github.com/huggingface/transformers/files/13400236/DxDiag.txt)
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I was following this tutorial.
https://youtu.be/O01BrQwOd-Q
I got this error ImportError: tokenizers>=0.11.1,!=0.11.3,<0.14 is required for a normal functioning of this module, but found tokenizers==0.15.0. Try: pip install transformers -U or pip install -e '.[dev]' if you're working with git main Press any key to continue . . .
I'm an artist, and I don't even know if I should be following this tutorial, but I did because I read this on [lora reddit](https://www.reddit.com/r/StableDiffusion/comments/10ubpts/does_anyone_know_the_solution_lora/ ): "in your cuda version looks like not correct
can you install cuda as shown here? but before do a fresh installation of automatic1111 and let it install latest
[8 GB LoRA Training - Fix CUDA Version For DreamBooth and Textual Inversion Training By Automatic1111
](https://www.youtube.com/watch?v=O01BrQwOd-Q)" **but I have 12 GB of RAM**
### Expected behavior
I don't know; can anybody tell me what's best for me? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27580/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27580/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27579 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27579/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27579/comments | https://api.github.com/repos/huggingface/transformers/issues/27579/events | https://github.com/huggingface/transformers/pull/27579 | 2,000,385,964 | PR_kwDOCUB6oc5f0UOZ | 27,579 | Fix broken distilbert url | {
"login": "osanseviero",
"id": 7246357,
"node_id": "MDQ6VXNlcjcyNDYzNTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/osanseviero",
"html_url": "https://github.com/osanseviero",
"followers_url": "https://api.github.com/users/osanseviero/followers",
"following_url": "https://api.github.com/users/osanseviero/following{/other_user}",
"gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}",
"starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions",
"organizations_url": "https://api.github.com/users/osanseviero/orgs",
"repos_url": "https://api.github.com/users/osanseviero/repos",
"events_url": "https://api.github.com/users/osanseviero/events{/privacy}",
"received_events_url": "https://api.github.com/users/osanseviero/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,700 | 1,700 | 1,700 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27579/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27579/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27579",
"html_url": "https://github.com/huggingface/transformers/pull/27579",
"diff_url": "https://github.com/huggingface/transformers/pull/27579.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27579.patch",
"merged_at": 1700328173000
} |
https://api.github.com/repos/huggingface/transformers/issues/27578 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27578/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27578/comments | https://api.github.com/repos/huggingface/transformers/issues/27578/events | https://github.com/huggingface/transformers/issues/27578 | 2,000,349,721 | I_kwDOCUB6oc53OuoZ | 27,578 | "Attempted to access the data pointer on an invalid python storage" when saving model in TPU mode (Kaggle) | {
"login": "Zaphat",
"id": 93189118,
"node_id": "U_kgDOBY3z_g",
"avatar_url": "https://avatars.githubusercontent.com/u/93189118?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Zaphat",
"html_url": "https://github.com/Zaphat",
"followers_url": "https://api.github.com/users/Zaphat/followers",
"following_url": "https://api.github.com/users/Zaphat/following{/other_user}",
"gists_url": "https://api.github.com/users/Zaphat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Zaphat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Zaphat/subscriptions",
"organizations_url": "https://api.github.com/users/Zaphat/orgs",
"repos_url": "https://api.github.com/users/Zaphat/repos",
"events_url": "https://api.github.com/users/Zaphat/events{/privacy}",
"received_events_url": "https://api.github.com/users/Zaphat/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I know you provided a link to the notebook but a minimal reproducer would still be welcomed here! π€ \r\n\r\ncc @LysandreJik this might be related to the latest changes? Do you want to have a look? ",
"Would be eager to hear your thoughts on it @Narsil ",
"Hellow\r\nI'm facing the same issue in Kaggle TPUs too, did anybody find a solution for it?\r\nThanks",
"Seems there's a bug in torch itself there since safetensors is only using public API.\r\n\r\n```\r\nimport torch_xla\r\nimport torch_xla.core.xla_model as xm\r\n\r\ndev = xm.xla_device()\r\n\r\nA = torch.zeros((2, 2), device=dev)\r\nA.untyped_storage() # <--- Crashes with Attempted to set the storage of a tensor on device \"cpu\" to a storage on different device \"xla:0\". This is no longer allowed; the devices must match.\r\n```\r\n\r\nThe CPU fix #27799 will work, but only by moving everything to CPU which isn't desirable imo.\r\n\r\nDo we have XLA/torch experts that could shed some light on how to detect a xla tensor specifically ? (I would implement the same in to cpu in safetensors if the tensor is on an XLA device).\r\n\r\n\r\nAlthough this could be easily brought up as a bug too to pytorch, no ? @LysandreJik \r\n\r\nMinimal repro : https://colab.research.google.com/drive/1O9EqLD-Vfp7PGGldNeJtRtq3oUpLnOJV?usp=sharing (Choose TPU runtime)",
"> Do we have XLA/torch experts that could shed some light on how to detect a xla tensor specifically ? \r\n\r\nCan check `if tensor.device.type == 'xla'` ",
"Also, https://github.com/huggingface/transformers/pull/27993 could someone help land this? This could resolve this issue.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"As the pr was merged, let's close this"
] | 1,700 | 1,704 | 1,704 | NONE | null | ### System Info
It keeps happening whenever I try to use TPU mode to fine-tune BERT model for sentiment analysis. Everything works fine in GPU mode. I even tried to downgrade/upgrade TensorFlow & safetensors, but it didn't work either. Can you give me any suggestion?
Link to that notebook: https://www.kaggle.com/code/phttrnnguyngia/final
trainer.save_model('final-result')
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
File /kaggle/working/env/safetensors/torch.py:13, in storage_ptr(tensor)
12 try:
---> 13 return tensor.untyped_storage().data_ptr()
14 except Exception:
15 # Fallback for torch==1.10
RuntimeError: Attempted to access the data pointer on an invalid python storage.
During handling of the above exception, another exception occurred:
RuntimeError Traceback (most recent call last)
Cell In[21], line 2
1 # save the model
----> 2 trainer.save_model('final-result')
File /kaggle/working/env/transformers/trainer.py:2804, in Trainer.save_model(self, output_dir, _internal_call)
2801 output_dir = self.args.output_dir
2803 if is_torch_tpu_available():
-> 2804 self._save_tpu(output_dir)
2805 elif is_sagemaker_mp_enabled():
2806 # Calling the state_dict needs to be done on the wrapped model and on all processes.
2807 os.makedirs(output_dir, exist_ok=True)
File /kaggle/working/env/transformers/trainer.py:2873, in Trainer._save_tpu(self, output_dir)
2871 xm.save(state_dict, os.path.join(output_dir, WEIGHTS_NAME))
2872 else:
-> 2873 self.model.save_pretrained(output_dir, is_main_process=self.args.should_save, save_function=xm.save)
2874 if self.tokenizer is not None and self.args.should_save:
2875 self.tokenizer.save_pretrained(output_dir)
File /kaggle/working/env/transformers/modeling_utils.py:2187, in PreTrainedModel.save_pretrained(self, save_directory, is_main_process, state_dict, save_function, push_to_hub, max_shard_size, safe_serialization, variant, token, save_peft_format, **kwargs)
2183 for shard_file, shard in shards.items():
2184 if safe_serialization:
2185 # At some point we will need to deal better with save_function (used for TPU and other distributed
2186 # joyfulness), but for now this enough.
-> 2187 safe_save_file(shard, os.path.join(save_directory, shard_file), metadata={"format": "pt"})
2188 else:
2189 save_function(shard, os.path.join(save_directory, shard_file))
File /kaggle/working/env/safetensors/torch.py:281, in save_file(tensors, filename, metadata)
250 def save_file(
251 tensors: Dict[str, torch.Tensor],
252 filename: Union[str, os.PathLike],
253 metadata: Optional[Dict[str, str]] = None,
254 ):
255 """
256 Saves a dictionary of tensors into raw bytes in safetensors format.
257
(...)
279 ```
280 """
--> 281 serialize_file(_flatten(tensors), filename, metadata=metadata)
File /kaggle/working/env/safetensors/torch.py:460, in _flatten(tensors)
453 if invalid_tensors:
454 raise ValueError(
455 f"You are trying to save a sparse tensors: `{invalid_tensors}` which this library does not support."
456 " You can make it a dense tensor before saving with `.to_dense()` but be aware this might"
457 " make a much larger file than needed."
458 )
--> 460 shared_pointers = _find_shared_tensors(tensors)
461 failing = []
462 for names in shared_pointers:
File /kaggle/working/env/safetensors/torch.py:72, in _find_shared_tensors(state_dict)
70 tensors = defaultdict(set)
71 for k, v in state_dict.items():
---> 72 if v.device != torch.device("meta") and storage_ptr(v) != 0 and storage_size(v) != 0:
73 # Need to add device as key because of multiple GPU.
74 tensors[(v.device, storage_ptr(v), storage_size(v))].add(k)
75 tensors = list(sorted(tensors.values()))
File /kaggle/working/env/safetensors/torch.py:17, in storage_ptr(tensor)
14 except Exception:
15 # Fallback for torch==1.10
16 try:
---> 17 return tensor.storage().data_ptr()
18 except NotImplementedError:
19 # Fallback for meta storage
20 return 0
File /kaggle/working/env/torch/storage.py:909, in TypedStorage.data_ptr(self)
907 def data_ptr(self):
908 _warn_typed_storage_removal()
--> 909 return self._data_ptr()
File /kaggle/working/env/torch/storage.py:913, in TypedStorage._data_ptr(self)
912 def _data_ptr(self):
--> 913 return self._untyped_storage.data_ptr()
RuntimeError: Attempted to access the data pointer on an invalid python storage.
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Run in Kaggle TPU, Environment: Always use latest environment. Input data is included in the notebook
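A minimal sketch of the failing path (assumed, not the exact notebook code; the model name and output path are illustrative):
```python
# Illustrative minimal repro: saving an XLA-resident model with safetensors
import torch_xla.core.xla_model as xm
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
model.to(xm.xla_device())
# save_pretrained defaults to safetensors, which needs tensor.untyped_storage().data_ptr()
model.save_pretrained("out")  # raises the "invalid python storage" RuntimeError on XLA tensors
# Possible workaround until this is fixed upstream: move the weights to CPU before saving
model.cpu().save_pretrained("out")
```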
### Expected behavior
Expected to save successfully like when using GPU. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27578/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27578/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27577 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27577/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27577/comments | https://api.github.com/repos/huggingface/transformers/issues/27577/events | https://github.com/huggingface/transformers/issues/27577 | 2,000,297,662 | I_kwDOCUB6oc53Oh6- | 27,577 | Co-Pilot Store in Huggingface | {
"login": "vikas94",
"id": 20750960,
"node_id": "MDQ6VXNlcjIwNzUwOTYw",
"avatar_url": "https://avatars.githubusercontent.com/u/20750960?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vikas94",
"html_url": "https://github.com/vikas94",
"followers_url": "https://api.github.com/users/vikas94/followers",
"following_url": "https://api.github.com/users/vikas94/following{/other_user}",
"gists_url": "https://api.github.com/users/vikas94/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vikas94/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vikas94/subscriptions",
"organizations_url": "https://api.github.com/users/vikas94/orgs",
"repos_url": "https://api.github.com/users/vikas94/repos",
"events_url": "https://api.github.com/users/vikas94/events{/privacy}",
"received_events_url": "https://api.github.com/users/vikas94/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | [
"Would love to see what the community can do! π€ "
] | 1,700 | 1,700 | null | NONE | null | ### Feature request
I think it's high time we have an open-source Co-Pilot store to compete with the GPT Store, where we can set up, train, and add our custom documents and tools to any open-source custom model, with an option to share it with the world or keep it private.
### Motivation
The motivation is to provide an alternative to the GPT Store, which by default lets you create only GPT-based models and kills off the competition.
There we don't even have an option to control the cost.
### Your contribution
I can help create the vision and roadmap for the feature, and maybe spend some time on development activities as well. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27577/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27577/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27576 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27576/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27576/comments | https://api.github.com/repos/huggingface/transformers/issues/27576/events | https://github.com/huggingface/transformers/pull/27576 | 2,000,272,459 | PR_kwDOCUB6oc5fz9GG | 27,576 | [WIP] Generate: implement prefilling | {
"login": "tom-p-reichel",
"id": 43631024,
"node_id": "MDQ6VXNlcjQzNjMxMDI0",
"avatar_url": "https://avatars.githubusercontent.com/u/43631024?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tom-p-reichel",
"html_url": "https://github.com/tom-p-reichel",
"followers_url": "https://api.github.com/users/tom-p-reichel/followers",
"following_url": "https://api.github.com/users/tom-p-reichel/following{/other_user}",
"gists_url": "https://api.github.com/users/tom-p-reichel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tom-p-reichel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tom-p-reichel/subscriptions",
"organizations_url": "https://api.github.com/users/tom-p-reichel/orgs",
"repos_url": "https://api.github.com/users/tom-p-reichel/repos",
"events_url": "https://api.github.com/users/tom-p-reichel/events{/privacy}",
"received_events_url": "https://api.github.com/users/tom-p-reichel/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Current issues:\r\n- I forgot to handle an edge case where 1 or 0 length `input_ids` are passed in, fixable by refusing to prefill those sequences.\r\n- Contrastive search broke because it works with past_key_values and expects to expand them itself.\r\n- By passing in caches, I think I have changed the output size of the hidden states or some other part of a ModelOutput, which breaks tests that compare entire ModelOutputs.\r\n- All encoder-decoder models broken-- temporarily resolved by just not prefilling for these models.\r\n\r\nMore work & doc reading soon.",
"@tom-p-reichel ping me when you're stuck or when it's ready for review π€ ",
"@gante Down to only 6 failing tests in test_torch!\r\n\r\nAs far as I can tell, all of the remaining failing tests here are due to the fact that prefilling currently changes the length of returned `past_key_values`, attentions, etc. returned from calls to `generate`-- the search functions don't know they're taking a prefilled input and don't make any considerations towards appending prefilled `past_key_values` or attentions to the output they generate. This seems messy to fix because we would have to edit the return values or possibly logic of multiple search algorithms.\r\n\r\nAlternatively, maybe prefilling should be opt-in through a keyword arg to generate and simply shouldn't be enabled for those tests?\r\n\r\nI'm a graduate student, so I am now facing final exams for a while, so I figure now is a good time to look at this and tell me if I'm on the right track. I could also use advice on addressing the last remaining failed tests.\r\n\r\nI also checked that as of now this PR still produces a speedup in the example from my original issue on this topic.\r\n\r\nThanks in advance!",
"Hey @tom-p-reichel! In general, the PR looks in a good direction.\r\n\r\nI'm not fully aware of the details of the problem -- I would have to dig into the code. I might be able to do it next week :)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,700 | 1,704 | 1,704 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #27449
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@gante
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27576/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27576/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27576",
"html_url": "https://github.com/huggingface/transformers/pull/27576",
"diff_url": "https://github.com/huggingface/transformers/pull/27576.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27576.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27575 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27575/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27575/comments | https://api.github.com/repos/huggingface/transformers/issues/27575/events | https://github.com/huggingface/transformers/issues/27575 | 2,000,120,586 | I_kwDOCUB6oc53N2sK | 27,575 | [docs] Quantization | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Sounds good to me π ",
"Thanks for the proposal @stevhliu ! That sounds great !"
] | 1,700 | 1,701 | 1,701 | MEMBER | null | I was going through the [Quantization](https://huggingface.co/docs/transformers/main/en/main_classes/quantization) docs and noticed there is a ton of cool content in here that I wouldn't have necessarily discovered if I wasn't reviewing another PR that was translating this page.
The current API page feels cluttered, making it a bit difficult to look up the API references (parameters, docstrings, etc.) of the different configurations. I think it'd be nicer and cleaner if we moved all of this content into its own separate doc in the Performance and scalability section and left the API page like this:
```md
# Quantization
add link to quantization guide here
## AWQ
[[autodoc]] AwqConfig
## AutoGPTQ
[[autodoc]] GPTQConfig
## bitsandbytes
[[autodoc]] BitsAndBytesConfig
```
This way, the quantization docs would also be more visible on their own and we can avoid blurring the lines between our guides and API docs.
Would love to hear what you think @younesbelkada @SunMarc @ArthurZucker ! π | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27575/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27575/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27574 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27574/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27574/comments | https://api.github.com/repos/huggingface/transformers/issues/27574/events | https://github.com/huggingface/transformers/pull/27574 | 2,000,021,396 | PR_kwDOCUB6oc5fzHGq | 27,574 | Adding leaky relu in dict ACT2CLS | {
"login": "rafaelpadilla",
"id": 31217453,
"node_id": "MDQ6VXNlcjMxMjE3NDUz",
"avatar_url": "https://avatars.githubusercontent.com/u/31217453?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rafaelpadilla",
"html_url": "https://github.com/rafaelpadilla",
"followers_url": "https://api.github.com/users/rafaelpadilla/followers",
"following_url": "https://api.github.com/users/rafaelpadilla/following{/other_user}",
"gists_url": "https://api.github.com/users/rafaelpadilla/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rafaelpadilla/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rafaelpadilla/subscriptions",
"organizations_url": "https://api.github.com/users/rafaelpadilla/orgs",
"repos_url": "https://api.github.com/users/rafaelpadilla/repos",
"events_url": "https://api.github.com/users/rafaelpadilla/events{/privacy}",
"received_events_url": "https://api.github.com/users/rafaelpadilla/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,700 | 1,700 | 1,700 | CONTRIBUTOR | null | # What does this PR do?
A simple change: it adds LeakyReLU to `ACT2CLS`.
I noticed that the official RT-DETR implementation supports "leaky_relu" ([here](https://github.com/lyuwenyu/RT-DETR/blob/3330eca679a7d7cce16bbb10509099174a2f40bf/rtdetr_pytorch/src/nn/backbone/common.py#L70)), which is not currently mapped in our `ACT2CLS`.
Edited: CI failing tests are not related to this PR.
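For reference, once the mapping includes it, the activation can be fetched like any other entry (illustrative):
```python
from transformers.activations import ACT2FN

act = ACT2FN["leaky_relu"]  # instantiates nn.LeakyReLU() via the ACT2CLS mapping
```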
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@amyeroberts | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27574/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27574/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27574",
"html_url": "https://github.com/huggingface/transformers/pull/27574",
"diff_url": "https://github.com/huggingface/transformers/pull/27574.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27574.patch",
"merged_at": 1700408521000
} |
https://api.github.com/repos/huggingface/transformers/issues/27573 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27573/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27573/comments | https://api.github.com/repos/huggingface/transformers/issues/27573/events | https://github.com/huggingface/transformers/issues/27573 | 1,999,917,988 | I_kwDOCUB6oc53NFOk | 27,573 | Support mix/complex FSDP wrap policy in Trainer | {
"login": "lchu-ibm",
"id": 20955448,
"node_id": "MDQ6VXNlcjIwOTU1NDQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/20955448?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lchu-ibm",
"html_url": "https://github.com/lchu-ibm",
"followers_url": "https://api.github.com/users/lchu-ibm/followers",
"following_url": "https://api.github.com/users/lchu-ibm/following{/other_user}",
"gists_url": "https://api.github.com/users/lchu-ibm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lchu-ibm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lchu-ibm/subscriptions",
"organizations_url": "https://api.github.com/users/lchu-ibm/orgs",
"repos_url": "https://api.github.com/users/lchu-ibm/repos",
"events_url": "https://api.github.com/users/lchu-ibm/events{/privacy}",
"received_events_url": "https://api.github.com/users/lchu-ibm/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Figured we can leverage `trainer.accelerator.state.fsdp_plugin.auto_wrap_policy()` to override the policy. closing this for now."
] | 1,700 | 1,700 | 1,700 | NONE | null | ### Feature request
Support complex FSDP wrapping policies in the Trainer API.
### Motivation
The current [implementation](https://github.com/huggingface/transformers/blob/2fc33ebead50383f7707b17f0e2a178d86347d10/src/transformers/trainer.py#L1387-L1404) seems to assume the wrap policy is either size-based or transformer-block-based, but there are many scenarios that need other kinds of policies or a mix of several.
One example is [llama-recipes](https://github.com/facebookresearch/llama-recipes/blob/cf678b9bf0af1c0e68e83b6c378e98125e0bc132/src/llama_recipes/utils/fsdp_utils.py#L4), which uses a combination of a lambda policy and a transformer wrap policy, so it is hard to achieve the same thing through the Trainer API; a sketch of such a mixed policy is shown below.
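For reference, a mixed policy like the llama-recipes one can be built from torch's wrap helpers and then assigned onto the Accelerate FSDP plugin the Trainer uses, as the comment above suggests (a sketch; it assumes `fsdp_plugin.auto_wrap_policy` is writable and relies on the private `_or_policy` helper):
```python
import functools

from torch.distributed.fsdp.wrap import _or_policy, lambda_auto_wrap_policy, transformer_auto_wrap_policy
from transformers.models.llama.modeling_llama import LlamaDecoderLayer

def lambda_fn(module):
    # e.g. wrap leaf modules whose weights require grad (how llama-recipes handles LoRA layers)
    return (
        len(list(module.named_children())) == 0
        and getattr(module, "weight", None) is not None
        and module.weight.requires_grad
    )

lambda_policy = functools.partial(lambda_auto_wrap_policy, lambda_fn=lambda_fn)
transformer_policy = functools.partial(transformer_auto_wrap_policy, transformer_layer_cls={LlamaDecoderLayer})
mixed_policy = functools.partial(_or_policy, policies=[lambda_policy, transformer_policy])

# After building the Trainer with FSDP enabled, override the policy Accelerate derived from its config
trainer.accelerator.state.fsdp_plugin.auto_wrap_policy = mixed_policy
```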
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27573/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27573/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27572 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27572/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27572/comments | https://api.github.com/repos/huggingface/transformers/issues/27572/events | https://github.com/huggingface/transformers/issues/27572 | 1,999,792,298 | I_kwDOCUB6oc53Mmiq | 27,572 | Oneformer throws exception for when training for instance segmentation | {
"login": "nickponline",
"id": 590151,
"node_id": "MDQ6VXNlcjU5MDE1MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/590151?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nickponline",
"html_url": "https://github.com/nickponline",
"followers_url": "https://api.github.com/users/nickponline/followers",
"following_url": "https://api.github.com/users/nickponline/following{/other_user}",
"gists_url": "https://api.github.com/users/nickponline/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nickponline/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nickponline/subscriptions",
"organizations_url": "https://api.github.com/users/nickponline/orgs",
"repos_url": "https://api.github.com/users/nickponline/repos",
"events_url": "https://api.github.com/users/nickponline/events{/privacy}",
"received_events_url": "https://api.github.com/users/nickponline/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
] | [
"@amyeroberts is the issue potentially: `preprocessor.image_processor.num_text = model.config.num_queries - model.config.text_encoder_n_ctx` line? Does that not work for instance segmentation?\r\n\r\nAdditionally here: https://github.com/NielsRogge/Transformers-Tutorials/issues/370",
"It's still an issue, forward pass for `instance segmentation` using Oneformer.",
"Hi @nickponline, thanks for raising this issue! \r\n\r\nIn the example provided, the error is occurring because none of the objects in the image correspond to a \"thing\" as defined in the [metadata](https://huggingface.co/datasets/shi-labs/oneformer_demo/blob/main/cityscapes_panoptic.json). \r\n\r\nSo, when preparing the inputs to the model, all of the masks are filtered out in [this check here](https://github.com/huggingface/transformers/blob/2272ab57a99bcac972b5252b87c31e24d0b25538/src/transformers/models/oneformer/image_processing_oneformer.py#L883C19-L883C19). The class_ids of the image being passed in don't correspond to the model's mapping. \r\n\r\nAlthough this behaviour is expected - it does highlight a general difficulty of using this model, and is an issue [that's been raised in the past](https://github.com/huggingface/transformers/issues/23116). We should be able to load in alternative (local or repo) metadata paths and load those in. I've opened a PR to address this - #28398 ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,700 | 1,707 | 1,707 | NONE | null | ### System Info
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.35.0.dev0
- Platform: macOS-13.4-arm64-arm-64bit
- Python version: 3.10.6
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.2
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0 (False)
- Tensorflow version (GPU?): 2.13.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.7.2 (cpu)
- Jax version: 0.4.14
- JaxLib version: 0.4.14
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@amyeroberts @NielsRogge @praeclarumjj3
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
[inputs.zip](https://github.com/huggingface/transformers/files/13396503/inputs.zip)
```python
import numpy as np
from transformers import AutoProcessor, AutoModelForUniversalSegmentation
id2label = {
0 : "background",
1 : "triangle",
2 : "circle",
3 : "rectangle",
}
preprocessor = AutoProcessor.from_pretrained("shi-labs/oneformer_cityscapes_swin_large", do_resize=True, do_normalize=True, size=dict(width=500, height=500))
model = AutoModelForUniversalSegmentation.from_pretrained("shi-labs/oneformer_cityscapes_swin_large", is_training=True, id2label=id2label, ignore_mismatched_sizes=True)
preprocessor.image_processor.num_text = model.config.num_queries - model.config.text_encoder_n_ctx
image = np.load("image.npy", allow_pickle=True)
instance_seg = np.load("instance_seg.npy", allow_pickle=True)
inst2class = {0: 0, 3: 1, 4: 1, 6: 1, 9: 1, 10: 1, 11: 1, 13: 1, 16: 1, 17: 1, 18: 1, 20: 1, 21: 1, 22: 1, 23: 1, 24: 1, 26: 1, 28: 1, 30: 1, 35: 1, 36: 1, 39: 1, 2: 2, 5: 2, 8: 2, 12: 2, 15: 2, 19: 2, 25: 2, 27: 2, 31: 2, 32: 2, 34: 2, 37: 2, 38: 2, 1: 3, 14: 3, 33: 3, 40: 3}
inputs = preprocessor(image, segmentation_maps=[instance_seg], instance_id_to_semantic_id=inst2class, task_inputs=["instance"], return_tensors="pt")
```
```python
inputs = preprocessor(image, segmentation_maps=[instance_seg], instance_id_to_semantic_id=inst2class, task_inputs=["instance"], return_tensors="pt")
File "/opt/anaconda3/envs/dev/lib/python3.10/site-packages/transformers/models/oneformer/processing_oneformer.py", line 119, in __call__
encoded_inputs = self.image_processor(images, task_inputs, segmentation_maps, **kwargs)
File "/opt/anaconda3/envs/dev/lib/python3.10/site-packages/transformers/models/oneformer/image_processing_oneformer.py", line 535, in __call__
return self.preprocess(images, task_inputs=task_inputs, segmentation_maps=segmentation_maps, **kwargs)
File "/opt/anaconda3/envs/dev/lib/python3.10/site-packages/transformers/models/oneformer/image_processing_oneformer.py", line 738, in preprocess
encoded_inputs = self.encode_inputs(
File "/opt/anaconda3/envs/dev/lib/python3.10/site-packages/transformers/models/oneformer/image_processing_oneformer.py", line 1051, in encode_inputs
masks = np.concatenate(masks, axis=0)
File "<__array_function__ internals>", line 180, in concatenate
```
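Based on the filtering behaviour described in the comments above (all masks are dropped when no class id maps to a "thing" in the model's metadata), the concatenation most likely fails on an empty list of masks. A minimal, self-contained illustration with dummy data (not the actual image-processor state):
```python
import numpy as np

masks = []  # every instance mask was dropped during the class-id -> metadata matching
try:
    np.concatenate(masks, axis=0)
except ValueError as err:
    print(err)  # ValueError: need at least one array to concatenate
```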
### Expected behavior
No exception. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27572/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27572/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27571 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27571/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27571/comments | https://api.github.com/repos/huggingface/transformers/issues/27571/events | https://github.com/huggingface/transformers/pull/27571 | 1,999,768,071 | PR_kwDOCUB6oc5fyPCM | 27,571 | Make using safetensors files automated. | {
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27571). All of your documentation changes will be reflected on that endpoint.",
"It's starting to look good! I've tried it with [Intel/neural-chat-7b-v3-1](https://huggingface.co/Intel/neural-chat-7b-v3-1), a 7b model, and it took ~3 minutes between when I sent the initial request and when it started downloading the safetensors checkpoint. The first minute was spent waiting for previous conversions to wrap up.\r\n\r\nIs there a possibility for us to speed this up even more/parallelize it?",
"@ArthurZucker can you take a look when you have a second?",
"Reviewing now",
"Thanks for the reviews, merging!"
] | 1,700 | 1,701 | 1,701 | CONTRIBUTOR | null | If `use_safetensors=True` is used, and it doesn't exist:
- Don't crash just yet
- Look up an open PR containing it (see the sketch after this list).
- If yes, use that instead
- If not, touch the space to convert, wait for the conversion to finish and the PR to be opened
- Use that new PR
- Profit.
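As a rough illustration of the lookup step above, here is a minimal sketch using `huggingface_hub` (a simplified stand-in, not the code added by this PR; the helper name is made up):
```python
from typing import Optional

from huggingface_hub import HfApi


def find_safetensors_pr(repo_id: str) -> Optional[str]:
    """Return a revision like 'refs/pr/5' if an open PR already ships .safetensors weights."""
    api = HfApi()
    for discussion in api.get_repo_discussions(repo_id):
        # Only open pull requests are interesting; plain discussions carry no files.
        if not (discussion.is_pull_request and discussion.status == "open"):
            continue
        files = api.list_repo_files(repo_id, revision=f"refs/pr/{discussion.num}")
        if any(f.endswith(".safetensors") for f in files):
            return f"refs/pr/{discussion.num}"
    return None
```
If no such PR exists, the flow described above would then trigger the conversion Space and wait for it to open one.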
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
--> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27571/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27571/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27571",
"html_url": "https://github.com/huggingface/transformers/pull/27571",
"diff_url": "https://github.com/huggingface/transformers/pull/27571.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27571.patch",
"merged_at": 1701442270000
} |
https://api.github.com/repos/huggingface/transformers/issues/27570 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27570/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27570/comments | https://api.github.com/repos/huggingface/transformers/issues/27570/events | https://github.com/huggingface/transformers/pull/27570 | 1,999,735,132 | PR_kwDOCUB6oc5fyHxV | 27,570 | Fix torch.fx import issue for torch 1.12 | {
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,700 | 1,700 | 1,700 | COLLABORATOR | null | # What does this PR do?
Directly imports `torch.fx` before calling `torch.fx.wrap`. On more recent PyTorch versions the submodule is already reachable after a bare `import torch`, but on torch 1.12 calling `torch.fx.wrap` without the explicit import breaks.
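For context, the pattern being fixed looks roughly like the sketch below (the wrapped helper is a placeholder, not a real `transformers` function):
```python
import torch
import torch.fx  # explicit import: on torch 1.12, `torch.fx` is not reliably reachable after a bare `import torch`


def _dummy_helper(x):
    # Placeholder for a module-level helper that symbolic tracing should leave un-traced.
    return x


torch.fx.wrap("_dummy_helper")  # safe on 1.12 once torch.fx has been imported explicitly
```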
Fixes #27534 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27570/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27570/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27570",
"html_url": "https://github.com/huggingface/transformers/pull/27570",
"diff_url": "https://github.com/huggingface/transformers/pull/27570.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27570.patch",
"merged_at": 1700518972000
} |
https://api.github.com/repos/huggingface/transformers/issues/27569 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27569/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27569/comments | https://api.github.com/repos/huggingface/transformers/issues/27569/events | https://github.com/huggingface/transformers/pull/27569 | 1,999,707,737 | PR_kwDOCUB6oc5fyBzH | 27,569 | Broken links fixed related to datasets docs | {
"login": "VpkPrasanna",
"id": 30804112,
"node_id": "MDQ6VXNlcjMwODA0MTEy",
"avatar_url": "https://avatars.githubusercontent.com/u/30804112?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VpkPrasanna",
"html_url": "https://github.com/VpkPrasanna",
"followers_url": "https://api.github.com/users/VpkPrasanna/followers",
"following_url": "https://api.github.com/users/VpkPrasanna/following{/other_user}",
"gists_url": "https://api.github.com/users/VpkPrasanna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VpkPrasanna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VpkPrasanna/subscriptions",
"organizations_url": "https://api.github.com/users/VpkPrasanna/orgs",
"repos_url": "https://api.github.com/users/VpkPrasanna/repos",
"events_url": "https://api.github.com/users/VpkPrasanna/events{/privacy}",
"received_events_url": "https://api.github.com/users/VpkPrasanna/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27569). All of your documentation changes will be reflected on that endpoint."
] | 1,700 | 1,700 | 1,700 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
Fixed all the broken links related to the datasets library docs issue.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@stevhliu and @MKhalusova
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27569/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27569/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27569",
"html_url": "https://github.com/huggingface/transformers/pull/27569",
"diff_url": "https://github.com/huggingface/transformers/pull/27569.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27569.patch",
"merged_at": 1700257450000
} |
https://api.github.com/repos/huggingface/transformers/issues/27568 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27568/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27568/comments | https://api.github.com/repos/huggingface/transformers/issues/27568/events | https://github.com/huggingface/transformers/pull/27568 | 1,999,695,654 | PR_kwDOCUB6oc5fx_JH | 27,568 | Allow `resume_from_checkpoint` to handle `auto_find_batch_size` | {
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27568). All of your documentation changes will be reflected on that endpoint.",
"@ArthurZucker agreed that it's a bit overkill. Would it be better to create a new file instead (something like `training_metadata.json`) instead that only gets made for now when doing something like `auto_find_batch_size` is enabled?",
"Why don't we just overwrite the arg given from the user? ",
"@ArthurZucker we still need to store it away somewhere when we do `resume_from_checkpoint`. The assumption is given a fresh run we don't want to have to run through the iteration loop again to find the right batch size if we've found it once during a prior call. It still needs to be saved somewhere outside on the file system",
"Ah okay, we don't know if the input batch was auto-found or not. Got it. Not sure we want to create a new file for this, fine with loading the state and if we need more meta-data we'll put them there as well I guess! "
] | 1,700 | 1,702 | 1,702 | CONTRIBUTOR | null | # What does this PR do?
This PR adds the training batch size to the `TrainerState`. We do this because the `TrainerState` can be loaded in on `resume_from_checkpoint`, so if a user has set `auto_find_batch_size` to `True`, we can keep the batch size that was found and load it back in if it was saved.
Fixes https://github.com/huggingface/transformers/issues/25956
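As a rough illustration of the mechanism (a standalone sketch, not the actual `Trainer`/`TrainerState` code; the `train_batch_size` field name is assumed here):
```python
import json
import os


def save_found_batch_size(checkpoint_dir: str, batch_size: int) -> None:
    # Persist the batch size that auto_find_batch_size settled on inside the checkpoint.
    path = os.path.join(checkpoint_dir, "trainer_state.json")
    state = {}
    if os.path.exists(path):
        with open(path) as f:
            state = json.load(f)
    state["train_batch_size"] = batch_size  # field name assumed for illustration
    with open(path, "w") as f:
        json.dump(state, f, indent=2)


def restore_found_batch_size(checkpoint_dir: str, default: int) -> int:
    # On resume, reuse the previously found batch size instead of re-running the search.
    path = os.path.join(checkpoint_dir, "trainer_state.json")
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f).get("train_batch_size", default)
    return default
```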
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@amyeroberts | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27568/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27568/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27568",
"html_url": "https://github.com/huggingface/transformers/pull/27568",
"diff_url": "https://github.com/huggingface/transformers/pull/27568.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27568.patch",
"merged_at": 1702054262000
} |
https://api.github.com/repos/huggingface/transformers/issues/27567 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27567/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27567/comments | https://api.github.com/repos/huggingface/transformers/issues/27567/events | https://github.com/huggingface/transformers/pull/27567 | 1,999,683,113 | PR_kwDOCUB6oc5fx8Y5 | 27,567 | Add patchtst | {
"login": "namctin",
"id": 8682412,
"node_id": "MDQ6VXNlcjg2ODI0MTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8682412?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/namctin",
"html_url": "https://github.com/namctin",
"followers_url": "https://api.github.com/users/namctin/followers",
"following_url": "https://api.github.com/users/namctin/following{/other_user}",
"gists_url": "https://api.github.com/users/namctin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/namctin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/namctin/subscriptions",
"organizations_url": "https://api.github.com/users/namctin/orgs",
"repos_url": "https://api.github.com/users/namctin/repos",
"events_url": "https://api.github.com/users/namctin/events{/privacy}",
"received_events_url": "https://api.github.com/users/namctin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,700 | 1,700 | 1,700 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27567/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27567/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27567",
"html_url": "https://github.com/huggingface/transformers/pull/27567",
"diff_url": "https://github.com/huggingface/transformers/pull/27567.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27567.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27566 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27566/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27566/comments | https://api.github.com/repos/huggingface/transformers/issues/27566/events | https://github.com/huggingface/transformers/issues/27566 | 1,999,638,147 | I_kwDOCUB6oc53MA6D | 27,566 | Tokenizer loading: This breaks quite a few things in a lot of places | {
"login": "DevasiaThomas",
"id": 14965729,
"node_id": "MDQ6VXNlcjE0OTY1NzI5",
"avatar_url": "https://avatars.githubusercontent.com/u/14965729?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DevasiaThomas",
"html_url": "https://github.com/DevasiaThomas",
"followers_url": "https://api.github.com/users/DevasiaThomas/followers",
"following_url": "https://api.github.com/users/DevasiaThomas/following{/other_user}",
"gists_url": "https://api.github.com/users/DevasiaThomas/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DevasiaThomas/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DevasiaThomas/subscriptions",
"organizations_url": "https://api.github.com/users/DevasiaThomas/orgs",
"repos_url": "https://api.github.com/users/DevasiaThomas/repos",
"events_url": "https://api.github.com/users/DevasiaThomas/events{/privacy}",
"received_events_url": "https://api.github.com/users/DevasiaThomas/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey! π€ Thanks for reporting this! \r\nIs there anyway you share the errors you were getting. \r\nthe goal behind `init_kwargs[key] = added_tokens_map.get(init_kwargs[key], init_kwargs[key])` is to make sure that the `special_tokens` passed as initi_kwargs are replace with their `AddedToken` version if it exists. \r\nBut this should not have broken things (AFAIK) and was tested quite a bit! \r\nWould make sure the version of transformers is `>=4.34.1` because I had to do a patch! \r\n\r\nVery core issue so I'll be answering quick! ",
"@ArthurZucker \r\nIt was shared earlier as a link to what I was using :)\r\n\r\nhere's the [issue](https://github.com/oobabooga/text-generation-webui/issues/4370) \r\n\r\nI faced the same issue, when trying to load the LLaMa2 model. The issue has others with similar issues on other models: ``uhashable type dict``\r\n\r\n\r\n\r\n> the goal behind `init_kwargs[key] = added_tokens_map.get(init_kwargs[key], init_kwargs[key])` is to make sure that the `special_tokens` passed as initi_kwargs are replace with their `AddedToken` version if it exists. But this should not have broken things (AFAIK) and was tested quite a bit! Would make sure the version of transformers is `>=4.34.1` because I had to do a patch!\r\n\r\nIf that was your intent - then all you need to do is what I did\r\n``init_kwargs[key] = added_tokens_map.get(key, init_kwargs[key]``.\r\n Since you want to use the values for the same ``keys`` in ``AddedToken`` if they exist (not their values). Unless you actually want to retrieve based on the ``values`` from ``AddedToken`` instead of ``keys``\r\nI am using the latest ``transformers``. ",
"Thanks, yeah I have a PR draft at #27099 I'll work on it in priority! Thanks for the detailed report",
"> Thanks, yeah I have a PR draft at #27099 I'll work on it in priority! Thanks for the detailed report\r\n\r\nThanks :)",
"PR is ready and should fix! π€ ",
"@ArthurZucker Still get `unhashable type: 'AddedToken'` error, when try `tokenizer = CLIPTokenizer.from_pretrained('openai/clip-vit-large-patch14', cache_dir=cache_dir)`",
"Yep I'll merge this to main soon the PR introduced one regression for a model",
"@ArthurZucker This problem does not seem to be completely resolved, when try\r\n`tokenizer = CLIPTokenizer.from_pretrained('openai/clip-vit-large-patch14', cache_dir=cache_dir)`\r\nThe error in tokenization_utils_base disappeared, but there are still errors in tokenization_utils.\r\n```\r\n File \"Stable_Diffusion_QCOM/stable_diffusion.py\", line 41, in <module>\r\n tokenizer = CLIPTokenizer.from_pretrained('openai/clip-vit-large-patch14', cache_dir=cache_dir)\r\n File \"/home/linyun/Huggingface_models/transformers/src/transformers/tokenization_utils_base.py\", line 2028, in from_pretrained\r\n return cls._from_pretrained(\r\n File \"/home/linyun/Huggingface_models/transformers/src/transformers/tokenization_utils_base.py\", line 2260, in _from_pretrained\r\n tokenizer = cls(*init_inputs, **init_kwargs)\r\n File \"/home/linyun/Huggingface_models/transformers/src/transformers/models/clip/tokenization_clip.py\", line 343, in __init__\r\n super().__init__(\r\n File \"/home/linyun/Huggingface_models/transformers/src/transformers/tokenization_utils.py\", line 368, in __init__\r\n [token for token in self.all_special_tokens_extended if token not in self._added_tokens_encoder],\r\n File \"/home/linyun/Huggingface_models/transformers/src/transformers/tokenization_utils.py\", line 368, in <listcomp>\r\n [token for token in self.all_special_tokens_extended if token not in self._added_tokens_encoder],\r\nTypeError: unhashable type: 'AddedToken'\r\n```",
"Make sure you are running this on the latest version, I can't reproduce this with:\r\n```python \r\nfrom transformers import CLIPTokenizer,AutoTokenizer\r\ntokenizer = CLIPTokenizer.from_pretrained('openai/clip-vit-large-patch14'\r\ntokenizer = AutoTokenizer.from_pretrained('openai/clip-vit-large-patch14')\r\n```",
"@ArthurZucker Thank you for your patience, here is my operation track, it works fine a few months ago.\r\n```\r\nroot@23c6d347b0c3:/home/linyun/Huggingface_models# git clone -b v4.36.2 https://github.com/huggingface/transformers\r\nCloning into 'transformers'...\r\nremote: Enumerating objects: 175620, done.\r\nremote: Counting objects: 100% (143/143), done.\r\nremote: Compressing objects: 100% (92/92), done.\r\nremote: Total 175620 (delta 69), reused 75 (delta 40), pack-reused 175477\r\nReceiving objects: 100% (175620/175620), 174.69 MiB | 6.00 MiB/s, done.\r\nResolving deltas: 100% (133029/133029), done.\r\nNote: switching to 'a7cab3c283312b8d4de5df3bbe719971e24f4281'.\r\n\r\nYou are in 'detached HEAD' state. You can look around, make experimental\r\nchanges and commit them, and you can discard any commits you make in this\r\nstate without impacting any branches by switching back to a branch.\r\n\r\nIf you want to create a new branch to retain commits you create, you may\r\ndo so (now or later) by using -c with the switch command. Example:\r\n\r\n git switch -c <new-branch-name>\r\n\r\nOr undo this operation with:\r\n\r\n git switch -\r\n\r\nTurn off this advice by setting config variable advice.detachedHead to false\r\n\r\nroot@23c6d347b0c3:/home/linyun/Huggingface_models# python\r\nPython 3.8.10 (default, Nov 22 2023, 10:22:35) \r\n[GCC 9.4.0] on linux\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> import sys\r\n>>> sys.path.append(\"transformers/src\")\r\n>>> from transformers import CLIPTokenizer\r\n>>> cache_dir = \"./_data_/cache/huggingface/diffusers\"\r\n>>> tokenizer = CLIPTokenizer.from_pretrained('openai/clip-vit-large-patch14', cache_dir=cache_dir)\r\ntokenizer_config.json: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 905/905 [00:00<00:00, 68.8kB/s]\r\nvocab.json: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 961k/961k [00:00<00:00, 1.02MB/s]\r\nmerges.txt: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 525k/525k [00:00<00:00, 994kB/s]\r\nspecial_tokens_map.json: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 389/389 [00:00<00:00, 190kB/s]\r\ntokenizer.json: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 2.22M/2.22M [00:00<00:00, 4.11MB/s]\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/linyun/Huggingface_models/transformers/src/transformers/tokenization_utils_base.py\", line 2028, in from_pretrained\r\n return cls._from_pretrained(\r\n File \"/home/linyun/Huggingface_models/transformers/src/transformers/tokenization_utils_base.py\", line 2260, in _from_pretrained\r\n tokenizer = cls(*init_inputs, **init_kwargs)\r\n File \"/home/linyun/Huggingface_models/transformers/src/transformers/models/clip/tokenization_clip.py\", line 343, in __init__\r\n super().__init__(\r\n File \"/home/linyun/Huggingface_models/transformers/src/transformers/tokenization_utils.py\", line 368, in __init__\r\n [token for token in self.all_special_tokens_extended if token not in self._added_tokens_encoder],\r\n File \"/home/linyun/Huggingface_models/transformers/src/transformers/tokenization_utils.py\", line 368, in <listcomp>\r\n [token for token in self.all_special_tokens_extended if token not in self._added_tokens_encoder],\r\nTypeError: unhashable type: 'AddedToken'\r\n>>> \r\n```\r\n",
"Could you push this tokenizer to the hub? This way I can have access to the tokenizer_config.json",
"@ArthurZucker Thanks a lot. After I changed `token` to `str(token)` in the following 2 places, the error disappeared.\r\n```\r\n def _update_trie(self, unique_no_split_tokens: Optional[str] = []):\r\n for token in self._added_tokens_decoder.values():\r\n if str(token) not in self.tokens_trie._tokens:\r\n self.tokens_trie.add(token.content)\r\n```\r\n```\r\n # 4. If some of the special tokens are not part of the vocab, we add them, at the end.\r\n # the order of addition is the same as self.SPECIAL_TOKENS_ATTRIBUTES following `tokenizers`\r\n self._add_tokens(\r\n [token for token in self.all_special_tokens_extended if str(token) not in self._added_tokens_encoder],\r\n special_tokens=True,\r\n )\r\n```",
"you change this in your custom `tokenization_clip.py` and are using remote code if I understood correctly? π€ ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,700 | 1,707 | 1,707 | NONE | null | https://github.com/huggingface/transformers/blame/638d49983f36af910934b38771b4e55c835c1774/src/transformers/tokenization_utils_base.py#L2253
I was trying to use [text-generation-webui](https://github.com/oobabooga/text-generation-webui/issues/4370) and all tokenizer loads, using quite a few loaders that depend on transformers libraries, are broken.
@ArthurZucker Were you just trying to override the special token values in ``init_kwargs``?
I changed the statement as below, locally, to get things to work:
``init_kwargs[key] = added_tokens_map.get(key, init_kwargs[key])``
I didn't make a PR because I'm not sure what the overall intent is. Prior to this, the flow was different.
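To make the failure mode concrete, here is a small self-contained illustration (the dictionary contents below are made up, not the real tokenizer state):
```python
added_tokens_map = {"eos_token": "</s>"}                            # assumed contents
init_kwargs = {"eos_token": {"content": "</s>", "lstrip": False}}   # value is a dict, not a string
key = "eos_token"

try:
    added_tokens_map.get(init_kwargs[key], init_kwargs[key])  # uses the *value* as a dict key
except TypeError as err:
    print(err)  # unhashable type: 'dict'

# Looking the entry up by the key itself sidesteps the problem:
init_kwargs[key] = added_tokens_map.get(key, init_kwargs[key])
```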
Appreciate your attention on this. Thanks :) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27566/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27566/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27565 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27565/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27565/comments | https://api.github.com/repos/huggingface/transformers/issues/27565/events | https://github.com/huggingface/transformers/issues/27565 | 1,999,495,513 | I_kwDOCUB6oc53LeFZ | 27,565 | T5 Model on seq2seq task encountered a run time error. | {
"login": "Leonezz",
"id": 33564074,
"node_id": "MDQ6VXNlcjMzNTY0MDc0",
"avatar_url": "https://avatars.githubusercontent.com/u/33564074?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Leonezz",
"html_url": "https://github.com/Leonezz",
"followers_url": "https://api.github.com/users/Leonezz/followers",
"following_url": "https://api.github.com/users/Leonezz/following{/other_user}",
"gists_url": "https://api.github.com/users/Leonezz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Leonezz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Leonezz/subscriptions",
"organizations_url": "https://api.github.com/users/Leonezz/orgs",
"repos_url": "https://api.github.com/users/Leonezz/repos",
"events_url": "https://api.github.com/users/Leonezz/events{/privacy}",
"received_events_url": "https://api.github.com/users/Leonezz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey! \r\nI am not entirely sure which code should run the evaluation, you did not provide a full reproducer. \r\nDoes the model use `self.prune_heads`? is `has_relative_attention_bias` properly set? \r\nCould you share a full reproducer with a usable checkpoint and make sure you are properly.\r\nI am also not really sure if `output_hidden_states = True` if we are not doing distillation.",
"I also encounter this issue.\r\n\r\nThe complete code to reproduce the issue is as follows (adapted from my actual code to use the imdb dataset).\r\nRemoving the line `\"decoder_attention_mask\": labels[\"attention_mask\"]` makes the code run without errors. Otherwise it breaks on evaluation with\r\n```\r\n File \"[...]/env/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1518, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"[...]/env/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1527, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"[...]/env/lib/python3.11/site-packages/transformers/models/t5/modeling_t5.py\", line 560, in forward\r\n scores += position_bias_masked\r\nRuntimeError: output with shape [8, 12, 1, 1] doesn't match the broadcast shape [8, 12, 1, 8]\r\n```\r\n\r\n```python\r\n#!/usr/bin/env python3\r\n\r\nimport datasets\r\nimport transformers\r\n\r\nimport evaluate\r\n\r\nmetrics = evaluate.combine([\"f1\", \"precision\", \"recall\"])\r\n\r\nmodel_name = \"t5-base\"\r\nmodel = transformers.T5ForConditionalGeneration.from_pretrained( model_name)\r\ntokenizer = transformers.AutoTokenizer.from_pretrained(model_name)\r\n\r\nid2label = {0: \"NEGATIVE\", 1: \"POSITIVE\"}\r\nlabel2id = {\"NEGATIVE\": 0, \"POSITIVE\": 1}\r\n\r\ndef preprocess(examples):\r\n input_encodings = tokenizer(examples[\"text\"], max_length=128, padding=\"max_length\", truncation=True)\r\n labels = [id2label[i] for i in examples[\"label\"]]\r\n labels = tokenizer(labels, max_length=8, padding=\"max_length\")\r\n encodings = {\r\n \"input_ids\": input_encodings[\"input_ids\"],\r\n \"attention_mask\": input_encodings[\"attention_mask\"],\r\n \"labels\": labels[\"input_ids\"],\r\n \"decoder_attention_mask\": labels[\"attention_mask\"]\r\n }\r\n\r\n return encodings\r\n\r\ndef compute_metrics(self, pred):\r\n pred_str = tokenizer.batch_decode(pred.predictions, skip_special_tokens=True)\r\n pred.label_ids[pred.label_ids == -100] = tokenizer.pad_token_id\r\n label_str = tokenizer.batch_decode(pred.label_ids, skip_special_tokens=True)\r\n preds = list(map(str.strip, pred_str))\r\n preds = [label2id.get(l, -1) for l in preds]\r\n true_labels = list(map(str.strip, label_str))\r\n true_labels = [label_to_int[l] for l in true_labels]\r\n\r\n return metrics.compute(references=true_labels, predictions=preds,\r\n average=\"macro\")\r\n\r\nimdb = datasets.load_dataset(\"imdb\")\r\nfor key in [\"train\", \"test\"]:\r\n imdb[key] = imdb[key].map(preprocess, batched=True, load_from_cache_file=False)\r\n imdb[key] = imdb[key].remove_columns([\"label\"])\r\n\r\noptimizer = transformers.optimization.Adafactor(model.parameters(),\r\n scale_parameter=False, relative_step=False, lr=0.0001)\r\nlr_scheduler = transformers.optimization.AdafactorSchedule(optimizer)\r\n\r\ndata_collator = transformers.DataCollatorForSeq2Seq(tokenizer=tokenizer)\r\ntraining_args = transformers.Seq2SeqTrainingArguments(\r\n output_dir=\"t5-base-fine-tuning\",\r\n per_device_train_batch_size=8,\r\n num_train_epochs=5,\r\n evaluation_strategy=\"steps\",\r\n eval_steps=500,\r\n save_strategy=\"steps\",\r\n save_total_limit=3,\r\n load_best_model_at_end=True,\r\n predict_with_generate=True,\r\n fp16=True,\r\n)\r\n\r\ntrainer = transformers.Seq2SeqTrainer(\r\n model=model,\r\n optimizers=(optimizer, lr_scheduler),\r\n args=training_args,\r\n train_dataset=imdb[\"train\"],\r\n eval_dataset=imdb[\"test\"],\r\n 
tokenizer=tokenizer,\r\n data_collator=data_collator,\r\n compute_metrics=compute_metrics,\r\n)\r\nprint(trainer.train())\r\n```\r\n\r\n\r\n## System info\r\n```\r\n$ pip freeze | grep -E \"(transformers|torch|datasets|evaluate)\"\r\ndatasets==2.15.0\r\nevaluate==0.4.1\r\ntorch==2.1.0\r\ntransformers==4.35.2\r\n$ python --version\r\nPython 3.11.5\r\n$ nvidia-smi \r\nWed Dec 6 13:43:45 2023 \r\n+---------------------------------------------------------------------------------------+\r\n| NVIDIA-SMI 545.23.08 Driver Version: 545.23.08 CUDA Version: 12.3 |\r\n|-----------------------------------------+----------------------+----------------------+\r\n| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |\r\n| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |\r\n| | | MIG M. |\r\n|=========================================+======================+======================|\r\n| 0 Tesla T4 On | 00000001:00:00.0 Off | 0 |\r\n| N/A 28C P8 9W / 70W | 2MiB / 15360MiB | 0% Default |\r\n| | | N/A |\r\n+-----------------------------------------+----------------------+----------------------+\r\n \r\n+---------------------------------------------------------------------------------------+\r\n| Processes: |\r\n| GPU GI CI PID Type Process name GPU Memory |\r\n| ID ID Usage |\r\n|=======================================================================================|\r\n| No running processes found |\r\n+---------------------------------------------------------------------------------------+\r\n```",
"I think this issue was fixed on main, would you mind trying with the latest version? π€ ",
"The latest transformers version has the same behaviour for me.\r\n```\r\n$ pip freeze | grep transformers\r\ntransformers @ git+https://github.com/huggingface/transformers@0410a29a2d5c798b2c0c1ca28398e0ddcf3384f2\r\n```",
"Ah sorry but if you don't pass the decoder input ids they are initialized from the labels, thus the mask from the ids is not of the same shape",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,700 | 1,704 | 1,704 | NONE | null | ### System Info
- `transformers` version: 4.32.1
- Platform: Linux-5.15.90.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
- Python version: 3.11.5
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.4.0
- Accelerate version: 0.24.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: True
- Using distributed or parallel set-up in script?: False
### Who can help?
@ArthurZucker and @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Steps to reproduce the issue:
```python
training_args = Seq2SeqTrainingArguments(
output_dir="./output",
evaluation_strategy="steps",
eval_steps=50,
predict_with_generate=True,
learning_rate=5e-4,
lr_scheduler_type="inverse_sqrt",
warmup_ratio=0.1,
push_to_hub=False,
per_device_train_batch_size=1,
per_device_eval_batch_size=4,
num_train_epochs=10,
logging_strategy="steps",
logging_steps=50,
logging_first_step=True,
seed=42,
bf16=True,
generation_max_length=1024,
generation_num_beams=5,
)
model = T5ForConditionalGeneration.from_pretrained(
PRETRAINED_MODEL,
cache_dir=CACHE_DIR,
output_hidden_states=True,
)
trainer = Seq2SeqTrainer(
model=model,
args=training_args,
compute_metrics=calulate_metrics,
train_dataset=encoding_dataset["train"],
eval_dataset=encoding_dataset["validation"],
data_collator=data_collator
)
trainer.evaluate()
```
The above code runs a T5 model on a seq2seq task, but the evaluation reports a run time error:
`RuntimeError: output with shape [20, 8, 1, 1] doesn't match the broadcast shape [20, 8, 1, 1024]`
at line 561 of `modeling_t5.py`: `scores += position_bias_masked`.
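For reference, the shape clash can be reproduced in isolation with dummy tensors (shapes taken from the traceback above; these are not real model activations):
```python
import torch

scores = torch.zeros(20, 8, 1, 1)                    # (batch, heads, query_len, key_len) during generation
position_bias_masked = torch.zeros(20, 8, 1, 1024)   # bias built from the fixed-length decoder_attention_mask

try:
    scores += position_bias_masked  # in-place add: the left-hand shape cannot be broadcast up to 1024
except RuntimeError as err:
    print(err)
```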
### Expected behavior
The code should run evaluation on the given dataset. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27565/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27565/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27564 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27564/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27564/comments | https://api.github.com/repos/huggingface/transformers/issues/27564/events | https://github.com/huggingface/transformers/pull/27564 | 1,999,446,946 | PR_kwDOCUB6oc5fxJmw | 27,564 | Harmonize HF environment variables + other cleaning | {
"login": "Wauplin",
"id": 11801849,
"node_id": "MDQ6VXNlcjExODAxODQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/11801849?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Wauplin",
"html_url": "https://github.com/Wauplin",
"followers_url": "https://api.github.com/users/Wauplin/followers",
"following_url": "https://api.github.com/users/Wauplin/following{/other_user}",
"gists_url": "https://api.github.com/users/Wauplin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Wauplin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Wauplin/subscriptions",
"organizations_url": "https://api.github.com/users/Wauplin/orgs",
"repos_url": "https://api.github.com/users/Wauplin/repos",
"events_url": "https://api.github.com/users/Wauplin/events{/privacy}",
"received_events_url": "https://api.github.com/users/Wauplin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thank you for the explanation. Just one more nit question, in https://github.com/huggingface/huggingface_hub/pull/1786, there is a table of the re-mapped names, like `HUGGINGFACE_HUB_CACHE` -> `HF_CACHE`. I am wondering why we still have the usage of `constants.HUGGINGFACE_HUB_CACHE`.",
"Thanks for the reviews @LysandreJik and @ydshieh! \r\n\r\nI addressed 2 points:\r\n- do not use `use_auth_token` in deprecated code (see [here](https://github.com/huggingface/transformers/pull/27564#discussion_r1398999241))\r\n- use `huggingface_hub.constants.HF_HUB_CACHE` instead of `huggingface_hub.constants.HUGGINGFACE_HUB_CACHE` (see [here](https://github.com/huggingface/transformers/pull/27564#issuecomment-1819011254))\r\n\r\nAbout using `from huggingface_hub.utils._deprecation import _deprecate_method` I would be keen to keep it as it is for now if that's ok with you @LysandreJik. It's not ideal but still ok I think. In next release of `hfh` I'll make it more official and in any case `_deprecate_method` will be used only in the next 3 versions of `transformers` (so EoL quite soon :) ).\r\n\r\nFinally, regarding removing `get_file_from_repo` in favor of a unified cached_file` let's do it as a separate PR as suggested by @ydshieh. It is not really related to env variable harmonization.\r\n\r\n---\r\n\r\n@LysandreJik @ydshieh could you re-review the last changes + message above and approve the PR? I think we are good to merge the current version if that is ok with you. Thanks in advance!",
"_The documentation is not available anymore as the PR was closed or merged._",
"I'll review tomorrow! Thanks for this refactor π ",
"Yay! 3.5 approvals, that's enough! Thanks everyone :hugs: "
] | 1,700 | 1,700 | 1,700 | CONTRIBUTOR | null | This PR aims at harmonizing environment variables in the HF ecosystem (following up on https://github.com/huggingface/huggingface_hub/pull/1786). I also took the time to review some of the integration of `huggingface_hub` into `transformers` (trying to break nothing). The goal is simply to have less duplication in the codebase. The PR is not ready to be merged as is (see comments) but can be reviewed to discuss the few points. Feedback is very welcome @LysandreJik @amyeroberts @ArthurZucker @ydshieh as I'm necessarily missing some context sometimes.
**List of changes:**
- `HF_HOME` is the preferred way to set a custom HF path. The second-best solution is `HF_HUB_CACHE` (`HF_HUB_CACHE` only covers model/dataset caching, while `HF_HOME` also contains the token and the cached modules used when `trust_remote_code` is enabled). Therefore:
- using `PYTORCH_PRETRAINED_BERT_CACHE` is deprecated (v5) but still respected
- using `PYTORCH_TRANSFORMERS_CACHE` is deprecated (v5) but still respected
- using `TRANSFORMERS_CACHE` is deprecated (v5) but still respected
- `DISABLE_TELEMETRY` is deprecated in favor of [`HF_HUB_DISABLE_TELEMETRY`](https://huggingface.co/docs/huggingface_hub/package_reference/environment_variables#hfhubdisabletelemetry)
- added a deprecation warning around `get_cached_models` => looks like a legacy unused method (and not compatible with new cache system)
- removed `try_to_load_from_cache` (moved to `huggingface_hub`)
- **unsure**: _I think_ we should harmonize between `get_file_from_repo` and `cached_file` to keep only one of them. The first one raises if the file doesn't exist while the second returns None in such a case. I think having a single method and a public `raise_on_error` argument should be enough (but open to discussion). Also both methods have the exact same docstring which is misleading.
- use `huggingface_hub.send_telemetry` to send telemetry data. The result is exactly the same but the HTTP call is made in a separate thread (meaning better UX as the user is not blocked)
- (nit) in `download_url` (already deprecated): it's safer to close the file | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27564/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27564/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27564",
"html_url": "https://github.com/huggingface/transformers/pull/27564",
"diff_url": "https://github.com/huggingface/transformers/pull/27564.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27564.patch",
"merged_at": 1700588187000
} |
https://api.github.com/repos/huggingface/transformers/issues/27563 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27563/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27563/comments | https://api.github.com/repos/huggingface/transformers/issues/27563/events | https://github.com/huggingface/transformers/pull/27563 | 1,999,231,248 | PR_kwDOCUB6oc5fwbmt | 27,563 | [WIP] Testing safetensors==0.4.1rc1 | {
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,700 | 1,700 | 1,700 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
--> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27563/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27563/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27563",
"html_url": "https://github.com/huggingface/transformers/pull/27563",
"diff_url": "https://github.com/huggingface/transformers/pull/27563.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27563.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27562 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27562/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27562/comments | https://api.github.com/repos/huggingface/transformers/issues/27562/events | https://github.com/huggingface/transformers/pull/27562 | 1,999,202,624 | PR_kwDOCUB6oc5fwVQE | 27,562 | Translate configuration.md to chinese | {
"login": "statelesshz",
"id": 28150734,
"node_id": "MDQ6VXNlcjI4MTUwNzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/28150734?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/statelesshz",
"html_url": "https://github.com/statelesshz",
"followers_url": "https://api.github.com/users/statelesshz/followers",
"following_url": "https://api.github.com/users/statelesshz/following{/other_user}",
"gists_url": "https://api.github.com/users/statelesshz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/statelesshz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/statelesshz/subscriptions",
"organizations_url": "https://api.github.com/users/statelesshz/orgs",
"repos_url": "https://api.github.com/users/statelesshz/repos",
"events_url": "https://api.github.com/users/statelesshz/events{/privacy}",
"received_events_url": "https://api.github.com/users/statelesshz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,700 | 1,703 | 1,700 | CONTRIBUTOR | null | # What does this PR do?
Part of https://github.com/huggingface/transformers/issues/26803
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
cc @stevhliu
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27562/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27562/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27562",
"html_url": "https://github.com/huggingface/transformers/pull/27562",
"diff_url": "https://github.com/huggingface/transformers/pull/27562.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27562.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27561 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27561/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27561/comments | https://api.github.com/repos/huggingface/transformers/issues/27561/events | https://github.com/huggingface/transformers/pull/27561 | 1,999,171,234 | PR_kwDOCUB6oc5fwOe8 | 27,561 | Fix tracing dinov2 | {
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@ArthurZucker - nice idea, I'll add it! "
] | 1,700 | 1,700 | 1,700 | COLLABORATOR | null | # What does this PR do?
Tracing currently fails for DinoV2 because of an issue when calling torch's `torch._C._nn._upsample_bicubic` function through [nn.functional.interpolate](https://github.com/huggingface/transformers/blob/5330b83bc5637b8e7eafe095c22ef19e21baff2d/src/transformers/models/dinov2/modeling_dinov2.py#L106C1-L106C1). The call breaks if the `scale_factor` passed in is `(tensor(float), tensor(float))`, so we must convert it to a tuple of plain floats `(float, float)`, as per [this PR](https://github.com/facebookresearch/dinov2/pull/247), to enable tracing.
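For illustration, a minimal sketch of that cast (the function name, argument names, and tensor shapes here are placeholders, not the exact `modeling_dinov2.py` code):
```py
# Sketch only: cast the tensor-valued scale factors to plain Python floats so that
# torch.jit.trace never sees a (tensor(float), tensor(float)) scale_factor.
import math

import torch
import torch.nn as nn


def interpolate_pos_encoding_sketch(patch_pos_embed: torch.Tensor, height, width, num_positions: int):
    # patch_pos_embed is assumed to be a 4D (batch, dim, grid_h, grid_w) tensor.
    scale_h = float(height / math.sqrt(num_positions))  # tensor -> plain float
    scale_w = float(width / math.sqrt(num_positions))   # tensor -> plain float
    return nn.functional.interpolate(
        patch_pos_embed,
        scale_factor=(scale_h, scale_w),  # tuple of floats keeps the trace consistent
        mode="bicubic",
        align_corners=False,
    )
```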
I have run the slow tracing tests to make sure everything works.
```
tests/models/dinov2/test_modeling_dinov2.py::Dinov2ModelTest::test_torchscript_simple <- tests/test_modeling_common.py PASSED [ 33%]
tests/models/dinov2/test_modeling_dinov2.py::Dinov2ModelTest::test_torchscript_output_hidden_state <- tests/test_modeling_common.py PASSED [ 66%]
tests/models/dinov2/test_modeling_dinov2.py::Dinov2ModelTest::test_torchscript_output_attentions <- tests/test_modeling_common.py PASSED [100%]
```
The following now works:
```py
import torch
from transformers import AutoImageProcessor, AutoModel
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
processor = AutoImageProcessor.from_pretrained('facebook/dinov2-base')
model = AutoModel.from_pretrained('facebook/dinov2-base')
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs[0] #.last_hidden_state
# We have to force return_dict=False for tracing
model.config.return_dict = False
with torch.no_grad():
traced_model = torch.jit.trace(model, [inputs.pixel_values])
traced_outputs = traced_model(inputs.pixel_values)
print((last_hidden_states - traced_outputs[0]).abs().max())
```
Note: although the model outputs are close, they still have a significant absolute difference on the order of ~1e-4.
```
/Users/amyroberts/code/transformers/src/transformers/models/dinov2/modeling_dinov2.py:162: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if num_channels != self.num_channels:
/Users/amyroberts/code/transformers/src/transformers/models/dinov2/modeling_dinov2.py:94: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if num_patches == num_positions and height == width:
/Users/amyroberts/code/transformers/src/transformers/models/dinov2/modeling_dinov2.py:104: TracerWarning: Converting a tensor to a Python float might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
patch_pos_embed = patch_pos_embed.reshape(1, int(math.sqrt(num_positions)), int(math.sqrt(num_positions)), dim)
/Users/amyroberts/code/transformers/src/transformers/models/dinov2/modeling_dinov2.py:108: TracerWarning: Converting a tensor to a Python float might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
scale_factor=(float(height / math.sqrt(num_positions)), float(width / math.sqrt(num_positions))),
/Users/amyroberts/code/transformers/src/transformers/models/dinov2/modeling_dinov2.py:112: TracerWarning: Converting a tensor to a Python integer might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if int(height) != patch_pos_embed.shape[-2] or int(width) != patch_pos_embed.shape[-1]:
/Users/amyroberts/code/transformers/src/transformers/models/dinov2/modeling_dinov2.py:112: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if int(height) != patch_pos_embed.shape[-2] or int(width) != patch_pos_embed.shape[-1]:
/Users/amyroberts/opt/miniconda3/envs/ml/lib/python3.10/site-packages/torch/jit/_trace.py:1093: TracerWarning: Output nr 1. of the traced function does not match the corresponding output of the Python function. Detailed error:
Tensor-likes are not close!
Mismatched elements: 1693 / 197376 (0.9%)
Greatest absolute difference: 0.00012087821960449219 at index (0, 46, 415) (up to 1e-05 allowed)
Greatest relative difference: 0.4337851929092805 at index (0, 206, 249) (up to 1e-05 allowed)
_check_trace(
/Users/amyroberts/opt/miniconda3/envs/ml/lib/python3.10/site-packages/torch/jit/_trace.py:1093: TracerWarning: Output nr 2. of the traced function does not match the corresponding output of the Python function. Detailed error:
Tensor-likes are not close!
Mismatched elements: 5 / 768 (0.7%)
Greatest absolute difference: 1.6689300537109375e-05 at index (0, 688) (up to 1e-05 allowed)
Greatest relative difference: 0.0002756976223801738 at index (0, 6) (up to 1e-05 allowed)
_check_trace(
tensor(0.0001, grad_fn=<MaxBackward1>)
```
Fixes #27537
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests? No - but tests added
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27561/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27561/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27561",
"html_url": "https://github.com/huggingface/transformers/pull/27561",
"diff_url": "https://github.com/huggingface/transformers/pull/27561.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27561.patch",
"merged_at": 1700576918000
} |
https://api.github.com/repos/huggingface/transformers/issues/27560 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27560/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27560/comments | https://api.github.com/repos/huggingface/transformers/issues/27560/events | https://github.com/huggingface/transformers/pull/27560 | 1,998,982,277 | PR_kwDOCUB6oc5fvkhY | 27,560 | fixed broken link | {
"login": "VpkPrasanna",
"id": 30804112,
"node_id": "MDQ6VXNlcjMwODA0MTEy",
"avatar_url": "https://avatars.githubusercontent.com/u/30804112?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VpkPrasanna",
"html_url": "https://github.com/VpkPrasanna",
"followers_url": "https://api.github.com/users/VpkPrasanna/followers",
"following_url": "https://api.github.com/users/VpkPrasanna/following{/other_user}",
"gists_url": "https://api.github.com/users/VpkPrasanna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VpkPrasanna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VpkPrasanna/subscriptions",
"organizations_url": "https://api.github.com/users/VpkPrasanna/orgs",
"repos_url": "https://api.github.com/users/VpkPrasanna/repos",
"events_url": "https://api.github.com/users/VpkPrasanna/events{/privacy}",
"received_events_url": "https://api.github.com/users/VpkPrasanna/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,700 | 1,700 | 1,700 | CONTRIBUTOR | null | # What does this PR do?
Fixed the broken link to the `flatten` method in the dataset processing docs
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)

The link returns a 404 because the URL wrongly ends with `.html`.

## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@stevhliu and @MKhalusova
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27560/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27560/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27560",
"html_url": "https://github.com/huggingface/transformers/pull/27560",
"diff_url": "https://github.com/huggingface/transformers/pull/27560.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27560.patch",
"merged_at": 1700238042000
} |
https://api.github.com/repos/huggingface/transformers/issues/27559 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27559/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27559/comments | https://api.github.com/repos/huggingface/transformers/issues/27559/events | https://github.com/huggingface/transformers/issues/27559 | 1,998,897,540 | I_kwDOCUB6oc53JMGE | 27,559 | Visualbert VQA model inference lower accuracy in validation | {
"login": "guanhdrmq",
"id": 81207745,
"node_id": "MDQ6VXNlcjgxMjA3NzQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/81207745?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/guanhdrmq",
"html_url": "https://github.com/guanhdrmq",
"followers_url": "https://api.github.com/users/guanhdrmq/followers",
"following_url": "https://api.github.com/users/guanhdrmq/following{/other_user}",
"gists_url": "https://api.github.com/users/guanhdrmq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/guanhdrmq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/guanhdrmq/subscriptions",
"organizations_url": "https://api.github.com/users/guanhdrmq/orgs",
"repos_url": "https://api.github.com/users/guanhdrmq/repos",
"events_url": "https://api.github.com/users/guanhdrmq/events{/privacy}",
"received_events_url": "https://api.github.com/users/guanhdrmq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @guanhdrmq, thanks for raising an issue! \r\n\r\nThis is a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.",
"Hi @amyeroberts Thanks for letting me know. I reckon the probelem already posts in hugginface.\r\nJust share another link alreayd talk about it: https://github.com/huggingface/transformers/issues/17360",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,700 | 1,703 | 1,703 | NONE | null | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27559/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27559/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27558 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27558/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27558/comments | https://api.github.com/repos/huggingface/transformers/issues/27558/events | https://github.com/huggingface/transformers/pull/27558 | 1,998,837,109 | PR_kwDOCUB6oc5fvD0N | 27,558 | Generate: update compute transition scores doctest | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,700 | 1,700 | 1,700 | MEMBER | null | # What does this PR do?
#27351 corrected a detail in the beam score computation: when applying the length penalty (which is active by default) to the generated response, the prompt length should not be included in the penalty computation.
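Schematically, the corrected score only counts the newly generated tokens in the length-penalty denominator (a simplified sketch, not the actual beam-search code):
```python
# Simplified sketch: the prompt tokens are excluded from the length used for the penalty.
def beam_score(sum_logprobs: float, total_len: int, prompt_len: int, length_penalty: float = 1.0) -> float:
    generated_len = total_len - prompt_len  # only the generated response counts
    return sum_logprobs / (generated_len ** length_penalty)
```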
This PR corrects the doctest accordingly. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27558/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27558/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27558",
"html_url": "https://github.com/huggingface/transformers/pull/27558",
"diff_url": "https://github.com/huggingface/transformers/pull/27558.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27558.patch",
"merged_at": 1700220190000
} |
https://api.github.com/repos/huggingface/transformers/issues/27557 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27557/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27557/comments | https://api.github.com/repos/huggingface/transformers/issues/27557/events | https://github.com/huggingface/transformers/pull/27557 | 1,998,781,732 | PR_kwDOCUB6oc5fu3v1 | 27,557 | Context Free Grammar Constrained Decoding (ebnf interface, compatible with llama-cpp) | {
"login": "Saibo-creator",
"id": 53392976,
"node_id": "MDQ6VXNlcjUzMzkyOTc2",
"avatar_url": "https://avatars.githubusercontent.com/u/53392976?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Saibo-creator",
"html_url": "https://github.com/Saibo-creator",
"followers_url": "https://api.github.com/users/Saibo-creator/followers",
"following_url": "https://api.github.com/users/Saibo-creator/following{/other_user}",
"gists_url": "https://api.github.com/users/Saibo-creator/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Saibo-creator/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Saibo-creator/subscriptions",
"organizations_url": "https://api.github.com/users/Saibo-creator/orgs",
"repos_url": "https://api.github.com/users/Saibo-creator/repos",
"events_url": "https://api.github.com/users/Saibo-creator/events{/privacy}",
"received_events_url": "https://api.github.com/users/Saibo-creator/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"I think it is a great idea to be compatible with llama.cpp! ",
"Hey @Saibo-creator,\r\n\r\nI tried running `grammar_utils.py` on `json.gbnf`, but I get the following error:\r\n\r\n```bash\r\nTraceback (most recent call last):\r\n File \"/home/user/grammar.py\", line 657, in <module>\r\n state = parse_ebnf(input_text)\r\n File \"/home/user/grammar.py\", line 249, in parse_ebnf\r\n grammar_repr = parse_rule(state, grammar_repr)\r\n File \"/home/user/grammar.py\", line 231, in parse_rule\r\n pos = parse_alternates(state, pos, name, rule_id, False)\r\n File \"/home/user/grammar.py\", line 212, in parse_alternates\r\n while pos[0] == \"|\":\r\nIndexError: string index out of range\r\n```",
"Hello @abhinavkulkarni ,\r\n This is probably because the missing new line at the end of the grammar. \r\n \r\nTry \r\n```\r\nroot ::= object\r\n\r\nobject ::= \"{\" ws ( string \":\" ws value (\",\" ws string \":\" ws value)* )? \"}\" ws\r\n\r\nvalue ::= object | array | string | number | (\"true\" | \"false\" | \"null\") ws\r\n\r\narray ::= \"[\" ws ( value (\",\" ws value)* )? \"]\" ws\r\n\r\nstring ::= \"\\\"\" ( [a-zA-Z0-9] )* \"\\\"\" ws\r\n\r\nnumber ::= (\"-\"? ([0-9] | [1-9] [0-9]*)) (\".\" [0-9]+)? ([eE] [-+]? [0-9]+)? ws\r\n\r\n\r\nws ::= ([ \\t\\n] ws)?\r\n\r\n```\r\n\r\ninstead of \r\n\r\n```\r\nroot ::= object\r\n\r\nobject ::= \"{\" ws ( string \":\" ws value (\",\" ws string \":\" ws value)* )? \"}\" ws\r\n\r\nvalue ::= object | array | string | number | (\"true\" | \"false\" | \"null\") ws\r\n\r\narray ::= \"[\" ws ( value (\",\" ws value)* )? \"]\" ws\r\n\r\nstring ::= \"\\\"\" ( [a-zA-Z0-9] )* \"\\\"\" ws\r\n\r\nnumber ::= (\"-\"? ([0-9] | [1-9] [0-9]*)) (\".\" [0-9]+)? ([eE] [-+]? [0-9]+)? ws\r\n\r\n\r\nws ::= ([ \\t\\n] ws)?\r\n```\r\n\r\nLet me know if this doesn't work \r\n",
"Thanks @Saibo-creator, that works.\r\n\r\nI have the following piece of code:\r\n\r\n```python\r\nmodel_id = \"TheBloke/zephyr-7B-alpha-AWQ\"\r\ntokenizer = LlamaTokenizerFast.from_pretrained(model_id)\r\nstreamer = TextStreamer(tokenizer, skip_special_tokens=True)\r\nmodel = AutoModelForCausalLM.from_pretrained(model_id, device_map=\"cuda:0\")\r\n\r\nwith open(\"./json.gbnf\", \"r\") as file:\r\n grammar_str = file.read()\r\n grammar = IncrementalGrammarAcceptor(grammar_str, \"root\", tokenizer)\r\n logits_processor = GrammarConstrainedLogitsProcessor(grammar, batch_size=2, num_beams=1)\r\n\r\nprompt = f'''What is the difference between nuclear fusion and fission?\r\n###Response:'''\r\n\r\ninput_ids = tokenizer(prompt, return_tensors='pt').input_ids.cuda()\r\noutput = model.generate(\r\n inputs=input_ids, \r\n # do_sample=True,\r\n # temperature=0.7,\r\n # top_p=0.15,\r\n # top_k=0,\r\n max_new_tokens=512,\r\n repetition_penalty=1.1,\r\n eos_token_id=tokenizer.eos_token_id,\r\n logits_processor=[logits_processor],\r\n streamer=streamer)\r\n```\r\n\r\nI get a response that starts with:\r\n\r\n```\r\n{\r\n\r\n\"Nuclear\" \r\n```\r\n\r\nbut then continues to output `\\n` till it reaches max token limit.\r\n\r\nPlease note, if I don't specify custom `logits_processor`, I get a pretty valid output:\r\n\r\n```\r\nWhat is the difference between nuclear fusion and fission?\r\n###Response:\r\nNuclear fusion and fission are two different processes that occur in the nucleus of an atom. \r\n\r\n1. Nuclear Fusion: In this process, two or more atomic nuclei combine to form a heavier nucleus. This process releases a tremendous amount of energy, which is used as a source of power in stars and in controlled environments like nuclear fusion reactors. The most common example of nuclear fusion is the reaction that occurs inside the sun.\r\n\r\n2. Nuclear Fission: In this process, a heavy nucleus splits into two lighter nuclei, releasing a significant amount of energy. This process is used to generate electricity in nuclear power plants. However, it also has the potential for catastrophic consequences if not properly controlled.\r\n\r\nIn summary, while both nuclear fusion and fission involve changes in the nucleus of an atom, they differ in terms of the number of nuclei involved and the type of reaction that takes place.\r\n```",
"@abhinavkulkarni \r\n\r\nThank you for testing!\r\n\r\nI will look into this issue! \r\n\r\nBy the way, I just integrated the `gcd` feature into `generation_api`, now you can run it with\r\n\r\n```python\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\r\nfrom transformers.generation.grammar_utils import IncrementalGrammarConstraint\r\n\r\n\r\nif __name__ == '__main__':\r\n torch.manual_seed(2)\r\n\r\n model_id = \"gpt2\"\r\n tokenizer = AutoTokenizer.from_pretrained(model_id)\r\n tokenizer.pad_token = tokenizer.eos_token\r\n model = AutoModelForCausalLM.from_pretrained(model_id)\r\n with open(\"examples/grammars/json.gbnf\", \"r\") as file:\r\n grammar_str = file.read()\r\n grammar = IncrementalGrammarConstraint(grammar_str, \"root\", tokenizer)\r\n\r\n prefix1= \"This is a valid json string for email:\"\r\n prefix2= \"This is a valid json string for shopping cart:\"\r\n input_ids = tokenizer([prefix1, prefix2],add_special_tokens=False, return_tensors=\"pt\", padding=True)[\"input_ids\"]\r\n\r\n output = model.generate(input_ids, do_sample=False, max_length=30, num_beams=2, grammar=grammar,\r\n num_return_sequences=2)\r\n # decode output\r\n generations = tokenizer.batch_decode(output, skip_special_tokens=True)\r\n print(generations)\r\n\r\n \"\"\"\r\n 'This is a valid json string for email:{ \"title\": \"Theory\", \"text\": \"Theory\", \"type\": \"text\", \"text\": \"Theory\", \"type',\r\n 'This is a valid json string for shopping cart:{ \"name\": \"MyCart\", \"price\": \"10\", \"price\": \"10\", \"price\": \"10\", \"price\": \"'\r\n \"\"\"\r\n```\r\n\r\nIf you have time, could you try to call via above api and confirm if the problem remains?\r\n\r\nFor GPT2, it works as expected, so this may be related to the specific implementation of llama-tokenizer. I will try to fix it asap",
"For prompt:\r\n\r\n```\r\nprompt = f\"A sample JSON for employee record in a database: \"\r\n```\r\n\r\nI do get a JSON-looking response, but then again, the model continues to output newlines until it hits the token limit:\r\n\r\n```\r\n{\r\n \"id\": 1,\r\n \"name\": \"John\",\r\n \"age\": 25,\r\n \"salary\": 30000,\r\n \"department\": {\r\n \"id\": 1,\r\n \"name\": \"Sales\"\r\n }\r\nempty stack\r\n}\r\n....\r\n....\r\n....\r\n```",
"@abhinavkulkarni \r\nI'm able to reproduce the \"strange behavior\" you reported and it actually not a bug but rather an \"expected behavior\".\r\n\r\nIn the json grammar, we have \r\n`object ::= \"{\" ws ( string \":\" ws value (\",\" ws string \":\" ws value)* )? \"}\" ws`\r\n\r\nand the last `ws` basically allows the model to generate arbitrary white space(including new line) after the json object because such white space doesn't break the json syntax.\r\n\r\nThis may not be a desired behavior, so I removed that `ws` from the json grammar and it seems to work correctly.\r\n\r\nBut it does surprise me that the model didn't pick `EOS` after finishing the json object. Maybe the EOS was not added to the allowed token list due to a bug.\r\nI will double-check if I treated the EOS correctly in the grammar implementation.\r\n\r\n ",
"@Saibo-creator: Thanks for the changes. \r\n\r\nRemoving the whitespace `ws` fixes the newline problem.\r\n\r\nFor the simple prompt, `prompt = f\"A sample JSON for employee record in a database: \"`, I still see a `WARN` log line:\r\n\r\n```\r\nWARNING:grammar_utils:empty stack\r\n```\r\n\r\nA few points:\r\n\r\n1. Should the grammar processor not reset its state after one call of `model.generate`? Calling `model.generate` on the same grammar processor throws an expectation. It would be expensive to have to parse the grammar afresh for every single `model.generate` call.\r\n2. Should amping up the `repetition_penalty` not fix the whitespace issue? Unless there is a bug that doesn't include EOS in the state transition machine which you alluded to.",
"@abhinavkulkarni \r\n\r\nRegarding resetting the state of grammar processor, here is my consideration:\r\n\r\nCurrently the `GrammarConstrainedLogitsProcessor` contains the parsing state, and I think it may be useful to not reset the state after every generation, because this could allow the user to continue the grammar-constrained generation, see the code example below.\r\n\r\nAnd if the user wants to start a new generation, a new instance of `LogitProcessor` is indeed needed (here we can also add a `reset` method to make it more user-friendly)\r\n\r\n> It would be expensive to have to parse the grammar afresh for every single model.generate call.\r\n\r\nI don't get this point though. \r\n- If the user's goal is to start another generation from scratch, then the grammar has to be parsed afresh, I don't think there is a way to avoid it ?\r\n- If the user's goal is to continue the generation, then the example I showed below should solve the problem. The parsing would not need to start from scratch but simply continue from the old parsing state\r\n\r\nDoes this sound reasonable for you?\r\n\r\nRegarding the design choice to put the parsing state inside the `LogitProcessor`, I'm not sure if this is the best way to do it. So I would like to have your opinion @gante :)\r\n\r\n\r\n\r\n\r\n\r\n```python\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer,TextStreamer, set_seed\r\nfrom transformers.generation.grammar_utils import IncrementalGrammarConstraint\r\nfrom transformers.generation.logits_process import GrammarConstrainedLogitsProcessor\r\n\r\n\r\nif __name__ == '__main__':\r\n\r\n import logging\r\n logging.getLogger(\"transformers.generation\").setLevel(logging.INFO)\r\n\r\n # model_id = \"saibo/llama-1B\"\r\n model_id = \"gpt2\"\r\n tokenizer = AutoTokenizer.from_pretrained(model_id)\r\n streamer = TextStreamer(tokenizer, skip_special_tokens=True)\r\n tokenizer.pad_token = tokenizer.eos_token\r\n model = AutoModelForCausalLM.from_pretrained(model_id)\r\n with open(\"examples/grammars/json.gbnf\", \"r\") as file:\r\n grammar_str = file.read()\r\n grammar = IncrementalGrammarConstraint(grammar_str, \"root\", tokenizer)\r\n\r\n prefix1= \"This is a valid json string for email:\"\r\n prefix2= \"This is a valid json string for shopping cart:\"\r\n input_ids = tokenizer([prefix2],add_special_tokens=False, return_tensors=\"pt\", padding=True)[\"input_ids\"]\r\n\r\n logits_processor = GrammarConstrainedLogitsProcessor(grammar)\r\n\r\n ###################################################\r\n # generation under the Grammar constraint for 10 tokens\r\n ##################################################\r\n\r\n output = model.generate(input_ids, do_sample=False, max_new_tokens=10, num_beams=2, logits_processor=[logits_processor],\r\n num_return_sequences=1, repetition_penalty=1.5)\r\n\r\n generations = tokenizer.batch_decode(output, skip_special_tokens=True)\r\n print(generations)\r\n # 'This is a valid json string for shopping cart:{ \"name\": \"MyCart\", \"price'\r\n\r\n ###################################################\r\n # Continue the generation under the same constraint for 10 tokens\r\n #\r\n # 1. Need to use the output of the previous generation as the input for the next generation\r\n # 2. 
Reuse the same logits_processor because the parser state is stored in the logits_processor\r\n #\r\n ##################################################\r\n\r\n output = model.generate(output[0].unsqueeze(0), do_sample=False, max_new_tokens=10, num_beams=2, logits_processor=[logits_processor],\r\n num_return_sequences=1, repetition_penalty=1.5)\r\n generations = tokenizer.batch_decode(output, skip_special_tokens=True)\r\n print(generations)\r\n # 'This is a valid json string for shopping cart:{ \"name\": \"MyCart\", \"price\": \"10\", \"description\": \"MyCart'\r\n\r\n\r\n ###################################################\r\n # We want to generate another valid json string\r\n #\r\n # 1. Create a new logits_processor with empty parser state\r\n # 2. Use the same prompt as the input\r\n ##################################################\r\n\r\n logits_processor = GrammarConstrainedLogitsProcessor(grammar)\r\n\r\n output = model.generate(input_ids, do_sample=True, max_new_tokens=20, num_beams=2, logits_processor=[logits_processor],\r\n num_return_sequences=1, repetition_penalty=1.5)\r\n\r\n generations = tokenizer.batch_decode(output, skip_special_tokens=True)\r\n print(generations)\r\n # 'This is a valid json string for shopping cart:{ \"name\": \"MyCart\", \"price\": \"10\", \"description\": \"MyCart'\r\n\r\n```\r\n \r\n\r\n",
"> Does this sound reasonable for you?\r\n\r\nThanks @Saibo-creator, it makes sense not to reset the grammar state so that the user can continue the generation.\r\n\r\nOne more minor correction, the rule for generating `string` in JSON grammar should be:\r\n\r\n```\r\nstring ::= \"\\\"\" ( [ a-zA-Z0-9] )* \"\\\"\" ws\r\n```\r\n\r\ninstead of \r\n\r\n```\r\nstring ::= \"\\\"\" ( [a-zA-Z0-9] )* \"\\\"\" ws\r\n```",
"can we add a python gbnf file too ? Can take inspiration from : https://github.com/ggerganov/llama.cpp/blob/master/grammars/c.gbnf ",
"This should be related to the constrained decoding in [Picard](https://arxiv.org/abs/2109.05093) and [Synchromesh](https://openreview.net/forum?id=KmtVD97J43e).",
"Hello @gante @ArthurZucker \r\n\r\nI'm excited to share that the feature is now in great shape, and I'm eager to hear your thoughts on it.\r\n\r\nThe implementation of the grammar-constrained decoding feature is quite complex, as we aim to make it compatible with \r\n- beam search, \r\n- sampling,\r\n- all tokenizers,\r\n- Unicode\r\n- etc\r\n \r\nIt's relatively straightforward to integrate it with greedy search or greedy sampling. This leads me to my first question: Should we break down this feature into multiple versions, starting with a basic one, or would it be better to aim for a comprehensive solution in a single merge? From my perspective, once we have thoroughly tested greedy decoding and greedy sampling, it might be beneficial to merge them first, as they already cater to a wide range of use cases.\r\n\r\nAdditionally, I'm facing some challenges in devising tests for this feature. Currently, I have a setup similar to what's outlined [here](https://github.com/Saibo-creator/transformers-gcd/blob/feature/cfgcd/test_grammar_constrained_decoding.py), where I create simple grammars and verify the accuracy of the generation. However, establishing a systematic testing approach is tricky. For example, if we want to test the json grammar compatibility with all models, running the model with actual weights becomes necessary. Without the weights, the model might generate nonsensical but syntactically correct json outputs, which doesn't help in effective testing. While using actual weights does lead to valid json generation, it significantly slows down the process.\r\n\r\nI'd appreciate your insights on how to navigate these testing challenges. In the meantime, I'll continue refining the feature.\r\n\r\n: )",
"@ Saibo-creator I suggest we break down this feature into multiple versions, starting with a basic one. This creates motivation and encourages more people to collaborate, a greedy search for JSON sounds good for a start.",
"Thank you @arshadshk for the feedback. I agree with you! In terms of greedy search and random sampling-based decoding, this feature should already be solid enough. \r\n\r\nAnd indeed json is the most popular use case for this feature, so we can add Unicode support a bit later. \r\n\r\nNow I'm working on crafting tests. It's a bit challenging to write tests for this feature. For example, I really want to have a TestMixin that tries to test every model to generate json objects. But as I explained above, this seems non-trivial. \r\n\r\nI will start with more atomic tests like [this](https://github.com/huggingface/transformers/pull/27557/files#diff-368e1430d055143f6037ab2eb2eee4c15840b5d79e1cc68630313c75e57036f6)\r\n",
"btw, @arshadshk, if you have time, could you also have a look at #27676 ? That PR tries to fix a bug which is important for this CFG feature to work properly, Thanks !",
"@Saibo-creator the (https://github.com/huggingface/transformers/issues/27676) fix makes sense, I wonder if we open up probs for ```<eos>``` token too along with ```<pad>``` token, we might need to terminate the generation if nothing more is generated.",
"Hi @Saibo-creator π \r\n\r\nIt's great to see a project with a working example! I'd love to add it to `transformers` at some point, but we don't have the capacity to maintain a new text generation project at the moment -- you can probably see from my response time in the PRs that our bandwidth at the moment is quite limited :) Since `transformers` is used in production, we can't add features if we don't have the capacity to maintain them.\r\n\r\nMy suggestion: let's add the code as is under `/examples/research_projects/grammar`, for which the `transformers` team has 0 maintenance guarantees, and move it into the main `transformers` folder as soon as we have capacity on our end. Does it sound good to you? π€ \r\n\r\nP.S.: as a research project, you'd be able to make any changes you want with pretty much no barriers on our side ;)",
"> Hi @Saibo-creator π\r\n> \r\n> It's great to see a project with a working example! I'd love to add it to `transformers` at some point, but we don't have the capacity to maintain a new text generation project at the moment -- you can probably see from my response time in the PRs that our bandwidth at the moment is quite limited :) Since `transformers` is used in production, we can't add features if we don't have the capacity to maintain them.\r\n> \r\n> My suggestion: let's add the code as is under `/examples/research_projects/grammar`, for which the `transformers` team has 0 maintenance guarantees, and move it into the main `transformers` folder as soon as we have capacity on our end. Does it sound good to you? π€\r\n> \r\n> P.S.: as a research project, you'd be able to make any changes you want with pretty much no barriers on our side ;)\r\n\r\nSounds great! Thank you @gante !\r\nI'm asking a couple of friends to test it. When it's ready I would be happy to write a blog like [this one](https://huggingface.co/blog/assisted-generation) to introduce this feature.",
"@Saibo-creator sounds great!\r\n\r\nAnd don't let my conservative approach to your suggestions lower your enthusiasm, I'm enjoying your contributions :D",
"I have tested the code in this PR and found that it works very nicely, so I borrowed it for my repository (with due credits): https://github.com/oobabooga/text-generation-webui/pull/4953\r\n\r\nIt is more robust than the [torch-grammar](https://github.com/Shopify/torch-grammar) EBNF implementation that I was previously using, which would half the time throw and error while importing a seemingly valid grammar.\r\n\r\nBeing able to generate structured output like json and lists for a given prompt has many practical applications and this logits processor makes that easy to setup, so I find it extremely valuable.",
"Thanks @oobabooga, I was also able to test it successfully in HuggingFace TGI. It does work very well.",
"i did set this up on fastapi and it only return result once ",
"Could you give a working example to show the problem ? I would be happy to investigate it. ",
"Since Transformers will not merge this in the near future, I have written a small extension library. The use is very straightforward.\r\n\r\nhttps://github.com/Saibo-creator/transformers-CFG/tree/main",
"Hello! My use case requires the grammar to be dependent on the input text. I'm wondering if the current implementation supports passing a batch of grammars along with the batch of input and constrain the output based on different grammars ? ",
"> Hello! My use case requires the grammar to be dependent on the input text. I'm wondering if the current implementation supports passing a batch of grammars along with the batch of input and constrain the output based on different grammars ?\r\n\r\nHey! This is an interesting use case and I'm working on it. Will keep you updated."
] | 1,700 | 1,706 | null | CONTRIBUTOR | null | # What does this PR do?
This PR adds a new feature (Context Free Grammar Constrained Decoding) to the library.
There is already one PR (WIP) for this feature (#26520), but this one has a different motivation and implementation.
This implementation is inspired by and adapted from https://github.com/Shopify/torch-grammar and https://github.com/ggerganov/llama.cpp/pull/1773/files
This implementation aims to achieve the following goals:
- CFG-constrained decoding
- EBNF notation as interface for the grammar
- standalone implementation of the grammar parser (left recursive-descent parsing, the same as in llama-cpp)
- compatibility with grammars in the llama.cpp library (https://github.com/ggerganov/llama.cpp/tree/master/grammars)
- incremental parsing and also non-incremental parsing (some tokenizers don't support incremental parsing, based on my experiments)
- unicode support for the grammar (not trivial but important for any multilingual model)
The two main differences from PR #26520 :
- dependency on lark, which may not be a bad thing, but my experience is that it will reduce flexibility and may be hard to adapt to our specific needs, e.g. unicode grammar support.
- EBNF interface. This PR supports the same EBNF notation as llama-cpp, so that users can directly migrate their grammars from llama-cpp (a minimal usage sketch follows).
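For illustration, a minimal usage sketch in the spirit of the examples shared in the comments above (the `IncrementalGrammarConstraint` and `GrammarConstrainedLogitsProcessor` names come from this unmerged branch, so treat the exact API as tentative):
```python
# Sketch only: constrain gpt2 with a tiny llama-cpp-style EBNF grammar.
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation.grammar_utils import IncrementalGrammarConstraint
from transformers.generation.logits_process import GrammarConstrainedLogitsProcessor

grammar_str = 'root ::= ("yes" | "no")\n'  # the model may only produce "yes" or "no"

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

grammar = IncrementalGrammarConstraint(grammar_str, "root", tokenizer)
processor = GrammarConstrainedLogitsProcessor(grammar)

input_ids = tokenizer("Is the sky blue? ", return_tensors="pt").input_ids
output = model.generate(input_ids, max_new_tokens=5, logits_processor=[processor])
print(tokenizer.batch_decode(output, skip_special_tokens=True))
```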
Challenges for this PR:
- compatibility with all the tokenizers in the transformers library.
Current status:
- [x] The grammar parser is implemented and works well with the example grammars from llama.cpp library.
- [x] A few integration tests are added to test the combination of grammar and tokenizer.
- [x] no unicode support yet, which means it will probably fail when you want to constrain with emoji or other unicode characters.
- [x] greedy search
- [x] sampling, top-k, top-p
- [ ] beam search
TODO:
- [x] Batching support
- [x] compatible with greedy decoding and sampling under `beam=1`
- [x] ~~grammar parser fails to parse llama-cpp's json grammar (more precisely the string rule). Currently, a slightly simplified version of the json grammar is used~~ (now fixed)
- [x] ~~grammar parser requires the last rule to end with a new line, otherwise a parsing error will be raised. This is not user-friendly and should be fixed~~
- [ ] The EOS token does not always seem to be included in the allowed tokens even when it should be, maybe due to the nature of recursive-descent parsing?
- [ ] compatible with `beam_search` and `beam_sample` (currently throws `RuntimeError: probability tensor contains either `inf`, `nan` or element < 0`). A good reference is the `ConstrainedBeamSearchScorer`
- [ ] unicode support
- [ ] properly test with different tokenizers(bpe, wordpiece, unigram, etc.)
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # https://github.com/huggingface/transformers/issues/25778
Related to PR # https://github.com/huggingface/transformers/pull/26520
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@gante
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27557/reactions",
"total_count": 6,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 4,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27557/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27557",
"html_url": "https://github.com/huggingface/transformers/pull/27557",
"diff_url": "https://github.com/huggingface/transformers/pull/27557.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27557.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27556 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27556/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27556/comments | https://api.github.com/repos/huggingface/transformers/issues/27556/events | https://github.com/huggingface/transformers/issues/27556 | 1,998,742,385 | I_kwDOCUB6oc53ImNx | 27,556 | [i18n-JP] Translating `en/model_doc` docs to Japanese | {
"login": "rajveer43",
"id": 64583161,
"node_id": "MDQ6VXNlcjY0NTgzMTYx",
"avatar_url": "https://avatars.githubusercontent.com/u/64583161?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rajveer43",
"html_url": "https://github.com/rajveer43",
"followers_url": "https://api.github.com/users/rajveer43/followers",
"following_url": "https://api.github.com/users/rajveer43/following{/other_user}",
"gists_url": "https://api.github.com/users/rajveer43/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rajveer43/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rajveer43/subscriptions",
"organizations_url": "https://api.github.com/users/rajveer43/orgs",
"repos_url": "https://api.github.com/users/rajveer43/repos",
"events_url": "https://api.github.com/users/rajveer43/events{/privacy}",
"received_events_url": "https://api.github.com/users/rajveer43/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | closed | false | null | [] | [] | 1,700 | 1,701 | 1,701 | CONTRIBUTOR | null | <!--
Note: Please search to see if an issue already exists for the language you are trying to translate.
-->
Hi!
Let's bring the documentation to all the Japanese-speaking community 🌐
Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list.
Some notes:
* Please translate using an informal tone (imagine you are talking with a friend about transformers 🤗).
* Please translate in a gender-neutral way.
* Add your translations to the folder called `ja` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source).
* Register your translation in `ja/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml).
* Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @stevhliu and @MKhalusova for review.
* 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/).
## Model_doc section
- [x] [bark.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_doc/bark.md) #27264
- [x] [bart.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_doc/bart.md) #27264
- [x] [barthez.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_doc/barthez.md) #27264
- [x] [bartpho.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_doc/bartpho.md) #27264
- [x] [beit.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_doc/beit.md) #27264
- [x] [bert-generation.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_doc/bert-generation.md) #27264
- [x] [bert-japanese.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_doc/bert-japanese.md) #27264
- [x] [bert.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_doc/bert.md) #27264
- [x] [bertweet.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_doc/bertweet.md) #27264
- [x] [big_bird.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_doc/big_bird.md) #27264
- [x] [bigbird_pegasus.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_doc/bigbird_pegasus.md) #27264
- [x] [biogpt.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_doc/biogpt.md) #27264
- [x] [bit.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_doc/bit.md) #27264
- [x] [blenderbot-small.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_doc/blenderbot-small.md) #27264
- [x] [blenderbot.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_doc/blenderbot.md) #27264
Keep on adding more as you go 🔥
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27556/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27556/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27555 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27555/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27555/comments | https://api.github.com/repos/huggingface/transformers/issues/27555/events | https://github.com/huggingface/transformers/pull/27555 | 1,998,666,597 | PR_kwDOCUB6oc5fue-n | 27,555 | Fix AMD CI not showing GPU | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"After merging are we able to reenable the tests from #27541 ? ",
"> After merging are we able to reenable the tests from #27541 ?\r\n\r\nNo, these 2 are irrelevant. #27541 is about `AMD docker image can't be built` while this one is `some issue at testing time (even if we build the image manually)`"
] | 1,700 | 1,700 | 1,700 | COLLABORATOR | null | # What does this PR do?
For AMD CI jobs, like [this one](https://github.com/huggingface/transformers/actions/runs/6879141724/job/18711232327), in the `Environment` section, sometimes it shows `Number of GPUs available: 0`.
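For reference, the symptom can be checked directly on a runner with a minimal snippet (a hypothetical check, not part of this PR — the CI's "Number of GPUs available" line comes from a similar query):
```python
import torch

# On the affected ROCm runners (with HIP_VISIBLE_DEVICES exported), this sometimes printed 0;
# with the variable removed, the GPUs are enumerated as expected.
print("Number of GPUs available:", torch.cuda.device_count())
```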
After investigation, the infra team (thanks to Guillaume) confirmed that the change in this PR fixes the issue: removing the environment variable `HIP_VISIBLE_DEVICES`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27555/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27555/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27555",
"html_url": "https://github.com/huggingface/transformers/pull/27555",
"diff_url": "https://github.com/huggingface/transformers/pull/27555.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27555.patch",
"merged_at": 1700214277000
} |
https://api.github.com/repos/huggingface/transformers/issues/27554 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27554/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27554/comments | https://api.github.com/repos/huggingface/transformers/issues/27554/events | https://github.com/huggingface/transformers/issues/27554 | 1,998,580,180 | I_kwDOCUB6oc53H-nU | 27,554 | Bug of check_imports | {
"login": "wqh17101",
"id": 26429138,
"node_id": "MDQ6VXNlcjI2NDI5MTM4",
"avatar_url": "https://avatars.githubusercontent.com/u/26429138?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wqh17101",
"html_url": "https://github.com/wqh17101",
"followers_url": "https://api.github.com/users/wqh17101/followers",
"following_url": "https://api.github.com/users/wqh17101/following{/other_user}",
"gists_url": "https://api.github.com/users/wqh17101/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wqh17101/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wqh17101/subscriptions",
"organizations_url": "https://api.github.com/users/wqh17101/orgs",
"repos_url": "https://api.github.com/users/wqh17101/repos",
"events_url": "https://api.github.com/users/wqh17101/events{/privacy}",
"received_events_url": "https://api.github.com/users/wqh17101/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"```python\r\nimport re\r\ncontent=\"\"\"\r\ntry:\r\n from .configuration_baichuan import BaichuanConfig\r\n from .generation_utils import build_chat_input, TextIterStreamer\r\nexcept:\r\n from configuration_baichuan import BaichuanConfig\r\n from generation_utils import build_chat_input, TextIterStreamer\r\n\"\"\"\r\ncontent = re.sub(r\"\\s*try\\s*:\\s*.*?\\s*except\\s*.*?:\", \"\", content, flags=re.MULTILINE | re.DOTALL)\r\nprint(content)\r\n```\r\n```\r\n from configuration_baichuan import BaichuanConfig\r\n from generation_utils import build_chat_input, TextIterStreamer\r\n```\r\nas you can see,it only removes the thing in try block",
"Hi @wqh17101, thanks for raising an issue! \r\n\r\nAm I right in understanding [this example](https://github.com/huggingface/transformers/issues/27554#issuecomment-1816304371) is referring to [this line](https://github.com/huggingface/transformers/blob/638d49983f36af910934b38771b4e55c835c1774/src/transformers/dynamic_module_utils.py#L148) in the code? \r\n\r\n> I expect no error for my script for , it is equal to the original script\r\n\r\nWithout knowing what code you're running, the full error message or where the script is relative to the modeling files we can only guess the issue. Could you provide a code snippet of what's being run and where relative to the model repo? Could you link to the \"original\" script? \r\n\r\nWhat `check_imports` is doing is seeing whether the specified module [can be imported](https://github.com/huggingface/transformers/blob/638d49983f36af910934b38771b4e55c835c1774/src/transformers/dynamic_module_utils.py#L174C9-L174C9). This can be [relative or absolute](https://docs.python.org/3/library/importlib.html#importlib.import_module): if an error is being raised then it indicates `configuration` can not be imported. ",
"@amyeroberts Thank you.After I read your the source code, I think we do not need the real script.\n\nBecause the line you found is aimed to remove the content both in try and except block. But now, it only removes try block.\n\n\nAlso let's think about transformers without check_imports. if some modules cannot be imported,python will raise the error originally.\n\nWith check_imports, you will raise the error by using importlib. I think this is superfluous.\n\nOn the other hand, python is a dynamic language, you can't check the modules to be import statically by text. Because there can be too many flexible conditional branch grammar like try...except... if...else.... and so on. You can see that the source code doesn't consider the grammar if....else....",
"@wqh17101 Yes, it's true that handling imports in python is messy, particularly considering if/else, try/except blocks. To provide some context, one of the reason for `check_imports` is to provide clear error messages for our users. Within the library there's many additional packages which can optionally be installed for certain functionality e.g. Pillow for image processing. This explicitly tells the user they need to install that package.",
"@amyeroberts Before there is a good way to check, I think it is better for you to provide some options to let user skip this check",
"@wqh17101 If there's a particular solution you have in mind, feel free to open up a PR and ping me for review! ",
"> @amyeroberts Before there is a good way to check, I think it is better for you to provide some options to let user skip this check\n\nWhat about this one? You want me to PR about this?",
"@wqh17101 What I'm suggesting is that if this is something you think should be addressed and you have a solution in mind then you can open a PR directly and ask for a review from myself (or another relevant person working at Hugging Face). ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,700 | 1,703 | 1,703 | NONE | null | ### System Info
- `transformers` version: 4.30.2
- Platform: Linux-4.15.0-213-generic-x86_64-with-glibc2.27
- Python version: 3.9.18
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.4.0
- PyTorch version (GPU?): 2.0.1+cpu (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
My huggingface repo is https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat
When I modify [modeling_baichuan.py](https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat/blob/main/modeling_baichuan.py), changing
```
from .configuration_baichuan import BaichuanConfig
from .generation_utils import build_chat_input, TextIterStreamer
```
to
```
try:
from .configuration_baichuan import BaichuanConfig
from .generation_utils import build_chat_input, TextIterStreamer
except:
from configuration_baichuan import BaichuanConfig
from generation_utils import build_chat_input, TextIterStreamer
```
in order to support using the model directly from the model directory,
I get: ``ImportError: This modeling file requires the following packages that were not found in your environment: configuration_baichuan, generation_utils. Run `pip install configuration_baichuan generation_utils` ``
### Expected behavior
I don't know why `check_imports` needs to be done by transformers itself instead of being left to Python's own import machinery.
I expect no error for my script, since it is equivalent to the original script. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27554/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27554/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27553 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27553/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27553/comments | https://api.github.com/repos/huggingface/transformers/issues/27553/events | https://github.com/huggingface/transformers/pull/27553 | 1,998,556,602 | PR_kwDOCUB6oc5fuG_6 | 27,553 | Skip some fuyu tests | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The code assistant model will soon learn thanks == approve π ",
"@ydshieh I hope not! Often I say thanks for the work so far + requests for change π "
] | 1,700 | 1,700 | 1,700 | COLLABORATOR | null | # What does this PR do?
The old story of offload tests for tiny models ... | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27553/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27553/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27553",
"html_url": "https://github.com/huggingface/transformers/pull/27553",
"diff_url": "https://github.com/huggingface/transformers/pull/27553.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27553.patch",
"merged_at": 1700213705000
} |
https://api.github.com/repos/huggingface/transformers/issues/27552 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27552/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27552/comments | https://api.github.com/repos/huggingface/transformers/issues/27552/events | https://github.com/huggingface/transformers/issues/27552 | 1,998,282,371 | I_kwDOCUB6oc53G16D | 27,552 | Flash Attention 2 for audio/musicgen | {
"login": "yoinked-h",
"id": 63889420,
"node_id": "MDQ6VXNlcjYzODg5NDIw",
"avatar_url": "https://avatars.githubusercontent.com/u/63889420?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yoinked-h",
"html_url": "https://github.com/yoinked-h",
"followers_url": "https://api.github.com/users/yoinked-h/followers",
"following_url": "https://api.github.com/users/yoinked-h/following{/other_user}",
"gists_url": "https://api.github.com/users/yoinked-h/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yoinked-h/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yoinked-h/subscriptions",
"organizations_url": "https://api.github.com/users/yoinked-h/orgs",
"repos_url": "https://api.github.com/users/yoinked-h/repos",
"events_url": "https://api.github.com/users/yoinked-h/events{/privacy}",
"received_events_url": "https://api.github.com/users/yoinked-h/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2392046359,
"node_id": "MDU6TGFiZWwyMzkyMDQ2MzU5",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Second%20Issue",
"name": "Good Second Issue",
"color": "dd935a",
"default": false,
"description": "Issues that are more difficult to do than \"Good First\" issues - give it a try if you want!"
},
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | [
"cc @sanchit-gandhi @ylacombe ",
"Would indeed be cool having FA2 support for MusicGen! Since MusicGen copies the attention mechanism from Bart, you can copy the changes from this PR: https://github.com/huggingface/transformers/pull/27203\r\n\r\nWould you like to have a go at this integration @yoinked-h? It should be quite a fun PR where you get quite big speed-ups for not too much additional effort!",
"I've tried to implement the Bart flash attention as best as i could, but i get some error (`cu_seqlens_k must have shape (batch_size + 1)`) from the module itself, will try tomorrow!",
"I would like to help if possible",
"Hey @yoinked-h and @staghado, as @sanchit-gandhi pointed it should be fun to add! \r\nFeel free to open a PR so that you can have our help on this"
] | 1,700 | 1,702 | null | CONTRIBUTOR | null | ### Feature request
Add Flash Attention 2 to MusicGen models (and/or AudioGen)
### Motivation
MusicGen is a *really* large model, and it's very hard to run on consumer GPUs; adding Flash Attention could make the model much lighter and possibly faster.
### Your contribution
I can test on both WSL and Windows 11 on a 3060. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27552/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27552/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27551 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27551/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27551/comments | https://api.github.com/repos/huggingface/transformers/issues/27551/events | https://github.com/huggingface/transformers/issues/27551 | 1,998,152,353 | I_kwDOCUB6oc53GWKh | 27,551 | Make style failed | {
"login": "jiqing-feng",
"id": 107918818,
"node_id": "U_kgDOBm614g",
"avatar_url": "https://avatars.githubusercontent.com/u/107918818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiqing-feng",
"html_url": "https://github.com/jiqing-feng",
"followers_url": "https://api.github.com/users/jiqing-feng/followers",
"following_url": "https://api.github.com/users/jiqing-feng/following{/other_user}",
"gists_url": "https://api.github.com/users/jiqing-feng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiqing-feng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiqing-feng/subscriptions",
"organizations_url": "https://api.github.com/users/jiqing-feng/orgs",
"repos_url": "https://api.github.com/users/jiqing-feng/repos",
"events_url": "https://api.github.com/users/jiqing-feng/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiqing-feng/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @jiqing-feng, thanks for raising this issue! \r\n\r\nA recent PR #27144 updated our formatting logic to use ruff formatter instead of black. Could you try updating ruff and run again? `pip install -U ruff` ",
"Had the same issue & updating ruff solved this for me",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,700 | 1,704 | 1,704 | CONTRIBUTOR | null | ### System Info
No GPU/TPU found, falling back to CPU. (Set TF_CPP_MIN_LOG_LEVEL=0 and rerun for more info.)
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.36.0.dev0
- Platform: Linux-5.16.0-rc8-intel-next-01534-g53cb5f883cf7-x86_64-with-glibc2.28
- Python version: 3.9.16
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.3.1
- Accelerate version: 0.20.3
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cpu (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): 0.7.4 (cpu)
- Jax version: 0.4.13
- JaxLib version: 0.4.13
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
When I run `make style`, I get the following error:
```
ruff check examples tests src utils setup.py conftest.py --fix
error: TOML parse error at line 17, column 1
|
17 | [tool.ruff.format]
| ^^^^^^^^^^^^^^^^^^
wanted exactly 1 element, more than 1 element
make: *** [Makefile:67: style] Error 2
```
### Expected behavior
It should reformat all of the code. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27551/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27551/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27550 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27550/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27550/comments | https://api.github.com/repos/huggingface/transformers/issues/27550/events | https://github.com/huggingface/transformers/issues/27550 | 1,998,104,294 | I_kwDOCUB6oc53GKbm | 27,550 | Allow independent control of logging and progress bars across Trainer methods | {
"login": "harrisonfloam",
"id": 130672912,
"node_id": "U_kgDOB8npEA",
"avatar_url": "https://avatars.githubusercontent.com/u/130672912?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/harrisonfloam",
"html_url": "https://github.com/harrisonfloam",
"followers_url": "https://api.github.com/users/harrisonfloam/followers",
"following_url": "https://api.github.com/users/harrisonfloam/following{/other_user}",
"gists_url": "https://api.github.com/users/harrisonfloam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/harrisonfloam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/harrisonfloam/subscriptions",
"organizations_url": "https://api.github.com/users/harrisonfloam/orgs",
"repos_url": "https://api.github.com/users/harrisonfloam/repos",
"events_url": "https://api.github.com/users/harrisonfloam/events{/privacy}",
"received_events_url": "https://api.github.com/users/harrisonfloam/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @muellerzr @pacman100 ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,700 | 1,703 | 1,703 | NONE | null | ### Feature request
Goal: Finer control of logging and progress tracking output for Trainer methods.
Specifically, I would like to suppress print logging and progress bars during in-training evaluation, but maintain the main training loop progress bar. A good solution would allow independent control of print logging and progress bars for each Trainer method (train, evaluate, predict).
### Motivation
Logging and progress tracking are great, but console overflow is real. When training with an every-epoch evaluation strategy, my console (or notebook cell output) is overwhelmed with print logs and empty, completed tqdm progress bars.
The current options for logging control that I am aware of are:
- `log_level`: Setting to 'critical' seems to minimize warning-type output, but doesn't affect print logging of training metrics.
- `disable_tqdm`: Disables all tqdm progress bars globally.
- `remove_callback(PrinterCallback)`: Suppresses print logs, but only when `disable_tqdm=True` due to a conditional in [trainer.py, line 528](https://github.com/huggingface/transformers/blob/b074461ef0f54ce37c5239d30ee960ece28d11ec/src/transformers/trainer.py#L528C15-L528C15). The `ProgressCallback` or `NotebookProgressCallback` that is added when `disable_tqdm=False` enables all logging with no option for suppression.
```
self.add_callback(PrinterCallback if self.args.disable_tqdm else DEFAULT_PROGRESS_CALLBACK)
```
Ideally, I would like to be able to independently disable progress bars for the evaluate calls inside the training loop, but maintain logging and progress bars for the training loop as a whole and any subsequent evaluate or predict calls. Suppressing all logging via `disable_tqdm` and `remove_callback(PrinterCallback)` solves the console-overflow issue, but leaves me staring at a blank screen wondering if my training loop is working.
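For context, a minimal sketch of the current all-or-nothing workaround (a hypothetical setup — `model`, `train_ds` and `eval_ds` are placeholders, not taken from any official example):
```python
from transformers import Trainer, TrainingArguments
from transformers.trainer_callback import PrinterCallback

args = TrainingArguments(
    output_dir="out",
    evaluation_strategy="epoch",
    log_level="critical",  # quiets warning-type output, but not the printed metric logs
    disable_tqdm=True,     # disables *all* tqdm bars, including the main training-loop bar
)
trainer = Trainer(model=model, args=args, train_dataset=train_ds, eval_dataset=eval_ds)
trainer.remove_callback(PrinterCallback)  # only relevant because disable_tqdm=True swaps in PrinterCallback
trainer.train()  # no console overflow, but also no visible progress at all
```
There is currently no combination of these settings that keeps the outer training-loop bar while silencing only the in-training evaluation output.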
### Your contribution
I have not contributed to huggingface before, but would be happy to work or collaborate on a PR.
This solution would likely require:
- moderate changes to `ProgressCallback` in [trainer_callback.py](https://github.com/huggingface/transformers/blob/main/src/transformers/trainer_callback.py).
- moderate changes to `NotebookProgressCallback` in [utils/notebook.py](https://github.com/huggingface/transformers/blob/main/src/transformers/utils/notebook.py).
- very minor changes to [trainer.py](https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py) and [training_args.py](https://github.com/huggingface/transformers/blob/main/src/transformers/training_args.py). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27550/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27550/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27549 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27549/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27549/comments | https://api.github.com/repos/huggingface/transformers/issues/27549/events | https://github.com/huggingface/transformers/pull/27549 | 1,997,950,743 | PR_kwDOCUB6oc5fsFtS | 27,549 | Adding bbox transformatinos | {
"login": "rafaelpadilla",
"id": 31217453,
"node_id": "MDQ6VXNlcjMxMjE3NDUz",
"avatar_url": "https://avatars.githubusercontent.com/u/31217453?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rafaelpadilla",
"html_url": "https://github.com/rafaelpadilla",
"followers_url": "https://api.github.com/users/rafaelpadilla/followers",
"following_url": "https://api.github.com/users/rafaelpadilla/following{/other_user}",
"gists_url": "https://api.github.com/users/rafaelpadilla/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rafaelpadilla/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rafaelpadilla/subscriptions",
"organizations_url": "https://api.github.com/users/rafaelpadilla/orgs",
"repos_url": "https://api.github.com/users/rafaelpadilla/repos",
"events_url": "https://api.github.com/users/rafaelpadilla/events{/privacy}",
"received_events_url": "https://api.github.com/users/rafaelpadilla/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,700 | 1,703 | 1,703 | CONTRIBUTOR | null | # What does this PR do?
This PR implements bounding box transformations among different formats:
`XYXY` <-> `XYWH` <-> `XCYCWH` <-> `RELATIVE_XYWH` <-> `RELATIVE_XCYCWH`
Object detection models can output boxes in different formats, requiring manual transformations when porting these models. The idea of having a flexible function such as the proposed `transform_box_format` is similar to our image transformation functions such as `normalize`, `center_crop`, `resize`, etc., which can be used by any model.
We currently have two functions that perform bbox transformations, `center_to_corners_format` and `corners_to_center_format`. They are useful, but an image processor still has to handle other conversions manually, such as transforming relative to absolute coordinates. An example is shown below, taken from [image_processing_detr.py](https://github.com/huggingface/transformers/blob/main/src/transformers/models/detr/image_processing_detr.py#L1352-L1357):
```python
# convert to [x0, y0, x1, y1] format
boxes = center_to_corners_format(out_bbox)
# and from relative [0, 1] to absolute [0, height] coordinates
img_h, img_w = target_sizes.unbind(1)
scale_fct = torch.stack([img_w, img_h, img_w, img_h], dim=1).to(boxes.device)
boxes = boxes * scale_fct[:, None, :]
```
This could simply be replaced by:
```python
new_boxes = transform_box_format(out_bbox, orig_format="relative_xcycwh", dest_format="xyxy", img_shape=target_sizes.squeeze())
```
which leads to the same results, so that `torch.isclose(boxes, new_boxes)` is `True`.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@amyeroberts | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27549/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27549/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27549",
"html_url": "https://github.com/huggingface/transformers/pull/27549",
"diff_url": "https://github.com/huggingface/transformers/pull/27549.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27549.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27548 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27548/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27548/comments | https://api.github.com/repos/huggingface/transformers/issues/27548/events | https://github.com/huggingface/transformers/pull/27548 | 1,997,924,525 | PR_kwDOCUB6oc5fr_81 | 27,548 | Fix mistral generate for long prompt / response | {
"login": "lorabit110",
"id": 108375850,
"node_id": "U_kgDOBnWvKg",
"avatar_url": "https://avatars.githubusercontent.com/u/108375850?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lorabit110",
"html_url": "https://github.com/lorabit110",
"followers_url": "https://api.github.com/users/lorabit110/followers",
"following_url": "https://api.github.com/users/lorabit110/following{/other_user}",
"gists_url": "https://api.github.com/users/lorabit110/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lorabit110/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lorabit110/subscriptions",
"organizations_url": "https://api.github.com/users/lorabit110/orgs",
"repos_url": "https://api.github.com/users/lorabit110/repos",
"events_url": "https://api.github.com/users/lorabit110/events{/privacy}",
"received_events_url": "https://api.github.com/users/lorabit110/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @gante @ArthurZucker ",
"Hey @lorabit110 π \r\n\r\nI'm not sure if I follow the need for a fix here :) On my end, the test case you added passes:\r\n```py\r\nfrom transformers import AutoModelForCausalLM\r\nimport torch\r\n\r\nEXPECTED_OUTPUT_TOKEN_IDS = [306, 338]\r\n\r\ninput_ids = [1] + [306, 338] * 2048\r\nmodel = AutoModelForCausalLM.from_pretrained(\"mistralai/Mistral-7B-v0.1\", device_map=\"auto\", torch_dtype=torch.float16, use_flash_attention_2=True)\r\ninput_ids = torch.tensor([input_ids]).to(model.model.embed_tokens.weight.device)\r\nprint(input_ids.shape)\r\ngenerated_ids = model.generate(input_ids, max_new_tokens=2)\r\nprint(generated_ids[0][-2:].tolist() == EXPECTED_OUTPUT_TOKEN_IDS)\r\n# True\r\n```\r\n\r\nCan you share a short reproducible script that results in the failure?",
"> Hey @lorabit110 π\r\n> \r\n> I'm not sure if I follow the need for a fix here :) On my end, the test case you added passes:\r\n> \r\n> ```python\r\n> from transformers import AutoModelForCausalLM\r\n> import torch\r\n> \r\n> EXPECTED_OUTPUT_TOKEN_IDS = [306, 338]\r\n> \r\n> input_ids = [1] + [306, 338] * 2048\r\n> model = AutoModelForCausalLM.from_pretrained(\"mistralai/Mistral-7B-v0.1\", device_map=\"auto\", torch_dtype=torch.float16, use_flash_attention_2=True)\r\n> input_ids = torch.tensor([input_ids]).to(model.model.embed_tokens.weight.device)\r\n> print(input_ids.shape)\r\n> generated_ids = model.generate(input_ids, max_new_tokens=2)\r\n> print(generated_ids[0][-2:].tolist() == EXPECTED_OUTPUT_TOKEN_IDS)\r\n> # True\r\n> ```\r\n> \r\n> Can you share a short reproducible script that results in the failure?\r\n\r\n@gante \r\n\r\nActually, we need to set max_new_tokens to at least 3 to trigger the error. The below code can reproduce the issue: \r\n\r\n```python\r\nfrom transformers import AutoModelForCausalLM\r\nimport torch\r\n\r\nEXPECTED_OUTPUT_TOKEN_IDS = [306, 338]\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(\"mistralai/Mistral-7B-v0.1\", device_map=\"auto\", torch_dtype=torch.float16, use_flash_attention_2=True)\r\ninput_ids = [1] + [306, 338] * 2048\r\ninput_ids = torch.tensor([input_ids]).to(model.model.embed_tokens.weight.device)\r\ngenerated_ids = model.generate(input_ids, max_new_tokens=4)\r\nprint(generated_ids[0][-2:].tolist() == EXPECTED_OUTPUT_TOKEN_IDS)\r\n```\r\n\r\n<img width=\"1494\" alt=\"Screenshot 2023-11-17 at 11 14 35 AM\" src=\"https://github.com/huggingface/transformers/assets/108375850/41249363-9b85-40e9-82f7-a63d9bc1e1f0\">\r\n",
"Can anyone take a look?",
"@younesbelkada I see. I'm missing one variable here, which is how many tokens the model has seen so far through its cache -- does any of the inputs to `prepare_inputs_for_generation` have this information? \r\n\r\nWithout it, I'm not sure how to properly slice `input_ids` without the setting the assumption that only one token can be consumed at a time (which would disable the use of `mistral` with assisted generation and the [newly added ability](https://huggingface.co/docs/transformers/main/en/llm_tutorial_optimization#321-multi-round-conversation) to pass `past_key_values` across generations)",
"Let's make sure the models are quantized for testing",
"Thanks everyone for the discussion. I will need to spend sometime to figure out how to implement assisted decoding test case. ",
"All comments have been addressed. I need a maintainer to approve and run a workflow in order to merge the PR.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27548). All of your documentation changes will be reflected on that endpoint.",
"Thanks @lorabit110 for the fix π "
] | 1,700 | 1,701 | 1,701 | CONTRIBUTOR | null | # What does this PR do?
Fixes the following issue:
When using the Mistral model to generate text, if prompt + max_tokens > 4095 and use_cache=True, you get the error below:
ValueError: past key much have a shape of (`batch_size, num_heads, self.config.sliding_window-1, head_dim`), got torch.Size([1, 8, 2989, 128]).
This PR fixes the logic that determines which part of the cached key and value should be used for predicting future tokens.
Fixes https://github.com/huggingface/transformers/issues/27682
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@Bam4d
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27548/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/27548/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27548",
"html_url": "https://github.com/huggingface/transformers/pull/27548",
"diff_url": "https://github.com/huggingface/transformers/pull/27548.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27548.patch",
"merged_at": 1701076722000
} |
https://api.github.com/repos/huggingface/transformers/issues/27547 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27547/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27547/comments | https://api.github.com/repos/huggingface/transformers/issues/27547/events | https://github.com/huggingface/transformers/pull/27547 | 1,997,663,495 | PR_kwDOCUB6oc5frGA- | 27,547 | CLVP Fixes | {
"login": "susnato",
"id": 56069179,
"node_id": "MDQ6VXNlcjU2MDY5MTc5",
"avatar_url": "https://avatars.githubusercontent.com/u/56069179?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/susnato",
"html_url": "https://github.com/susnato",
"followers_url": "https://api.github.com/users/susnato/followers",
"following_url": "https://api.github.com/users/susnato/following{/other_user}",
"gists_url": "https://api.github.com/users/susnato/gists{/gist_id}",
"starred_url": "https://api.github.com/users/susnato/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/susnato/subscriptions",
"organizations_url": "https://api.github.com/users/susnato/orgs",
"repos_url": "https://api.github.com/users/susnato/repos",
"events_url": "https://api.github.com/users/susnato/events{/privacy}",
"received_events_url": "https://api.github.com/users/susnato/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @amyeroberts, @ArthurZucker, I have worked on the comments and added the logic to add `bos` and `eos` tokens according to this [comment](https://github.com/huggingface/transformers/pull/27547#discussion_r1399547566). \r\n\r\nPlease let me know if you are Ok with this change.",
"Hi @ArthurZucker, I have pushed the changes. ",
"Thanks you again for going the extra mile and fixing this! π₯ ",
"Hey, it's always a pleasure to contribute! :raised_hands: ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27547). All of your documentation changes will be reflected on that endpoint."
] | 1,700 | 1,701 | 1,701 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR fixes a problem in the rotary embedding implementation, adds an argument to `ClvpModelForConditionalGeneration.generate`, and adds the necessary fixes to the test logits.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
@ArthurZucker, @amyeroberts
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27547/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27547/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27547",
"html_url": "https://github.com/huggingface/transformers/pull/27547",
"diff_url": "https://github.com/huggingface/transformers/pull/27547.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27547.patch",
"merged_at": 1701189602000
} |
https://api.github.com/repos/huggingface/transformers/issues/27546 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27546/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27546/comments | https://api.github.com/repos/huggingface/transformers/issues/27546/events | https://github.com/huggingface/transformers/issues/27546 | 1,997,647,246 | I_kwDOCUB6oc53Ea2O | 27,546 | Error when passing past_key_values back to generate() | {
"login": "dto",
"id": 22819,
"node_id": "MDQ6VXNlcjIyODE5",
"avatar_url": "https://avatars.githubusercontent.com/u/22819?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dto",
"html_url": "https://github.com/dto",
"followers_url": "https://api.github.com/users/dto/followers",
"following_url": "https://api.github.com/users/dto/following{/other_user}",
"gists_url": "https://api.github.com/users/dto/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dto/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dto/subscriptions",
"organizations_url": "https://api.github.com/users/dto/orgs",
"repos_url": "https://api.github.com/users/dto/repos",
"events_url": "https://api.github.com/users/dto/events{/privacy}",
"received_events_url": "https://api.github.com/users/dto/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @gante ",
"Hi @dto π Thank you for raising this issue!\r\n\r\n`generate` expects the full input prompt to be passed, even if `past_key_values` is passed too. In this case, the input prompt is the full multi-round conversation. In your script, if you replace the `input_ids` input by `torch.cat((marvin_response.sequences, marvin_input_ids.to('cuda')), dim=-1)`, it will generate something along these lines:\r\n\r\n```\r\n<s><s> [INST] (+ 2 3) [/INST] The result of the expression `(+ 2 3)` is 5.</s><s> [INST] Please introduce yourself. [/INST] Hello! I'm an AI language model. I'm here to help you with any questions or tasks you might have. How may I assist you today?</s>\r\n```\r\n\r\n___________________________________________________\r\n\r\nThe fact that you need the full multi-round conversation may not be obvious, so I'm taking this issue as an opportunity to write documentation for this feature and clear things out :)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,700 | 1,703 | 1,703 | NONE | null | ### System Info
- `transformers` version: 4.36.0.dev0
- Platform: Linux-5.15.0-88-generic-x86_64-with-glibc2.35
- Python version: 3.10.13
- Huggingface_hub version: 0.19.3
- Safetensors version: 0.4.0
- Accelerate version: 0.24.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0+rocm5.6 (True)
- Tensorflow version (GPU?): 2.14.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
The attached Python test script attempts to retrieve `past_key_values` from the output of `generate()` and pass the value back to `generate()` to continue a chat-style conversation. However, Transformers rejects the value and raises `RuntimeError: The size of tensor a (12) must match the size of tensor b (31) at non-singleton dimension 3` when it is passed back verbatim. Please see the attached Python code and traceback output.
[test.py.txt](https://github.com/huggingface/transformers/files/13382751/test.py.txt)
[traceback.txt](https://github.com/huggingface/transformers/files/13382752/traceback.txt)
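For readers without the attachments, here is a minimal sketch of the pattern the script attempts (this is not the attached script — the model name, prompts, and variable names are placeholders):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.1"  # placeholder model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Round 1: keep the cache returned by generate()
ids_1 = tokenizer("[INST] (+ 2 3) [/INST]", return_tensors="pt").input_ids.to(model.device)
out_1 = model.generate(ids_1, max_new_tokens=32, return_dict_in_generate=True)

# Round 2: pass only the new turn plus the previous cache back in --
# this is where the "size of tensor a ... must match" RuntimeError is raised
ids_2 = tokenizer("[INST] Please introduce yourself. [/INST]", return_tensors="pt").input_ids.to(model.device)
model.generate(ids_2, past_key_values=out_1.past_key_values, max_new_tokens=32)
```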
### Expected behavior
I would expect that passing `past_key_values` back to `generate()` allows the conversation to continue. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27546/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27546/timeline | completed | null | null |