url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | body | reactions | timeline_url | state_reason | draft | pull_request |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/27445 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27445/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27445/comments | https://api.github.com/repos/huggingface/transformers/issues/27445/events | https://github.com/huggingface/transformers/issues/27445 | 1,988,561,202 | I_kwDOCUB6oc52hwky | 27,445 | Whisper model max_length doesn't work as expected | {
"login": "RohitMidha23",
"id": 38888530,
"node_id": "MDQ6VXNlcjM4ODg4NTMw",
"avatar_url": "https://avatars.githubusercontent.com/u/38888530?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RohitMidha23",
"html_url": "https://github.com/RohitMidha23",
"followers_url": "https://api.github.com/users/RohitMidha23/followers",
"following_url": "https://api.github.com/users/RohitMidha23/following{/other_user}",
"gists_url": "https://api.github.com/users/RohitMidha23/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RohitMidha23/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RohitMidha23/subscriptions",
"organizations_url": "https://api.github.com/users/RohitMidha23/orgs",
"repos_url": "https://api.github.com/users/RohitMidha23/repos",
"events_url": "https://api.github.com/users/RohitMidha23/events{/privacy}",
"received_events_url": "https://api.github.com/users/RohitMidha23/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@sanchit-gandhi @sgugger @ArthurZucker @patrickvonplaten any update on this? ",
"@RohitMidha23, please provide a reproducible code snippet so that we can reproduce this bug? ",
"Hey @RohitMidha23 - the `max_length` attribute is not stored in the Whisper config on the HF Hub: https://huggingface.co/openai/whisper-large-v3/blob/9ee8ba8d7e3eb15fafd5be628011c3399f85a2de/config.json#L36\r\n\r\nThat means when we try to access the `max_length`, we get the *default* max length of 20 tokens: https://github.com/huggingface/transformers/blob/ee292615559834ae2ba5b3aae3abe3f54bc81ac2/src/transformers/configuration_utils.py#L287\r\n\r\nThe correct max length is 448 tokens, which I've opened a PR to update: https://huggingface.co/openai/whisper-large-v3/discussions/38\r\n\r\nNote that the current recommended method for storing/accessing generation related parameters is in the generation config. Here, you'll find that the max length is indeed set correctly:\r\n\r\n```python\r\nfrom transformers import WhisperForConditionalGeneration\r\n\r\nmodel = WhisperForConditionalGeneration.from_pretrained(\"openai/whisper-large-v3\")\r\nprint(model.generation_config.max_length)\r\n```\r\n**Print Output:**\r\n```\r\n448\r\n```\r\n\r\n=> going forwards, it is recommended that you prioritise the values in the generation config over the config for generation parameters. The PR I've opened to update the config is purely for backwards compatibility.\r\n\r\nNote that you cannot train Whisper on sequence longer than 448 tokens. This is because Whisper has a maximum of 448 positional embeddings in the decoder, and so a hard-coded max length of 448. You should filter any tokens longer than this in your training script (_c.f._ https://discuss.huggingface.co/t/open-to-the-community-whisper-fine-tuning-event/26681/21?u=sanchit-gandhi and https://github.com/huggingface/distil-whisper/blob/914dcdf3919552d5a3826a9d5db99b059ddcc16e/training/run_distillation.py#L1189)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,699 | 1,703 | 1,703 | NONE | null | ### System Info
- `transformers` version: 4.36.0.dev0
- Platform: Linux-5.15.120+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.4.0
- Accelerate version: 0.24.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0+cu118 (True)
- Tensorflow version (GPU?): 2.14.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.7.5 (gpu)
- Jax version: 0.4.20
- JaxLib version: 0.4.20
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@sanchit-gandhi
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
from transformers import WhisperForConditionalGeneration
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v3")
max_label_length = model.config.max_length
print(max_label_length)
```
Output: 20
The same code for `openai/whisper-small` returns 448.
As suggested [here](https://discuss.huggingface.co/t/open-to-the-community-whisper-fine-tuning-event/26681/21) by you, we should ignore all the samples where length is >= max_length.
This is not viable in the case of `openai/whisper-large-v3`, as a lot of transcripts are going to be > 20 tokens.
[Here](https://github.com/huggingface/datasets/issues/5391#issuecomment-1372180962) you've mentioned that we can change the max_length of the model.
I've tried that as well.
```
model.generation_config.max_length = 1024
```
The issue comes when we have transcripts that are longer than 448 tokens. We get an error that says:
```
The size of tensor a (921) must match the size of tensor b (448) at non-singleton dimension 1
```
**NOTE** The rest of the code follows from the Whisper Fine Tuning Blog post.
### Expected behavior
1. Ignoring the transcripts > 448 tokens seems like a hacky solution.
2. Setting the `max_length` of the model's `config` *and* of the `generation_config` doesn't actually seem to change anything while training.
Is there a workaround to this? What are we to do with transcripts that are longer than 448 tokens?
If this is not the expected behaviour and I'm missing something, please let me know.
Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27445/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27445/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27444 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27444/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27444/comments | https://api.github.com/repos/huggingface/transformers/issues/27444/events | https://github.com/huggingface/transformers/pull/27444 | 1,988,326,896 | PR_kwDOCUB6oc5fLdah | 27,444 | WIP - Add Flash Attention CLIP | {
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | open | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27444). All of your documentation changes will be reflected on that endpoint."
] | 1,699 | 1,702 | null | COLLABORATOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27444/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27444/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27444",
"html_url": "https://github.com/huggingface/transformers/pull/27444",
"diff_url": "https://github.com/huggingface/transformers/pull/27444.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27444.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27443 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27443/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27443/comments | https://api.github.com/repos/huggingface/transformers/issues/27443/events | https://github.com/huggingface/transformers/pull/27443 | 1,988,079,492 | PR_kwDOCUB6oc5fKmDQ | 27,443 | Update and reorder docs for chat templates | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @NielsRogge and @philschmid who gave me the idea for this!",
"_The documentation is not available anymore as the PR was closed or merged._",
"Pinging @NielsRogge and @philschmid if you have any comments - I'll merge tomorrow if not!"
] | 1,699 | 1,699 | 1,699 | MEMBER | null | Based on feedback, some bits of chat templates were unclear even after reading the docs (particularly around using chat templates for training). I've added a section on that, and also reorganized the entire doc to do the following:
- Stop using gated models (so the doctests actually work)
- Push advanced topics (writing chat templates and pushing them to the Hub) to the end, and put the most widely useful information (using chat templates in inference and training) at the start.
- Cut down on the far-too-verbose opening, and get straight to more concrete example code. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27443/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27443/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27443",
"html_url": "https://github.com/huggingface/transformers/pull/27443",
"diff_url": "https://github.com/huggingface/transformers/pull/27443.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27443.patch",
"merged_at": 1699986373000
} |
https://api.github.com/repos/huggingface/transformers/issues/27442 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27442/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27442/comments | https://api.github.com/repos/huggingface/transformers/issues/27442/events | https://github.com/huggingface/transformers/pull/27442 | 1,988,077,860 | PR_kwDOCUB6oc5fKls1 | 27,442 | [Whisper] Fix pipeline test | {
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,699 | 1,699 | 1,699 | CONTRIBUTOR | null | # What does this PR do?
Fixes a typo in the Whisper pipeline test that was causing a failure on the daily CI (cc @ydshieh): https://github.com/huggingface/transformers/actions/runs/6792980649/job/18467224285
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27442/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27442/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27442",
"html_url": "https://github.com/huggingface/transformers/pull/27442",
"diff_url": "https://github.com/huggingface/transformers/pull/27442.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27442.patch",
"merged_at": 1699960706000
} |
https://api.github.com/repos/huggingface/transformers/issues/27441 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27441/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27441/comments | https://api.github.com/repos/huggingface/transformers/issues/27441/events | https://github.com/huggingface/transformers/issues/27441 | 1,988,075,173 | I_kwDOCUB6oc52f56l | 27,441 | Add Flash Attention 2.0 for T5 Family | {
"login": "jkswin",
"id": 86236378,
"node_id": "MDQ6VXNlcjg2MjM2Mzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/86236378?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jkswin",
"html_url": "https://github.com/jkswin",
"followers_url": "https://api.github.com/users/jkswin/followers",
"following_url": "https://api.github.com/users/jkswin/following{/other_user}",
"gists_url": "https://api.github.com/users/jkswin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jkswin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jkswin/subscriptions",
"organizations_url": "https://api.github.com/users/jkswin/orgs",
"repos_url": "https://api.github.com/users/jkswin/repos",
"events_url": "https://api.github.com/users/jkswin/events{/privacy}",
"received_events_url": "https://api.github.com/users/jkswin/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
},
{
"id": 6202871275,
"node_id": "LA_kwDOCUB6oc8AAAABcbhN6w",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Flash%20Attention",
"name": "Flash Attention",
"color": "201FF8",
"default": false,
"description": ""
}
] | open | false | null | [] | [
"Is this issue open for contribution?\r\n",
"The main Flash Attention 2 discussion tracking issue and discussion seems to be https://github.com/huggingface/transformers/issues/26350"
] | 1,699 | 1,708 | null | NONE | null | Encountered the following when trying to incorporate Flash attention into a previously devved byt5-small finetuning script.
Code to produce:
```
from transformers import T5ForConditionalGeneration, AutoTokenizer, Trainer, TrainingArguments, DataCollatorForSeq2Seq
model_path = "google/byt5-small"
model = T5ForConditionalGeneration.from_pretrained(model_path,
use_flash_attention_2=True,
)
```
Error:
```
ValueError: The current architecture does not support Flash Attention 2.0. Please open an issue on GitHub to request support for this architecture: https://github.com/huggingface/transformers/issues/new
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27441/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27441/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27440 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27440/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27440/comments | https://api.github.com/repos/huggingface/transformers/issues/27440/events | https://github.com/huggingface/transformers/pull/27440 | 1,988,069,037 | PR_kwDOCUB6oc5fKjvY | 27,440 | [MusicGen] Fix audio channel attribute | {
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27440). All of your documentation changes will be reflected on that endpoint."
] | 1,699 | 1,701 | 1,701 | CONTRIBUTOR | null | # What does this PR do?
The `audio_channels` attribute is stored in the decoder sub-model config, not in the overall top-level config.
The logits test for MusicGen mono previously failed on the daily CI tests (cc @ydshieh): https://github.com/huggingface/transformers/actions/runs/6806630933/job/18508277638
It passes when we fix this bug. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27440/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27440/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27440",
"html_url": "https://github.com/huggingface/transformers/pull/27440",
"diff_url": "https://github.com/huggingface/transformers/pull/27440.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27440.patch",
"merged_at": 1701450603000
} |
https://api.github.com/repos/huggingface/transformers/issues/27439 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27439/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27439/comments | https://api.github.com/repos/huggingface/transformers/issues/27439/events | https://github.com/huggingface/transformers/issues/27439 | 1,988,028,614 | I_kwDOCUB6oc52fujG | 27,439 | Trainer misshaping input_ids | {
"login": "lhallee",
"id": 72926928,
"node_id": "MDQ6VXNlcjcyOTI2OTI4",
"avatar_url": "https://avatars.githubusercontent.com/u/72926928?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhallee",
"html_url": "https://github.com/lhallee",
"followers_url": "https://api.github.com/users/lhallee/followers",
"following_url": "https://api.github.com/users/lhallee/following{/other_user}",
"gists_url": "https://api.github.com/users/lhallee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhallee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhallee/subscriptions",
"organizations_url": "https://api.github.com/users/lhallee/orgs",
"repos_url": "https://api.github.com/users/lhallee/repos",
"events_url": "https://api.github.com/users/lhallee/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhallee/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The updated example gives the same issue\r\nhttps://huggingface.co/docs/transformers/tasks/sequence_classification\r\n\r\n```\r\nclass ClassificationHead(nn.Module):\r\n def __init__(self, config):\r\n super().__init__()\r\n self.dense = nn.Linear(config.hidden_size, config.hidden_size)\r\n self.dropout = nn.Dropout(config.hidden_dropout_prob)\r\n self.out_proj = nn.Linear(config.hidden_size, config.num_labels)\r\n\r\n def forward(self, features, **kwargs):\r\n x = features[:, 0, :]\r\n x = self.dropout(x)\r\n x = self.dense(x)\r\n x = torch.tanh(x)\r\n x = self.dropout(x)\r\n x = self.out_proj(x)\r\n return x\r\n\r\nclass SequenceClassifier(nn.Module):\r\n def __init__(self, model_path, num_labels, weight_path=None, class_type='multilabel'):\r\n super(SequenceClassifier, self).__init__()\r\n self.config = AutoConfig.from_pretrained(model_path)\r\n self.config.num_labels = num_labels\r\n self.config.output_hidden_states = True\r\n self.plm = AutoModelForMaskedLM.from_pretrained(model_path, config=self.config)\r\n self.classifier = ClassificationHead(self.config)\r\n if weight_path is not None:\r\n self.plm.load_state_dict(torch.load(weight_path))\r\n if class_type == 'singlelabel':\r\n self.loss_fct = nn.CrossEntropyLoss()\r\n elif class_type == 'multilabel':\r\n self.loss_fct = nn.BCEWithLogitsLoss()\r\n self.num_labels = num_labels\r\n\r\n def forward(self, input_ids, attention_mask, labels=None):\r\n hidden = self.plm(input_ids=input_ids, attention_mask=attention_mask, labels=labels).hidden_states[-1]\r\n logits = self.classifier(hidden[:, 0, :])\r\n if labels is not None:\r\n loss = self.loss_fct(logits.view(-1, self.num_labels), labels.view(-1, self.num_labels))\r\n return SequenceClassifierOutput(\r\n loss=loss,\r\n logits=logits,\r\n hidden_states=None,\r\n attentions=None\r\n )\r\n\r\ndef preprocess_function(examples):\r\n return tokenizer(examples['sequences'], truncation=True, max_length=512)\r\n\r\ntrain_dataset = Dataset.from_dict({'sequences': train_seqs, 'labels': train_labels})\r\nvalid_dataset = Dataset.from_dict({'sequences': valid_seqs, 'labels': valid_labels})\r\ntest_dataset = Dataset.from_dict({'sequences': test_seqs, 'labels': test_labels})\r\n\r\ndataset_dict = DatasetDict({\r\n 'train': train_dataset,\r\n 'validation': valid_dataset,\r\n 'test': test_dataset\r\n})\r\n\r\ntokenized_data = dataset_dict.map(preprocess_function, batched=True)\r\ndata_collator = DataCollatorWithPadding(tokenizer=tokenizer)\r\nmodel = SequenceClassifier(model_path, len(train_labels[0]), weight_path, 'multilabel')\r\n\r\ntraining_args = TrainingArguments(\r\n output_dir=f'./results_{experiment}',\r\n num_train_epochs=100,\r\n per_device_train_batch_size=2,\r\n per_device_eval_batch_size=2,\r\n warmup_steps=1000,\r\n weight_decay=0.01,\r\n do_train=True,\r\n do_eval=True,\r\n logging_dir=f'./logs_{experiment}',\r\n logging_steps=100,\r\n learning_rate=2e-5,\r\n adam_beta1 = 0.9,\r\n adam_beta2 = 0.98,\r\n evaluation_strategy='epoch',\r\n gradient_accumulation_steps=16,\r\n fp16=False,\r\n fp16_opt_level='02',\r\n run_name=experiment,\r\n seed=42,\r\n load_best_model_at_end=True,\r\n metric_for_best_model='fmax',\r\n greater_is_better=True,\r\n save_strategy='epoch'\r\n)\r\n\r\ntrainer = Trainer(\r\n model=model,\r\n args=training_args,\r\n train_dataset=tokenized_data['train'],\r\n eval_dataset=tokenized_data['validation'],\r\n data_collator=data_collator,\r\n compute_metrics=compute_metrics,\r\n 
callbacks=[EarlyStoppingCallback(early_stopping_patience=5)]\r\n)\r\n\r\ntrainer.train()\r\npredictions, labels, metrics_output = trainer.predict(test_dataset)\r\nmetrics_output\r\n```\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n[<ipython-input-18-a560f54159b5>](https://localhost:8080/#) in <cell line: 1>()\r\n----> 1 trainer.train()\r\n 2 predictions, labels, metrics_output = trainer.predict(test_dataset)\r\n 3 metrics_output\r\n\r\n13 frames\r\n[/usr/local/lib/python3.10/dist-packages/transformers/trainer.py](https://localhost:8080/#) in train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)\r\n 1553 hf_hub_utils.enable_progress_bars()\r\n 1554 else:\r\n-> 1555 return inner_training_loop(\r\n 1556 args=args,\r\n 1557 resume_from_checkpoint=resume_from_checkpoint,\r\n\r\n[/usr/local/lib/python3.10/dist-packages/transformers/trainer.py](https://localhost:8080/#) in _inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval)\r\n 1858 \r\n 1859 with self.accelerator.accumulate(model):\r\n-> 1860 tr_loss_step = self.training_step(model, inputs)\r\n 1861 \r\n 1862 if (\r\n\r\n[/usr/local/lib/python3.10/dist-packages/transformers/trainer.py](https://localhost:8080/#) in training_step(self, model, inputs)\r\n 2723 \r\n 2724 with self.compute_loss_context_manager():\r\n-> 2725 loss = self.compute_loss(model, inputs)\r\n 2726 \r\n 2727 if self.args.n_gpu > 1:\r\n\r\n[/usr/local/lib/python3.10/dist-packages/transformers/trainer.py](https://localhost:8080/#) in compute_loss(self, model, inputs, return_outputs)\r\n 2746 else:\r\n 2747 labels = None\r\n-> 2748 outputs = model(**inputs)\r\n 2749 # Save past state if it exists\r\n 2750 # TODO: this needs to be fixed and made cleaner later.\r\n\r\n[/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _wrapped_call_impl(self, *args, **kwargs)\r\n 1516 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]\r\n 1517 else:\r\n-> 1518 return self._call_impl(*args, **kwargs)\r\n 1519 \r\n 1520 def _call_impl(self, *args, **kwargs):\r\n\r\n[/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _call_impl(self, *args, **kwargs)\r\n 1525 or _global_backward_pre_hooks or _global_backward_hooks\r\n 1526 or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1527 return forward_call(*args, **kwargs)\r\n 1528 \r\n 1529 try:\r\n\r\n[<ipython-input-13-cf6ab19ee4f9>](https://localhost:8080/#) in forward(self, input_ids, attention_mask, labels)\r\n 59 \r\n 60 def forward(self, input_ids, attention_mask, labels=None):\r\n---> 61 hidden = self.plm(input_ids=input_ids, attention_mask=attention_mask, labels=labels).hidden_states[-1]\r\n 62 logits = self.classifier(hidden[:, 0, :])\r\n 63 if labels is not None:\r\n\r\n[/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _wrapped_call_impl(self, *args, **kwargs)\r\n 1516 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]\r\n 1517 else:\r\n-> 1518 return self._call_impl(*args, **kwargs)\r\n 1519 \r\n 1520 def _call_impl(self, *args, **kwargs):\r\n\r\n[/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _call_impl(self, *args, **kwargs)\r\n 1525 or _global_backward_pre_hooks or _global_backward_hooks\r\n 1526 or _global_forward_hooks or 
_global_forward_pre_hooks):\r\n-> 1527 return forward_call(*args, **kwargs)\r\n 1528 \r\n 1529 try:\r\n\r\n[/usr/local/lib/python3.10/dist-packages/transformers/models/esm/modeling_esm.py](https://localhost:8080/#) in forward(self, input_ids, attention_mask, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask, labels, output_attentions, output_hidden_states, return_dict)\r\n 1026 \r\n 1027 labels = labels.to(prediction_scores.device)\r\n-> 1028 masked_lm_loss = loss_fct(prediction_scores.view(-1, self.config.vocab_size), labels.view(-1))\r\n 1029 \r\n 1030 if not return_dict:\r\n\r\n[/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _wrapped_call_impl(self, *args, **kwargs)\r\n 1516 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]\r\n 1517 else:\r\n-> 1518 return self._call_impl(*args, **kwargs)\r\n 1519 \r\n 1520 def _call_impl(self, *args, **kwargs):\r\n\r\n[/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _call_impl(self, *args, **kwargs)\r\n 1525 or _global_backward_pre_hooks or _global_backward_hooks\r\n 1526 or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1527 return forward_call(*args, **kwargs)\r\n 1528 \r\n 1529 try:\r\n\r\n[/usr/local/lib/python3.10/dist-packages/torch/nn/modules/loss.py](https://localhost:8080/#) in forward(self, input, target)\r\n 1177 \r\n 1178 def forward(self, input: Tensor, target: Tensor) -> Tensor:\r\n-> 1179 return F.cross_entropy(input, target, weight=self.weight,\r\n 1180 ignore_index=self.ignore_index, reduction=self.reduction,\r\n 1181 label_smoothing=self.label_smoothing)\r\n\r\n[/usr/local/lib/python3.10/dist-packages/torch/nn/functional.py](https://localhost:8080/#) in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction, label_smoothing)\r\n 3051 if size_average is not None or reduce is not None:\r\n 3052 reduction = _Reduction.legacy_get_string(size_average, reduce)\r\n-> 3053 return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing)\r\n 3054 \r\n 3055 \r\n\r\nValueError: Expected input batch_size (396) to match target batch_size (1170).\r\n```\r\n",
"The problem was that I was accidentily sending the classifier labels to the model with a language modeling head. Not sure why this misformatted the batch size, but after removing there is no problem.\r\n\r\nFrom this\r\n`hidden = self.plm(input_ids=input_ids, attention_mask=attention_mask, labels=labels).hidden_states[-1]`\r\nto\r\n`hidden = self.plm(input_ids=input_ids, attention_mask=attention_mask).hidden_states[-1]`"
] | 1,699 | 1,699 | 1,699 | NONE | null | ### System Info
2023-11-10 17:12:03.551843: E tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:9342] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2023-11-10 17:12:03.551902: E tensorflow/compiler/xla/stream_executor/cuda/cuda_fft.cc:609] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2023-11-10 17:12:03.551946: E tensorflow/compiler/xla/stream_executor/cuda/cuda_blas.cc:1518] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2023-11-10 17:12:04.651349: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
WARNING:tensorflow:From /usr/local/lib/python3.10/dist-packages/transformers/commands/env.py:100: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.config.list_physical_devices('GPU')` instead.
2023-11-10 17:12:07.781203: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:47] Overriding orig_value setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0.
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.35.0
- Platform: Linux-5.15.120+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.4.0
- Accelerate version: 0.24.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0+cu118 (True)
- Tensorflow version (GPU?): 2.14.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.7.5 (gpu)
- Jax version: 0.4.20
- JaxLib version: 0.4.20
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@muellerzr @pacman100
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
https://huggingface.co/transformers/v3.1.0/custom_datasets.html
The dataset as portrayed leads to flattened inputs that are 1-D, of shape (batch_size * seq_len).
Returning tensors with `return_tensors='pt'` also leads to shape (batch_size, 1, seq_len), which is also wrong.
```
class FineTuneDatasetIDS(Dataset):
def __init__(self, seqs, labels, tokenizer, max_length=512):
self.seqs = seqs
self.labels = labels
self.tokenizer = tokenizer
self.max_length = max_length
def __getitem__(self, idx):
seq = self.seqs[idx]
labels = self.labels[idx]
encodings = self.tokenizer(seq,
add_special_tokens=True,
padding='max_length',
max_length=self.max_length,
truncation=True)
item = {key: torch.tensor(val) for key, val in encodings.items()}
item['labels'] = torch.tensor(labels, dtype=torch.float)
return item
def __len__(self):
return len(self.labels)
training_args = TrainingArguments(
output_dir=f'./results_{experiment}',
num_train_epochs=100,
per_device_train_batch_size=2,
per_device_eval_batch_size=2,
warmup_steps=1000,
weight_decay=0.01,
do_train=True,
do_eval=True,
logging_dir=f'./logs_{experiment}',
logging_steps=100,
learning_rate=2e-5,
adam_beta1 = 0.9,
adam_beta2 = 0.98,
evaluation_strategy='epoch',
gradient_accumulation_steps=16,
fp16=False,
fp16_opt_level='02',
run_name=experiment,
seed=42,
load_best_model_at_end=True,
metric_for_best_model='fmax',
greater_is_better=True,
save_strategy='epoch'
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_dataset,
eval_dataset=valid_dataset,
compute_metrics=compute_metrics,
callbacks=[EarlyStoppingCallback(early_stopping_patience=5)]
)
trainer.train()
predictions, labels, metrics_output = trainer.predict(test_dataset)
metrics_output
```
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
[<ipython-input-19-a560f54159b5>](https://localhost:8080/#) in <cell line: 1>()
----> 1 trainer.train()
2 predictions, labels, metrics_output = trainer.predict(test_dataset)
3 metrics_output
13 frames
[/usr/local/lib/python3.10/dist-packages/transformers/trainer.py](https://localhost:8080/#) in train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
1553 hf_hub_utils.enable_progress_bars()
1554 else:
-> 1555 return inner_training_loop(
1556 args=args,
1557 resume_from_checkpoint=resume_from_checkpoint,
[/usr/local/lib/python3.10/dist-packages/transformers/trainer.py](https://localhost:8080/#) in _inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval)
1858
1859 with self.accelerator.accumulate(model):
-> 1860 tr_loss_step = self.training_step(model, inputs)
1861
1862 if (
[/usr/local/lib/python3.10/dist-packages/transformers/trainer.py](https://localhost:8080/#) in training_step(self, model, inputs)
2723
2724 with self.compute_loss_context_manager():
-> 2725 loss = self.compute_loss(model, inputs)
2726
2727 if self.args.n_gpu > 1:
[/usr/local/lib/python3.10/dist-packages/transformers/trainer.py](https://localhost:8080/#) in compute_loss(self, model, inputs, return_outputs)
2746 else:
2747 labels = None
-> 2748 outputs = model(**inputs)
2749 # Save past state if it exists
2750 # TODO: this needs to be fixed and made cleaner later.
[/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _wrapped_call_impl(self, *args, **kwargs)
1516 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1517 else:
-> 1518 return self._call_impl(*args, **kwargs)
1519
1520 def _call_impl(self, *args, **kwargs):
[/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _call_impl(self, *args, **kwargs)
1525 or _global_backward_pre_hooks or _global_backward_hooks
1526 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1527 return forward_call(*args, **kwargs)
1528
1529 try:
[<ipython-input-7-6912ec56d6d8>](https://localhost:8080/#) in forward(self, input_ids, attention_mask, labels)
57
58 def forward(self, input_ids, attention_mask, labels=None):
---> 59 hidden = self.plm(input_ids=input_ids, attention_mask=attention_mask, labels=labels).hidden_states[-1]
60 logits = self.classifier(hidden[:, 0, :])
61 if labels is not None:
[/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _wrapped_call_impl(self, *args, **kwargs)
1516 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1517 else:
-> 1518 return self._call_impl(*args, **kwargs)
1519
1520 def _call_impl(self, *args, **kwargs):
[/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _call_impl(self, *args, **kwargs)
1525 or _global_backward_pre_hooks or _global_backward_hooks
1526 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1527 return forward_call(*args, **kwargs)
1528
1529 try:
[/usr/local/lib/python3.10/dist-packages/transformers/models/esm/modeling_esm.py](https://localhost:8080/#) in forward(self, input_ids, attention_mask, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask, labels, output_attentions, output_hidden_states, return_dict)
1026
1027 labels = labels.to(prediction_scores.device)
-> 1028 masked_lm_loss = loss_fct(prediction_scores.view(-1, self.config.vocab_size), labels.view(-1))
1029
1030 if not return_dict:
[/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _wrapped_call_impl(self, *args, **kwargs)
1516 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1517 else:
-> 1518 return self._call_impl(*args, **kwargs)
1519
1520 def _call_impl(self, *args, **kwargs):
[/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _call_impl(self, *args, **kwargs)
1525 or _global_backward_pre_hooks or _global_backward_hooks
1526 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1527 return forward_call(*args, **kwargs)
1528
1529 try:
[/usr/local/lib/python3.10/dist-packages/torch/nn/modules/loss.py](https://localhost:8080/#) in forward(self, input, target)
1177
1178 def forward(self, input: Tensor, target: Tensor) -> Tensor:
-> 1179 return F.cross_entropy(input, target, weight=self.weight,
1180 ignore_index=self.ignore_index, reduction=self.reduction,
1181 label_smoothing=self.label_smoothing)
[/usr/local/lib/python3.10/dist-packages/torch/nn/functional.py](https://localhost:8080/#) in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction, label_smoothing)
3051 if size_average is not None or reduce is not None:
3052 reduction = _Reduction.legacy_get_string(size_average, reduce)
-> 3053 return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing)
3054
3055
ValueError: Expected input batch_size (1024) to match target batch_size (1170).
```
### Expected behavior
The Trainer builds a dataloader from the dataset that returns entries of shape (batch_size, seq_len). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27439/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27439/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27438 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27438/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27438/comments | https://api.github.com/repos/huggingface/transformers/issues/27438/events | https://github.com/huggingface/transformers/pull/27438 | 1,987,942,135 | PR_kwDOCUB6oc5fKHnS | 27,438 | [ `PretrainedConfig`] Improve messaging | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Sure I'll add a test! 😈 "
] | 1,699 | 1,700 | 1,700 | COLLABORATOR | null | # What does this PR do?
Fixes #26720 by improving the error message printed out when someone tries:
```python
>>> from transformers import AutoModel
>>> AutoModel.from_pretrained(".gpt2")
...
OSError: Incorrect path_or_model_id: '.gpt2'. Please provide either the path to a local folder or the repo_id of a model on the Hub. Repo id must use alphanumeric chars or '-', '_', '.', '--' and '..' are forbidden, '-' and '.' cannot start or end the name, max length is 96: '.gpt2'.
```
And:
```python
>>> from transformers import AutoConfig
>>> AutoConfig.from_pretrained(".gpt2")
...
OSError: Incorrect path_or_model_id: '.gpt2'. Please provide either the path to a local folder or the repo_id of a model on the Hub. Repo id must use alphanumeric chars or '-', '_', '.', '--' and '..' are forbidden, '-' and '.' cannot start or end the name, max length is 96: '.gpt2'.
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27438/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27438/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27438",
"html_url": "https://github.com/huggingface/transformers/pull/27438",
"diff_url": "https://github.com/huggingface/transformers/pull/27438.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27438.patch",
"merged_at": 1700053840000
} |
https://api.github.com/repos/huggingface/transformers/issues/27437 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27437/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27437/comments | https://api.github.com/repos/huggingface/transformers/issues/27437/events | https://github.com/huggingface/transformers/pull/27437 | 1,987,930,529 | PR_kwDOCUB6oc5fKFB5 | 27,437 | Make `examples_torch_job` faster | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,699 | 1,699 | 1,699 | COLLABORATOR | null | # What does this PR do?
This job sometimes having some test timeout (> 120s.). Even if job passes, the log looks like
```
115.55s call examples/pytorch/test_pytorch_examples.py::ExamplesTests::test_run_semantic_segmentation
66.07s call examples/pytorch/test_pytorch_examples.py::ExamplesTests::test_run_squad
48.24s call examples/pytorch/test_pytorch_examples.py::ExamplesTests::test_run_speech_recognition_seq2seq
```
It turns out that setting `OMP_NUM_THREADS=1` has a huge impact on this job.
This PR sets `OMP_NUM_THREADS=8` to make it run faster. It now looks like
```
25.78s call examples/pytorch/test_pytorch_examples.py::ExamplesTests::test_run_semantic_segmentation
22.82s call examples/pytorch/test_accelerate_examples.py::ExamplesTestsNoTrainer::test_run_swag_no_trainer
20.69s call examples/pytorch/test_pytorch_examples.py::ExamplesTests::test_run_speech_recognition_seq2seq
```
Note that setting `OMP_NUM_THREADS>1` with `pytest -n` where `n > 1` is going to break things (timeout, blocked etc.).
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27437/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27437/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27437",
"html_url": "https://github.com/huggingface/transformers/pull/27437",
"diff_url": "https://github.com/huggingface/transformers/pull/27437.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27437.patch",
"merged_at": 1699643105000
} |
https://api.github.com/repos/huggingface/transformers/issues/27436 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27436/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27436/comments | https://api.github.com/repos/huggingface/transformers/issues/27436/events | https://github.com/huggingface/transformers/pull/27436 | 1,987,855,309 | PR_kwDOCUB6oc5fJ0aB | 27,436 | Heapify `BeamHypotheses` | {
"login": "Wovchena",
"id": 10669582,
"node_id": "MDQ6VXNlcjEwNjY5NTgy",
"avatar_url": "https://avatars.githubusercontent.com/u/10669582?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Wovchena",
"html_url": "https://github.com/Wovchena",
"followers_url": "https://api.github.com/users/Wovchena/followers",
"following_url": "https://api.github.com/users/Wovchena/following{/other_user}",
"gists_url": "https://api.github.com/users/Wovchena/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Wovchena/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Wovchena/subscriptions",
"organizations_url": "https://api.github.com/users/Wovchena/orgs",
"repos_url": "https://api.github.com/users/Wovchena/repos",
"events_url": "https://api.github.com/users/Wovchena/events{/privacy}",
"received_events_url": "https://api.github.com/users/Wovchena/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @gante ",
"Hey @Wovchena 👋 \r\n\r\nThank you for opening the PR! We are planning, however, to change the structure of beam search, for better support with `torch.compile` and vectorization. As such, we are avoiding merging changes related to beam-search",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,699 | 1,702 | 1,702 | NONE | null | # What does this PR do?
Heapify `BeamHypotheses`
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR. @gante | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27436/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27436/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27436",
"html_url": "https://github.com/huggingface/transformers/pull/27436",
"diff_url": "https://github.com/huggingface/transformers/pull/27436.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27436.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27435 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27435/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27435/comments | https://api.github.com/repos/huggingface/transformers/issues/27435/events | https://github.com/huggingface/transformers/pull/27435 | 1,987,745,567 | PR_kwDOCUB6oc5fJcQR | 27,435 | At most 2 GPUs for CI | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,699 | 1,699 | 1,699 | COLLABORATOR | null | # What does this PR do?
We are now switching our CI runners from GCP to AWS, and on AWS we can only configure single-GPU VMs or 4-GPU VMs (given all the conditions we impose).
However, our tests don't behave well in a 4-GPU environment: there are more than 50 additional failing tests on the report.
Since we have used 2 GPUs for multi-GPU testing for years, this PR makes the CI run with only 2 GPUs at the workflow level, even if the hardware has 4 GPUs.
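As a hedged illustration of the effect (not the actual workflow change in this PR): capping the devices a process can see makes a 4-GPU host behave like a 2-GPU one.

```python
import os

# Assumption for illustration: the variable is set before CUDA is initialised in the process.
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"

import torch

print(torch.cuda.device_count())  # reports 2 even on a 4-GPU machine
```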
(Whether we should also test with 4 GPUs is another question, but I am afraid we don't have enough bandwidth for that right now.) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27435/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27435/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27435",
"html_url": "https://github.com/huggingface/transformers/pull/27435",
"diff_url": "https://github.com/huggingface/transformers/pull/27435.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27435.patch",
"merged_at": 1699629546000
} |
https://api.github.com/repos/huggingface/transformers/issues/27434 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27434/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27434/comments | https://api.github.com/repos/huggingface/transformers/issues/27434/events | https://github.com/huggingface/transformers/pull/27434 | 1,987,718,720 | PR_kwDOCUB6oc5fJWUA | 27,434 | Fixes AutoModel find_adapter_config_file does not use revision #27429 | {
"login": "edbeeching",
"id": 7275864,
"node_id": "MDQ6VXNlcjcyNzU4NjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/7275864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/edbeeching",
"html_url": "https://github.com/edbeeching",
"followers_url": "https://api.github.com/users/edbeeching/followers",
"following_url": "https://api.github.com/users/edbeeching/following{/other_user}",
"gists_url": "https://api.github.com/users/edbeeching/gists{/gist_id}",
"starred_url": "https://api.github.com/users/edbeeching/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/edbeeching/subscriptions",
"organizations_url": "https://api.github.com/users/edbeeching/orgs",
"repos_url": "https://api.github.com/users/edbeeching/repos",
"events_url": "https://api.github.com/users/edbeeching/events{/privacy}",
"received_events_url": "https://api.github.com/users/edbeeching/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27434). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"hi @edbeeching do you want me to look at this PR?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,699 | 1,705 | 1,705 | CONTRIBUTOR | null |
Fixes #27429
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@younesbelkada
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27434/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27434/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27434",
"html_url": "https://github.com/huggingface/transformers/pull/27434",
"diff_url": "https://github.com/huggingface/transformers/pull/27434.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27434.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27433 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27433/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27433/comments | https://api.github.com/repos/huggingface/transformers/issues/27433/events | https://github.com/huggingface/transformers/pull/27433 | 1,987,600,381 | PR_kwDOCUB6oc5fI8WJ | 27,433 | Multi qformer batch correct | {
"login": "Patchwork53",
"id": 83033987,
"node_id": "MDQ6VXNlcjgzMDMzOTg3",
"avatar_url": "https://avatars.githubusercontent.com/u/83033987?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Patchwork53",
"html_url": "https://github.com/Patchwork53",
"followers_url": "https://api.github.com/users/Patchwork53/followers",
"following_url": "https://api.github.com/users/Patchwork53/following{/other_user}",
"gists_url": "https://api.github.com/users/Patchwork53/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Patchwork53/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Patchwork53/subscriptions",
"organizations_url": "https://api.github.com/users/Patchwork53/orgs",
"repos_url": "https://api.github.com/users/Patchwork53/repos",
"events_url": "https://api.github.com/users/Patchwork53/events{/privacy}",
"received_events_url": "https://api.github.com/users/Patchwork53/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,699 | 1,699 | 1,699 | NONE | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27433/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27433/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27433",
"html_url": "https://github.com/huggingface/transformers/pull/27433",
"diff_url": "https://github.com/huggingface/transformers/pull/27433.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27433.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27432 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27432/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27432/comments | https://api.github.com/repos/huggingface/transformers/issues/27432/events | https://github.com/huggingface/transformers/issues/27432 | 1,987,581,913 | I_kwDOCUB6oc52eBfZ | 27,432 | Fine tuning with examples/pytorch/language-modeling/run_clm.py on torch/XLA + FSDP produce abnormal models | {
"login": "totorochina",
"id": 11730127,
"node_id": "MDQ6VXNlcjExNzMwMTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/11730127?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/totorochina",
"html_url": "https://github.com/totorochina",
"followers_url": "https://api.github.com/users/totorochina/followers",
"following_url": "https://api.github.com/users/totorochina/following{/other_user}",
"gists_url": "https://api.github.com/users/totorochina/gists{/gist_id}",
"starred_url": "https://api.github.com/users/totorochina/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/totorochina/subscriptions",
"organizations_url": "https://api.github.com/users/totorochina/orgs",
"repos_url": "https://api.github.com/users/totorochina/repos",
"events_url": "https://api.github.com/users/totorochina/events{/privacy}",
"received_events_url": "https://api.github.com/users/totorochina/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,699 | 1,700 | 1,700 | NONE | null | ### System Info
- `transformers` version: 4.36.0.dev0
- Platform: Linux-5.13.0-1027-gcp-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.19.0
- Safetensors version: 0.4.0
- Accelerate version: 0.24.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0a0+gitcc01568 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no, use TPU with torch_xla
- Using distributed or parallel set-up in script?: yes, using flags for xla_fsdp
### Who can help?
@muellerzr @pacman100
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I was following these two blogs/docs:
https://pytorch.org/blog/large-scale-training-hugging-face/
https://huggingface.co/docs/transformers/main_classes/trainer#pytorchxla-fully-sharded-data-parallel
1. Create a v3-8 TPU vm on google cloud and login
```
export PROJECT=<project_name>
export REGION=<region>
export ZONE=<tpu_vm_instance_zone>
export VPC=<vpc_name>
export SUBNET=<vpc_subnet>
export TPUVM=<tpu_vm_instance_name>
export TYPE=v3-8
export IMAGE=tpu-vm-pt-2.0
gcloud compute tpus tpu-vm create ${TPUVM} \
--zone=${ZONE} \
--accelerator-type=${TYPE} \
--version=${IMAGE} \
--network=${VPC} \
--subnetwork="projects/${PROJECT}/regions/${REGION}/subnetworks/${SUBNET}" \
--internal-ips
gcloud alpha compute tpus tpu-vm ssh ${TPUVM} --zone=${ZONE} --tunnel-through-iap
```
2. Update with latest torch_xla nightly
```
sudo apt update -y && sudo apt upgrade -y
pip install https://storage.googleapis.com/tpu-pytorch/wheels/tpuvm/torch-nightly-cp38-cp38-linux_x86_64.whl
pip install https://storage.googleapis.com/tpu-pytorch/wheels/tpuvm/torch_xla-nightly-cp38-cp38-linux_x86_64.whl
pip install https://storage.googleapis.com/tpu-pytorch/wheels/tpuvm/torchvision-nightly-cp38-cp38-linux_x86_64.whl
```
3. Install the latest transformers. I tried the latest main and the v4.31-release branches, with accelerate==0.21.0 as well as the latest release (the versions used in the blog); the problem remained the same.
```
cd $HOME
# git clone -b v4.31-release https://github.com/huggingface/transformers.git
git clone https://github.com/huggingface/transformers.git
cd transformers
# For Python 3.8
pip install -e .
pip install datasets evaluate scikit-learn accelerate py7zr
```
4. Prepare llama2_fsdp_config.json and copy to home folder. Login with HF token.
```
# huggingface-cli login --token <YOUR_HF_TOKEN>
# llama2_fsdp_config.json
{
"fsdp_transformer_layer_cls_to_wrap": [
"LlamaDecoderLayer"
],
"xla": true,
"xla_fsdp_settings": {
"compute_dtype": "bfloat16",
"shard_param_on_dim_0": true,
"pin_layout_in_collective_ops": true
},
"xla_fsdp_grad_ckpt": true
}
```
5. Run run_clm.py with xla_spawn.py, setting the --model_name_or_path flag so that we fine-tune instead of training from scratch
```
export PJRT_DEVICE=TPU
nohup python3 -u examples/pytorch/xla_spawn.py --num_cores 8 examples/pytorch/language-modeling/run_clm.py \
--model_name_or_path "meta-llama/Llama-2-7b-hf" \
--num_train_epochs 3 \
--dataset_name wikitext \
--dataset_config_name wikitext-2-raw-v1 \
--per_device_train_batch_size 6 \
--per_device_eval_batch_size 6 \
--do_train \
--do_eval \
--output_dir /tmp/llama-2-7b-hf-ft-xla \
--overwrite_output_dir \
--cache_dir /tmp \
--block_size 2048 \
--optim adafactor \
--save_strategy no \
--logging_strategy no \
--gradient_checkpointing \
--fsdp "full_shard" \
--fsdp_config ~/llama2_fsdp_config.json > run.log 2>&1 &
```
6. On a CUDA device, load the fine-tuned model and run inference
```
from transformers import AutoTokenizer, LlamaTokenizer
import transformers
import torch
model = "~/llama-2-7b-hf-ft-xla"
tokenizer = AutoTokenizer.from_pretrained(model, use_auth_token=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.bfloat16,
device_map="auto",
)
sequences = pipeline(
['I liked "Breaking Bad" and "Band of Brothers". Do you have any recommendations of other shows I might like?\n'] * 1,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
max_length=200,
)
print(sequences)
```
Doing this raises a RuntimeError for the fine-tuned Llama2-7B:
```
RuntimeError: probability tensor contains either `inf`, `nan` or element < 0
```
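A hedged check I would run here, assuming the bad sampling comes from non-finite values in the saved weights (the path below is the training output directory from the command above):

```python
import torch
from transformers import AutoModelForCausalLM

m = AutoModelForCausalLM.from_pretrained("/tmp/llama-2-7b-hf-ft-xla", torch_dtype=torch.float32)
bad = [name for name, p in m.named_parameters() if not torch.isfinite(p).all()]
print(len(bad), "tensors contain inf/nan:", bad[:5])
```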
I also tried GPT2. With GPT2, the model can be loaded and used for inference; however, it produces garbage outputs like:
```
[{'generated_text': 'I liked "Breaking Bad" and "Band of Brothers". Do you have any recommendations of other shows I might like?\n contends Creator smiling reminiscentoffset prophets contends contends Sheffield contends wetlandslocked maximizing maximizing WIratorct continuity=- ...'}]
```
For both the fine-tuned Llama2-7B and GPT2, I get this kind of warning when instantiating transformers.pipeline:
```
Some weights of the model checkpoint at /home/hzchen/scripts/llm/gpt-ft-test were not used when initializing GPT2LMHeadModel: [<FSDP_LAYERS_OMITTED...>]
- This IS expected if you are initializing GPT2LMHeadModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing GPT2LMHeadModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of GPT2LMHeadModel were not initialized from the model checkpoint at /home/hzchen/scripts/llm/gpt-ft-test and are newly initialized: [<LAYERS_OMITTED...>]
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```
I am also aware that the fine-tuned output models have an abnormally small size, e.g. fine-tuned GPT2 is ~60 MB while the original is ~500 MB, and Llama2-7B is ~3.2 GB while the original is ~13 GB; fine-tuning on CUDA, by contrast, gives an output of 20+ GB.
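A rough sanity check for the size mismatch (a sketch; the model id and local path are the ones used in the commands above):

```python
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
tuned = AutoModelForCausalLM.from_pretrained("/tmp/llama-2-7b-hf-ft-xla")

def n_params(model):
    return sum(p.numel() for p in model.parameters())

# A large gap in parameter count (not just file size) suggests the saved weights are still sharded/flattened.
print(n_params(base), n_params(tuned))
```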
I also tried accelerate + FSDP on 8x L4 GPUs, and everything worked fine with the same configs, which made me believe the problem is in XLA+FSDP.
Below is how I ran successfully on CUDA devices,
```
# fsdp_config.yaml
compute_environment: LOCAL_MACHINE
debug: false
distributed_type: FSDP
downcast_bf16: 'no'
fsdp_config:
fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
fsdp_backward_prefetch_policy: NO_PREFETCH
fsdp_cpu_ram_efficient_loading: true
fsdp_forward_prefetch: false
fsdp_offload_params: false
fsdp_sharding_strategy: 1
fsdp_state_dict_type: FULL_STATE_DICT
fsdp_sync_module_states: true
fsdp_transformer_layer_cls_to_wrap: LlamaDecoderLayer
fsdp_use_orig_params: false
machine_rank: 0
main_training_function: main
mixed_precision: bf16
num_machines: 1
num_processes: 8
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```
```
nohup accelerate launch --config_file ~/fsdp_config.yaml examples/pytorch/language-modeling/run_clm.py \
--model_name_or_path "meta-llama/Llama-2-7b-hf" \
--num_train_epochs 3 \
--dataset_name wikitext \
--dataset_config_name wikitext-2-raw-v1 \
--per_device_train_batch_size 2 \
--per_device_eval_batch_size 2 \
--do_train \
--do_eval \
--output_dir /tmp/llama-2-7b-hf-ft-cuda \
--overwrite_output_dir \
--cache_dir /tmp \
--block_size 2048 \
--optim adafactor \
--save_strategy no \
--logging_strategy no \
--gradient_checkpointing > run.log 2>&1 &
```
### Expected behavior
The fine-tuned models produced with XLA+FSDP on TPU should be usable, just as they are with Accelerate+FSDP on GPUs. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27432/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27432/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27431 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27431/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27431/comments | https://api.github.com/repos/huggingface/transformers/issues/27431/events | https://github.com/huggingface/transformers/pull/27431 | 1,987,563,196 | PR_kwDOCUB6oc5fI0MI | 27,431 | translate hpo_train.md and perf_hardware.md to chinese | {
"login": "jiaqiw09",
"id": 60021713,
"node_id": "MDQ6VXNlcjYwMDIxNzEz",
"avatar_url": "https://avatars.githubusercontent.com/u/60021713?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiaqiw09",
"html_url": "https://github.com/jiaqiw09",
"followers_url": "https://api.github.com/users/jiaqiw09/followers",
"following_url": "https://api.github.com/users/jiaqiw09/following{/other_user}",
"gists_url": "https://api.github.com/users/jiaqiw09/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiaqiw09/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiaqiw09/subscriptions",
"organizations_url": "https://api.github.com/users/jiaqiw09/orgs",
"repos_url": "https://api.github.com/users/jiaqiw09/repos",
"events_url": "https://api.github.com/users/jiaqiw09/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiaqiw09/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@stevhliu\r\n\r\nHi,\r\n\r\nHere is another pr. Thanks for my DIY Computer experience, or I can not understand what the docs talking about.\r\n\r\nI think I will be free next couple of days to translate some \"difficult but practical“ docs.\r\n\r\nBest",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27431). All of your documentation changes will be reflected on that endpoint.",
"@stevhliu \r\n\r\nhi, thansk for your review. I just fix problems mentioned in reviews.\r\n\r\nBest"
] | 1,699 | 1,700 | 1,699 | CONTRIBUTOR | null | # What does this PR do?
Part of #26803
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? _not necessary_
## Who can review?
@stevhliu | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27431/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27431/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27431",
"html_url": "https://github.com/huggingface/transformers/pull/27431",
"diff_url": "https://github.com/huggingface/transformers/pull/27431.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27431.patch",
"merged_at": 1699984637000
} |
https://api.github.com/repos/huggingface/transformers/issues/27430 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27430/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27430/comments | https://api.github.com/repos/huggingface/transformers/issues/27430/events | https://github.com/huggingface/transformers/issues/27430 | 1,987,428,777 | I_kwDOCUB6oc52dcGp | 27,430 | ModuleNotFoundError: No module named 'transformers_modules.THUDM/chatglm3-6b | {
"login": "ShanJianSoda",
"id": 128208553,
"node_id": "U_kgDOB6ROqQ",
"avatar_url": "https://avatars.githubusercontent.com/u/128208553?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ShanJianSoda",
"html_url": "https://github.com/ShanJianSoda",
"followers_url": "https://api.github.com/users/ShanJianSoda/followers",
"following_url": "https://api.github.com/users/ShanJianSoda/following{/other_user}",
"gists_url": "https://api.github.com/users/ShanJianSoda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ShanJianSoda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ShanJianSoda/subscriptions",
"organizations_url": "https://api.github.com/users/ShanJianSoda/orgs",
"repos_url": "https://api.github.com/users/ShanJianSoda/repos",
"events_url": "https://api.github.com/users/ShanJianSoda/events{/privacy}",
"received_events_url": "https://api.github.com/users/ShanJianSoda/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @ShanJianSoda, thanks for raising an issue! \r\n\r\nCould you provide either a link to the web demo or a code snippet we could run to reproduce the error? ",
"thanks you, It's in this project. https://github.com/THUDM/ChatGLM3. \r\n\r\n:D",
"Thanks for sharing! This is an external library and so we're not responsible for its maintenance. You should raise an issue on that repo. ",
"Some suggestions from my side.\r\nFor transformers issue, maybe you should try the code in https://huggingface.co/THUDM/chatglm3-6b. It will load the model in your cache and you can debug on the web_demo from THU's repo.\r\nAnd a note that the package required is `pip install protobuf transformers==4.30.2 cpm_kernels torch>=2.0 gradio mdtex2html sentencepiece accelerate`.",
"thanks, I have solved. 👍 "
] | 1,699 | 1,699 | 1,699 | NONE | null | ### System Info
When I run web_demo2.py for ChatGLM3, an error occurs.
The configuration environment at the time is as follows:
torch = 2.1.0+cu121
Python = 3.11
transformers = 4.26.1
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
File "C:\Users\chenchen\AppData\Local\Programs\Python\Python38\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 534, in _run_script
exec(code, module.__dict__)
File "D:\ChatGLM3\ChatGLM3\web_demo2.py", line 28, in <module>
tokenizer, model = get_model()
File "C:\Users\chenchen\AppData\Local\Programs\Python\Python38\lib\site-packages\streamlit\runtime\caching\cache_utils.py", line 212, in wrapper
return cached_func(*args, **kwargs)
File "C:\Users\chenchen\AppData\Local\Programs\Python\Python38\lib\site-packages\streamlit\runtime\caching\cache_utils.py", line 241, in __call__
return self._get_or_create_cached_value(args, kwargs)
File "C:\Users\chenchen\AppData\Local\Programs\Python\Python38\lib\site-packages\streamlit\runtime\caching\cache_utils.py", line 267, in _get_or_create_cached_value
return self._handle_cache_miss(cache, value_key, func_args, func_kwargs)
File "C:\Users\chenchen\AppData\Local\Programs\Python\Python38\lib\site-packages\streamlit\runtime\caching\cache_utils.py", line 321, in _handle_cache_miss
computed_value = self._info.func(*func_args, **func_kwargs)
File "D:\ChatGLM3\ChatGLM3\web_demo2.py", line 17, in get_model
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
File "C:\Users\chenchen\AppData\Local\Programs\Python\Python38\lib\site-packages\transformers\models\auto\tokenization_auto.py", line 676, in from_pretrained
tokenizer_class = get_class_from_dynamic_module(class_ref, pretrained_model_name_or_path, **kwargs)
File "C:\Users\chenchen\AppData\Local\Programs\Python\Python38\lib\site-packages\transformers\dynamic_module_utils.py", line 443, in get_class_from_dynamic_module
return get_class_in_module(class_name, final_module.replace(".py", ""))
File "C:\Users\chenchen\AppData\Local\Programs\Python\Python38\lib\site-packages\transformers\dynamic_module_utils.py", line 164, in get_class_in_module
module = importlib.import_module(module_path)
File "C:\Users\chenchen\AppData\Local\Programs\Python\Python38\lib\importlib\__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 961, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 973, in _find_and_load_unlocked
### Expected behavior
Can anyone help with this? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27430/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27430/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27429 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27429/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27429/comments | https://api.github.com/repos/huggingface/transformers/issues/27429/events | https://github.com/huggingface/transformers/issues/27429 | 1,987,413,196 | I_kwDOCUB6oc52dYTM | 27,429 | AutoModel find_adapter_config_file does not use revision | {
"login": "edbeeching",
"id": 7275864,
"node_id": "MDQ6VXNlcjcyNzU4NjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/7275864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/edbeeching",
"html_url": "https://github.com/edbeeching",
"followers_url": "https://api.github.com/users/edbeeching/followers",
"following_url": "https://api.github.com/users/edbeeching/following{/other_user}",
"gists_url": "https://api.github.com/users/edbeeching/gists{/gist_id}",
"starred_url": "https://api.github.com/users/edbeeching/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/edbeeching/subscriptions",
"organizations_url": "https://api.github.com/users/edbeeching/orgs",
"repos_url": "https://api.github.com/users/edbeeching/repos",
"events_url": "https://api.github.com/users/edbeeching/events{/privacy}",
"received_events_url": "https://api.github.com/users/edbeeching/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @edbeeching you need to pass revision through `adapter_kwargs` as such:\r\n\r\n```python\r\nfrom transformers import AutoModelForCausalLM\r\nmodel = AutoModelForCausalLM.from_pretrained(\"alignment-handbook/zephyr-7b-sft-lora\", adapter_kwargs={\"revision\":\"main.merged\"})\r\n```\r\nSo that we cover also the usecase where adapter weights and base model weights are on different revisions\r\n",
"Hi, as discussed on slack. The model I am loading is not an adapter. It is merged, the problems is that find_adapter_config_file is not using the model revision. Making PR with a fix now.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,699 | 1,702 | 1,702 | CONTRIBUTOR | null | ### System Info
- `transformers` version: 4.35.0
- Platform: Linux-6.2.0-36-generic-x86_64-with-glibc2.35
- Python version: 3.10.13
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.4.0
- Accelerate version: 0.23.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I have a (private) repo `alignment-handbook/zephyr-7b-sft-lora` with two model revisions: `main`, a PEFT adapter, and `main.merged`, the adapter weights merged with the base model.
If I try to load the merged model with the following:
```
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("alignment-handbook/zephyr-7b-sft-lora", revision="main.merged")
```
I get an error related to revisions on the base model:
```
OSError: main.merged is not a valid git identifier (branch name, tag name or commit id) that exists for this model name. Check the model page at 'https://huggingface.co/mistralai/Mistral-7B-v0.1' for available revisions.
```
I think the cause of this is that the `revision` kwarg is not passed to `find_adapter_config_file` [here](https://github.com/huggingface/transformers/blob/cf32c941350cb296e4c2c9e26a9274291d515e90/src/transformers/models/auto/auto_factory.py#L505C48-L505C48).
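Until that is fixed, one possible workaround (a sketch, assuming you are authenticated for the private repo) is to download the exact revision locally first, so that no hub lookup happens with the wrong revision:

```python
from huggingface_hub import snapshot_download
from transformers import AutoModelForCausalLM

local_dir = snapshot_download("alignment-handbook/zephyr-7b-sft-lora", revision="main.merged")
model = AutoModelForCausalLM.from_pretrained(local_dir)
```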
### Expected behavior
The correct merged model should be loaded. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27429/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27429/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27428 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27428/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27428/comments | https://api.github.com/repos/huggingface/transformers/issues/27428/events | https://github.com/huggingface/transformers/pull/27428 | 1,987,411,445 | PR_kwDOCUB6oc5fIS8D | 27,428 | use logger.warning_once to avoid massive outputs | {
"login": "ranchlai",
"id": 5043767,
"node_id": "MDQ6VXNlcjUwNDM3Njc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5043767?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ranchlai",
"html_url": "https://github.com/ranchlai",
"followers_url": "https://api.github.com/users/ranchlai/followers",
"following_url": "https://api.github.com/users/ranchlai/following{/other_user}",
"gists_url": "https://api.github.com/users/ranchlai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ranchlai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ranchlai/subscriptions",
"organizations_url": "https://api.github.com/users/ranchlai/orgs",
"repos_url": "https://api.github.com/users/ranchlai/repos",
"events_url": "https://api.github.com/users/ranchlai/events{/privacy}",
"received_events_url": "https://api.github.com/users/ranchlai/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Sure! Happy to do that. If I understand correctly, this will only apply to `logger.info/warn` in the `forward()` function and the functions called by `forward()`. ",
"@ranchlai Awesome - thanks! Yep, that's right. Feel free to ping with any questions if you're not sure about any of them. "
] | 1,699 | 1,702 | 1,702 | CONTRIBUTOR | null |
# What does this PR do?
This is a quick fix to avoid massive log output when training/fine-tuning Longformer (for text classification), by using `logger.warning_once` rather than `logger.info`.
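A minimal sketch of the difference (the logger name and message are illustrative):

```python
from transformers.utils import logging

logger = logging.get_logger("transformers.models.longformer")

for step in range(1000):
    # logger.info(...) here would print on every call;
    # warning_once caches the message and emits it a single time per process.
    logger.warning_once("Input ids are automatically padded to a multiple of the attention window size")
```

As far as I understand, the rendered message acts as the cache key, so each distinct warning still appears once.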
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27428/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27428/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27428",
"html_url": "https://github.com/huggingface/transformers/pull/27428",
"diff_url": "https://github.com/huggingface/transformers/pull/27428.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27428.patch",
"merged_at": 1702295969000
} |
https://api.github.com/repos/huggingface/transformers/issues/27427 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27427/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27427/comments | https://api.github.com/repos/huggingface/transformers/issues/27427/events | https://github.com/huggingface/transformers/issues/27427 | 1,987,385,230 | I_kwDOCUB6oc52dReO | 27,427 | cannot recall the model after registering it using AutoModelForCausalLM.register | {
"login": "Ranitbag007",
"id": 133197492,
"node_id": "U_kgDOB_ButA",
"avatar_url": "https://avatars.githubusercontent.com/u/133197492?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ranitbag007",
"html_url": "https://github.com/Ranitbag007",
"followers_url": "https://api.github.com/users/Ranitbag007/followers",
"following_url": "https://api.github.com/users/Ranitbag007/following{/other_user}",
"gists_url": "https://api.github.com/users/Ranitbag007/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ranitbag007/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ranitbag007/subscriptions",
"organizations_url": "https://api.github.com/users/Ranitbag007/orgs",
"repos_url": "https://api.github.com/users/Ranitbag007/repos",
"events_url": "https://api.github.com/users/Ranitbag007/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ranitbag007/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | open | false | null | [] | [
"Hi @Ranitbag007, thanks for raising an issue. \r\n\r\nCould you follow the issue template for [reporting a bug](https://github.com/huggingface/transformers/blob/main/.github/ISSUE_TEMPLATE/bug-report.yml)? It should include: \r\n* The running environment: run `transformers-cli env` in the terminal and copy-paste the output\r\n* Full details of the bug: full traceback of the error\r\n* Code we can use to reproduce the error\r\n\r\nIn this case, if you're trying to register a custom model, I suggest adding the model on the hub and linking to that repo so that we can inspect. "
] | 1,699 | 1,699 | null | NONE | null | ### Model description
I have registered my custom model using `AutoModelForCausalLM.register(CustomAIConfig, CustomAI)`, but I am unable to use the model. It shows `ImportError: cannot import name 'CustomAI' from 'transformers' (/usr/local/lib/python3.10/dist-packages/transformers/__init__.py)`. Can you help me with this?
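For reference, a minimal sketch of how registration is usually wired up, assuming `CustomAIConfig` and `CustomAI` live in your own module (the class bodies below are placeholders, not a working model):

```python
from transformers import AutoConfig, AutoModelForCausalLM, PretrainedConfig, PreTrainedModel


class CustomAIConfig(PretrainedConfig):
    model_type = "custom_ai"


class CustomAI(PreTrainedModel):
    config_class = CustomAIConfig
    # ... layers and forward() go here ...


AutoConfig.register("custom_ai", CustomAIConfig)
AutoModelForCausalLM.register(CustomAIConfig, CustomAI)

# Registering only teaches the Auto* classes about the model; it does not add CustomAI
# to the transformers package, so `from transformers import CustomAI` will always fail.
# Import the class from your own module, or go through the Auto API:
model = AutoModelForCausalLM.from_config(CustomAIConfig())
```

If the goal is to share the model so it loads on other machines with `from_pretrained`, shipping the custom code alongside the checkpoint (`register_for_auto_class` plus `trust_remote_code=True`) is a separate mechanism from the in-process registration above.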
### Open source status
- [ ] The model implementation is available
- [ ] The model weights are available
### Provide useful links for the implementation
_No response_ | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27427/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27427/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27426 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27426/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27426/comments | https://api.github.com/repos/huggingface/transformers/issues/27426/events | https://github.com/huggingface/transformers/issues/27426 | 1,987,230,864 | I_kwDOCUB6oc52cryQ | 27,426 | some errors about the compute_mertics function | {
"login": "HelloNicoo",
"id": 42365353,
"node_id": "MDQ6VXNlcjQyMzY1MzUz",
"avatar_url": "https://avatars.githubusercontent.com/u/42365353?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HelloNicoo",
"html_url": "https://github.com/HelloNicoo",
"followers_url": "https://api.github.com/users/HelloNicoo/followers",
"following_url": "https://api.github.com/users/HelloNicoo/following{/other_user}",
"gists_url": "https://api.github.com/users/HelloNicoo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HelloNicoo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HelloNicoo/subscriptions",
"organizations_url": "https://api.github.com/users/HelloNicoo/orgs",
"repos_url": "https://api.github.com/users/HelloNicoo/repos",
"events_url": "https://api.github.com/users/HelloNicoo/events{/privacy}",
"received_events_url": "https://api.github.com/users/HelloNicoo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @HelloNicoo, thanks for raising an issue. \r\n\r\nWe get many issues and feature requests and so need to you help us so that we can get through them at a reasonable pace. Could you: \r\n* Make sure all of the code is properly formated in markdown - between three backticks i.e. ` ``` code goes here ``` `\r\n* Give a full description of what the issue is e.g. errors occured with full traceback, your observations, what you've tried so far. In this case, this would involve clarifying what you mean by \"take effect\" (how this was measure and observed) \r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,699 | 1,702 | 1,702 | NONE | null | ### System Info
`transformers` version: 4.34.1
- Platform: Linux-5.15.0-69-generic-x86_64-with-glibc2.17
- Python version: 3.8.18
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.4.0
- Accelerate version: 0.24.1
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: NO
- mixed_precision: fp16
- use_cpu: True
- debug: False
- num_processes: 1
- machine_rank: 0
- num_machines: 1
- rdzv_backend: static
- same_network: True
- main_training_function: main
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- PyTorch version (GPU?): 2.1.0+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
'''
@File : custom_bert_model.py
@Time : 2023/09/15 14:37:17
@Author : Raomoin
@Version : 1.0
@Contact : [email protected]
@License : (C)Copyright 2023-2024, Liugroup-NLPR-CASIA
@Desc : None
'''
import torch
import warnings
from dataclasses import dataclass, field
from typing import Dict
import numpy as np
import torch.nn as nn
from datasets import load_dataset
from sklearn.metrics import f1_score, precision_score, recall_score
from transformers import (BertModel, BertPreTrainedModel, BertTokenizer,
Trainer, TrainingArguments)
from transformers.modeling_outputs import SequenceClassifierOutput
from transformers.trainer_utils import EvalPrediction
# warnings.filterwarnings("ignore")
MODEL_NAME = 'bert-base-chinese'
token = BertTokenizer.from_pretrained(MODEL_NAME, local_files_only=True)
@dataclass
class ModelArguments:
"""
Definition of model arguments
"""
ner_num_labels: int = field(default=2, metadata={"help": "number of labels to predict"})
def compute_metrics(eval_output):
"""
This function is a callback; the Trainer calls it during evaluation.
(If you debug with an IDE such as PyCharm, you can set a breakpoint here; the function is hit when evaluation runs.)
"""
print('qqqqqqqq')
preds = eval_output.predictions
preds = np.argmax(preds, axis=-1).flatten()
labels = eval_output.label_ids.flatten()
# A label of 0 means <pad>, so those positions are removed before computing metrics
mask = labels != 0
preds = preds[mask]
labels = labels[mask]
metrics = dict()
metrics["f1"] = f1_score(labels, preds, average="macro")
metrics["precision"] = precision_score(labels, preds, average="macro")
metrics["recall"] = recall_score(labels, preds, average="macro")
# Must be returned as a dict; the keys of this dict are used later
print(metrics)
return metrics
class CustomBertModel(BertPreTrainedModel):
"""
Custom model
"""
def __init__(self, config, *model_args, **model_kargs):
super().__init__(config)
if "model_args" in model_kargs:
model_args = model_kargs["model_args"]
self.config.__dict__.update(model_args.__dict__)
self._num_labels = self.config.ner_num_labels
self._bert = BertModel(config, add_pooling_layer=False)
self._classifier = nn.Linear(
self.config.hidden_size, self._num_labels)
self.init_weights()
def forward(
self,
input_ids=None,
attention_mask=None,
target=None,
return_dict=None,
):
return_dict = return_dict if return_dict is not None else self.config.use_return_dict
outputs = self._bert(
input_ids,
attention_mask=attention_mask,
)
sequence_output = outputs[0]
logits = self._classifier(sequence_output[:, 0, :])
loss_fct = nn.CrossEntropyLoss()
loss = loss_fct(logits, target)
return {'loss': loss}
def tokenize_function(examples):
"""
Preprocess data for dataset.map
"""
new_data = token(examples['text'], padding='max_length', truncation=True)
# new_data['labels'] = [cate_dict[label] for label in examples["cat_leaf_name_old"]]
return new_data
if __name__ == '__main__':
model_args = ModelArguments()
model = CustomBertModel.from_pretrained(MODEL_NAME, model_args=model_args, local_files_only=True)
dataset = load_dataset('json', data_files='./data/all.json')
shuffled_dataset = dataset.shuffle()
shuffled_dataset_all = shuffled_dataset['train'].train_test_split(
test_size=0.1)
tokenized_shuffled_dataset_all = shuffled_dataset_all.map(
tokenize_function, batched=True)
training_args = TrainingArguments(output_dir="custom_bert_model",
evaluation_strategy="epoch",
num_train_epochs=2,
per_device_eval_batch_size=8,
per_device_train_batch_size=8,
do_eval=True,
)
trainer = Trainer(model=model,
args=training_args,
compute_metrics=compute_metrics,
train_dataset=tokenized_shuffled_dataset_all['train'],
eval_dataset=tokenized_shuffled_dataset_all['test'],
)
trainer.train()
# trainer.save_model('saved_custom_model')
trainer.evaluate()
```
### Expected behavior
Hello, I would like to ask why a custom compute_metrics function does not take effect after I override the model in the transformers framework. Is there any solution?
My code is above; can you help me? Thanks!
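For context, a hedged guess at the cause, based only on the snippet above (an assumption, not a confirmed diagnosis): the custom `forward` returns only `{'loss': loss}`, so the Trainer has no logits to collect and `compute_metrics` is never invoked, and the label column is called `target`, which the Trainer does not treat as a label by default. A sketch of the two changes:

```python
import torch.nn as nn
from transformers import TrainingArguments

# Inside CustomBertModel: also return the logits so the Trainer can gather predictions.
def forward(self, input_ids=None, attention_mask=None, target=None, return_dict=None):
    outputs = self._bert(input_ids, attention_mask=attention_mask)
    logits = self._classifier(outputs[0][:, 0, :])
    loss = nn.CrossEntropyLoss()(logits, target)
    return {"loss": loss, "logits": logits}

# And tell the Trainer which input key holds the labels, since it is not named "labels".
training_args = TrainingArguments(
    output_dir="custom_bert_model",
    evaluation_strategy="epoch",
    label_names=["target"],
)
```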
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27426/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27426/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27425 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27425/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27425/comments | https://api.github.com/repos/huggingface/transformers/issues/27425/events | https://github.com/huggingface/transformers/pull/27425 | 1,987,167,758 | PR_kwDOCUB6oc5fHd1f | 27,425 | Introduce Textnet backbone (needed for Fast model) | {
"login": "raghavanone",
"id": 115454562,
"node_id": "U_kgDOBuGyYg",
"avatar_url": "https://avatars.githubusercontent.com/u/115454562?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/raghavanone",
"html_url": "https://github.com/raghavanone",
"followers_url": "https://api.github.com/users/raghavanone/followers",
"following_url": "https://api.github.com/users/raghavanone/following{/other_user}",
"gists_url": "https://api.github.com/users/raghavanone/gists{/gist_id}",
"starred_url": "https://api.github.com/users/raghavanone/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/raghavanone/subscriptions",
"organizations_url": "https://api.github.com/users/raghavanone/orgs",
"repos_url": "https://api.github.com/users/raghavanone/repos",
"events_url": "https://api.github.com/users/raghavanone/events{/privacy}",
"received_events_url": "https://api.github.com/users/raghavanone/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"@amyeroberts I have pulled out all the changes for textnet backbone needed for fast model (https://github.com/huggingface/transformers/pull/26657) into separate PR. Requesting for a first round of review.",
"> Thanks for adding!\r\n> \r\n> I've done an initial pass - it'll likely be at least one more review round before ready for approval. Main comment is about the deletion and addition of model parameters. The priority when adding models to transformers is to make sure that they're easy to read and understand - the current fused kernel logic should be simplified or completely removed. Have you run tests to see compare running times with and without the fused kernels?\r\n\r\nFair point, I just ran the tests. The fused kernel just saves some 5 % time during eval. I am going to remove the complex logic.",
"@amyeroberts I have addressed almost all of the feedbacks, I have some questions of few of them. Help me with some more details for them.",
"@amyeroberts The feedbacks on configuration refactoring has been addressed.",
"@amyeroberts With respect to textnet's image processor being present, I had removed it in favour of clip image processor, But then later realised that in the original preprocessing code, they do take the shortest_side as parameter, scale the short the side by that factor , then scale the long side by same factor. (like being in done our image processors). But in addition to this, they also align both the width and height of the final image to closest factor of 32. ([Ref](https://github.com/czczup/FAST/blob/7ab105474ab226ab74a346b0648d2e2baab67d79/dataset/utils.py#L205)) . Hence I had to introduce image_processing_textnet. Let me know if I am missing something else.",
"@amyeroberts Gentle reminder.",
"Hey, @amyeroberts is off, if this is urgent ping me or let' s wait until next week! ",
"I think this can wait .",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"@amyeroberts All Feedbacks has been incorporated. Please have a look",
"@raghavanone Please make sure to look at the diff before asking for review, to catch any obvious things that should be updated. Opening the files tab, the first thing one sees are README's that need to be filled in. ",
"> @raghavanone Please make sure to look at the diff before asking for review, to catch any obvious things that should be updated. Opening the files tab, the first thing one sees are README's that need to be filled in. \n\nOh , the rebase to main should have undone some of these changes . Let me fix and review them again .",
"@amyeroberts I have fixed the issues coming from rebase to main and also validated that all the feedbacks are incorporated. Please have a look .",
"@amyeroberts Let me know how to update the checkpoint to organisation . ",
"Hi @Raghavan, nice work on getting TextNet across the line! \r\n\r\nFor adding the checkpoints to the hub, I’d suggest reaching out to the lead paper author and asking them if they would be happy to host the model weights on the hub under their profile. As the model appears to have been a collab between universities, there isn’t an obvious org for it to go under. \r\n\r\n@NielsRogge has more experience here, so will have recommendations for everything that needs to be done. "
] | 1,699 | 1,707 | null | CONTRIBUTOR | null | This is needed for merging fast model #26501 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27425/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27425/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27425",
"html_url": "https://github.com/huggingface/transformers/pull/27425",
"diff_url": "https://github.com/huggingface/transformers/pull/27425.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27425.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27424 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27424/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27424/comments | https://api.github.com/repos/huggingface/transformers/issues/27424/events | https://github.com/huggingface/transformers/issues/27424 | 1,986,946,259 | I_kwDOCUB6oc52bmTT | 27,424 | Predicted Label to Predicted token mapping in LayoutLMV3. Please look at the description. | {
"login": "slk-genai",
"id": 150240611,
"node_id": "U_kgDOCPR9Yw",
"avatar_url": "https://avatars.githubusercontent.com/u/150240611?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/slk-genai",
"html_url": "https://github.com/slk-genai",
"followers_url": "https://api.github.com/users/slk-genai/followers",
"following_url": "https://api.github.com/users/slk-genai/following{/other_user}",
"gists_url": "https://api.github.com/users/slk-genai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/slk-genai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/slk-genai/subscriptions",
"organizations_url": "https://api.github.com/users/slk-genai/orgs",
"repos_url": "https://api.github.com/users/slk-genai/repos",
"events_url": "https://api.github.com/users/slk-genai/events{/privacy}",
"received_events_url": "https://api.github.com/users/slk-genai/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hey 🤗 thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discuss.huggingface.co/) instead? I'm sure the community will be of help!\r\n\r\nThanks!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,699 | 1,705 | 1,705 | NONE | null | ### Feature request
I am trying the code below:

import torch
from transformers import LayoutLMv3Processor, LayoutLMv3ForTokenClassification

# `image` is the document image (a PIL.Image) loaded beforehand
processor = LayoutLMv3Processor.from_pretrained("/content/layoutlmv3-large-finetuned-funsd")
encoding = processor(image, return_offsets_mapping=True, return_tensors="pt")
offset_mapping = encoding.pop("offset_mapping")
print(encoding.keys())

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
for k, v in encoding.items():
    encoding[k] = v.to(device)

model = LayoutLMv3ForTokenClassification.from_pretrained("/content/layoutlmv3-large-finetuned-funsd")
model.to(device)
outputs = model(**encoding)
print(outputs.logits.shape)
I can see that parts of the image are getting labelled, but those texts are not being identified during the OCR step.
So, how can I get each predicted label together with its respective token?
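One possible sketch for lining the two up, continuing from the snippet above (hedged: it assumes the fine-tuned checkpoint ships an `id2label` mapping, and the sub-word filter is the usual offset-mapping heuristic, which may need adjusting):

```python
predictions = outputs.logits.argmax(-1).squeeze().tolist()
offsets = offset_mapping.squeeze().tolist()
tokens = processor.tokenizer.convert_ids_to_tokens(encoding["input_ids"].squeeze().tolist())

for token, pred, (start, end) in zip(tokens, predictions, offsets):
    if start == 0 and end == 0:
        continue  # special tokens such as <s> / </s>
    if start != 0:
        continue  # heuristic: continuation sub-word piece, keep only word starts
    print(token, "->", model.config.id2label[pred])
```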
### Motivation
I am new to LayoutLMv3 and have a task in hand which was getting done easily, but I can't find a way to map each token prediction to its respective label prediction. I am not using the above checkpoint, but the problem is similar to the one with the mentioned checkpoint.
### Your contribution
Let me know if you need any help. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27424/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27424/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27423 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27423/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27423/comments | https://api.github.com/repos/huggingface/transformers/issues/27423/events | https://github.com/huggingface/transformers/issues/27423 | 1,986,704,872 | I_kwDOCUB6oc52arXo | 27,423 | chatglm3-6b-32k does not support Flash Attention 2.0 | {
"login": "abc123456cxx",
"id": 150245861,
"node_id": "U_kgDOCPSR5Q",
"avatar_url": "https://avatars.githubusercontent.com/u/150245861?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abc123456cxx",
"html_url": "https://github.com/abc123456cxx",
"followers_url": "https://api.github.com/users/abc123456cxx/followers",
"following_url": "https://api.github.com/users/abc123456cxx/following{/other_user}",
"gists_url": "https://api.github.com/users/abc123456cxx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abc123456cxx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abc123456cxx/subscriptions",
"organizations_url": "https://api.github.com/users/abc123456cxx/orgs",
"repos_url": "https://api.github.com/users/abc123456cxx/repos",
"events_url": "https://api.github.com/users/abc123456cxx/events{/privacy}",
"received_events_url": "https://api.github.com/users/abc123456cxx/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @abc123456cxx, thanks for raising this. \r\n\r\nIs this a request to add flash attention to the model? If so, the [modeling files live on the hub](https://huggingface.co/THUDM/chatglm3-6b-32k/tree/main). Could you open an issue there to request this feature? ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,699 | 1,702 | 1,702 | NONE | null | When I use **AutoModelForCausalLM.from_pretrained(path, trust_remote_code=True, load_in_8bit= True,use_flash_attention=True,device_map="auto")**, it occers:
ValueError: The current architecture does not support Flash Attention 2.0. Please open an issue on GitHub to request support for this architecture: https://github.com/huggingface/transformers/issues/new | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27423/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27423/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27422 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27422/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27422/comments | https://api.github.com/repos/huggingface/transformers/issues/27422/events | https://github.com/huggingface/transformers/pull/27422 | 1,986,658,061 | PR_kwDOCUB6oc5fFv9Z | 27,422 | Perf torch compile | {
"login": "jiaqiw09",
"id": 60021713,
"node_id": "MDQ6VXNlcjYwMDIxNzEz",
"avatar_url": "https://avatars.githubusercontent.com/u/60021713?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiaqiw09",
"html_url": "https://github.com/jiaqiw09",
"followers_url": "https://api.github.com/users/jiaqiw09/followers",
"following_url": "https://api.github.com/users/jiaqiw09/following{/other_user}",
"gists_url": "https://api.github.com/users/jiaqiw09/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiaqiw09/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiaqiw09/subscriptions",
"organizations_url": "https://api.github.com/users/jiaqiw09/orgs",
"repos_url": "https://api.github.com/users/jiaqiw09/repos",
"events_url": "https://api.github.com/users/jiaqiw09/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiaqiw09/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@stevhliu\r\n\r\nHi,\r\n\r\nhere is another PR.\r\n\r\nBest",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27422). All of your documentation changes will be reflected on that endpoint."
] | 1,699 | 1,700 | 1,699 | CONTRIBUTOR | null | # What does this PR do?
Part of #26803
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? _not necessary_
## Who can review?
@stevhliu | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27422/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27422/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27422",
"html_url": "https://github.com/huggingface/transformers/pull/27422",
"diff_url": "https://github.com/huggingface/transformers/pull/27422.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27422.patch",
"merged_at": 1699897601000
} |
https://api.github.com/repos/huggingface/transformers/issues/27421 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27421/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27421/comments | https://api.github.com/repos/huggingface/transformers/issues/27421/events | https://github.com/huggingface/transformers/issues/27421 | 1,986,632,251 | I_kwDOCUB6oc52aZo7 | 27,421 | Race condition when loading models from local folders with custom code | {
"login": "dakinggg",
"id": 43149077,
"node_id": "MDQ6VXNlcjQzMTQ5MDc3",
"avatar_url": "https://avatars.githubusercontent.com/u/43149077?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dakinggg",
"html_url": "https://github.com/dakinggg",
"followers_url": "https://api.github.com/users/dakinggg/followers",
"following_url": "https://api.github.com/users/dakinggg/following{/other_user}",
"gists_url": "https://api.github.com/users/dakinggg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dakinggg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dakinggg/subscriptions",
"organizations_url": "https://api.github.com/users/dakinggg/orgs",
"repos_url": "https://api.github.com/users/dakinggg/repos",
"events_url": "https://api.github.com/users/dakinggg/events{/privacy}",
"received_events_url": "https://api.github.com/users/dakinggg/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Hi @dakinggg, thanks for raising this issue! \r\n\r\nCould you give a bit more information about how you're running the code from multiple processes? \r\n\r\nNote: there was a recent update in the recent release `4.35` where we use safetensors by default when saving/loading models which might also affect this (although I doubt it). ",
"It is simply multiple parallel processes (one per GPU) each running the `from_pretrained` call at the same time to load the model in for that process. The processes are set up with the normal torch distributed environment variables.What other info would be helpful? I also would not expect safetensors to affect this since it's related to the code modules not the weights.",
"Hi @dakinggg, apologies for the delay on my side. I've been away the past two weeks. \r\n\r\nOK, so race condition is exactly what seems to be happening -> each process is trying to download the weights to the same location. Ultimately, you only want to try to download once. The easiest fix is making sure that the checkpoint is downloaded first before running the script. This means all the models will just load the cache in their process. \r\n\r\nCould you share a minimal code snippet which can reproduce the error (even if it's sporadic?) as well as the env variable values, how multiprocessing is being set up and any CLI arguments being used to launch the script. ",
"Hi Amy, sorry for the delay, haven't had time to write and test a minimal script, but will try to soon.\r\n\r\nIf you want to investigate in the meantime, the pseudocode for that script is\r\n```\r\n# outside of the main script\r\n# this can be any model with custom code, we just need a local folder to load from\r\nmodel = transformers.AutoModelForCausalLM.from_pretrained('mosaicml/mpt-7b', trust_remote_code=True)\r\nmodel.save_pretrained('my-local-folder')\r\n```\r\n\r\nmain script:\r\n```\r\ndef main():\r\n config = transformers.AutoConfig.from_pretrained('my-local-folder', trust_remote_code=True)\r\n if local_rank() == 0:\r\n model = AutoModelForCausalLM.from_pretrained('my-local-folder', trust_remote_code=True, config=config)\r\n else:\r\n with init_empty_weights(include_buffers=False):\r\n model = AutoModelForCausalLM.from_config(config, trust_remote_code=True)\r\n```\r\n\r\nthen run the script with whatever distributed launcher you like (accelerate, torchrun, etc). Likely requires a bunch of times to hit the issue, and I'm not sure whether the issue is hardware specific or not.",
"not stale",
"I can confirm this problem still exists when I load the model from a local folder (instead of downloading from internet). It's not very easy to reproduce because I get `AttributeError: module '......' has no attribute '....'` error occasionally.",
"@randxie OK - so the model is completely downloaded and the issue is happening when many models are trying to read from the same weights file at the same time? \r\n"
] | 1,699 | 1,707 | null | CONTRIBUTOR | null | ### System Info
- `transformers` version: 4.34.0
- Platform: Linux-5.4.0-148-generic-x86_64-with-glibc2.31
- Python version: 3.10.13
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.4.0
- Accelerate version: 0.20.3
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
### Who can help?
not sure who to tag, so starting with @arthurzucker
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
This is pretty difficult to reproduce, so I don't have an easy script or anything like that, but hopefully the symptom is enough to fix it. It seems that when I load a model with `transformers.AutoModel.from_pretrained(<my local folder>)`, and that model contains custom code, the custom code is copied into the transformers cache. That is fine, except there seems to be a possible race condition when doing this from multiple processes at the same time. So if I call `transformers.AutoModel.from_pretrained(<my local folder>)` from multiple processes at the same time, very occasionally, I end up with a module not found error for a module that very clearly exists. My guess is that the separate processes are overwriting each other and there is a brief moment where some file doesn't exist, while one of the processes is attempting to load it.
I worked around this by just creating the model first on rank 0 to prepopulate the transformers module cache, but should this operation be implemented in a safer way in transformers?
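For reference, a minimal sketch of that rank-0-first pattern (assuming `torch.distributed` is already initialized and `my-local-folder` is a hypothetical local checkpoint containing custom code):

```python
import torch.distributed as dist
from transformers import AutoModelForCausalLM

checkpoint = "my-local-folder"  # hypothetical local path with custom modeling code

# Rank 0 populates the dynamic-module cache first ...
if dist.get_rank() == 0:
    AutoModelForCausalLM.from_pretrained(checkpoint, trust_remote_code=True)
# ... and the remaining ranks only load once the files are guaranteed to exist.
dist.barrier()
model = AutoModelForCausalLM.from_pretrained(checkpoint, trust_remote_code=True)
```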
Thanks!
### Expected behavior
`from_pretrained` does not need to be run first on rank 0 in order to be safe. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27421/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/27421/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27420 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27420/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27420/comments | https://api.github.com/repos/huggingface/transformers/issues/27420/events | https://github.com/huggingface/transformers/pull/27420 | 1,986,547,345 | PR_kwDOCUB6oc5fFYaJ | 27,420 | Bump pyarrow from 7.0.0 to 14.0.1 in /examples/research_projects/decision_transformer | {
"login": "dependabot[bot]",
"id": 49699333,
"node_id": "MDM6Qm90NDk2OTkzMzM=",
"avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dependabot%5Bbot%5D",
"html_url": "https://github.com/apps/dependabot",
"followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events",
"type": "Bot",
"site_admin": false
} | [
{
"id": 1905493434,
"node_id": "MDU6TGFiZWwxOTA1NDkzNDM0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies",
"name": "dependencies",
"color": "0366d6",
"default": false,
"description": "Pull requests that update a dependency file"
}
] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"OK, I won't notify you again about this release, but will get in touch when a new version is available. If you'd rather skip all updates until the next major or minor version, let me know by commenting `@dependabot ignore this major version` or `@dependabot ignore this minor version`.\n\nIf you change your mind, just re-open this PR and I'll resolve any conflicts on it.",
"@dependabot ignore this major version",
"OK, I won't notify you about version 14.x.x again, unless you re-open this PR."
] | 1,699 | 1,699 | 1,699 | CONTRIBUTOR | null | Bumps [pyarrow](https://github.com/apache/arrow) from 7.0.0 to 14.0.1.
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/apache/arrow/commit/ba537483618196f50c67a90a473039e4d5dc35e0"><code>ba53748</code></a> MINOR: [Release] Update versions for 14.0.1</li>
<li><a href="https://github.com/apache/arrow/commit/529f3768fa4fce80781cd1a3cbdcf3a826281b14"><code>529f376</code></a> MINOR: [Release] Update .deb/.rpm changelogs for 14.0.1</li>
<li><a href="https://github.com/apache/arrow/commit/b84bbcac64e184a2b58661385506cff23006cc10"><code>b84bbca</code></a> MINOR: [Release] Update CHANGELOG.md for 14.0.1</li>
<li><a href="https://github.com/apache/arrow/commit/f14170976372436ec1d03a724d8d3f3925484ecf"><code>f141709</code></a> <a href="https://redirect.github.com/apache/arrow/issues/38607">GH-38607</a>: [Python] Disable PyExtensionType autoload (<a href="https://redirect.github.com/apache/arrow/issues/38608">#38608</a>)</li>
<li><a href="https://github.com/apache/arrow/commit/5a37e741987e4cba41dfdad2331a308c95b62083"><code>5a37e74</code></a> <a href="https://redirect.github.com/apache/arrow/issues/38431">GH-38431</a>: [Python][CI] Update fs.type_name checks for s3fs tests (<a href="https://redirect.github.com/apache/arrow/issues/38455">#38455</a>)</li>
<li><a href="https://github.com/apache/arrow/commit/2dcee3f82c6cf54b53a64729fd81840efa583244"><code>2dcee3f</code></a> MINOR: [Release] Update versions for 14.0.0</li>
<li><a href="https://github.com/apache/arrow/commit/297428cbf2fc84a224654eb0b336614e6543d1aa"><code>297428c</code></a> MINOR: [Release] Update .deb/.rpm changelogs for 14.0.0</li>
<li><a href="https://github.com/apache/arrow/commit/3e9734f8830797fe09b883f00d349116d95c51f9"><code>3e9734f</code></a> MINOR: [Release] Update CHANGELOG.md for 14.0.0</li>
<li><a href="https://github.com/apache/arrow/commit/9f90995c8cee0d9906349f421f2445ab9adcb7ac"><code>9f90995</code></a> <a href="https://redirect.github.com/apache/arrow/issues/38332">GH-38332</a>: [CI][Release] Resolve symlinks in RAT lint (<a href="https://redirect.github.com/apache/arrow/issues/38337">#38337</a>)</li>
<li><a href="https://github.com/apache/arrow/commit/bd61239a32c94e37b9510071c0ffacad455798c0"><code>bd61239</code></a> <a href="https://redirect.github.com/apache/arrow/issues/35531">GH-35531</a>: [Python] C Data Interface PyCapsule Protocol (<a href="https://redirect.github.com/apache/arrow/issues/37797">#37797</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/apache/arrow/compare/go/v7.0.0...apache-arrow-14.0.1">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27420/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27420/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27420",
"html_url": "https://github.com/huggingface/transformers/pull/27420",
"diff_url": "https://github.com/huggingface/transformers/pull/27420.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27420.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27419 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27419/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27419/comments | https://api.github.com/repos/huggingface/transformers/issues/27419/events | https://github.com/huggingface/transformers/pull/27419 | 1,986,516,272 | PR_kwDOCUB6oc5fFRdV | 27,419 | Bump pyarrow from 1.0.1 to 14.0.1 in /examples/research_projects/lxmert | {
"login": "dependabot[bot]",
"id": 49699333,
"node_id": "MDM6Qm90NDk2OTkzMzM=",
"avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dependabot%5Bbot%5D",
"html_url": "https://github.com/apps/dependabot",
"followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events",
"type": "Bot",
"site_admin": false
} | [
{
"id": 1905493434,
"node_id": "MDU6TGFiZWwxOTA1NDkzNDM0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies",
"name": "dependencies",
"color": "0366d6",
"default": false,
"description": "Pull requests that update a dependency file"
}
] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@dependabot ignore this major version",
"OK, I won't notify you about version 14.x.x again, unless you re-open this PR."
] | 1,699 | 1,699 | 1,699 | CONTRIBUTOR | null | Bumps [pyarrow](https://github.com/apache/arrow) from 1.0.1 to 14.0.1.
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/apache/arrow/commit/ba537483618196f50c67a90a473039e4d5dc35e0"><code>ba53748</code></a> MINOR: [Release] Update versions for 14.0.1</li>
<li><a href="https://github.com/apache/arrow/commit/529f3768fa4fce80781cd1a3cbdcf3a826281b14"><code>529f376</code></a> MINOR: [Release] Update .deb/.rpm changelogs for 14.0.1</li>
<li><a href="https://github.com/apache/arrow/commit/b84bbcac64e184a2b58661385506cff23006cc10"><code>b84bbca</code></a> MINOR: [Release] Update CHANGELOG.md for 14.0.1</li>
<li><a href="https://github.com/apache/arrow/commit/f14170976372436ec1d03a724d8d3f3925484ecf"><code>f141709</code></a> <a href="https://redirect.github.com/apache/arrow/issues/38607">GH-38607</a>: [Python] Disable PyExtensionType autoload (<a href="https://redirect.github.com/apache/arrow/issues/38608">#38608</a>)</li>
<li><a href="https://github.com/apache/arrow/commit/5a37e741987e4cba41dfdad2331a308c95b62083"><code>5a37e74</code></a> <a href="https://redirect.github.com/apache/arrow/issues/38431">GH-38431</a>: [Python][CI] Update fs.type_name checks for s3fs tests (<a href="https://redirect.github.com/apache/arrow/issues/38455">#38455</a>)</li>
<li><a href="https://github.com/apache/arrow/commit/2dcee3f82c6cf54b53a64729fd81840efa583244"><code>2dcee3f</code></a> MINOR: [Release] Update versions for 14.0.0</li>
<li><a href="https://github.com/apache/arrow/commit/297428cbf2fc84a224654eb0b336614e6543d1aa"><code>297428c</code></a> MINOR: [Release] Update .deb/.rpm changelogs for 14.0.0</li>
<li><a href="https://github.com/apache/arrow/commit/3e9734f8830797fe09b883f00d349116d95c51f9"><code>3e9734f</code></a> MINOR: [Release] Update CHANGELOG.md for 14.0.0</li>
<li><a href="https://github.com/apache/arrow/commit/9f90995c8cee0d9906349f421f2445ab9adcb7ac"><code>9f90995</code></a> <a href="https://redirect.github.com/apache/arrow/issues/38332">GH-38332</a>: [CI][Release] Resolve symlinks in RAT lint (<a href="https://redirect.github.com/apache/arrow/issues/38337">#38337</a>)</li>
<li><a href="https://github.com/apache/arrow/commit/bd61239a32c94e37b9510071c0ffacad455798c0"><code>bd61239</code></a> <a href="https://redirect.github.com/apache/arrow/issues/35531">GH-35531</a>: [Python] C Data Interface PyCapsule Protocol (<a href="https://redirect.github.com/apache/arrow/issues/37797">#37797</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/apache/arrow/compare/apache-arrow-1.0.1...apache-arrow-14.0.1">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27419/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27419/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27419",
"html_url": "https://github.com/huggingface/transformers/pull/27419",
"diff_url": "https://github.com/huggingface/transformers/pull/27419.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27419.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27418 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27418/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27418/comments | https://api.github.com/repos/huggingface/transformers/issues/27418/events | https://github.com/huggingface/transformers/issues/27418 | 1,986,504,289 | I_kwDOCUB6oc52Z6Zh | 27,418 | Flax T5 models (at least) should use scan over layers technique | {
"login": "colehaus",
"id": 9491942,
"node_id": "MDQ6VXNlcjk0OTE5NDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/9491942?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/colehaus",
"html_url": "https://github.com/colehaus",
"followers_url": "https://api.github.com/users/colehaus/followers",
"following_url": "https://api.github.com/users/colehaus/following{/other_user}",
"gists_url": "https://api.github.com/users/colehaus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/colehaus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/colehaus/subscriptions",
"organizations_url": "https://api.github.com/users/colehaus/orgs",
"repos_url": "https://api.github.com/users/colehaus/repos",
"events_url": "https://api.github.com/users/colehaus/events{/privacy}",
"received_events_url": "https://api.github.com/users/colehaus/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @sanchit-gandhi ",
"Hey @colehaus! Sorry for the late reply here. We've currently decided not to implement `scan` for the Flax models in Transformers. You can see a brief reason for this here: https://github.com/huggingface/transformers/pull/24587#discussion_r1347737942\r\n\r\nHappy to re-open the conversation if you feel strongly about this! There was a WIP PR that shows how this could be done generally for Transformers models here: #18341\r\n\r\nBut currently I tend to view `scan` as a specific feature that can be built on top of the Transformers library by advanced users who require it.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,699 | 1,704 | 1,704 | NONE | null | ### Feature request
See the technique described [here](https://docs.kidger.site/equinox/tricks/#improve-compilation-speed-with-scan-over-layers) and [here](https://github.com/google-research/t5x/blob/main/t5x/examples/scalable_t5/README.md#scan-over-layers). The essence is using a JAX `scan` instead of a python loop to iterate over layers that have the same structure.
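For illustration, a minimal sketch of the lifted-scan pattern, following the Flax `nn.scan` how-to (the layer body here is a toy residual MLP, not the T5 block):

```python
import jax
import jax.numpy as jnp
import flax.linen as nn

class Block(nn.Module):
    features: int

    @nn.compact
    def __call__(self, x, _):
        h = nn.relu(nn.Dense(self.features)(x))
        return x + h, None  # (carry, per-step output)

class ScannedStack(nn.Module):
    features: int
    num_layers: int

    @nn.compact
    def __call__(self, x):
        # nn.scan stacks each layer's params along a new leading axis, so XLA
        # traces/compiles the block body once instead of `num_layers` times.
        ScanBlock = nn.scan(
            Block,
            variable_axes={"params": 0},
            variable_broadcast=False,
            split_rngs={"params": True},
            length=self.num_layers,
        )
        x, _ = ScanBlock(self.features)(x, None)
        return x

params = ScannedStack(features=16, num_layers=24).init(jax.random.PRNGKey(0), jnp.ones((1, 16)))
```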
### Motivation
The scan over layers technique allows JAX to "see" that the computational structure of each iteration is the same. This can dramatically reduce compile time and also system memory occupied by the JAX compilation cache (i.e. I believe if you have 25 layers in a model, the naive approach will end up with ~25 times as much JIT-compiled code since each layer will result in duplicative output code). My handwritten T5-like model uses ~1/50th of the system memory of the `transformers` Flax T5 models of similar size. It's easy to get system OOM errors with the current Flax implementation if you end up with multiple versions of the model compiled for different sequence lengths.
### Your contribution
It's possible I could submit a PR for this at some point in the future, but I can't be certain. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27418/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27418/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27417 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27417/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27417/comments | https://api.github.com/repos/huggingface/transformers/issues/27417/events | https://github.com/huggingface/transformers/pull/27417 | 1,986,502,145 | PR_kwDOCUB6oc5fFOTH | 27,417 | Bump pyarrow from 1.0.1 to 14.0.1 in /examples/research_projects/visual_bert | {
"login": "dependabot[bot]",
"id": 49699333,
"node_id": "MDM6Qm90NDk2OTkzMzM=",
"avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dependabot%5Bbot%5D",
"html_url": "https://github.com/apps/dependabot",
"followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events",
"type": "Bot",
"site_admin": false
} | [
{
"id": 1905493434,
"node_id": "MDU6TGFiZWwxOTA1NDkzNDM0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies",
"name": "dependencies",
"color": "0366d6",
"default": false,
"description": "Pull requests that update a dependency file"
}
] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@dependabot ignore this major version",
"OK, I won't notify you about version 14.x.x again, unless you re-open this PR."
] | 1,699 | 1,699 | 1,699 | CONTRIBUTOR | null | Bumps [pyarrow](https://github.com/apache/arrow) from 1.0.1 to 14.0.1.
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/apache/arrow/commit/ba537483618196f50c67a90a473039e4d5dc35e0"><code>ba53748</code></a> MINOR: [Release] Update versions for 14.0.1</li>
<li><a href="https://github.com/apache/arrow/commit/529f3768fa4fce80781cd1a3cbdcf3a826281b14"><code>529f376</code></a> MINOR: [Release] Update .deb/.rpm changelogs for 14.0.1</li>
<li><a href="https://github.com/apache/arrow/commit/b84bbcac64e184a2b58661385506cff23006cc10"><code>b84bbca</code></a> MINOR: [Release] Update CHANGELOG.md for 14.0.1</li>
<li><a href="https://github.com/apache/arrow/commit/f14170976372436ec1d03a724d8d3f3925484ecf"><code>f141709</code></a> <a href="https://redirect.github.com/apache/arrow/issues/38607">GH-38607</a>: [Python] Disable PyExtensionType autoload (<a href="https://redirect.github.com/apache/arrow/issues/38608">#38608</a>)</li>
<li><a href="https://github.com/apache/arrow/commit/5a37e741987e4cba41dfdad2331a308c95b62083"><code>5a37e74</code></a> <a href="https://redirect.github.com/apache/arrow/issues/38431">GH-38431</a>: [Python][CI] Update fs.type_name checks for s3fs tests (<a href="https://redirect.github.com/apache/arrow/issues/38455">#38455</a>)</li>
<li><a href="https://github.com/apache/arrow/commit/2dcee3f82c6cf54b53a64729fd81840efa583244"><code>2dcee3f</code></a> MINOR: [Release] Update versions for 14.0.0</li>
<li><a href="https://github.com/apache/arrow/commit/297428cbf2fc84a224654eb0b336614e6543d1aa"><code>297428c</code></a> MINOR: [Release] Update .deb/.rpm changelogs for 14.0.0</li>
<li><a href="https://github.com/apache/arrow/commit/3e9734f8830797fe09b883f00d349116d95c51f9"><code>3e9734f</code></a> MINOR: [Release] Update CHANGELOG.md for 14.0.0</li>
<li><a href="https://github.com/apache/arrow/commit/9f90995c8cee0d9906349f421f2445ab9adcb7ac"><code>9f90995</code></a> <a href="https://redirect.github.com/apache/arrow/issues/38332">GH-38332</a>: [CI][Release] Resolve symlinks in RAT lint (<a href="https://redirect.github.com/apache/arrow/issues/38337">#38337</a>)</li>
<li><a href="https://github.com/apache/arrow/commit/bd61239a32c94e37b9510071c0ffacad455798c0"><code>bd61239</code></a> <a href="https://redirect.github.com/apache/arrow/issues/35531">GH-35531</a>: [Python] C Data Interface PyCapsule Protocol (<a href="https://redirect.github.com/apache/arrow/issues/37797">#37797</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/apache/arrow/compare/apache-arrow-1.0.1...apache-arrow-14.0.1">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27417/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27417/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27417",
"html_url": "https://github.com/huggingface/transformers/pull/27417",
"diff_url": "https://github.com/huggingface/transformers/pull/27417.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27417.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27416 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27416/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27416/comments | https://api.github.com/repos/huggingface/transformers/issues/27416/events | https://github.com/huggingface/transformers/issues/27416 | 1,986,354,803 | I_kwDOCUB6oc52ZV5z | 27,416 | 4.35.0 requires troublesome pytorch install for JAX users | {
"login": "colehaus",
"id": 9491942,
"node_id": "MDQ6VXNlcjk0OTE5NDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/9491942?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/colehaus",
"html_url": "https://github.com/colehaus",
"followers_url": "https://api.github.com/users/colehaus/followers",
"following_url": "https://api.github.com/users/colehaus/following{/other_user}",
"gists_url": "https://api.github.com/users/colehaus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/colehaus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/colehaus/subscriptions",
"organizations_url": "https://api.github.com/users/colehaus/orgs",
"repos_url": "https://api.github.com/users/colehaus/repos",
"events_url": "https://api.github.com/users/colehaus/events{/privacy}",
"received_events_url": "https://api.github.com/users/colehaus/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"(Also, after the upgrade I get this warning:\r\n\r\nSome weights of the model checkpoint at t5-base were not used when initializing FlaxT5ForConditionalGeneration: {('decoder', 'block', '0', 'layer', '1', 'EncDecAttention', 'relative_attention_bias', 'kernel')}\r\n- This IS expected if you are initializing FlaxT5ForConditionalGeneration from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\r\n- This IS NOT expected if you are initializing FlaxT5ForConditionalGeneration from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\r\n\r\nBased on https://github.com/huggingface/transformers/issues/10221, I think the model is as expected and the issue is just that `model.safetensors` has an extra/inappropriate tensor there? If that's not the case, I can make a proper issue for this.)",
"cc @LysandreJik @sanchit-gandhi ",
"Thanks for reporting! This shouldn't be the case. We'll take a look and release a patch.",
"@sanchit-gandhi, @amyeroberts, curious to hear your thoughts:\r\n\r\nRight now the implementation defaults to `safetensors` to load weights (it has been the case for PyTorch since the initial implementation and for TF since https://github.com/huggingface/transformers/pull/19900.\r\n\r\nThis is problematic as the overwhelming majority of `safetensors` weights are saved from PyTorch and will need to be loaded in PyTorch. As @colehaus has shown above, installing PyTorch solves the problem, but is a requirement we really do not want for users wanting to run Flax/TF models.\r\n\r\nAFAIK we have no way of knowing the metadata of the checkpoint (and hence to know from which framework it was saved) without first fetching the entire checkpoint. I don't think we'll want to download safetensors checkpoints before reading the metadata (which will be PT in 99% of cases), and then fetching a .msgpack.\r\n\r\nGiven this, I think the solution is to default to `.msgpack` checkpoints (and their sharded counterparts) even if there are safetensors weights in the repository.\r\n\r\nEDIT: Alternative solutions that might be cleaner\r\n- After talking to @Narsil I understand that we could eventually just download the first few MBs of the checkpoints and have access to the metadata, to check there\r\n- The necessity to have `torch` installed for the conversion doesn't seem to have a hard necessity. I can look at relaxing that requirement, which would make everything much simpler.",
"As an immediate workaround @colehaus, please downgrade to v4.34; we'll aim to resolve this cleanly as this is clearly unintended behavior.",
"@LysandreJik Aligned with what you suggest here - as a first step defaulting to `.msgpack` seems like a pragmatic solution to fix quickly. \r\n\r\nI think we can do both of the alternative solutions. Grabbing the metadata and checking before downloading would have been my suggestion too if it was possible. ",
"@colehaus, could you please let me know if checking out this PR solves your issue? https://github.com/huggingface/transformers/pull/27460\r\n\r\nIt's a bandaid while we implement the metadata fetching/relaxing of the `torch` requirement for the conversion.",
"Installing from source should now solve the issue @colehaus. I'll release a patch probably tomorrow or Wednesday.",
"how to convert a .msgpack (flax/jax format) model weights to .bin (huggingface format) or .pth (pytorch format)? thanks a lot",
"See my comment here: https://github.com/huggingface/transformers/issues/26813#issuecomment-1835703292"
] | 1,699 | 1,701 | 1,699 | NONE | null | ### System Info
- `transformers` version: 4.35.0
- Platform: Linux-5.19.0-45-generic-x86_64-with-glibc2.35
- Python version: 3.11.6
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.2
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): 0.7.4 (gpu)
- Jax version: 0.4.20
- JaxLib version: 0.4.20
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@sanchit-gandhi
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
`FlaxT5ForConditionalGeneration.from_pretrained("t5-base")`
### Expected behavior
I was using transformers 4.31.0 and loading pre-trained models worked pretty smoothly. I recently tried to upgrade to 4.35.0 and encountered a cluster of issues:
- The same `from_pretrained` call fails at runtime now and says that pytorch must be installed (I assume this is due to the `safetensors` changes mentioned here: https://github.com/huggingface/transformers/releases/tag/v4.35.0)
- I don't see any way to disable the new loading behavior and revert to the old loading behavior
- I install pytorch (sort of unfortunate that a third-party lib is mandating the install of an entire separate DL stack)
- However, now `jax.devices()` warns about a failure to initialize CUDA and only returns the CPU as a device. This is because `pytorch` pins specific versions of CUDA support libs (https://github.com/pytorch/pytorch/blob/ceb07656c234b78bd045312f871b9cdca8c3c3ba/.github/scripts/generate_binary_build_matrix.py#L32) which are incompatible with those JAX wants.
- I also tried some other permutations of pytorch and JAX versions (between 0.4.14 and 0.4.20 IIRC). None of them are compatible. They all either fail at runtime with JAX's failure to initialize CUDA or throw an error about JAX being compiled against newer versions of the libs.
TL;DR: 4.35.0 requires `torch` alongside `jax` to load models and it's hard to do this in a way that satisfies both sets of constraints for CUDA support libs.
This was ultimately resolvable for me by installing the CPU version of pytorch, but I thought I'd write out this issue for anyone else that encounters it and so that perhaps this interaction can be handled and/or documented more fluently. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27416/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27416/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27415 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27415/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27415/comments | https://api.github.com/repos/huggingface/transformers/issues/27415/events | https://github.com/huggingface/transformers/pull/27415 | 1,986,336,751 | PR_kwDOCUB6oc5fEqPh | 27,415 | Adding LeakyReLU | {
"login": "rafaelpadilla",
"id": 31217453,
"node_id": "MDQ6VXNlcjMxMjE3NDUz",
"avatar_url": "https://avatars.githubusercontent.com/u/31217453?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rafaelpadilla",
"html_url": "https://github.com/rafaelpadilla",
"followers_url": "https://api.github.com/users/rafaelpadilla/followers",
"following_url": "https://api.github.com/users/rafaelpadilla/following{/other_user}",
"gists_url": "https://api.github.com/users/rafaelpadilla/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rafaelpadilla/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rafaelpadilla/subscriptions",
"organizations_url": "https://api.github.com/users/rafaelpadilla/orgs",
"repos_url": "https://api.github.com/users/rafaelpadilla/repos",
"events_url": "https://api.github.com/users/rafaelpadilla/events{/privacy}",
"received_events_url": "https://api.github.com/users/rafaelpadilla/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Failing tests are note related to this PR: \r\n`examples/pytorch/test_accelerate_examples.py:176: AssertionError`\r\n`tests/models/auto/test_tokenization_auto.py:428: AssertionError`",
"@rafaelpadilla Can you rebase on main? There were some fixes last week committed which should resolve these. "
] | 1,699 | 1,700 | 1,700 | CONTRIBUTOR | null | # What does this PR do?
Simple thing: it includes LeakyReLU in the `ACT2CLS` mapping.
Noticed that the RTDetr official implementation supports "leaky_relu" ([here](https://github.com/lyuwenyu/RT-DETR/blob/3330eca679a7d7cce16bbb10509099174a2f40bf/rtdetr_pytorch/src/nn/backbone/common.py#L70)), which is not being mapped in our ACT2CLS.
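Roughly, the addition amounts to something like the sketch below (illustrative, not the verbatim diff):

```python
from torch import nn

# ACT2CLS maps activation-name strings to their implementing classes, so supporting
# "leaky_relu" boils down to one extra entry (the other activations are elided here).
ACT2CLS = {
    "relu": nn.ReLU,
    "leaky_relu": nn.LeakyReLU,
}

act_fn = ACT2CLS["leaky_relu"]()  # usage: look up by config string, then instantiate
```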
Edited: CI failing tests are not related to this PR.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@amyeroberts | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27415/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27415/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27415",
"html_url": "https://github.com/huggingface/transformers/pull/27415",
"diff_url": "https://github.com/huggingface/transformers/pull/27415.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27415.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27414 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27414/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27414/comments | https://api.github.com/repos/huggingface/transformers/issues/27414/events | https://github.com/huggingface/transformers/pull/27414 | 1,986,131,840 | PR_kwDOCUB6oc5fD9Yf | 27,414 | remove failing tests and clean FE files | {
"login": "ylacombe",
"id": 52246514,
"node_id": "MDQ6VXNlcjUyMjQ2NTE0",
"avatar_url": "https://avatars.githubusercontent.com/u/52246514?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ylacombe",
"html_url": "https://github.com/ylacombe",
"followers_url": "https://api.github.com/users/ylacombe/followers",
"following_url": "https://api.github.com/users/ylacombe/following{/other_user}",
"gists_url": "https://api.github.com/users/ylacombe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ylacombe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ylacombe/subscriptions",
"organizations_url": "https://api.github.com/users/ylacombe/orgs",
"repos_url": "https://api.github.com/users/ylacombe/repos",
"events_url": "https://api.github.com/users/ylacombe/events{/privacy}",
"received_events_url": "https://api.github.com/users/ylacombe/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Could you run the feature extraction tests for all the audio models to double check they're passing before merging please! ",
"Sure thing!",
"I've double check every audio models! Everything passes",
"_The documentation is not available anymore as the PR was closed or merged._",
"Cool - do you have permissions to merge? I can merge if not ",
"I do, it's merged!"
] | 1,699 | 1,699 | 1,699 | COLLABORATOR | null | # What does this PR do?
Fixes [this](https://app.circleci.com/pipelines/github/huggingface/transformers/77846/workflows/8ed051fc-a105-462e-9097-91e919374c6e/jobs/992100), which was introduced by #26339.
To explain a bit: in #26339 I mutualised an override of `to_dict` which deleted `mel_filters` and `windows`. The failing test was trying to retrieve those parameters to assert equality.
I've also cleaned up the feature extractors that overrode `to_dict`.
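For context, a rough sketch of the kind of shared override being described (attribute names follow the description above and may differ from the actual code):

```python
from transformers import SequenceFeatureExtractor

class MyAudioFeatureExtractor(SequenceFeatureExtractor):
    def to_dict(self):
        output = super().to_dict()
        # Derived arrays are recomputed at load time, so drop them from serialization.
        for name in ("mel_filters", "windows"):
            output.pop(name, None)
        return output
```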
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27414/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27414/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27414",
"html_url": "https://github.com/huggingface/transformers/pull/27414",
"diff_url": "https://github.com/huggingface/transformers/pull/27414.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27414.patch",
"merged_at": 1699554942000
} |
https://api.github.com/repos/huggingface/transformers/issues/27413 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27413/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27413/comments | https://api.github.com/repos/huggingface/transformers/issues/27413/events | https://github.com/huggingface/transformers/pull/27413 | 1,986,129,264 | PR_kwDOCUB6oc5fD80S | 27,413 | Run all tests if `circleci/create_circleci_config.py` is modified | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,699 | 1,699 | 1,699 | COLLABORATOR | null | # What does this PR do?
This file is also a central file for the test setup. If a PR modifies this file and only this file, currently none of the test jobs defined in it would run (only the `empty` job). We have been misled by green CI runs that were actually empty.
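The intended behaviour is roughly the following (hypothetical sketch only — the file list and return format of the real test-fetching logic differ):
```python
# Hypothetical sketch: if a central CI file is among the modified files,
# fall back to the full suite instead of an empty job.
CENTRAL_CI_FILES = {"setup.py", ".circleci/create_circleci_config.py"}

def tests_to_run(modified_files):
    if any(f in CENTRAL_CI_FILES for f in modified_files):
        return ["tests"]  # run everything
    return [f for f in modified_files if f.startswith("tests/")]

print(tests_to_run([".circleci/create_circleci_config.py"]))  # ['tests']
```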
Let's run all tests in this case. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27413/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27413/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27413",
"html_url": "https://github.com/huggingface/transformers/pull/27413",
"diff_url": "https://github.com/huggingface/transformers/pull/27413.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27413.patch",
"merged_at": 1699563667000
} |
https://api.github.com/repos/huggingface/transformers/issues/27412 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27412/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27412/comments | https://api.github.com/repos/huggingface/transformers/issues/27412/events | https://github.com/huggingface/transformers/pull/27412 | 1,986,123,794 | PR_kwDOCUB6oc5fD7oG | 27,412 | Extend save_pretrained to offloaded models | {
"login": "blbadger",
"id": 54602201,
"node_id": "MDQ6VXNlcjU0NjAyMjAx",
"avatar_url": "https://avatars.githubusercontent.com/u/54602201?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/blbadger",
"html_url": "https://github.com/blbadger",
"followers_url": "https://api.github.com/users/blbadger/followers",
"following_url": "https://api.github.com/users/blbadger/following{/other_user}",
"gists_url": "https://api.github.com/users/blbadger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/blbadger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/blbadger/subscriptions",
"organizations_url": "https://api.github.com/users/blbadger/orgs",
"repos_url": "https://api.github.com/users/blbadger/repos",
"events_url": "https://api.github.com/users/blbadger/events{/privacy}",
"received_events_url": "https://api.github.com/users/blbadger/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
] | [
"cc @LysandreJik ",
"Curious what you think @muellerzr @pacman100 ",
"Overall I can see this being a pretty nice idea. Made some nits to improve. cc @SunMarc for your thoughts as well :) ",
"Would it be possible to eventually offload this to `accelerate`?",
"> Would it be possible to eventually offload this to accelerate?\r\n\r\nYes, we can definitely put it in a `get_offloaded_model_state_dict()` function in accelerate and use it here. ",
"Thanks very much for reviewing everyone! @muellerzr @SunMarc your suggestions are much appreciated, I have gone ahead and implemented pretty much all of them.\r\n\r\nA test for `save_pretrained` applied to a model with offloaded parameters has been added to `test_modeling_utils.py`. I previously used an equality test on the state dictionaries themselves and can add that too if that would be desirable.",
"@SunMarc sounds good to me! Thanks for the review and for giving pointers on where the method would live as well. I will open a PR in `accelerate` for adding the function `get_state_dict_offloaded_model()` with the proper integrations. ",
"@ArthurZucker thanks much for the review!\r\n\r\nJust to make sure I understand correctly, are you referring to adding compatibility for an offloaded model to be saved when it does not fit in cpu memory, with a possible method being onloading the state dictionary in parts and saving these parts as shards?\r\n\r\nI certainly agree that it would be best to be able to save models that don't fit in cpu memory, rather than just gpu memory as is currently the case. \r\n\r\nAs a general question to guide this, should these changes be added here or in `accelerate`? Being that most of this PR's additions were sent to https://github.com/huggingface/accelerate/pull/2156 we could implement this compatibility feature either in `transformers/modeling_utils` or in `accelerator.accelerate`.",
"It's more like it does not make sense for me to support save_pretrained for offloaded models if we don't support the most common use case for offload which is that the RAM is not big enough to hold the model (you can't save from the GPU anyway) ! \r\nThis could come in a followup PR, but it clearly makes more sense to support both instead of overloading the ram even if we have a big ram. Will also be faster\r\n ",
"We should be good to go for saving models too large to fit on cpu memory (although I am still cleaning up the code and adding to the test). The approach is to map shard keys to modules and onload each shard's key just before saving that shard, before offloading once again. \r\n\r\nSafe serialization provides a wrinkle in that meta tensors are by default recognized as shared tensors. I am not sure if there is a good way around this without onloading all tensors, which is a no-go if we are trying to preserve memory. Open to suggestions on this, however.",
"Asking @muellerzr to do a first pass before me! ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Code has now been refactored and the test is updated, all are passing now.\r\n\r\nI have run the test externally with small `max_shard_size` to confirm that the model only loads the specified amount into memory at any one time (checked via linux tools). A memory trace (ie via `tracemalloc` or `memory_profiler`) can be added to the test if that is considered desirable too.",
"Went ahead and added cpu memory checks via `psutil` to the test. The idea is that the `model.save_pretrained` function will onload no more memory than `max_shard_size` from the disk to the cpu at any given time during the saving process, accounting for a margin of error that is somewhat model-dependent. \r\n\r\nThis is perhaps easier to check with a large model, but I think this test (for a very small model) is still valid as-is: if we increase the shard size substantially (or use the default shard size which is larger than the model size) the memory check fails. ",
"Thanks a lot, I'm just waiting for @muellerzr to run a first pass and will review as well 🤗 ",
"No problem, sounds good to me! I have also performed tests on disk storage used and there does not appear to be any change in storage usage compared to the `head` branch. I am not sure if we need a disk storage check added to the current test, however. ",
"After some re-examination this approach seems fairly robust and straightforward with the exception of the sharded memory allocation. It appears to me that the current block allocation using `id_tensor_storage()` will not suffice for saving offloaded modules (hence the extra conditions added to avoid classifying 'meta' device tensors as shared tensors), but I may be overlooking something there @muellerzr "
] | 1,699 | 1,707 | null | CONTRIBUTOR | null | # What does this PR do?
Fixes #20072 and addresses the second part of https://github.com/huggingface/peft/issues/868
Models with offloaded weights are currently incompatible with `save_pretrained`. This PR allows large models that are loaded onto the gpu and cpu to be saved, which is particularly useful for big models that have undergone merging and unloading via https://github.com/huggingface/peft/pull/1063.
The implementation is to iterate through modules and onload parameters to the execution device (typically gpu) before sending the appropriate elements of the state dict to the cpu in-place, where the final state dictionary is assembled and saved.
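A minimal sketch of that loop (hook handling and names are illustrative, not the exact code in this PR):
```python
import torch

def gather_offloaded_state_dict(model: torch.nn.Module) -> dict:
    state_dict = {}
    for module_name, module in model.named_modules():
        hook = getattr(module, "_hf_hook", None)  # accelerate attaches device hooks here
        if hook is not None:
            hook.pre_forward(module)  # onload this module's weights to its execution device
        for param_name, param in module.named_parameters(recurse=False):
            key = f"{module_name}.{param_name}" if module_name else param_name
            state_dict[key] = param.detach().to("cpu")  # assemble the final dict on cpu
        if hook is not None:
            hook.post_forward(module, None)  # offload again so memory usage stays flat
    return state_dict
```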
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
Still working on the tests (some small models are not compatible with offloading due to architectural considerations) but am happy to submit a colab version with a large model in the meantime:)
## Who can review?
Anyone!
@pacman100
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27412/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27412/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27412",
"html_url": "https://github.com/huggingface/transformers/pull/27412",
"diff_url": "https://github.com/huggingface/transformers/pull/27412.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27412.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27411 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27411/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27411/comments | https://api.github.com/repos/huggingface/transformers/issues/27411/events | https://github.com/huggingface/transformers/pull/27411 | 1,986,071,999 | PR_kwDOCUB6oc5fDwYl | 27,411 | Faster generation using AWQ + Fused modules | {
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27411). All of your documentation changes will be reflected on that endpoint.",
"Before moving forward with tests and advanced docs, I would love to have an early feedback of the API that is described in the PR description. cc @amyeroberts , whenever you have time, i would appreciate your feedback on this PR 🙏 Thanks! ",
"Thank you very much @amyeroberts for your extensive review! I addressed most of your comments and left some open questions - let me know what do you think! 🙏 ",
"Thanks @amyeroberts @SunMarc for your great reviews!\r\n@SunMarc I just want to get more clarification on this comment, otherwise good to merge IMO !"
] | 1,699 | 1,701 | 1,701 | CONTRIBUTOR | null | # What does this PR do?
Introduces a new feature: fused-module generation using the `autoawq` library. Users need to specify the modules they want to fuse inside `fusing_mapping`.
The API is as follows:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, AwqConfig, TextStreamer
model_id = "TheBloke/Mistral-7B-OpenOrca-AWQ"
torch_device = "cuda" if torch.cuda.is_available() else "cpu"
quantization_config = AwqConfig(
bits=4,
do_fuse=True,
fuse_max_seq_len=512,
)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=quantization_config).to(0)
tokenizer = AutoTokenizer.from_pretrained(model_id)
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
prompt_template = """\
<|im_start|>system
You are MistralOrca, a large language model trained by Alignment Lab AI. Write out your reasoning step-by-step to be sure you get the right answers!<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"""
prompt = "You're standing on the surface of the Earth. "\
"You walk one mile south, one mile west and one mile north. "\
"You end up exactly where you started. Where are you?"
tokenizer.pad_token = tokenizer.eos_token
inputs = tokenizer([prompt_template.format(prompt=prompt), prompt_template.format(prompt=prompt), prompt_template.format(prompt=prompt)], return_tensors="pt", padding=True).to(0)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```
## Before this PR:

## After this PR:

## TODOs:
- [x] Block save_pretrained if one creates a fused model
- [x] Fuse MLP
- [x] wait until dynamic batch size is resolved on auto-awq side : https://github.com/casper-hansen/AutoAWQ/pull/181
- [x] Force users to use `autoawq>=0.1.7`
- [x] Tests
- [x] Benchmarks
- [x] Docs
cc @amyeroberts @casper-hansen @SunMarc | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27411/reactions",
"total_count": 7,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 7,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27411/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27411",
"html_url": "https://github.com/huggingface/transformers/pull/27411",
"diff_url": "https://github.com/huggingface/transformers/pull/27411.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27411.patch",
"merged_at": 1701774885000
} |
https://api.github.com/repos/huggingface/transformers/issues/27410 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27410/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27410/comments | https://api.github.com/repos/huggingface/transformers/issues/27410/events | https://github.com/huggingface/transformers/pull/27410 | 1,985,991,148 | PR_kwDOCUB6oc5fDeqJ | 27,410 | [ignore] CI stuff | {
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27410). All of your documentation changes will be reflected on that endpoint."
] | 1,699 | 1,699 | 1,699 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27410/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27410/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27410",
"html_url": "https://github.com/huggingface/transformers/pull/27410",
"diff_url": "https://github.com/huggingface/transformers/pull/27410.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27410.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27409 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27409/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27409/comments | https://api.github.com/repos/huggingface/transformers/issues/27409/events | https://github.com/huggingface/transformers/pull/27409 | 1,985,970,610 | PR_kwDOCUB6oc5fDaNB | 27,409 | [`dependency`] update pillow pins | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,699 | 1,700 | 1,700 | COLLABORATOR | null | # What does this PR do?
Following the [High Severity CVE](https://github.com/advisories/GHSA-j7hp-h8jx-5ppr), this updates the pinned versions. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27409/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27409/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27409",
"html_url": "https://github.com/huggingface/transformers/pull/27409",
"diff_url": "https://github.com/huggingface/transformers/pull/27409.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27409.patch",
"merged_at": 1700642431000
} |
https://api.github.com/repos/huggingface/transformers/issues/27408 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27408/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27408/comments | https://api.github.com/repos/huggingface/transformers/issues/27408/events | https://github.com/huggingface/transformers/pull/27408 | 1,985,943,844 | PR_kwDOCUB6oc5fDUYp | 27,408 | Final fix of the accelerate installation issue | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Still get `@4f100318f499cff48b62555ca7d96ad97d7cb9be`, I will check check!!!!!!",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27408). All of your documentation changes will be reflected on that endpoint."
] | 1,699 | 1,699 | 1,699 | COLLABORATOR | null | # What does this PR do?
Fixfixfix | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27408/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27408/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27408",
"html_url": "https://github.com/huggingface/transformers/pull/27408",
"diff_url": "https://github.com/huggingface/transformers/pull/27408.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27408.patch",
"merged_at": 1699552349000
} |
https://api.github.com/repos/huggingface/transformers/issues/27407 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27407/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27407/comments | https://api.github.com/repos/huggingface/transformers/issues/27407/events | https://github.com/huggingface/transformers/pull/27407 | 1,985,844,660 | PR_kwDOCUB6oc5fC-Qm | 27,407 | Cache round 2 | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,699 | 1,701 | 1,701 | MEMBER | null | # What does this PR do?
Adds indexing to the cache, as `generate` relies on things like `past_key_values[0][0].shape` (the first index selects the layer; the second index is 0 for the attention `key` and 1 for the attention `value`).
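As a rough illustration (a toy class, not the actual `Cache` implementation), supporting indexing lets the new cache object mimic the legacy tuple-of-tuples layout:
```python
import torch

class ToyCache:
    def __init__(self):
        self.key_cache = []    # one tensor per layer
        self.value_cache = []  # one tensor per layer

    def __getitem__(self, layer_idx):
        # past_key_values[layer_idx][0] -> key, past_key_values[layer_idx][1] -> value
        return (self.key_cache[layer_idx], self.value_cache[layer_idx])

    def __len__(self):
        return len(self.key_cache)

cache = ToyCache()
cache.key_cache.append(torch.zeros(1, 8, 5, 64))
cache.value_cache.append(torch.zeros(1, 8, 5, 64))
print(cache[0][0].shape)  # torch.Size([1, 8, 5, 64]), same access pattern as before
```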
After this change, the following snippet produces the same result as before:
```py
from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed
import torch
set_seed(0)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf", torch_dtype=torch.float16, device_map="auto")
inputs = tokenizer(["The best color is"], return_tensors="pt").to(model.device)
gen_out = model.generate(
**inputs,
do_sample=False,
max_new_tokens=20,
use_legacy_cache=False,
num_beams=2,
num_return_sequences=2,
)
print(tokenizer.batch_decode(gen_out, skip_special_tokens=True))
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27407/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27407/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27407",
"html_url": "https://github.com/huggingface/transformers/pull/27407",
"diff_url": "https://github.com/huggingface/transformers/pull/27407.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27407.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27406 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27406/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27406/comments | https://api.github.com/repos/huggingface/transformers/issues/27406/events | https://github.com/huggingface/transformers/pull/27406 | 1,985,830,892 | PR_kwDOCUB6oc5fC7RT | 27,406 | Fix RequestCounter to make it more future-proof | {
"login": "Wauplin",
"id": 11801849,
"node_id": "MDQ6VXNlcjExODAxODQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/11801849?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Wauplin",
"html_url": "https://github.com/Wauplin",
"followers_url": "https://api.github.com/users/Wauplin/followers",
"following_url": "https://api.github.com/users/Wauplin/following{/other_user}",
"gists_url": "https://api.github.com/users/Wauplin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Wauplin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Wauplin/subscriptions",
"organizations_url": "https://api.github.com/users/Wauplin/orgs",
"repos_url": "https://api.github.com/users/Wauplin/repos",
"events_url": "https://api.github.com/users/Wauplin/events{/privacy}",
"received_events_url": "https://api.github.com/users/Wauplin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I will merge once something on main is fixed. (should be soon)"
] | 1,699 | 1,699 | 1,699 | CONTRIBUTOR | null | Related to https://github.com/huggingface/transformers/pull/27389 and https://github.com/huggingface/transformers/pull/27393 (cc @amyeroberts @ydshieh).
The problem came from the fact that the `transformers` test suite was trying to mock `huggingface_hub.utils.get_session`, which is an internal method that I sometimes move/adapt for my needs. It shouldn't change much in the future, but to make it more future-proof I chose to intercept the `urllib3` debug logs directly. It's not perfect, as urllib3 might change its log format, and we could use `pytest.caplog` instead, but at this stage I think we are fine.
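The idea looks roughly like this (a sketch, not the exact helper added here):
```python
import logging
from collections import Counter

class RequestCounter(logging.Handler):
    """Counts HEAD/GET/POST requests by listening to urllib3's debug logs."""

    def __enter__(self):
        self.counts = Counter()
        self._logger = logging.getLogger("urllib3.connectionpool")
        self._old_level = self._logger.level
        self._logger.setLevel(logging.DEBUG)
        self._logger.addHandler(self)
        return self.counts

    def __exit__(self, *exc):
        self._logger.removeHandler(self)
        self._logger.setLevel(self._old_level)

    def emit(self, record):
        message = record.getMessage()
        for verb in ("HEAD", "GET", "POST"):
            if f'"{verb} ' in message:
                self.counts[verb] += 1
```
Used as `with RequestCounter() as counts: ...`, it keeps working as long as urllib3 logs one line per request.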
I also unskipped the 2 tests that were previously failing. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27406/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27406/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27406",
"html_url": "https://github.com/huggingface/transformers/pull/27406",
"diff_url": "https://github.com/huggingface/transformers/pull/27406.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27406.patch",
"merged_at": 1699552407000
} |
https://api.github.com/repos/huggingface/transformers/issues/27405 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27405/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27405/comments | https://api.github.com/repos/huggingface/transformers/issues/27405/events | https://github.com/huggingface/transformers/issues/27405 | 1,985,825,212 | I_kwDOCUB6oc52XUm8 | 27,405 | How Can I use cashed models from HuggingFace? | {
"login": "khabibulloevm",
"id": 86304910,
"node_id": "MDQ6VXNlcjg2MzA0OTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/86304910?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/khabibulloevm",
"html_url": "https://github.com/khabibulloevm",
"followers_url": "https://api.github.com/users/khabibulloevm/followers",
"following_url": "https://api.github.com/users/khabibulloevm/following{/other_user}",
"gists_url": "https://api.github.com/users/khabibulloevm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/khabibulloevm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/khabibulloevm/subscriptions",
"organizations_url": "https://api.github.com/users/khabibulloevm/orgs",
"repos_url": "https://api.github.com/users/khabibulloevm/repos",
"events_url": "https://api.github.com/users/khabibulloevm/events{/privacy}",
"received_events_url": "https://api.github.com/users/khabibulloevm/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @khabibulloevm, thanks for opening an issues, \r\n\r\nThis is a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.\r\n\r\nThat being said, all of these files are used to construct a tokenizer which will perform tokenization.",
"Hello, @amyeroberts I apologize for opening the problem in the wrong place. Could you tell me where I can find information about how these files are used? That is, I want to see how these files are accessed, I will be very grateful for your answer!",
"You can inspect the tokenization files to see how they're used: \r\n* Model-specific tokenizer: https://github.com/huggingface/transformers/blob/c5037b459e117b9286c611092f38663f6cb763b0/src/transformers/models/bert/tokenization_bert.py#L4\r\n* `from_pretrained` method: https://github.com/huggingface/transformers/blob/c5037b459e117b9286c611092f38663f6cb763b0/src/transformers/tokenization_utils_base.py#L1799",
"for that you will need:\r\n1. call `x.save_pretrained(output_dirrectory)` for each x where x is tokenizer, model and feature_extractor(for some models there may be no featrue extractor, in that case ignore it) to the same output directory.\r\n2. then you can use `x.from_pretrained` to read the model, tokenizer and feature extractor from saved directories.\r\n\r\nFor example, download your model and save:\r\n\r\n```python\r\n\r\nfrom transformers import WhisperForConditionalGeneration, WhisperFeatureExtractor, WhisperTokenizer\r\n# this will download weights and configs from hf\r\nmodel = WhisperForConditionalGeneration.from_pretrained(\"openai/whisper-small\")\r\ntokenizer = WhisperTokenizer.from_pretrained(\"openai/whisper-small\")\r\nfeature_extractor = WhisperFeatureExtractor.from_pretrained(\"openai/whisper-small\")\r\n\r\noutput_dir = './my_awesome_model'\r\nos.makedirs(output_dir, exists_ok=True)\r\nmodel.save_pretrained(output_dir)\r\ntokenizer.save_pretrained(output_dir)\r\nfeature_extractor.save_pretrained(output_dir)\r\n```\r\n\r\nThen read your model, but supply output_dir to read it from file system (without internet):\r\n```python\r\n\r\nfrom transformers import (\r\n WhisperForConditionalGeneration,\r\n WhisperFeatureExtractor,\r\n WhisperTokenizer,\r\n)\r\n\r\nmodel = WhisperForConditionalGeneration.from_pretrained(\"./my_awesome_model\")\r\ntokenizer = WhisperTokenizer.from_pretrained(\"./my_awesome_model\")\r\nfeature_extractor = WhisperFeatureExtractor.from_pretrained(\"./my_awesome_model\")\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,699 | 1,702 | 1,702 | NONE | null | I want to use models from: [https://huggingface.co/ARTeLab/mbart-summarization-mlsum](https://stackoverflow.com/) in offline mode, meaning that after downloading them from Hugging Face, they will be saved locally and I will be able to use them offline. However, I don't know how to do this. If anyone has already figured this out, please advise me. I use these lines to download models:
```
from transformers import MBartTokenizer, MBartForConditionalGeneration
tokenizer = MBartTokenizer.from_pretrained("ARTeLab/mbart-summarization-mlsum")
model = MBartForConditionalGeneration.from_pretrained("ARTeLab/mbart-summarization-mlsum")
```
The problem is that when I run this line, I download several files from the repository at once, and I don’t know which one is then used for tokenization:

| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27405/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27405/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27404 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27404/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27404/comments | https://api.github.com/repos/huggingface/transformers/issues/27404/events | https://github.com/huggingface/transformers/pull/27404 | 1,985,794,169 | PR_kwDOCUB6oc5fCzRS | 27,404 | Use editable install for git deps | {
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,699 | 1,699 | 1,699 | CONTRIBUTOR | null | # What does this PR do?
This PR solves the issue that https://github.com/huggingface/transformers/pull/27398 tried to address, which turned out to be an even more subtle bug that has existed in the workflow:
When using a cache (as on any PR that has already been opened and run), if a git-installed dependency (such as `accelerate`) is installed with `--upgrade-strategy eager` (so as to install the latest versions of the library's dependencies), pip will *ignore* the actual library itself!
The solution to this (tested locally a number of times) is to use an editable install. `pip freeze` will finally show the right version, meaning the library was actually updated to the latest commit.
Steps to try yourself:
1. `pip uninstall accelerate`
2. `pip install --upgrade-strategy eager git+https://github.com/huggingface/accelerate@4f100318f499cff48b62555ca7d96ad97d7cb9be`
3. `pip freeze | grep "accelerate"` (should show the `4f1...` commit)
4. `pip install --upgrade-strategy eager git+https://github.com/huggingface/accelerate@main#egg=accelerate`
5. `pip freeze | grep "accelerate"` (will **still** show the `4f1...` commit)
6. `pip install -e --upgrade-strategy eager git+https://github.com/huggingface/accelerate@main#egg=accelerate`
7. `pip freeze | grep "accelerate"` (will **finally** show the right commit)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ydshieh @amyeroberts | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27404/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27404/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27404",
"html_url": "https://github.com/huggingface/transformers/pull/27404",
"diff_url": "https://github.com/huggingface/transformers/pull/27404.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27404.patch",
"merged_at": 1699543213000
} |
https://api.github.com/repos/huggingface/transformers/issues/27403 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27403/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27403/comments | https://api.github.com/repos/huggingface/transformers/issues/27403/events | https://github.com/huggingface/transformers/pull/27403 | 1,985,791,959 | PR_kwDOCUB6oc5fCyyU | 27,403 | Add version check for Jinja | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,699 | 1,699 | 1,699 | MEMBER | null | As @philschmid noticed, `apply_chat_template` requires `jinja` version 3.x, but it's possible for some users to still have `2.x` installed. We now raise a useful error in this case. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27403/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27403/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27403",
"html_url": "https://github.com/huggingface/transformers/pull/27403",
"diff_url": "https://github.com/huggingface/transformers/pull/27403.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27403.patch",
"merged_at": 1699894890000
} |
https://api.github.com/repos/huggingface/transformers/issues/27402 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27402/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27402/comments | https://api.github.com/repos/huggingface/transformers/issues/27402/events | https://github.com/huggingface/transformers/pull/27402 | 1,985,786,160 | PR_kwDOCUB6oc5fCxgm | 27,402 | Fix `Owlv2` checkpoint name and a default value in `Owlv2VisionConfig` | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,699 | 1,699 | 1,699 | COLLABORATOR | null | # What does this PR do?
See comment. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27402/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27402/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27402",
"html_url": "https://github.com/huggingface/transformers/pull/27402",
"diff_url": "https://github.com/huggingface/transformers/pull/27402.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27402.patch",
"merged_at": 1699562343000
} |
https://api.github.com/repos/huggingface/transformers/issues/27401 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27401/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27401/comments | https://api.github.com/repos/huggingface/transformers/issues/27401/events | https://github.com/huggingface/transformers/pull/27401 | 1,985,785,664 | PR_kwDOCUB6oc5fCxZr | 27,401 | Translating `en/model_doc` docs to Japanese. | {
"login": "Yuki-Imajuku",
"id": 72183189,
"node_id": "MDQ6VXNlcjcyMTgzMTg5",
"avatar_url": "https://avatars.githubusercontent.com/u/72183189?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Yuki-Imajuku",
"html_url": "https://github.com/Yuki-Imajuku",
"followers_url": "https://api.github.com/users/Yuki-Imajuku/followers",
"following_url": "https://api.github.com/users/Yuki-Imajuku/following{/other_user}",
"gists_url": "https://api.github.com/users/Yuki-Imajuku/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Yuki-Imajuku/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Yuki-Imajuku/subscriptions",
"organizations_url": "https://api.github.com/users/Yuki-Imajuku/orgs",
"repos_url": "https://api.github.com/users/Yuki-Imajuku/repos",
"events_url": "https://api.github.com/users/Yuki-Imajuku/events{/privacy}",
"received_events_url": "https://api.github.com/users/Yuki-Imajuku/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27401). All of your documentation changes will be reflected on that endpoint."
] | 1,699 | 1,700 | 1,700 | CONTRIBUTOR | null | # What does this PR do?
translating `en/model_doc` docs to Japanese.
Fixes #27392
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@stevhliu and @MKhalusova | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27401/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27401/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27401",
"html_url": "https://github.com/huggingface/transformers/pull/27401",
"diff_url": "https://github.com/huggingface/transformers/pull/27401.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27401.patch",
"merged_at": 1700072032000
} |
https://api.github.com/repos/huggingface/transformers/issues/27400 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27400/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27400/comments | https://api.github.com/repos/huggingface/transformers/issues/27400/events | https://github.com/huggingface/transformers/pull/27400 | 1,985,758,753 | PR_kwDOCUB6oc5fCrhS | 27,400 | update Bark FA2 docs | {
"login": "ylacombe",
"id": 52246514,
"node_id": "MDQ6VXNlcjUyMjQ2NTE0",
"avatar_url": "https://avatars.githubusercontent.com/u/52246514?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ylacombe",
"html_url": "https://github.com/ylacombe",
"followers_url": "https://api.github.com/users/ylacombe/followers",
"following_url": "https://api.github.com/users/ylacombe/following{/other_user}",
"gists_url": "https://api.github.com/users/ylacombe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ylacombe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ylacombe/subscriptions",
"organizations_url": "https://api.github.com/users/ylacombe/orgs",
"repos_url": "https://api.github.com/users/ylacombe/repos",
"events_url": "https://api.github.com/users/ylacombe/events{/privacy}",
"received_events_url": "https://api.github.com/users/ylacombe/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks! Merging!"
] | 1,699 | 1,699 | 1,699 | COLLABORATOR | null | # What does this PR do?
Following @ArthurZucker's [review](https://github.com/huggingface/transformers/pull/27364#pullrequestreview-1722408067) in #27364, I've added a section on FA2 to the Bark docs and a mention of Bark's FA2 support in the general section on FA2!
Note that this comment https://github.com/huggingface/transformers/pull/27364#discussion_r1387923555 is already addressed since `self.dropout` is already a float!
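For reference, the documented usage looks roughly like this (hedged example — the exact kwarg has changed across transformers versions, e.g. `use_flash_attention_2` vs `attn_implementation`, so check the docs for your installed version):
```python
import torch
from transformers import BarkModel

# requires a GPU and the flash-attn 2 package installed
model = BarkModel.from_pretrained(
    "suno/bark-small",
    torch_dtype=torch.float16,
    attn_implementation="flash_attention_2",
).to("cuda")
```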
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
cc @amyeroberts and @ArthurZucker | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27400/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27400/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27400",
"html_url": "https://github.com/huggingface/transformers/pull/27400",
"diff_url": "https://github.com/huggingface/transformers/pull/27400.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27400.patch",
"merged_at": 1699623630000
} |
https://api.github.com/repos/huggingface/transformers/issues/27399 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27399/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27399/comments | https://api.github.com/repos/huggingface/transformers/issues/27399/events | https://github.com/huggingface/transformers/pull/27399 | 1,985,658,229 | PR_kwDOCUB6oc5fCVS6 | 27,399 | Fix fuyu checkpoint repo in `FuyuConfig` | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,699 | 1,699 | 1,699 | COLLABORATOR | null | # What does this PR do?
There is no `adept/fuyu-8b-base` on the Hub. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27399/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27399/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27399",
"html_url": "https://github.com/huggingface/transformers/pull/27399",
"diff_url": "https://github.com/huggingface/transformers/pull/27399.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27399.patch",
"merged_at": 1699541266000
} |
https://api.github.com/repos/huggingface/transformers/issues/27398 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27398/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27398/comments | https://api.github.com/repos/huggingface/transformers/issues/27398/events | https://github.com/huggingface/transformers/pull/27398 | 1,985,634,293 | PR_kwDOCUB6oc5fCQCF | 27,398 | Skip flaky accelerate test | {
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@amyeroberts can you link to a failure? I thought #27378 fixed it :( (I tried to run every combo to see what that value would be)",
"@muellerzr Sure! Here's two CI runs with the test failing after rebasing on main: \r\n* https://app.circleci.com/pipelines/github/huggingface/transformers/77801/workflows/57ee7a82-f95d-43e2-8431-44420be74491/jobs/991432\r\n* https://app.circleci.com/pipelines/github/huggingface/transformers/77800/workflows/147b815d-30df-41c9-90ad-5013a78e0e1c/jobs/991478",
"_The documentation is not available anymore as the PR was closed or merged._",
"Ah pytorch test, got it. I'll see if I can recreate this and see during the alignment PR what that looks like",
"I can recreate locally it failing with 0.51 (same as the accelerate example that fixed), I can't recreate this 0.67 result :( ",
"> I can recreate locally it failing with 0.51 (same as the accelerate example that fixed), I can't recreate this 0.67 result :(\r\n\r\n@ydshieh In this case - what would you recommend? Hold off until we can replicate and know we've found all the issues? ",
"re; slack: this is not the right issue. tl;dr the way we install the git pip dependencies does not actually grab the latest main commit, a different fix is needed. ",
"https://github.com/huggingface/transformers/pull/27404 will solve this in full :) "
] | 1,699 | 1,699 | 1,699 | COLLABORATOR | null | # What does this PR do?
Skipping another flaky accelerate test until values have been aligned with the accelerate library
cc @muellerzr | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27398/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27398/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27398",
"html_url": "https://github.com/huggingface/transformers/pull/27398",
"diff_url": "https://github.com/huggingface/transformers/pull/27398.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27398.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27397 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27397/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27397/comments | https://api.github.com/repos/huggingface/transformers/issues/27397/events | https://github.com/huggingface/transformers/issues/27397 | 1,985,631,869 | I_kwDOCUB6oc52WlZ9 | 27,397 | SafetensorError: Error while deserializing header: InvalidHeaderDeserialization when open .safetensor model | {
"login": "adhiiisetiawan",
"id": 51025603,
"node_id": "MDQ6VXNlcjUxMDI1NjAz",
"avatar_url": "https://avatars.githubusercontent.com/u/51025603?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adhiiisetiawan",
"html_url": "https://github.com/adhiiisetiawan",
"followers_url": "https://api.github.com/users/adhiiisetiawan/followers",
"following_url": "https://api.github.com/users/adhiiisetiawan/following{/other_user}",
"gists_url": "https://api.github.com/users/adhiiisetiawan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/adhiiisetiawan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adhiiisetiawan/subscriptions",
"organizations_url": "https://api.github.com/users/adhiiisetiawan/orgs",
"repos_url": "https://api.github.com/users/adhiiisetiawan/repos",
"events_url": "https://api.github.com/users/adhiiisetiawan/events{/privacy}",
"received_events_url": "https://api.github.com/users/adhiiisetiawan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @adhiiisetiawan, thanks for reporting! \r\n\r\nSo that we can best help you, could you: \r\n* include how the model is created before passing to the trainer? \r\n* confirm if this code previously working on an old version of transformers? \r\n* provide the running environment: run `transformers-cli env` in the terminal and copy-paste the output\r\n* confirm if this works without the peft logic? \r\n\r\nRegarding the PEFT logic - why are you modifying the state_dict directly like this? You can follow the PEFT docs to see the canonical way to load and prepare a model for training: https://huggingface.co/docs/peft/task_guides/image_classification_lora#train-and-evaluate",
"cc @muellerzr as we discussed this issue yesterday; seems like safetensors aren't very friendly with the `Trainer`",
"@adhiiisetiawan your issue is the call to `torch.compile()`. If that step is skipped, you can save and load no problem.\r\n\r\nWith it included, you should find that `model.state_dict()` is *completely empty*, leading to this issue. \r\n\r\nThe sole reason it doesn't error without safetensors is because torch/pickle is okay loading in the empty dictionary as well. You can see this by simply adding the following code at the end:\r\n\r\n```python\r\nmodel.save_pretrained(\"test_model\", safe_serialization=False)\r\nf = torch.load(\"test_model/adapter_model.bin\")\r\nprint(f)\r\n```\r\nIt should print `{}`. Remove the `.compile()` and it will work fine. This is a `peft` issue specifically with `save_pretrained` and it's behavior with `torch.compile`. cc @BenjaminBossan ",
"A note on PEFT + `torch.compile`: Unfortunately, `torch.compile` still has a couple of gaps that make it not work properly in PEFT. There is not much we can do about it except to wait for PyTorch to close those gaps. How that can lead to an empty `state_dict`, I don't know.",
"Oh I see, I got it. Thank you very much all for your answer and details explanation @amyeroberts @LysandreJik @muellerzr @BenjaminBossan \r\n\r\nsafetensors it's work now without `torch.compile`",
"@adhiiisetiawan Hello~ I want to know whether the LoRA training will be slowed down without torch.compile? also the the memory consumption increased? ",
"hi @MerrillLi, in my case, i dont have any issue without `torch.compile`. sorry for late response",
"I'm having this same issue (details here: https://github.com/huggingface/transformers/issues/28742). Could anyone please help?"
] | 1,699 | 1,706 | 1,699 | NONE | null | ### System Info
Hi guys, I just fine-tuned Alpaca (LLaMA 7B base model) on a custom dataset using the Trainer API. After completing the training process, I received the following error:
```python
SafetensorError Traceback (most recent call last)
<ipython-input-16-8ff7a1776602> in <cell line: 18>()
16 model = torch.compile(model)
17
---> 18 trainer.train()
19 model.save_pretrained(OUTPUT_DIR)
5 frames
/usr/local/lib/python3.10/dist-packages/transformers/trainer.py in train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
1554 hf_hub_utils.enable_progress_bars()
1555 else:
-> 1556 return inner_training_loop(
1557 args=args,
1558 resume_from_checkpoint=resume_from_checkpoint,
/usr/local/lib/python3.10/dist-packages/transformers/trainer.py in _inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval)
1964 smp.barrier()
1965
-> 1966 self._load_best_model()
1967
1968 # add remaining tr_loss
/usr/local/lib/python3.10/dist-packages/transformers/trainer.py in _load_best_model(self)
2183 if hasattr(model, "active_adapter") and hasattr(model, "load_adapter"):
2184 if os.path.exists(best_adapter_model_path) or os.path.exists(best_safe_adapter_model_path):
-> 2185 model.load_adapter(self.state.best_model_checkpoint, model.active_adapter)
2186 # Load_adapter has no return value present, modify it when appropriate.
2187 from torch.nn.modules.module import _IncompatibleKeys
/usr/local/lib/python3.10/dist-packages/peft/peft_model.py in load_adapter(self, model_id, adapter_name, is_trainable, **kwargs)
601 self.add_adapter(adapter_name, peft_config)
602
--> 603 adapters_weights = load_peft_weights(model_id, device=torch_device, **hf_hub_download_kwargs)
604
605 # load the weights into the model
/usr/local/lib/python3.10/dist-packages/peft/utils/save_and_load.py in load_peft_weights(model_id, device, **hf_hub_download_kwargs)
220
221 if use_safetensors:
--> 222 adapters_weights = safe_load_file(filename, device=device)
223 else:
224 adapters_weights = torch.load(filename, map_location=torch.device(device))
/usr/local/lib/python3.10/dist-packages/safetensors/torch.py in load_file(filename, device)
306 """
307 result = {}
--> 308 with safe_open(filename, framework="pt", device=device) as f:
309 for k in f.keys():
310 result[k] = f.get_tensor(k)
SafetensorError: Error while deserializing header: InvalidHeaderDeserialization
```
and here's my code:
```python
trainer = transformers.Trainer(
    model=model,
    train_dataset=train_data,
    eval_dataset=val_data,
    args=training_arguments,
    data_collator=data_collator
)
model.config.use_cache = False

# Override state_dict so that saving only returns the PEFT adapter weights
old_state_dict = model.state_dict
model.state_dict = (
    lambda self, *_, **__: get_peft_model_state_dict(
        self, old_state_dict()
    )
).__get__(model, type(model))

model = torch.compile(model)
trainer.train()  # the error comes from this call
model.save_pretrained(OUTPUT_DIR)
```
I have already hit this error about 3 times. Initially, I suspected it might be related to the model I was using (the Alpaca-weight base model), but even after switching to the LLaMA 7B base model, the problem persists. I still can't find the root cause or a way to solve it, but in my opinion the problem comes from the safetensors file itself, because when I try to open the safetensors model with this code, I get the same error.
```python
from safetensors import safe_open
tensors = {}
with safe_open("/content/experiments/checkpoint-100/adapter_model.safetensors", framework="pt", device=0) as f:
    for k in f.keys():
        tensors[k] = f.get_tensor(k)
```
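To check whether the file itself is readable at all, here is a minimal sketch that peeks at the raw header (the safetensors format starts with an 8-byte little-endian header length followed by a JSON header, so `json.loads` should fail on a corrupted file):
```python
import json
import struct

path = "/content/experiments/checkpoint-100/adapter_model.safetensors"
with open(path, "rb") as f:
    (header_len,) = struct.unpack("<Q", f.read(8))  # first 8 bytes: header length
    header_bytes = f.read(header_len)                # then the JSON header itself
print(json.loads(header_bytes))  # raises if the header is empty or corrupted
```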
**Note:** I installed the transformers library from source. When using the version from PyPI, I didn't encounter an error because the model was saved in .bin format, rather than .safetensor.
### Reproduction
To reproduce the behavior:
1. Install the transformers library from source.
2. Train any model using the installed library.
3. The model will automatically be saved in .safetensors format.
4. Once the training is complete, the error will occur.
### Expected behavior
Training should complete successfully when the model is saved in the .safetensors format.
### Update
The training process completes using transformers from source, but the model is saved as .bin, not .safetensors. That's okay, but I'm still curious why opening the safetensors file raises an error. Here's my [colab](https://colab.research.google.com/drive/1vtz5DwFSm6bYIH52ZDAmogbIjDI_Feqg?usp=sharing) link where I test opening the safetensors model.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27397/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27397/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27396 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27396/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27396/comments | https://api.github.com/repos/huggingface/transformers/issues/27396/events | https://github.com/huggingface/transformers/issues/27396 | 1,985,606,848 | I_kwDOCUB6oc52WfTA | 27,396 | Safetensor boolean is wrong in modeling utils | {
"login": "ylacombe",
"id": 52246514,
"node_id": "MDQ6VXNlcjUyMjQ2NTE0",
"avatar_url": "https://avatars.githubusercontent.com/u/52246514?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ylacombe",
"html_url": "https://github.com/ylacombe",
"followers_url": "https://api.github.com/users/ylacombe/followers",
"following_url": "https://api.github.com/users/ylacombe/following{/other_user}",
"gists_url": "https://api.github.com/users/ylacombe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ylacombe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ylacombe/subscriptions",
"organizations_url": "https://api.github.com/users/ylacombe/orgs",
"repos_url": "https://api.github.com/users/ylacombe/repos",
"events_url": "https://api.github.com/users/ylacombe/events{/privacy}",
"received_events_url": "https://api.github.com/users/ylacombe/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,699 | 1,702 | 1,702 | COLLABORATOR | null | # Description
In [`load_pretrained_model`](https://github.com/huggingface/transformers/blob/3258ff93304078b9e27d752e6c19d3813f664855/src/transformers/modeling_utils.py#L3573), the `is_safetensors` boolean doesn't seem to work as expected.
To be more specific, `is_safetensors` is defined at the [beginning of the method](https://github.com/huggingface/transformers/blob/3258ff93304078b9e27d752e6c19d3813f664855/src/transformers/modeling_utils.py#L3591-L3599C13):
```python
is_safetensors = False
...
if device_map is not None and "disk" in device_map.values():
    archive_file = (
        resolved_archive_file[0] if isinstance(resolved_archive_file, (list, tuple)) else resolved_archive_file
    )
    is_safetensors = archive_file.endswith(".safetensors")
```
As you can see, `is_safetensors` is updated only if `device_map is not None and "disk" in device_map.values()`. The second condition is really specific. Given the name of the variable `is_safetensors`, I would have expected it to be `True` independently of `device_map.values()`.
This boolean is used a few times throughout the method. To name but a few, [here](https://github.com/huggingface/transformers/blob/3258ff93304078b9e27d752e6c19d3813f664855/src/transformers/modeling_utils.py#L3795), [here](https://github.com/huggingface/transformers/blob/3258ff93304078b9e27d752e6c19d3813f664855/src/transformers/modeling_utils.py#L3835C26-L3835C26) and [here](https://github.com/huggingface/transformers/blob/3258ff93304078b9e27d752e6c19d3813f664855/src/transformers/modeling_utils.py#L3883).
Just wanted to raise a warning here, in case this is not the expected behaviour!
If this is unexpected, I can open a quick PR to correct it if needed. If not, maybe we should rename the boolean?
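For illustration, a rough sketch (hypothetical, not an actual patch) of what computing the flag outside of the `device_map` branch could look like:
```python
# Hypothetical rearrangement: derive is_safetensors from the resolved file first,
# and keep the disk-offload handling behind its own condition.
archive_file = (
    resolved_archive_file[0] if isinstance(resolved_archive_file, (list, tuple)) else resolved_archive_file
)
is_safetensors = archive_file.endswith(".safetensors")

if device_map is not None and "disk" in device_map.values():
    ...  # disk-offload specific handling stays here
```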
----------------
cc @LysandreJik and @Narsil
--------------
BTW, this didn't raise any error, so I'm not using the classic bug reporting template | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27396/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27396/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27395 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27395/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27395/comments | https://api.github.com/repos/huggingface/transformers/issues/27395/events | https://github.com/huggingface/transformers/pull/27395 | 1,985,576,958 | PR_kwDOCUB6oc5fCDej | 27,395 | Fix M4T weights tying | {
"login": "ylacombe",
"id": 52246514,
"node_id": "MDQ6VXNlcjUyMjQ2NTE0",
"avatar_url": "https://avatars.githubusercontent.com/u/52246514?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ylacombe",
"html_url": "https://github.com/ylacombe",
"followers_url": "https://api.github.com/users/ylacombe/followers",
"following_url": "https://api.github.com/users/ylacombe/following{/other_user}",
"gists_url": "https://api.github.com/users/ylacombe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ylacombe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ylacombe/subscriptions",
"organizations_url": "https://api.github.com/users/ylacombe/orgs",
"repos_url": "https://api.github.com/users/ylacombe/repos",
"events_url": "https://api.github.com/users/ylacombe/events{/privacy}",
"received_events_url": "https://api.github.com/users/ylacombe/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"This shouldn't be needed, the weights should already be tied at init time.\r\n\r\nEdit: I looked a bit more closely:\r\n\r\n- `no_init_weights` means all tensors are initialized as meta tensors (all shared with basically a null pointer for tensor).\r\n- Tensors are loaded where they can, NOT updating the tied weights (since information is lost view device=\"meta\".\r\n- `tie_weights` is called here (currently doing nothing).\r\n- Then things crash in `dispatch_model` (Something tries to attach some data to those still meta tensors).\r\n\r\nWith the fix, `tie_weights` reaffects the missing tensors.\r\nSomething could be a bit off though, since if this shared was actually offloaded (so really kept onto disk for instance) my understanding is that it would still be `meta` at step 2, and other kind of crashes would still happen, no ?",
"> no_init_weights means all tensors are initialized as meta tensors (all shared with basically a null pointer for tensor).\r\nTensors are loaded where they can, NOT updating the tied weights (since information is lost view device=\"meta\".\r\ntie_weights is called here (currently doing nothing).\r\nThen things crash in dispatch_model (Something tries to attach some data to those still meta tensors).\r\nWith the fix, tie_weights reaffects the missing tensors.\r\n\r\nThis is exactly what happens here!\r\n\r\n> Something could be a bit off though, since if this shared was actually offloaded (so really kept onto disk for instance) my understanding is that it would still be meta at step 2, and other kind of crashes would still happen, no ?\r\n\r\nNot sure to have a response here, what kind of crashes could happen ?",
"It's just information flow, here we're good only because the shared tensor is actually loaded, but with a fully offloaded to disk model it might not, right ?",
"Oh right, it could happen indeed. But it could happen to every model right ? because `model.tie_weights` happens [here](https://github.com/ylacombe/transformers/blob/1432fae1ee2ae621d1e88bcf70741a74ed4c17ae/src/transformers/modeling_utils.py#L3502) anyways ?",
"Gentle ping here to ask for a review @LysandreJik ! :hugs: "
] | 1,699 | 1,699 | 1,699 | COLLABORATOR | null | # What does this PR do?
Following some failing tests [here](https://github.com/huggingface/transformers/actions/runs/6806630933/job/18508287020), I've pinpointed the bug to a weight-tying issue! This PR fixes it.
cc @ydshieh, @LysandreJik and @amyeroberts
Also cc @Narsil, who fixed a related SeamlessM4T issue recently | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27395/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27395/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27395",
"html_url": "https://github.com/huggingface/transformers/pull/27395",
"diff_url": "https://github.com/huggingface/transformers/pull/27395.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27395.patch",
"merged_at": 1699955531000
} |
https://api.github.com/repos/huggingface/transformers/issues/27394 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27394/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27394/comments | https://api.github.com/repos/huggingface/transformers/issues/27394/events | https://github.com/huggingface/transformers/pull/27394 | 1,985,367,586 | PR_kwDOCUB6oc5fBVmo | 27,394 | Fix `from_pt` flag when loading with `safetensors` | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"`ci/circleci: tests_torch` is treated in another PR, so not relevant here.",
"_The documentation is not available anymore as the PR was closed or merged._",
"> Not sure it's really necessary, but maybe a comment to say those methods not only deal with pytorch checkpoint but also safetensors.\r\n> \r\n> (The name itself like load_pytorch_checkpoint_in_xxx doesn't match the fact)\r\n\r\nIt remains a PyTorch checkpoint, but under the safetensors format :)"
] | 1,699 | 1,699 | 1,699 | MEMBER | null | `from_pt` flag should continue working when specifying `safe_serialization=True`. The safetensors file will be in the PyTorch format, so it's not a stretch to expect the flag to be here to load it (it works without the flag `from_pt=True` as well).
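For context, a hedged sketch of the kind of workflow this keeps working (the model name and local path here are placeholders):
```python
from transformers import BertModel, TFBertModel

# Save a PyTorch model in the safetensors format...
BertModel.from_pretrained("bert-base-uncased").save_pretrained("local_pt_dir", safe_serialization=True)

# ...and load it into the TF architecture with the existing flag.
tf_model = TFBertModel.from_pretrained("local_pt_dir", from_pt=True)
```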
Not supporting `from_pt=True` for this is a breaking change, as existing workflows with `from_pt=True` break when the serialized file is a safetensors file. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27394/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27394/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27394",
"html_url": "https://github.com/huggingface/transformers/pull/27394",
"diff_url": "https://github.com/huggingface/transformers/pull/27394.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27394.patch",
"merged_at": 1699885100000
} |
https://api.github.com/repos/huggingface/transformers/issues/27393 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27393/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27393/comments | https://api.github.com/repos/huggingface/transformers/issues/27393/events | https://github.com/huggingface/transformers/pull/27393 | 1,985,307,622 | PR_kwDOCUB6oc5fBIiC | 27,393 | Skip failing cache call tests | {
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,699 | 1,699 | 1,699 | COLLABORATOR | null | # What does this PR do?
See: https://github.com/huggingface/transformers/pull/27389
Skips tests which are currently failing because of incompatibilities with the new huggingface_hub release.
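For reference, a hedged sketch of what such a skip typically looks like in the test files (illustrative only, not the exact diff):
```python
import unittest

@unittest.skip("Incompatible with the new huggingface_hub release; see #27389")
def test_cached_model_has_minimum_calls_to_head(self):
    ...
```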
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27393/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27393/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27393",
"html_url": "https://github.com/huggingface/transformers/pull/27393",
"diff_url": "https://github.com/huggingface/transformers/pull/27393.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27393.patch",
"merged_at": 1699527817000
} |
https://api.github.com/repos/huggingface/transformers/issues/27392 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27392/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27392/comments | https://api.github.com/repos/huggingface/transformers/issues/27392/events | https://github.com/huggingface/transformers/issues/27392 | 1,985,285,724 | I_kwDOCUB6oc52VQ5c | 27,392 | [i18n-JP] Translating `en/model_doc` docs to Japanese | {
"login": "rajveer43",
"id": 64583161,
"node_id": "MDQ6VXNlcjY0NTgzMTYx",
"avatar_url": "https://avatars.githubusercontent.com/u/64583161?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rajveer43",
"html_url": "https://github.com/rajveer43",
"followers_url": "https://api.github.com/users/rajveer43/followers",
"following_url": "https://api.github.com/users/rajveer43/following{/other_user}",
"gists_url": "https://api.github.com/users/rajveer43/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rajveer43/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rajveer43/subscriptions",
"organizations_url": "https://api.github.com/users/rajveer43/orgs",
"repos_url": "https://api.github.com/users/rajveer43/repos",
"events_url": "https://api.github.com/users/rajveer43/events{/privacy}",
"received_events_url": "https://api.github.com/users/rajveer43/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | closed | false | null | [] | [
"Thanks for the information!\r\nJust to be sure, would it be correct if I translate the rest of the `model_docs` that are not listed in the **Model_do section**?",
"> Thanks for the information!\n> Just to be sure, would it be correct if I translate the rest of the `model_docs` that are not listed in the **Model_do section**?\n\n@Yuki-Imajuku no, you can start from the listed files, they are yet to be translated! 10 docs/PR would work.",
"Oh, I was right to ask for confirmation...... I got it! I'll try to work on them :)",
"> Oh, I was right to ask for confirmation...... I got it! I'll try to work on them :)\n\nJust tick them off when you complete them. "
] | 1,699 | 1,700 | 1,700 | CONTRIBUTOR | null | <!--
Note: Please search to see if an issue already exists for the language you are trying to translate.
-->
Hi!
Let's bring the documentation to all the Japanese-speaking community 🌐
Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list.
Some notes:
* Please translate using an informal tone (imagine you are talking with a friend about transformers 🤗).
* Please translate in a gender-neutral way.
* Add your translations to the folder called `ja` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source).
* Register your translation in `ja/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml).
* Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @stevhliu and @MKhalusova for review.
* 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/).
## Model_doc section
- [x] [albert.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_doc/albert.md) #27401
- [x] [align.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_doc/align.md) #27401
- [x] [altclip.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_doc/altclip.md) #27401
- [x] [audio-spectrogram-transformer.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_doc/audio-spectrogram-transformer.md) #27401
- [x] [auto.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_doc/auto.md) #27401
- [x] [autoformer.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_doc/autoformer.md) #27401
- [x] [bark.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_doc/bark.md) #27264
- [x] [bart.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_doc/bart.md) #27264
- [x] [barthez.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_doc/barthez.md) #27264
- [x] [bartpho.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_doc/bartpho.md) #27264
- [x] [beit.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_doc/beit.md) #27264
- [x] [bert-generation.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_doc/bert-generation.md) #27264
- [x] [bert-japanese.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_doc/bert-japanese.md) #27264
- [x] [bert.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_doc/bert.md) [](#27264)
- [x] [bertweet.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_doc/bertweet.md) #27264
- [x] [big_bird.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_doc/big_bird.md) #27401
- [x] [bigbird_pegasus.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_doc/bigbird_pegasus.md) #27264
- [ ] [biogpt.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_doc/biogpt.md) #27264
- [ ] [bit.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_doc/bit.md) #27264
- [ ] [blenderbot-small.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_doc/blenderbot-small.md) #27264
- [ ] [blenderbot.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_doc/blenderbot.md) #27264
Keep on adding more as you go 🔥
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27392/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27392/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27391 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27391/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27391/comments | https://api.github.com/repos/huggingface/transformers/issues/27391/events | https://github.com/huggingface/transformers/issues/27391 | 1,985,278,702 | I_kwDOCUB6oc52VPLu | 27,391 | bus error when load codellama2 34b | {
"login": "boundles",
"id": 3818060,
"node_id": "MDQ6VXNlcjM4MTgwNjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/3818060?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/boundles",
"html_url": "https://github.com/boundles",
"followers_url": "https://api.github.com/users/boundles/followers",
"following_url": "https://api.github.com/users/boundles/following{/other_user}",
"gists_url": "https://api.github.com/users/boundles/gists{/gist_id}",
"starred_url": "https://api.github.com/users/boundles/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/boundles/subscriptions",
"organizations_url": "https://api.github.com/users/boundles/orgs",
"repos_url": "https://api.github.com/users/boundles/repos",
"events_url": "https://api.github.com/users/boundles/events{/privacy}",
"received_events_url": "https://api.github.com/users/boundles/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @boundles, thanks for raising an issue. \r\n\r\nSo that we can best help you, can you make sure to follow the [issue template](https://github.com/huggingface/transformers/blob/main/.github/ISSUE_TEMPLATE/bug-report.yml) and include all important information such as: \r\n* The running environment: run `transformers-cli env` in the terminal and copy-paste the output\r\n* A minimal code snippet we can run to reproduce the error\r\n* Full details of the error encountered, including full traceback",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,699 | 1,702 | 1,702 | NONE | null | ### System Info
When I load the model on 2× A6000 45GB GPUs, a bus error occurs:
```python
from transformers import AutoModelForCausalLM  # import needed for this snippet

model_path = 'CodeLlama-34b-hf'
model = AutoModelForCausalLM.from_pretrained(model_path, load_in_8bit=True, device_map="auto", trust_remote_code=True)
print('finish load model')
```
Looking forward to your reply, thanks!
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
run the code
### Expected behavior
load successfully | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27391/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27391/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27390 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27390/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27390/comments | https://api.github.com/repos/huggingface/transformers/issues/27390/events | https://github.com/huggingface/transformers/pull/27390 | 1,985,236,053 | PR_kwDOCUB6oc5fA5It | 27,390 | use `pytest.mark` directly | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Would just using `pytest.mark.flash_attn_test` instead of importing `from pytest import mark` work? This looks like a complex workaround for a feature which is just a nice-to-have e.g. (comments [here](https://github.com/huggingface/transformers/pull/25598/files#r1331411143) and [here](https://github.com/huggingface/transformers/pull/25598/files#r1332676490))",
"Thanks for the great suggestion ! It works, and I will apply the same changes to all related test files."
] | 1,699 | 1,699 | 1,699 | COLLABORATOR | null | # What does this PR do?
#25598 uses `from pytest import mark`, which breaks the function `get_test_classes`, as the test module then has `mark` as an attribute, and
```
getattr(attr_value, "all_model_classes", [])
```
gives a `MarkDecorator` object (when `attr_value` is `pytest.mark`).
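A quick illustration of the failure mode:
```python
import pytest

# Accessing any attribute on `pytest.mark` creates a MarkDecorator on the fly,
# so the getattr default never kicks in and the check above is always truthy.
value = getattr(pytest.mark, "all_model_classes", [])
print(type(value))  # a MarkDecorator, not the [] default
```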
This PR uses `import pytest` and `@pytest.mark` to avoid the issue. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27390/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27390/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27390",
"html_url": "https://github.com/huggingface/transformers/pull/27390",
"diff_url": "https://github.com/huggingface/transformers/pull/27390.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27390.patch",
"merged_at": 1699533174000
} |
https://api.github.com/repos/huggingface/transformers/issues/27389 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27389/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27389/comments | https://api.github.com/repos/huggingface/transformers/issues/27389/events | https://github.com/huggingface/transformers/pull/27389 | 1,985,205,790 | PR_kwDOCUB6oc5fAykA | 27,389 | Pin huggingface_hub | {
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks a lot!",
"Looks like we still get 0.19 ...",
"😫 ",
"`accelerate`!\r\n\r\n```\r\n Attempting uninstall: huggingface-hub\r\n Found existing installation: huggingface-hub 0.17.3\r\n Uninstalling huggingface-hub-0.17.3:\r\n Successfully uninstalled huggingface-hub-0.17.3\r\n Attempting uninstall: accelerate\r\n Found existing installation: accelerate 0.24.1\r\n Uninstalling accelerate-0.24.1:\r\n Successfully uninstalled accelerate-0.24.1\r\nERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\r\ntokenizers 0.14.1 requires huggingface_hub<0.18,>=0.16.4, but you have huggingface-hub 0.19.0 which is incompatible.\r\ntransformers 4.36.0.dev0 requires huggingface-hub<=0.18.0,>=0.16.4, but you have huggingface-hub 0.19.0 which is incompatible.\r\nSuccessfully installed accelerate-0.25.0.dev0 huggingface-hub-0.19.0 urllib3-2.0.7\r\n```",
"I guess the quick but dirty way is to install huggingface_hub specific version in `.circleci/create_circleci_config.py` after the line \r\n\r\npip install -U --upgrade-strategy eager git+https://github.com/huggingface/accelerate\r\n\r\n(or ask @muellerzr to make change in `accelerate`)",
"OK, shall we pin our accelerate version then to unblock and then make transformers accelerate compatible?\r\n\r\ncc @muellerzr and @ArthurZucker as it looks like it involves `tokenizers` as well",
"_The documentation is not available anymore as the PR was closed or merged._",
"The thing is, the hub version isn't pinned in accelerate: https://github.com/huggingface/accelerate/blob/main/setup.py. I might be misunderstanding pip, but I don't see why it couldn't downgrade? ",
"In `accelerate`, no pin, so \r\n\r\n```\r\npip install -U --upgrade-strategy eager git+https://github.com/huggingface/accelerate\r\n```\r\nwill upgrade to the latest available one, even the dependencies I believe.",
"Or I can try to remove `--upgrade-strategy eager`. Let me know.",
"But maybe \r\n\r\n> if there is only 2 failing tests due to huggingface_hub 0.19, let's skip them for nonw maybe and wait \r\n[@Lucain](https://huggingface.slack.com/team/U03QTE7PET0) ’s patch maybe?\r\n\r\n",
"Opened a PR to skip the tests here: https://github.com/huggingface/transformers/pull/27393"
] | 1,699 | 1,699 | 1,699 | COLLABORATOR | null | # What does this PR do?
Currently tests are failing on `test_torch` CI run e.g. [this one](https://app.circleci.com/pipelines/github/huggingface/transformers/77707/workflows/ababd81d-2704-4b31-914d-15784acfad8e/jobs/989825).
The tests failing relating to the recent release of the hub are:
```
FAILED tests/models/auto/test_modeling_auto.py::AutoModelTest::test_cached_model_has_minimum_calls_to_head - AssertionError: 0 != 1
FAILED tests/models/auto/test_tokenization_auto.py::AutoTokenizerTest::test_cached_tokenizer_has_minimum_calls_to_head - AssertionError: 0 != 1
```
Pinning until tests are resolved.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27389/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27389/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27389",
"html_url": "https://github.com/huggingface/transformers/pull/27389",
"diff_url": "https://github.com/huggingface/transformers/pull/27389.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27389.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27388 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27388/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27388/comments | https://api.github.com/repos/huggingface/transformers/issues/27388/events | https://github.com/huggingface/transformers/pull/27388 | 1,985,062,319 | PR_kwDOCUB6oc5fAS_c | 27,388 | Update tiny model summary file | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"**Ignore this comment 🙏** \r\n\r\n```\r\n(py38) λ ruff check examples tests src utils setup.py conftest.py --fix\r\ntests\\models\\clvp\\test_modeling_clvp.py:284:64: F821 Undefined name `PipelineTesterMixin`\r\ntests\\models\\dpt\\test_modeling_dpt_auto_backbone.py:143:35: F821 Undefined name `DPTModel`\r\ntests\\models\\dpt\\test_modeling_dpt_auto_backbone.py:144:35: F821 Undefined name `DPTForSemanticSegmentation`\r\ntests\\models\\esm\\test_modeling_esmfold.py:176:35: F821 Undefined name `EsmModel`\r\ntests\\models\\esm\\test_modeling_esmfold.py:177:26: F821 Undefined name `EsmForMaskedLM`\r\ntests\\models\\esm\\test_modeling_esmfold.py:178:36: F821 Undefined name `EsmForSequenceClassification`\r\ntests\\models\\esm\\test_modeling_esmfold.py:179:37: F821 Undefined name `EsmForTokenClassification`\r\ntests\\models\\esm\\test_modeling_esmfold.py:180:26: F821 Undefined name `EsmForSequenceClassification`\r\ntests\\models\\fuyu\\test_modeling_fuyu.py:265:39: F821 Undefined name `PipelineTesterMixin`\r\ntests\\models\\kosmos2\\test_modeling_kosmos2.py:247:42: F821 Undefined name `PipelineTesterMixin`\r\ntests\\models\\seamless_m4t\\test_modeling_seamless_m4t.py:620:46: F821 Undefined name `PipelineTesterMixin`\r\nFound 11 errors.\r\n```\r\n\r\ncheck dpt, permission and esmfold\r\ncheck TFLayoutLMv3ForQuestionAnswering"
] | 1,699 | 1,700 | 1,700 | COLLABORATOR | null | # What does this PR do?
Update tiny model summary file (to enable pipeline testing for recently added models).
----------------------------------------------------------------------------------------------
The following are more detailed steps on how to do this update (as a reference for the )
#### Steps
1. Go to https://github.com/huggingface/transformers/actions/workflows/check_tiny_models.yml -> Click the latest successful run -> Download the artifact `tiny_model_creation_reports ` -> Open the file `updated_tiny_model_summary.json` in the artifact and copy the content
2. Paste the content copied in step 1. to `tests/utils/tiny_model_summary.json`
- 2.1. Check the diff: do some architectures only get a tokenizer/processor created? Are some recently added architectures in `transformers` missing? If so, we will need to check whether we can do something about it.
- 2.2. Once the changes look good, commit it.
3. Update the mapping
- 3.1 `python utils\add_pipeline_model_mapping_to_test.py --all --overwrite`
- 3.2 `ruff format examples tests src utils setup.py conftest.py`
- 3.3 `ruff check examples tests src utils setup.py conftest.py --fix`
- 3.4 Commit the changes
4. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27388/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27388/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27388",
"html_url": "https://github.com/huggingface/transformers/pull/27388",
"diff_url": "https://github.com/huggingface/transformers/pull/27388.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27388.patch",
"merged_at": 1700769639000
} |
https://api.github.com/repos/huggingface/transformers/issues/27387 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27387/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27387/comments | https://api.github.com/repos/huggingface/transformers/issues/27387/events | https://github.com/huggingface/transformers/issues/27387 | 1,985,045,469 | I_kwDOCUB6oc52UWPd | 27,387 | load tokenizer size is different config vocab_size | {
"login": "lw3259111",
"id": 12690488,
"node_id": "MDQ6VXNlcjEyNjkwNDg4",
"avatar_url": "https://avatars.githubusercontent.com/u/12690488?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lw3259111",
"html_url": "https://github.com/lw3259111",
"followers_url": "https://api.github.com/users/lw3259111/followers",
"following_url": "https://api.github.com/users/lw3259111/following{/other_user}",
"gists_url": "https://api.github.com/users/lw3259111/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lw3259111/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lw3259111/subscriptions",
"organizations_url": "https://api.github.com/users/lw3259111/orgs",
"repos_url": "https://api.github.com/users/lw3259111/repos",
"events_url": "https://api.github.com/users/lw3259111/events{/privacy}",
"received_events_url": "https://api.github.com/users/lw3259111/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey! I recommend you to open the issue on the repo as we did not upload it and it might just be additional padding used for training that was not added to the tokenizer. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,699 | 1,702 | 1,702 | NONE | null | ### System Info
transformers==4.35.0
python==3.10
ubuntu==20
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. download model https://huggingface.co/IDEA-CCNL/Ziya2-13B-Base
2. from transformers import LlamaTokenizer
3. tokenizer = LlamaTokenizer.from_pretrained("/data/Ziya2-13B-Base/",legacy=False)
4. len(tokenizer)
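For convenience, the steps above as a single runnable snippet (the printed value is the one observed below):
```python
from transformers import LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained("/data/Ziya2-13B-Base/", legacy=False)
print(len(tokenizer))  # 39410, while config.json declares vocab_size = 39424
```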
### Expected behavior
I use the https://huggingface.co/IDEA-CCNL/Ziya2-13B-Base model
config.json
```json
{
"architectures": [
"LlamaForCausalLM"
],
"_flash_attn_2_enabled": true,
"bos_token_id": 1,
"eos_token_id": 2,
"hidden_act": "silu",
"hidden_size": 5120,
"initializer_range": 0.02,
"intermediate_size": 13824,
"layer_norm_eps": 1e-05,
"max_position_embeddings": 4096,
"model_type": "llama",
"num_attention_heads": 40,
"num_hidden_layers": 40,
"pad_token_id": 0,
"rms_norm_eps": 1e-06,
"rotary_emb_base": 10000,
"rotary_pct": 1,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.30.2",
"use_cache": true,
"use_parallel_residual": false,
"vocab_size": 39424
}
```
When I load the tokenizer, the tokenizer size is 39410:
<img width="684" alt="image" src="https://github.com/huggingface/transformers/assets/12690488/55f688d7-194a-4ebf-8c0b-5c0acbff70d1">
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27387/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27387/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27386 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27386/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27386/comments | https://api.github.com/repos/huggingface/transformers/issues/27386/events | https://github.com/huggingface/transformers/issues/27386 | 1,984,973,506 | I_kwDOCUB6oc52UErC | 27,386 | Do I need to conduct example skipping myself for IterableDataset data type when resuming from last_checkpoint? | {
"login": "Zcchill",
"id": 83019888,
"node_id": "MDQ6VXNlcjgzMDE5ODg4",
"avatar_url": "https://avatars.githubusercontent.com/u/83019888?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Zcchill",
"html_url": "https://github.com/Zcchill",
"followers_url": "https://api.github.com/users/Zcchill/followers",
"following_url": "https://api.github.com/users/Zcchill/following{/other_user}",
"gists_url": "https://api.github.com/users/Zcchill/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Zcchill/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Zcchill/subscriptions",
"organizations_url": "https://api.github.com/users/Zcchill/orgs",
"repos_url": "https://api.github.com/users/Zcchill/repos",
"events_url": "https://api.github.com/users/Zcchill/events{/privacy}",
"received_events_url": "https://api.github.com/users/Zcchill/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"No, the trainer does this under the hood. Check out the `_inner_training_loop`, iirc the logic lies in there",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,699 | 1,702 | 1,702 | NONE | null | ### System Info
transformers:4.27.0
torch 2.1.0
### Who can help?
@muellerzr, @pacman100
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
last_checkpoint = None
if os.path.isdir(training_args.output_dir) and training_args.do_train and not training_args.overwrite_output_dir:
    last_checkpoint = get_last_checkpoint(training_args.output_dir)
    if last_checkpoint is None and len(os.listdir(training_args.output_dir)) > 0:
        raise ValueError(
            f"Output directory ({training_args.output_dir}) already exists and is not empty. "
            "Use --overwrite_output_dir to overcome."
        )
    elif last_checkpoint is not None and training_args.resume_from_checkpoint is None:
        logger.info(
            f"Checkpoint detected, resuming training at {last_checkpoint}. To avoid this behavior, change "
            "the `--output_dir` or add `--overwrite_output_dir` to train from scratch."
        )
```
The code comes from https://github.com/huggingface/transformers/blob/7ecd229ba475dbf78040f368ae86c86bba875442/examples/pytorch/language-modeling/run_clm.py#L296
It doesn't calculate the number of examples that need to be skipped. For the regular Dataset type, I think these examples will be skipped inside `Trainer.train()`, since the dataset has a length. However, the dataset type is usually IterableDataset for a pre-training corpus like the Pile, which has no length. In this case, do I need to do the example skipping myself in my own data-loading script for the IterableDataset type when resuming from last_checkpoint, before `Trainer.train()`?
### Expected behavior
```python
from pathlib import Path

from transformers import TrainerState

TRAINER_STATE_NAME = "trainer_state.json"  # file name the Trainer uses for its saved state

last_checkpoint = None
if os.path.isdir(training_args.output_dir) and training_args.do_train and not training_args.overwrite_output_dir:
    last_checkpoint = get_last_checkpoint(training_args.output_dir)
    if last_checkpoint is None and len(os.listdir(training_args.output_dir)) > 0:
        raise ValueError(
            f"Output directory ({training_args.output_dir}) already exists and is not empty. "
            "Use --overwrite_output_dir to overcome."
        )
    elif last_checkpoint is not None and training_args.resume_from_checkpoint is None:
        logger.info(
            f"Checkpoint detected, resuming training at {last_checkpoint}. To avoid this behavior, change "
            "the `--output_dir` or add `--overwrite_output_dir` to train from scratch."
        )
        state = TrainerState.load_from_json(str(Path(last_checkpoint) / TRAINER_STATE_NAME))
        global_batch_size = training_args.train_batch_size * training_args.gradient_accumulation_steps * training_args.world_size
        num_skip_examples = state.global_step * global_batch_size
        logger.info(f"Skipping {num_skip_examples} examples")
```
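If the skipping did have to be done manually, a rough sketch (assuming the tokenized training set is a 🤗 Datasets `IterableDataset` named `train_dataset`) could be:
```python
# Hypothetical manual skip, only if the Trainer did not already handle it:
if last_checkpoint is not None:
    train_dataset = train_dataset.skip(num_skip_examples)  # lazily drops the first N examples of the stream
```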
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27386/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27386/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27385 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27385/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27385/comments | https://api.github.com/repos/huggingface/transformers/issues/27385/events | https://github.com/huggingface/transformers/issues/27385 | 1,984,890,876 | I_kwDOCUB6oc52Twf8 | 27,385 | Whitespace at the start of generated sentence remains or get removed | {
"login": "LalchandPandia",
"id": 115443428,
"node_id": "U_kgDOBuGG5A",
"avatar_url": "https://avatars.githubusercontent.com/u/115443428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LalchandPandia",
"html_url": "https://github.com/LalchandPandia",
"followers_url": "https://api.github.com/users/LalchandPandia/followers",
"following_url": "https://api.github.com/users/LalchandPandia/following{/other_user}",
"gists_url": "https://api.github.com/users/LalchandPandia/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LalchandPandia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LalchandPandia/subscriptions",
"organizations_url": "https://api.github.com/users/LalchandPandia/orgs",
"repos_url": "https://api.github.com/users/LalchandPandia/repos",
"events_url": "https://api.github.com/users/LalchandPandia/events{/privacy}",
"received_events_url": "https://api.github.com/users/LalchandPandia/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey! The reason for this is that the tokenizer by default prepends a space to any input. Thus when decoding it is skipped. ",
"Hi, Thanks for the clarification. Is there any way to make sure, the space present in the output?. In my use case, I want to check how much probability it assigns to whitespace at each timestep.",
"You should be looking at the inputs ids not the string I think. This way you get rid of potential decoding issues. \r\nI'll try to add the `add_dummy_prefix_space` to our tokenizers to seamlessly support these kind of usecases",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,699 | 1,702 | 1,702 | NONE | null | ### System Info
transformers version: 4.34.0
pytorch:1.12.1
python:3.8
I am running Llama-2 (Llama-2-7b-hf).
If the generated content has whitespace at the beginning, the tokenizer skips it. For example: if the tokens are [29871 (id for space), id_for_Hey, id_for_there], the generated sentence is "Hey there".
The tokenizer is transformers.models.llama.tokenization_llama_fast.LlamaTokenizerFast.
The above issue occurs when using tokenizer.decode, but tokenizer.batch_decode works fine.
What is the reason for this, and is it correct behavior?
My code:
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf", token=access_token)
encoded_input = tokenizer(" Hey there", return_tensors='pt')
# encoded_input['input_ids'] is [[1, 29871, id_for_Hey, id_for_there]]: the BOS token, the
# whitespace token 29871, then the tokens for "Hey" and "there" (attention_mask is [[1, 1, 1, 1]]).
ids = encoded_input['input_ids']
print(tokenizer.decode(ids[0][1:]))  # slice starting at the whitespace token 29871
print(tokenizer.decode(ids[0][2:]))  # slice starting after the whitespace token
# both print outputs are the same
```
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf", token=access_token)
a = torch.Tensor([[1, 1724, 526, 278, 3405, 29914, 12957, 10350, 310, 21589, 1056, 29875, 29915, 29879, 4593, 4918, 11367, 29973, 673, 29901, 29871, 29896, 29941, 30073, 29900, 29906, 29915, 29941, 29955, 29908, 29940, 29871, 29947, 29900, 30073, 29896, 29941, 29915, 29900, 29947, 29908, 29923, 13, 6747, 287, 491, 29901, 19556, 9716, 472, 3786, 29871, 29896, 29941, 29892]])
print(tokenizer.decode(a[0][20:]))
print(tokenizer.decode(a[0][21:]))
#both the print outputs are same.
```
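Related to the follow-up question in the comments about the probability assigned to whitespace at each timestep, a minimal hedged sketch (the model id, `access_token`, and the whitespace token id 29871 are taken from the report; the prompt and generation settings are placeholder assumptions):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf", token=access_token)
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf", token=access_token)

inputs = tokenizer("What are the lat/long coordinates of", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=10, do_sample=False,
                     output_scores=True, return_dict_in_generate=True)

space_id = 29871  # SentencePiece whitespace token in the Llama-2 vocabulary
for step, step_scores in enumerate(out.scores):
    probs = torch.softmax(step_scores, dim=-1)
    print(step, probs[0, space_id].item())
```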
### Expected behavior
The model should output whitespace at the beginning of the sentence | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27385/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27385/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27384 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27384/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27384/comments | https://api.github.com/repos/huggingface/transformers/issues/27384/events | https://github.com/huggingface/transformers/issues/27384 | 1,984,844,151 | I_kwDOCUB6oc52TlF3 | 27,384 | Ignore whisper previous tokens during loss calculation | {
"login": "DavraYoung",
"id": 33338429,
"node_id": "MDQ6VXNlcjMzMzM4NDI5",
"avatar_url": "https://avatars.githubusercontent.com/u/33338429?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DavraYoung",
"html_url": "https://github.com/DavraYoung",
"followers_url": "https://api.github.com/users/DavraYoung/followers",
"following_url": "https://api.github.com/users/DavraYoung/following{/other_user}",
"gists_url": "https://api.github.com/users/DavraYoung/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DavraYoung/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DavraYoung/subscriptions",
"organizations_url": "https://api.github.com/users/DavraYoung/orgs",
"repos_url": "https://api.github.com/users/DavraYoung/repos",
"events_url": "https://api.github.com/users/DavraYoung/events{/privacy}",
"received_events_url": "https://api.github.com/users/DavraYoung/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc: @sanchit-gandhi ",
"I think the cleanest way of doing this is actually in the data collator, as per: https://github.com/huggingface/distil-whisper/blob/e35cbff09e04f6e710c72f33e0441d6bebc47ec1/training/run_distillation.py#L400",
"@sanchit-gandhi I think data collator may not be suitable here, since we are dealing with loss masking.\r\n\r\nAs I understood to achieve this in data collator I will need to replace decoder_input_ids with -100 values, which means that previous prompt tokens will simply be removed from training process, which makes them useless.\r\n\r\nIn other words, If we simply replace initial prompt tokens in decoder_input_ids with -100 values, model will not be able to access the information in those tokens.\r\n\r\nSo, the decoder_input_ids should contain previous_prompt tokens, but loss on such tokens should be ignored at loss calculation step, but due to hf transformers whisper implementation, there is no way to mask the loss at the loss calculation step on certain token positions.\r\n\r\n\r\n\r\n\r\n\r\n",
"oh, wait, sorry, I got it. You mean to supply decoder_input_ids and labels together? The decoder_inputs_ids will be shifted right labels and labels will have -100 on previous prompt positions",
"What we do is:\r\n1. Shift the decoder input ids to the right to get the labels (targets)\r\n2. Shift the input ids attention mask to the right to get the labels mask\r\n3. Mask out where we have padded tokens in the labels (set to -100 to ignore in the loss)\r\n4. Mask out the prompt in the labels (set to -100 to ignore the loss)\r\n\r\n=> this means that the prompt is included in the decoder input ids, so the model can condition on the prompt, but is masked from the loss, so the model is not trained to predict the prompt. This makes sense intuitively: we don't want to train the model on the prompt, only the target transcription. Indeed, if you refer to page 3 of the [Whisper paper](https://arxiv.org/pdf/2212.04356.pdf), you'll see this is one-to-one the same as how they do it:\r\n\r\n\r\n",
"Thanks for explanation. \r\nBtw, what about task tokens? Like <|translate|>, <|transcribe|> etc. \r\nShould they be ignored during loss calculation? It seems counterintuitive that the model predicts them. \r\nFor language and silence detection loss should definately be computed, but for task tokens its not clear.\r\n\r\n\r\nDuring training those tokens impact loss calculation, since the model predicts them, which makes loss charts be less informative\r\n\r\n\r\nFor example:\r\n`<|startoftranscript|><|translate|> token1 token2 <|endoftext|>`\r\nduring training step will give loss for each of the token being predicted:\r\nFor `<|startoftranscript|>` the loss will be calculated based on the probability of `<|translate|>` token, and if I have mostly trained on transcribe task, the loss will be slightly higher then it actually is, because the task token will be predicted incorrectly.\r\n",
"They should be counted, e.g. for the language token it corresponds to the model's prediction for the source audio language. If we mask these tokens, the model will not be trained to differentiate between `translate`/`transcribe` or tell what language the audio is in. At inference time, we 'force' the task tokens (e.g. `task=\"transcribe\"`) which sets the task token to our desired value -> this then controls what kind of generation we perform, so the model is forced to perform the task we specify.",
"If the model's processor is anyway forcing these tokens(specifically `transcribe` and `translate`), and the model is never choosing by itself, then maybe it we can ignore calculating the loss on model prediction of such tokens during training?\r\n\r\nBy not including these tokens in loss calculation, the model can potentially focus more on learning the core aspects of the tasks (like transcribing content accurately) instead of predicting task tokens that are already predetermined.\r\n\r\n\r\n\r\nFor example:\r\n\r\n\r\nIn my current scenario the whisper model chooses the next step given the list of possible steps and the audio.\r\nFor example:\r\nActor says: \"Remove the 231 todo from the list\".\r\n\r\n\r\nprevious tokens:\r\n```\r\n1. Add todo item\r\n2. Remove todo item\r\n3. List todo items\r\n```\r\n\r\nlearnable_tokens:\r\n```\r\n<|startoftranscript|><|predict_next_action|>\r\n2. 231\r\n<|endoftext|>\r\n```\r\n\r\nThe `previous tokens` tokens are masked and the model still pays attention to them, \r\nwhich, I think, means: masking `task` tokens will also not affect the model accuracy of performing a certain task\r\n\r\n\r\n",
"for any one who faces same problem, adjust your data collator to supply both labels and decoder_input_ids.\r\nExample: https://github.com/huggingface/distil-whisper/blob/e35cbff09e04f6e710c72f33e0441d6bebc47ec1/training/run_distillation.py#L400",
"We need to train on task/language since the model needs to learn how these tokens affect the output:\r\n* `task=\"translate\"` means speech translation, `task=\"transcribe\"` means speech transcription. If we mask the task token, then the model has no way of differing between these tasks during training time, and so will perform them randomly at inference time\r\n* Same for language (need to train on the language token so the model learns to differentiate between languages)\r\n\r\nSo what we do is train the model on these tokens during training time (model learns that `task=\"translate\"` means speech translation), and then force them at inference time to control the behaviour that we want.\r\n\r\nHowever, **if** you are only doing one task (resp. one language), you can mask the task (resp. language) token accordingly. This is what's done for English-only training in the original model for example: the tokens are removed entirely."
] | 1,699 | 1,701 | 1,700 | NONE | null | ### Feature request
I am training Whisper using previous prompts and, according to the paper, it's better to ignore the loss for the previous tokens.
> We only mask out the training loss over the previous context text, and train the model to predict all other tokens
Which means:
For: `<|startofprev|>Here goes previous transcription tokens<|startoftranscript|><|transcribe|><|en|>hello world!<|endoftext|>`
the loss must be masked for all tokens that come before <|startoftranscript|>.
Simply putting -100 in the labels is not correct, since the model actually attends to the decoder_input_ids.
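For illustration, a hedged sketch of the collator-level approach later suggested in the comments: keep the prompt in `decoder_input_ids` so the model can attend to it, but set the corresponding label positions to -100. The token id 50258 for <|startoftranscript|> matches the snippet further below; everything else (shapes, padding convention) is an assumption, not the distil-whisper implementation.
```python
import torch


def collate_with_prompt_masking(label_sequences, pad_token_id, sot_token_id=50258):
    # label_sequences: list of 1-D LongTensors, each containing
    # <|startofprev|> prompt ... <|startoftranscript|> ... <|endoftext|>
    labels = torch.nn.utils.rnn.pad_sequence(
        label_sequences, batch_first=True, padding_value=pad_token_id
    )

    # Teacher forcing: the decoder sees the full sequence (prompt included) ...
    decoder_input_ids = labels[:, :-1].clone()

    # ... while the loss targets mask padding and everything before <|startoftranscript|>.
    targets = labels[:, 1:].clone()
    targets[targets == pad_token_id] = -100
    for row in range(targets.size(0)):
        sot = (targets[row] == sot_token_id).nonzero(as_tuple=True)[0]
        if len(sot) > 0:
            targets[row, : sot[0]] = -100

    return {"decoder_input_ids": decoder_input_ids, "labels": targets}
```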
Training without masking gives this:
1. Start training whisper with prompts (so called "previous tokens")
2. Observe whisper learning to predict prompts (which it's not supposed to do).
3. Observe loss never dropping below 1.0
4. Observe a lot of hallucinations
- `transformers` version: 4.34.1
- Platform: Windows-10-10.0.22621-SP0
- Python version: 3.9.13
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- Accelerate version: 0.21.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0.dev20230803+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
cc: @sanchit-gandhi
### Motivation
Finetuning whisper on audio with text context.
### Your contribution
```python
from typing import Optional, Tuple, Union

import torch
from torch.nn import CrossEntropyLoss

from transformers import WhisperConfig, WhisperForConditionalGeneration, WhisperModel
from transformers.modeling_outputs import Seq2SeqLMOutput
from transformers.models.whisper.modeling_whisper import shift_tokens_right


class ExtendedWhisperForConditionalGeneration(WhisperForConditionalGeneration):
    def __init__(self, config: WhisperConfig):
        super().__init__(config)
        self.model = WhisperModel(config)
        self.post_init()

    def forward(
        self,
        input_features: Optional[torch.FloatTensor] = None,
        attention_mask: Optional[torch.LongTensor] = None,
        decoder_input_ids: Optional[torch.LongTensor] = None,
        decoder_attention_mask: Optional[torch.LongTensor] = None,
        head_mask: Optional[torch.Tensor] = None,
        decoder_head_mask: Optional[torch.Tensor] = None,
        cross_attn_head_mask: Optional[torch.Tensor] = None,
        encoder_outputs: Optional[Tuple[Tuple[torch.FloatTensor]]] = None,
        past_key_values: Optional[Tuple[Tuple[torch.FloatTensor]]] = None,
        decoder_inputs_embeds: Optional[Tuple[torch.FloatTensor]] = None,
        labels: Optional[torch.LongTensor] = None,
        use_cache: Optional[bool] = None,
        output_attentions: Optional[bool] = None,
        output_hidden_states: Optional[bool] = None,
        return_dict: Optional[bool] = None,
    ) -> Union[Tuple[torch.Tensor], Seq2SeqLMOutput]:
        return_dict = (
            return_dict if return_dict is not None else self.config.use_return_dict
        )
        if labels is not None:
            if decoder_input_ids is None and decoder_inputs_embeds is None:
                decoder_input_ids = shift_tokens_right(
                    labels, self.config.pad_token_id, self.config.decoder_start_token_id
                )
        outputs = self.model(
            input_features,
            attention_mask=attention_mask,
            decoder_input_ids=decoder_input_ids,
            encoder_outputs=encoder_outputs,
            decoder_attention_mask=decoder_attention_mask,
            head_mask=head_mask,
            decoder_head_mask=decoder_head_mask,
            cross_attn_head_mask=cross_attn_head_mask,
            past_key_values=past_key_values,
            decoder_inputs_embeds=decoder_inputs_embeds,
            use_cache=use_cache,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            return_dict=return_dict,
        )
        lm_logits = self.proj_out(outputs[0])
        loss = None
        if labels is not None:
            # Find the position of the <|startoftranscript|> token in labels
            transcribe_positions = (labels == 50258).nonzero(as_tuple=True)[1] - 1
            max_position = labels.shape[1]
            mask = (
                torch.arange(max_position)
                .expand(len(labels), max_position)
                .to(labels.device)
            )
            if (
                len(transcribe_positions) > 0
            ):  # Ensure there's at least one <|startoftranscript|> token
                mask = mask > transcribe_positions[:, None]
            # Modify the labels to be -100 (ignored in CrossEntropyLoss) for positions in the mask
            labels = torch.where(mask, labels, torch.tensor(-100).to(labels.device))
            loss_fct = CrossEntropyLoss()
            # Compute loss with modified labels
            loss = loss_fct(
                lm_logits.view(-1, self.config.vocab_size), labels.reshape(-1)
            )
        if not return_dict:
            output = (lm_logits,) + outputs[1:]
            return ((loss,) + output) if loss is not None else output
        return Seq2SeqLMOutput(
            loss=loss,
            logits=lm_logits,
            past_key_values=outputs.past_key_values,
            decoder_hidden_states=outputs.decoder_hidden_states,
            decoder_attentions=outputs.decoder_attentions,
            cross_attentions=outputs.cross_attentions,
            encoder_last_hidden_state=outputs.encoder_last_hidden_state,
            encoder_hidden_states=outputs.encoder_hidden_states,
            encoder_attentions=outputs.encoder_attentions,
        )
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27384/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27384/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27383 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27383/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27383/comments | https://api.github.com/repos/huggingface/transformers/issues/27383/events | https://github.com/huggingface/transformers/issues/27383 | 1,984,819,752 | I_kwDOCUB6oc52TfIo | 27,383 | decoder only beam search decoding indices usage | {
"login": "Hustcketlyj",
"id": 40982662,
"node_id": "MDQ6VXNlcjQwOTgyNjYy",
"avatar_url": "https://avatars.githubusercontent.com/u/40982662?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Hustcketlyj",
"html_url": "https://github.com/Hustcketlyj",
"followers_url": "https://api.github.com/users/Hustcketlyj/followers",
"following_url": "https://api.github.com/users/Hustcketlyj/following{/other_user}",
"gists_url": "https://api.github.com/users/Hustcketlyj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Hustcketlyj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Hustcketlyj/subscriptions",
"organizations_url": "https://api.github.com/users/Hustcketlyj/orgs",
"repos_url": "https://api.github.com/users/Hustcketlyj/repos",
"events_url": "https://api.github.com/users/Hustcketlyj/events{/privacy}",
"received_events_url": "https://api.github.com/users/Hustcketlyj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @gante ",
"Hey @Hustcketlyj 👋 \r\n\r\nThe beam indices ([docs](https://huggingface.co/docs/transformers/internal/generation_utils#transformers.generation.BeamSearchEncoderDecoderOutput.beam_indices)) are only -1 for unfilled positions ([reference in the code](https://github.com/huggingface/transformers/blob/c5037b459e117b9286c611092f38663f6cb763b0/src/transformers/generation/beam_search.py#L388)).\r\n\r\nIf that is not the case for you, I'll need a short reproducible script so I can figure out what might be wrong :)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,699 | 1,702 | 1,702 | NONE | null | ### System Info
I am using beam search decoding on opt-1.3b, and I wish to know which beam is selected as the final output along the generation process. I looked at beam_indices, but its size is the prompt size plus the number of tokens generated, and if I ignore the first prompt-size indices, the remaining indices are all -1. So what should I do?
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
beam_indice=output_with_watermark.beam_indices[:,tokd_input["input_ids"].shape[-1]:][0], which is all -1.
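A self-contained, hedged version of the reproduction (the model id matches the report; the prompt and generation settings are placeholders), which returns `beam_indices` for a decoder-only model:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-1.3b")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-1.3b")

inputs = tokenizer("The capital of France is", return_tensors="pt")
out = model.generate(
    **inputs,
    num_beams=4,
    max_new_tokens=10,
    output_scores=True,
    return_dict_in_generate=True,
)

# As noted in the comments above, -1 entries mark unfilled positions rather
# than missing beam information.
print(out.sequences.shape)
print(out.beam_indices)
```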
### Expected behavior
na | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27383/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27383/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27382 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27382/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27382/comments | https://api.github.com/repos/huggingface/transformers/issues/27382/events | https://github.com/huggingface/transformers/issues/27382 | 1,984,789,657 | I_kwDOCUB6oc52TXyZ | 27,382 | Different PTB perplexity in transformers 4.31 vs 4.32 | {
"login": "tsengalb99",
"id": 33385672,
"node_id": "MDQ6VXNlcjMzMzg1Njcy",
"avatar_url": "https://avatars.githubusercontent.com/u/33385672?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tsengalb99",
"html_url": "https://github.com/tsengalb99",
"followers_url": "https://api.github.com/users/tsengalb99/followers",
"following_url": "https://api.github.com/users/tsengalb99/following{/other_user}",
"gists_url": "https://api.github.com/users/tsengalb99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tsengalb99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tsengalb99/subscriptions",
"organizations_url": "https://api.github.com/users/tsengalb99/orgs",
"repos_url": "https://api.github.com/users/tsengalb99/repos",
"events_url": "https://api.github.com/users/tsengalb99/events{/privacy}",
"received_events_url": "https://api.github.com/users/tsengalb99/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Interesting issue. \r\nDo you see differences in padding and masks before the forward? Or it only shows in the logits?\r\nIs the issue still present with `seqlen < 4096`? like 1024?\r\n",
"Hey 🤗 ! Would recommend you to upgrade to the latest version, even if there is a bug and we fix it, we can't really do anything about the previous version so it's not really relevant (only to know where it started diverging). Could be as simple as a tokenizer issue that was later on fixed 😉 ",
"I am not seeing differences in the input tokens. I am not feeding in any masks, just raw tokens. I have not tried smaller sequences. I do not have time right now to debug this issue but the attached example code should reproduce the issue. You can use smaller models since 70b fp16 may be hard to run.\r\n\r\n> Would recommend you to upgrade to the latest version, even if there is a bug and we fix it, we can't really do anything about the previous version so it's not really relevant (only to know where it started diverging).\r\n\r\nAt least two important papers (QuIP, GPTQ) use PTB with the transformers library in their evaluations of quantized models. It may be that the old version was correct and some change in the new version broke something. People should know which version is correct, and perhaps that version is the old version. This is especially important since quantization is an active research area and someone may write a paper comparing their numbers on the old transformers to a baseline on the new transformers where their method would do better from the \"bug,\" or vice versa and mistakenly believe their method does worse.",
"Understood. 🤗 I don't have the bandwidth to dive deep into this, from the releases differences, there does not seems to be any changes done to the code, but rather to the slow tokenizers, which should not affect the fast one. \r\nOther changes could come from configuration updates made directly on the hub like: https://huggingface.co/meta-llama/Llama-2-70b-hf/commit/6aa89cf376ffeaf8b7cf6fa5d744a87e340445a8 or the max length argument or the padding token id. The `self.config.pretraining_tp` argument also affects the logits and was set to `8` in the config at a certain point. \r\n\r\n\r\nYou mentioned that this does not happen for all datasets, and only for two of them so it's a bit hard to guess what can be wrong here! The community might be able to help so leaving this open 🤗 ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,699 | 1,703 | 1,703 | NONE | null | ### System Info
python: 3.10
transformers 4.31 and 4.32
datasets: 2.14.6
pytorch 2.1.0 with cuda 11.8
OS: ubuntu 20.04
GPU: Various NVIDIA GPUs
CPU: Various Intel CPUs
I can reproduce this issue on multiple machines with different hardware configs.
### Who can help?
@ArthurZucker
@younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I am trying to run inference on llama2-70b on the PTB dataset using GPTQ's evaluation sampling method. Using transformers version 4.31 and before gives a perplexity of 19.55. Using transformers 4.32 and later gives a perplexity of 22.67. The two perplexities should be the same, i.e. the transformers version used should not affect results. I tried multiple datasets versions on both 4.31 and 4.32 and the datasets version used does not appear to affect perplexity. Below is sample code you can use to reproduce this issue. This issue does not happen on Wikitext2 and C4, two other datasets that I tested.
```python
import torch
import datasets
import transformers
from transformers import LlamaTokenizer, LlamaForCausalLM
import random

torch.set_grad_enabled(False)
torch.manual_seed(0)
random.seed(0)
seqlen = 4096


def get_ptb(model):
    from datasets import load_dataset
    testdata = load_dataset('ptb_text_only', 'penn_treebank', split='test')

    from transformers import AutoTokenizer
    tokenizer = AutoTokenizer.from_pretrained(model, use_fast=False)
    testenc = tokenizer(" ".join(testdata['sentence']), return_tensors='pt')
    return testenc


model_str = 'meta-llama/Llama-2-70b-hf'
model = transformers.LlamaForCausalLM.from_pretrained(model_str,
                                                      torch_dtype='auto',
                                                      low_cpu_mem_usage=True,
                                                      device_map='auto').half()

input_tok = get_ptb(model_str)['input_ids']
nsamples = input_tok.numel() // seqlen
input_tok = input_tok[0, :(seqlen * nsamples)].view(nsamples, seqlen)
print(input_tok.shape)

loss_fct = torch.nn.CrossEntropyLoss().cuda()
acc_loss = 0.0
for ii in range(nsamples):
    input = input_tok[ii, :].cuda().view(1, -1)
    output = model(input,
                   use_cache=False,
                   output_hidden_states=False,
                   output_attentions=False)[0]
    shift_logits = output[:, :-1, :].contiguous()
    shift_labels = input[:, 1:]
    loss = loss_fct(shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1))
    acc_loss += loss.item()

avg_loss = acc_loss / nsamples
ppl = torch.exp(torch.tensor(avg_loss)).item()
print(f'perplexity: {ppl}')
```
### Expected behavior
The PTB perplexities should be the same between transformers versions. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27382/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27382/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27381 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27381/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27381/comments | https://api.github.com/repos/huggingface/transformers/issues/27381/events | https://github.com/huggingface/transformers/issues/27381 | 1,984,664,494 | I_kwDOCUB6oc52S5Ou | 27,381 | `YolosImageProcessor` violates `longest_edge` constraint for certain images | {
"login": "xenova",
"id": 26504141,
"node_id": "MDQ6VXNlcjI2NTA0MTQx",
"avatar_url": "https://avatars.githubusercontent.com/u/26504141?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xenova",
"html_url": "https://github.com/xenova",
"followers_url": "https://api.github.com/users/xenova/followers",
"following_url": "https://api.github.com/users/xenova/following{/other_user}",
"gists_url": "https://api.github.com/users/xenova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xenova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xenova/subscriptions",
"organizations_url": "https://api.github.com/users/xenova/orgs",
"repos_url": "https://api.github.com/users/xenova/repos",
"events_url": "https://api.github.com/users/xenova/events{/privacy}",
"received_events_url": "https://api.github.com/users/xenova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @xenova, thanks for reporting! \r\n\r\nLooking into it 🕵️♀️ ",
"@xenova Apologies for the delay - I've opened a PR to fix! ",
"@amyeroberts No worries! ;) Thanks so much!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"bump :)"
] | 1,699 | 1,703 | 1,703 | CONTRIBUTOR | null | ### System Info
- `transformers` version: 4.35.0
- Platform: Linux-5.15.120+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.4.0
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0+cu118 (False)
- Tensorflow version (GPU?): 2.14.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.7.4 (cpu)
- Jax version: 0.4.16
- JaxLib version: 0.4.16
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@NielsRogge @amyeroberts
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```py
from transformers import AutoProcessor
from PIL import Image
import requests
processor = AutoProcessor.from_pretrained("Xenova/yolos-small-300") # or hustvl/yolos-small-300
url = 'https://i.imgur.com/qOp3m0N.png' # very thin image
image = Image.open(requests.get(url, stream=True).raw).convert('RGB')
output = processor(image)
print(output['pixel_values'][0].shape) # (3, 89, 1335)
```
A shape of (3, 89, 1335) is printed out, but this shouldn't be possible due to the `longest_edge` constraint in the [config.json](https://huggingface.co/Xenova/yolos-small-300/blob/main/preprocessor_config.json#L22):
```json
"size": {
"longest_edge": 1333,
"shortest_edge": 800
}
```
Here is the image used:

### Expected behavior
The image should have the maximum edge length be at most 1333 (1335 should not be possible) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27381/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27381/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27380 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27380/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27380/comments | https://api.github.com/repos/huggingface/transformers/issues/27380/events | https://github.com/huggingface/transformers/issues/27380 | 1,984,652,131 | I_kwDOCUB6oc52S2Nj | 27,380 | ValueError: Target module QuantLinear() is not supported. Currently, only `torch.nn.Linear` and `Conv1D` are supported. | {
"login": "jason-brian-anderson",
"id": 34688597,
"node_id": "MDQ6VXNlcjM0Njg4NTk3",
"avatar_url": "https://avatars.githubusercontent.com/u/34688597?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jason-brian-anderson",
"html_url": "https://github.com/jason-brian-anderson",
"followers_url": "https://api.github.com/users/jason-brian-anderson/followers",
"following_url": "https://api.github.com/users/jason-brian-anderson/following{/other_user}",
"gists_url": "https://api.github.com/users/jason-brian-anderson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jason-brian-anderson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jason-brian-anderson/subscriptions",
"organizations_url": "https://api.github.com/users/jason-brian-anderson/orgs",
"repos_url": "https://api.github.com/users/jason-brian-anderson/repos",
"events_url": "https://api.github.com/users/jason-brian-anderson/events{/privacy}",
"received_events_url": "https://api.github.com/users/jason-brian-anderson/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @younesbelkada ",
"Hi @jason-brian-anderson \r\nThanks for the issue, there might be a regression but not sure\r\nAre you able to run this small snippet? The snippet below worked fine on my end (transformers main + PEFT 0.6.0)\r\n```python\r\nfrom transformers import AutoModelForCausalLM, GPTQConfig\r\nfrom peft import LoraConfig, get_peft_model\r\n\r\nmodel_id = \"TheBloke/Llama-2-7B-GPTQ\"\r\nquantization_config = GPTQConfig(bits=4, disable_exllama=True)\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(model_id, low_cpu_mem_usage=True, device_map={\"\": 0}, quantization_config=quantization_config)\r\n\r\nlora_config = LoraConfig(\r\n r=8,\r\n target_modules=[\"k_proj\",\"o_proj\",\"q_proj\",\"v_proj\"],\r\n task_type=\"CAUSAL_LM\"\r\n)\r\n\r\nmodel = get_peft_model(model, lora_config)\r\nprint(model)\r\n```\r\nAlso I noticed that you pass a `PeftModel` to `SFTTrainer` together with a `peft_config` argument, this might lead to create a nested peft model which can lead to some bugs.",
"> Hi @jason-brian-anderson Thanks for the issue, there might be a regression but not sure Are you able to run this small snippet? The snippet below worked fine on my end (transformers main + PEFT 0.6.0)\r\n> \r\n> ```python\r\n> from transformers import AutoModelForCausalLM, GPTQConfig\r\n> from peft import LoraConfig, get_peft_model\r\n> \r\n> model_id = \"TheBloke/Llama-2-7B-GPTQ\"\r\n> quantization_config = GPTQConfig(bits=4, disable_exllama=True)\r\n> \r\n> model = AutoModelForCausalLM.from_pretrained(model_id, low_cpu_mem_usage=True, device_map={\"\": 0}, quantization_config=quantization_config)\r\n> \r\n> lora_config = LoraConfig(\r\n> r=8,\r\n> target_modules=[\"k_proj\",\"o_proj\",\"q_proj\",\"v_proj\"],\r\n> task_type=\"CAUSAL_LM\"\r\n> )\r\n> \r\n> model = get_peft_model(model, lora_config)\r\n> print(model)\r\n> ```\r\n> \r\n> Also I noticed that you pass a `PeftModel` to `SFTTrainer` together with a `peft_config` argument, this might lead to create a nested peft model which can lead to some bugs.\r\n\r\nFirst thanks for the tip about sending both `peft_config` and `PeftModel` to `SFTTrainer`, my results are now much better by not passing the `peft_config `arg to `SFTTrainer` along with the `PeftModel` :-)\r\n\r\ni copy/pasted/ran your code and i do still get the error: on 4.35.0 whereas on 4.34.1 i do see the model \r\n\r\n```\r\nUsing `disable_exllama` is deprecated and will be removed in version 4.37. Use `use_exllama` instead and specify the version with `exllama_config`.The value of `use_exllama` will be overwritten by `disable_exllama` passed in `GPTQConfig` or stored in your config file.\r\nYou passed `quantization_config` to `from_pretrained` but the model you're loading already has a `quantization_config` attribute and has already quantized weights. However, loading attributes (e.g. use_exllama, exllama_config, use_cuda_fp16, max_input_length) will be overwritten with the one you passed to `from_pretrained`. 
The rest will be ignored.\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\nCell In[1], line 15\r\n 7 model = AutoModelForCausalLM.from_pretrained(model_id, low_cpu_mem_usage=True, device_map={\"\": 0}, quantization_config=quantization_config)\r\n 9 lora_config = LoraConfig(\r\n 10 r=8,\r\n 11 target_modules=[\"k_proj\",\"o_proj\",\"q_proj\",\"v_proj\"],\r\n 12 task_type=\"CAUSAL_LM\"\r\n 13 )\r\n---> 15 model = get_peft_model(model, lora_config)\r\n 16 print(model)\r\n\r\nFile /app/venv/lib/python3.10/site-packages/peft/mapping.py:106, in get_peft_model(model, peft_config, adapter_name)\r\n 104 if peft_config.is_prompt_learning:\r\n 105 peft_config = _prepare_prompt_learning_config(peft_config, model_config)\r\n--> 106 return MODEL_TYPE_TO_PEFT_MODEL_MAPPING[peft_config.task_type](model, peft_config, adapter_name=adapter_name)\r\n\r\nFile /app/venv/lib/python3.10/site-packages/peft/peft_model.py:889, in PeftModelForCausalLM.__init__(self, model, peft_config, adapter_name)\r\n 888 def __init__(self, model, peft_config: PeftConfig, adapter_name=\"default\"):\r\n--> 889 super().__init__(model, peft_config, adapter_name)\r\n 890 self.base_model_prepare_inputs_for_generation = self.base_model.prepare_inputs_for_generation\r\n\r\nFile /app/venv/lib/python3.10/site-packages/peft/peft_model.py:111, in PeftModel.__init__(self, model, peft_config, adapter_name)\r\n 109 if not peft_config.is_prompt_learning:\r\n 110 self.peft_config[adapter_name] = peft_config\r\n--> 111 self.base_model = PEFT_TYPE_TO_MODEL_MAPPING[peft_config.peft_type](\r\n 112 self.base_model, self.peft_config, adapter_name\r\n 113 )\r\n 114 self.set_additional_trainable_modules(peft_config, adapter_name)\r\n 115 else:\r\n\r\nFile /app/venv/lib/python3.10/site-packages/peft/tuners/lora.py:274, in LoraModel.__init__(self, model, config, adapter_name)\r\n 273 def __init__(self, model, config, adapter_name) -> None:\r\n--> 274 super().__init__(model, config, adapter_name)\r\n\r\nFile /app/venv/lib/python3.10/site-packages/peft/tuners/tuners_utils.py:88, in BaseTuner.__init__(self, model, peft_config, adapter_name)\r\n 85 if not hasattr(self, \"config\"):\r\n 86 self.config = {\"model_type\": \"custom\"}\r\n---> 88 self.inject_adapter(self.model, adapter_name)\r\n 90 # Copy the peft_config in the injected model.\r\n 91 self.model.peft_config = self.peft_config\r\n\r\nFile /app/venv/lib/python3.10/site-packages/peft/tuners/tuners_utils.py:219, in BaseTuner.inject_adapter(self, model, adapter_name)\r\n 212 parent, target, target_name = _get_submodules(model, key)\r\n 214 optionnal_kwargs = {\r\n 215 \"loaded_in_8bit\": getattr(model, \"is_loaded_in_8bit\", False),\r\n 216 \"loaded_in_4bit\": getattr(model, \"is_loaded_in_4bit\", False),\r\n 217 \"current_key\": key,\r\n 218 }\r\n--> 219 self._create_and_replace(peft_config, adapter_name, target, target_name, parent, **optionnal_kwargs)\r\n 221 if not is_target_modules_in_base_model:\r\n 222 raise ValueError(\r\n 223 f\"Target modules {peft_config.target_modules} not found in the base model. 
\"\r\n 224 f\"Please check the target modules and try again.\"\r\n 225 )\r\n\r\nFile /app/venv/lib/python3.10/site-packages/peft/tuners/lora.py:372, in LoraModel._create_and_replace(self, lora_config, adapter_name, target, target_name, parent, **optionnal_kwargs)\r\n 364 target.update_layer(\r\n 365 adapter_name,\r\n 366 lora_config.r,\r\n (...)\r\n 369 lora_config.init_lora_weights,\r\n 370 )\r\n 371 else:\r\n--> 372 new_module = self._create_new_module(lora_config, adapter_name, target, **kwargs)\r\n 373 self._replace_module(parent, target_name, new_module, target)\r\n\r\nFile /app/venv/lib/python3.10/site-packages/peft/tuners/lora.py:481, in LoraModel._create_new_module(lora_config, adapter_name, target, **kwargs)\r\n 479 kwargs[\"fan_in_fan_out\"] = lora_config.fan_in_fan_out = True\r\n 480 else:\r\n--> 481 raise ValueError(\r\n 482 f\"Target module {target} is not supported. \"\r\n 483 f\"Currently, only `torch.nn.Linear` and `Conv1D` are supported.\"\r\n 484 )\r\n 485 new_module = Linear(adapter_name, in_features, out_features, bias=bias, **kwargs)\r\n 487 return new_module\r\n\r\nValueError: Target module QuantLinear() is not supported. Currently, only `torch.nn.Linear` and `Conv1D` are supported.\r\n```\r\n\r\nmy container versions were as follows during the test:\r\n\r\n\r\n```\r\nimport numpy as np\r\nimport wandb\r\nimport transformers\r\nimport IPython\r\nimport peft\r\nprint(np.__version__)\r\nprint(wandb.__version__)\r\nprint(transformers.__version__)\r\nprint(IPython.__version__)\r\nprint(peft.__version__)\r\n\r\n1.26.1\r\n0.16.0\r\n4.35.0\r\n8.16.1\r\n0.5.0\r\n```\r\n\r\nalso, to correct original assertion, i'm running\r\n\r\n```\r\n!python\r\nPython 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] on linux\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> \r\n```\r\nnot 3.9. this is a dev env so had tons of other libraries installed at various versions, perhaps the issue lies there.",
"OK interesting, thanks for running the experiments, a similar issue also on PEFT: https://github.com/huggingface/peft/issues/1097 there might be a regression on transformers @BenjaminBossan did also not managed to reproduce, will have a deeper look tomorrow",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,699 | 1,702 | 1,702 | NONE | null | ### System Info
docker/python 3.9
### Who can help?
@muellerzr; @pacman100
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Noticed upon updating transformers from ` 4.34.1` to `4.35.0` while on `peft version 0.5.0` that upon invoking `peft.get_peft_model` I get the error:
`ValueError: Target module QuantLinear() is not supported. Currently, only `torch.nn.Linear` and `Conv1D` are supported.`
I was able to reproduce this error with some code I grabbed from a tutorial:
```
from datasets import Dataset
dataset = Dataset.from_pandas(train_df)
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
# sharded model path in hugging face
# model_name = "TinyPixel/Llama-2-7B-bf16-sharded"
# model_name = 'NousResearch/Llama-2-7b-hf'
# Quantization config
# bnb_config = BitsAndBytesConfig(
# load_in_4bit=True,
# bnb_4bit_quant_type="nf4",
# bnb_4bit_compute_dtype="float16",
# )
# take pre quantized model from hugging face
model_id = "TheBloke/Llama-2-7B-GPTQ"
# model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
# tokenizer = AutoTokenizer.from_pretrained(model_id)
# loading the model with quantization config
# model = AutoModelForCausalLM.from_pretrained(
# model_name,
# quantization_config=bnb_config,
# trust_remote_code=True,
# device_map='auto'
# )
# can change to False if need the newest model update.
# %%
from peft import prepare_model_for_kbit_training
from transformers import GPTQConfig
# model_id = "TheBloke/Llama-2-7B-GPTQ"
# model_id = "TheBloke/Llama-2-7b-Chat-GPTQ"
quantization_config_loading = GPTQConfig(bits=4, disable_exllama=True)
model = AutoModelForCausalLM.from_pretrained(
model_id, quantization_config=quantization_config_loading, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model.config.use_cache = False
model.config.pretraining_tp = 1
# %%
model.gradient_checkpointing_enable()
model = prepare_model_for_kbit_training(model)
from peft import LoraConfig, get_peft_model
config = LoraConfig(
r=8,
lora_alpha=32,
target_modules=["k_proj","o_proj","q_proj","v_proj"],
lora_dropout=0.05,
bias="none",
task_type="CAUSAL_LM"
)
print('error here expected')
model = get_peft_model(model, config)
print('dont expect to get here')
model.print_trainable_parameters()
# %%
from transformers import Trainer, TrainingArguments, DataCollatorForLanguageModeling
# needed for llama 2 tokenizer
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"
model.config.use_cache = False # silence the warnings. Please re-enable for inference!
args=TrainingArguments(
per_device_train_batch_size=4,
gradient_accumulation_steps=4,
warmup_steps=2,
max_steps=100,
learning_rate=2e-4,
fp16=True, #use mixed precision training
logging_steps=1,
output_dir="outputs_gptq_training",
optim="adamw_hf",
save_strategy="epoch",
report_to="none")
from trl import SFTTrainer
trainer = SFTTrainer(
model=model,
args=args,
train_dataset=dataset,
peft_config=config,
dataset_text_field="text",
tokenizer=tokenizer,
packing=False,
max_seq_length=512)
# %%
train_result = trainer.train()
# %%
checkpoint_name ="final_checkpoints_gptqsummarizer_7b_peft"
#to merge and save the model
output_dir = os.path.join(args.output_dir, checkpoint_name)
trainer.model.save_pretrained(output_dir)
from peft import AutoPeftModelForCausalLM
# To perform inference on the test dataset example load the model from the checkpoint
persisted_model = AutoPeftModelForCausalLM.from_pretrained(
output_dir,
low_cpu_mem_usage=True,
return_dict=True,
torch_dtype=torch.float16,
device_map="cuda",
)
# %%
#inference on test data example
from time import perf_counter
from rich import print
from transformers import GenerationConfig
text = test_df['text'][4]
inputs = tokenizer(text, return_tensors="pt").to('cuda')
generation_config = GenerationConfig(
penalty_alpha=0.6,
do_sample = True,
top_k=5,
temperature=0.5,
repetition_penalty=1.2,
max_new_tokens=100
)
start_time = perf_counter()
outputs = persisted_model.generate(**inputs, generation_config=generation_config)
print("output:::::::::")
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
end_time = perf_counter()
output_time = end_time - start_time
print(f"Time taken for inference: {round(output_time,2)} seconds")
```
I was able to eliminate the error by downgrading my container build to 4.34.1 and reproduce it by rebuilding with 4.35.0. peft release 0.6.0 was released a few days ago and I tried it with 4.35.0, but 0.6.0/4.35.0 did not resolve the issue.
I scanned the release notes for both the 4.35.0 and 0.6.0 versions and didn't notice any breaking changes that would predict this.
I will continue to dig, but I thought I'd throw this out in case anyone else sees it.
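Separately from the version regression, the comments above note that passing a PeftModel to SFTTrainer together with `peft_config` can create a nested adapter. A hedged sketch of the two non-nested alternatives (variable names refer to the objects created in the script above):
```python
from trl import SFTTrainer

# Option A: hand SFTTrainer the plain quantized model and let it apply the LoraConfig.
trainer = SFTTrainer(
    model=model,            # the model *before* get_peft_model is applied
    peft_config=config,     # the LoraConfig defined above
    args=args,
    train_dataset=dataset,
    dataset_text_field="text",
    tokenizer=tokenizer,
    max_seq_length=512,
)

# Option B: wrap the model yourself with get_peft_model(model, config) and then
# construct SFTTrainer *without* passing peft_config again.
```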
### Expected behavior
4.35.0 works like 4.34.1 with peft 0.5.0/0.6.0 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27380/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27380/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27379 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27379/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27379/comments | https://api.github.com/repos/huggingface/transformers/issues/27379/events | https://github.com/huggingface/transformers/issues/27379 | 1,984,616,448 | I_kwDOCUB6oc52StgA | 27,379 | dinov2 with REGISTERS | {
"login": "betterze",
"id": 8336718,
"node_id": "MDQ6VXNlcjgzMzY3MTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/8336718?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/betterze",
"html_url": "https://github.com/betterze",
"followers_url": "https://api.github.com/users/betterze/followers",
"following_url": "https://api.github.com/users/betterze/following{/other_user}",
"gists_url": "https://api.github.com/users/betterze/gists{/gist_id}",
"starred_url": "https://api.github.com/users/betterze/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/betterze/subscriptions",
"organizations_url": "https://api.github.com/users/betterze/orgs",
"repos_url": "https://api.github.com/users/betterze/repos",
"events_url": "https://api.github.com/users/betterze/events{/privacy}",
"received_events_url": "https://api.github.com/users/betterze/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | open | false | null | [] | [
"Hello! Could you kindly assign this task to me? I'm eager to take it on as my first contribution and greatly appreciate any guidance or considerations you can provide. Thank you in advance.",
"I am not a team member of hugging face, I can not 'assign the task to you'. But I believe you are very welcome to work on it, a lot of people will be benefited from your work. Thx\r\n",
"Hello @amyeroberts @NielsRogge!\r\nCan I please know your opinion about this endeavor? Thank you in advance."
] | 1,699 | 1,699 | null | NONE | null | ### Model description
Dear huggingface team,
The FAIR team published an improved version of DINOv2, [Vision Transformers Need Registers](https://arxiv.org/abs/2309.16588). The models and checkpoints are available on the [dinov2 website](https://github.com/facebookresearch/dinov2#pretrained-backbones-via-pytorch-hub), but not on Hugging Face.
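For anyone who needs the register variants before they land in transformers, they can reportedly be loaded through PyTorch Hub; a hedged sketch below (the entrypoint name is an assumption based on the dinov2 repository README):
```python
import torch

# Assumed entrypoint name; check the facebookresearch/dinov2 README for the exact variants.
model = torch.hub.load("facebookresearch/dinov2", "dinov2_vitb14_reg")
model.eval()

with torch.no_grad():
    features = model(torch.randn(1, 3, 224, 224))
print(features.shape)
```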
Could you add this new model? I really appreciate your work.
Best Wishes,
Zongze
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
[Vision Transformers Need Registers](https://arxiv.org/abs/2309.16588)
[dinov2 reg checkpoint](https://github.com/facebookresearch/dinov2#pretrained-backbones-via-pytorch-hub) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27379/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27379/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27378 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27378/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27378/comments | https://api.github.com/repos/huggingface/transformers/issues/27378/events | https://github.com/huggingface/transformers/pull/27378 | 1,984,553,671 | PR_kwDOCUB6oc5e-l1z | 27,378 | Change thresh in test | {
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,699 | 1,699 | 1,699 | CONTRIBUTOR | null | # What does this PR do?
Last little bit for the new accelerate dataloaders logic: the new results are at 0.51, *just* above our threshold, so this slightly tweaks the threshold so the test passes.
(We will have our own CI running tests on examples etc. here soon to catch these better :) ). IIRC these tests just verify that the model trains, for which the threshold is 0.12 :)
Fixes # (issue)
Failing test on CPU CI
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@amyeroberts | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27378/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27378/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27378",
"html_url": "https://github.com/huggingface/transformers/pull/27378",
"diff_url": "https://github.com/huggingface/transformers/pull/27378.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27378.patch",
"merged_at": 1699523076000
} |
https://api.github.com/repos/huggingface/transformers/issues/27377 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27377/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27377/comments | https://api.github.com/repos/huggingface/transformers/issues/27377/events | https://github.com/huggingface/transformers/issues/27377 | 1,984,214,547 | I_kwDOCUB6oc52RLYT | 27,377 | Transformers need keras updates for tf-nightly + tf_keras (Keras 2.0) & keras 3.0 | {
"login": "jojivk",
"id": 45277591,
"node_id": "MDQ6VXNlcjQ1Mjc3NTkx",
"avatar_url": "https://avatars.githubusercontent.com/u/45277591?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jojivk",
"html_url": "https://github.com/jojivk",
"followers_url": "https://api.github.com/users/jojivk/followers",
"following_url": "https://api.github.com/users/jojivk/following{/other_user}",
"gists_url": "https://api.github.com/users/jojivk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jojivk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jojivk/subscriptions",
"organizations_url": "https://api.github.com/users/jojivk/orgs",
"repos_url": "https://api.github.com/users/jojivk/repos",
"events_url": "https://api.github.com/users/jojivk/events{/privacy}",
"received_events_url": "https://api.github.com/users/jojivk/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @jojivk, thanks for raising this issue! \r\n\r\nWe don't officially support nightly releases as they're not guaranteed to be stable. Our versions of tensorflow and keras are pinned because automatically installing the latest release tends to lead to many breaking changes. If there is a change in the import structure in a stable release, then it's something we could try to handle and of course help from the community on this is always welcome :) \r\n\r\ncc @Rocketknight1 ",
"Hi @amyeroberts Thanks for the quick response. Keras 3.0 will be default (default now in nightly) with TF 2.16 release. Also, to fall back to using keras 2.0 with TF 2.16, few fixes that I mentioned, is needed.\r\nBest",
"@jojivk OK, thanks for pointing out. There's a draft PR #27375 which you can track which is handling updating the supported TF versions ",
"Hi @jojivk, yes, we're aware of the issues! We're going to stick to the version pin for now until 2.16 is more stable and we can test it properly. \r\n\r\nOur long-term plan here is:\r\n\r\n1) Start by pinning the version of TF so we don't break backward compatibility\r\n2) Move and update imports so we can support TF>=2.16, although we still won't support Keras 3 at this point\r\n3) Rewrite our core modelling code to support Keras 3 (TF only for now)\r\n4) Finally, hopefully (!) add the ability to run our Keras 3 models in other frameworks, probably starting with JAX.",
"@Rocketknight1 Thanks for the update",
"Quick update for this issue, we've filed our first big PR to prepare for Keras 3 / keras-nightly at #27794. There's likely more work to be done after this, but this resolves one major blocker.",
"@Rocketknight1 . Thanks for the update and fixes. I will update on any issues we see.",
"@Rocketknight1 @amyeroberts. Thank you for all the fixes!\r\nI see the following imports for keras which cause backward compatibility issues.\r\n\r\n from keras import backend as K\r\n [loc1](https://github.com/huggingface/transformers/blob/3cefac1d974db5e2825a0cb2b842883a628be7a0/src/transformers/modeling_tf_utils.py#L36)\r\n\r\n[loc2](https://github.com/huggingface/transformers/blob/3cefac1d974db5e2825a0cb2b842883a628be7a0/src/transformers/modeling_tf_pytorch_utils.py#L260)\r\n\r\nWe fixed it locally (as below) to use with Keras2.0 Models, due to reasons mentioned above in the issue report. \r\n from tf_keras import backend as K\r\n \r\nOther similar imports: which as of now we are not sure if it is an issue.\r\n\r\nfrom tensorflow.keras.callbacks import Callback\r\n ./src/transformers/keras_callbacks.py\r\nfrom tensorflow.keras.layers import Dense\r\n ./src/transformers/models/vision_text_dual_encoder/modeling_tf_vision_text_dual_encoder.py\r\nfrom tensorflow.keras.activations import gelu\r\nfrom tensorflow.keras.layers import Dense, Dropout, Embedding, Layer, LayerNormalization\r\n ./src/transformers/models/esm/modeling_tf_esm.py\r\nfrom tensorflow.keras.preprocessing import image\r\n ./src/transformers/models/efficientnet/convert_efficientnet_to_pytorch.py\r\nfrom tensorflow.keras.optimizers.legacy import Adam\r\nfrom tensorflow.keras.optimizers import Adam\r\n ./src/transformers/optimization_tf.py\r\n\r\n\r\n\r\n",
"Hi @jojivk - yep, we noted those too! I think we're going to remove **all** instances of `keras.backend` in the TF files, and replace them with equivalent TF functions. We'll also probably replace imports of `keras` with `from tensorflow import keras` during the transition period - this should avoid issues where the Keras version differs significantly from the installed TF version.",
"@Rocketknight1 . Thank you for the update!.",
"> We'll also probably replace imports of keras with from tensorflow import keras during the transition period\r\n\r\nPrefer using `import tf_keras as keras` to make sure you're getting Keras 2.",
"Thanks @fchollet! We'll use that pattern instead.",
"@Rocketknight1\r\nThank you for the fixes. Have a few questions.\r\nThe fix (https://github.com/huggingface/transformers/pull/28588) I understand supports transitioning to tf_keras. Does it also support Keras 3.0 for customers who want to move to Keras 3.0?\r\nWill the information be added to your next release notes?\r\nAlso let me know if there is any additional setting(s) needed to with keep using tf_keras 2.0.?",
"You shouldn't need to change any settings @jojivk73. If you install `tf_keras`, everything should work the same as it did before.\r\n\r\nThe full transition to Keras 3 will be trickier. The reason for this is that all of our model code is written for TF/Keras, and mixes Keras layers with TF ops. To work in Keras 3, this code will need to replace all the TF ops with `keras.ops`. This will break compatibility with Keras 2, so we can't cleanly update the files in place.\r\n\r\nOur two options are:\r\n\r\n1) Wait some time (~1 year), then drop support for older versions of TensorFlow and begin to transition our TF codebase from Keras 2 + TF ops to Keras 3 + `keras.ops` code.\r\n2) Add \"Keras 3\" as a fourth framework alongside TF, PyTorch and JAX/Flax, so we can start supporting Keras 3 before we deprecate TF + Keras 2.\r\n\r\nEither way, we'll need a big community push to port all the models to `keras.ops`! Right now, we're being cautious, but we'll probably choose option 2 and accelerate a lot if we feel like a lot of users want Keras 3.",
"@Rocketknight1 Thanks for the quick and detailed reply."
] | 1,699 | 1,706 | 1,706 | NONE | null | ### System Info
With the new changes for Keras (https://groups.google.com/g/keras-users/c/jgGcX730WGE), we are seeing issues with using Huggingface models (GPT-J, Stable Diffusion, & Vision Transformer). We saw that transformers restricts the versions for Tensorflow and Keras: https://github.com/huggingface/transformers/blob/main/setup.py#L129.
To use the models with tf-keras (Keras 2.0), we could resolve the issues in Transformers by changing the keras imports in transformers to import tf_keras instead, i.e. change
import keras.*
to
import tf_keras.*
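A sketch of what that fallback could look like (this mirrors the workaround described above and the `import tf_keras as keras` pattern recommended in the comments above; it is an illustration, not the actual patch that was merged):

```python
# Prefer the Keras 2 shim (tf_keras) when it is installed; fall back to the Keras
# bundled with TensorFlow otherwise. With keras-nightly / TF 2.16, a plain
# `import keras` resolves to Keras 3, which the current TF model code does not support.
try:
    import tf_keras as keras
except ImportError:
    from tensorflow import keras
```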
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Install tf-nightly
2. Install tf-keras-nightly
3. export TF_USE_LEGACY_KERAS=1
4. Run Huggingface Tensorflow GPT-J
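Step 4 is not spelled out in the report; a hypothetical minimal script exercising the TensorFlow GPT-J path could look like this (the checkpoint name and the generation call are assumptions, not the reporter's actual script):

```python
import os

# Step 3 above: must be set before TensorFlow/Keras is imported.
os.environ["TF_USE_LEGACY_KERAS"] = "1"

from transformers import AutoTokenizer, TFAutoModelForCausalLM

model_id = "EleutherAI/gpt-j-6b"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Hello, my name is", return_tensors="tf")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0]))
```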
### Expected behavior
Fails to run | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27377/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27377/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27376 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27376/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27376/comments | https://api.github.com/repos/huggingface/transformers/issues/27376/events | https://github.com/huggingface/transformers/issues/27376 | 1,984,176,435 | I_kwDOCUB6oc52RCEz | 27,376 | keyerror : mistral (for transformer version = 4.30) and Import Error Using `load_in_8bit=True` requires Accelerate: for transformer version > 4.30 | {
"login": "Abhaycnvrg",
"id": 107987033,
"node_id": "U_kgDOBm_AWQ",
"avatar_url": "https://avatars.githubusercontent.com/u/107987033?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Abhaycnvrg",
"html_url": "https://github.com/Abhaycnvrg",
"followers_url": "https://api.github.com/users/Abhaycnvrg/followers",
"following_url": "https://api.github.com/users/Abhaycnvrg/following{/other_user}",
"gists_url": "https://api.github.com/users/Abhaycnvrg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Abhaycnvrg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Abhaycnvrg/subscriptions",
"organizations_url": "https://api.github.com/users/Abhaycnvrg/orgs",
"repos_url": "https://api.github.com/users/Abhaycnvrg/repos",
"events_url": "https://api.github.com/users/Abhaycnvrg/events{/privacy}",
"received_events_url": "https://api.github.com/users/Abhaycnvrg/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @Abhaycnvrg, thanks for raising this issue! \r\n\r\nFirstly - please change your authentication key and ~remove it from this example~ (I removed it but it will still be in the history)- it should be secret",
"We have many requests for help, which we can only attend at a decent pace if you help us too. Could you please: \r\n* Provide your running environment: run `transformers-cli env` in the terminal and copy-paste the output\r\n* Provide a minimal code snippet to reproduce the error. There's a lot of code in this example and there's two models and tokenizers loaded as well as external tools such as langchain being used. \r\n* Provide a full traceback of the error ",
"- `transformers` version: 4.36.0.dev0\r\n- Platform: Linux-5.15.0-56-generic-x86_64-with-glibc2.36\r\n- Python version: 3.11.5\r\n- Huggingface_hub version: 0.17.3\r\n- Safetensors version: 0.4.0\r\n- Accelerate version: 0.25.0.dev0\r\n- Accelerate config: not found\r\n- PyTorch version (GPU?): 2.1.0+cu121 (False)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?: <fill in>\r\n- Using distributed or parallel set-up in script?: <fill in>\r\n- ",
"```\r\n# Code to reproduce the error\r\nmodel_config = transformers.AutoConfig.from_pretrained(\r\n model_id,\r\n use_auth_token=hf_auth)\r\n\r\nbnb_config = transformers.BitsAndBytesConfig(load_in_4bit = True,\r\n bnb_4bit_quant_tyoe = 'nf4',\r\n bnb_4bit_use_double_quant=True,\r\n bnb_4bit_compute_dtype=bfloat16)\r\n\r\nmodel = transformers.AutoModelForCausalLM.from_pretrained(\r\n model_id,\r\n trust_remote_code=True,\r\n config=model_config,\r\n quantization_config=bnb_config,\r\n device_map='auto',\r\n use_auth_token=hf_auth,\r\n offload_folder=\"save_folder\")\r\n\r\ntokenizer = transformers.AutoTokenizer.from_pretrained(model_id, use_auth_token=hf_auth)\r\ngenerate_text = transformers.pipeline(\r\n model=model, tokenizer=tokenizer,\r\n return_full_text=True, \r\n task='text-generation',\r\n temperature=0.1, \r\n max_new_tokens=4096, \r\n repetition_penalty=1.1)\r\nprint(\"loaded model\")\r\nllm = HuggingFacePipeline(pipeline=generate_text)\r\n\r\n```\r\n# Error\r\n\r\n",
"cc @younesbelkada ",
"Hi everyone! \r\nI used to face this issue sometimes when using Google colab and when libraries are not correctly installed. In case you are using Kaggle or Google Colab notebook can you try to delete the runtime and re-start it again? If not retry everything on a fresh new environment by making sure to install all the required packages `pip install transformers accelerate bitsandbytes`",
"we are using a workspace. not google colab.\r\nIt would be helpful for us if you can tell us which\r\n1. python version\r\n2. transformers version\r\n3. accelerate\r\n4. bitsandbytes\r\nare to be used. the exact numbers please.",
"python == 3.9.16 / transformers == 4.35.0 / accelerate 0.25.dev0 (from source) / bitsandbytes 0.41.1",
"can you also run: \r\n\r\n```python\r\n>>> from transformers.utils.import_utils import is_accelerate_available, is_bitsandbytes_available\r\n>>> is_accelerate_available()\r\nTrue\r\n>>> is_bitsandbytes_available()\r\nTrue\r\n```",
"I tried in the exact same environment\r\n\r\n\r\n\r\n\r\nSomehow, bitsandbytes isn't being made available\r\n@younesbelkada and @amyeroberts can you help?\r\nthe code is below\r\n\r\n```\r\n!pip install -q -U bitsandbytes==0.41.1\r\n!pip install -q -U git+https://github.com/huggingface/transformers.git\r\n!pip install -q -U git+https://github.com/huggingface/peft.git\r\n!pip install -q -U accelerate==0.25.dev0\r\n!pip install -q -U einops\r\n!pip install -q -U safetensors\r\n!pip install -q -U torch\r\n!pip install -q -U xformers\r\n!pip install -q -U langchain\r\n!pip install -q -U ctransformers[cuda]\r\n!pip install chromadb\r\n!pip install sentence-transformers\r\n!pip install -q -U accelerate\r\n!pip install bitsandbytes\r\n!pip install -i https://test.pypi.org/simple/ bitsandbytes\r\n!pip install --upgrade langchain\r\n!pip install transformers==4.35\r\n#loading packges\r\nfrom torch import cuda, bfloat16\r\nimport transformers\r\nfrom transformers import StoppingCriteria, StoppingCriteriaList\r\nimport torch\r\nfrom langchain.document_loaders import UnstructuredFileLoader\r\nfrom langchain.chains.summarize import load_summarize_chain\r\nfrom langchain.chains.question_answering import load_qa_chain\r\nfrom langchain.llms import HuggingFacePipeline\r\nfrom transformers import AutoTokenizer, AutoModelForCausalLM\r\nfrom langchain import PromptTemplate\r\nfrom langchain.text_splitter import RecursiveCharacterTextSplitter\r\nimport accelerate\r\nimport bitsandbytes\r\nbase_model_id = \"mistralai/Mistral-7B-Instruct-v0.1\"\r\nbaseline = AutoModelForCausalLM.from_pretrained(base_model_id, device_map=\"auto\")\r\ntokenizer = AutoTokenizer.from_pretrained(base_model_id)\r\nprint(\"loaded all packages\")\r\ndevice = f'cuda:{cuda.current_device()}' if cuda.is_available() else 'cpu'\r\nprint(\"Printing Device...\")\r\nprint(device)\r\nprint(\"loading model....\")\r\nmodel_id = \"mistralai/Mistral-7B-Instruct-v0.1\"\r\nhf_auth = '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'\r\nmodel_config = transformers.AutoConfig.from_pretrained(\r\n model_id,\r\n use_auth_token=hf_auth#,\r\n# load_in_8bit=False \r\n)\r\nbnb_config = transformers.BitsAndBytesConfig(load_in_4bit = True,\r\nbnb_4bit_quant_tyoe = 'nf4',\r\nbnb_4bit_use_double_quant=True,\r\nbnb_4bit_compute_dtype=bfloat16\r\n)\r\n\r\nmodel = transformers.AutoModelForCausalLM.from_pretrained(\r\n model_id,\r\n trust_remote_code=True,\r\n config=model_config,\r\n quantization_config=bnb_config,\r\n# load_in_8bit=False,\r\n device_map='auto',\r\n use_auth_token=hf_auth,\r\n offload_folder=\"save_folder\"\r\n)\r\n# Load model directly\r\ntokenizer = transformers.AutoTokenizer.from_pretrained(model_id, use_auth_token=hf_auth)\r\ngenerate_text = transformers.pipeline(\r\n model=model, tokenizer=tokenizer,\r\n return_full_text=True, \r\n task='text-generation',\r\n temperature=0.1, \r\n max_new_tokens=4096, \r\n repetition_penalty=1.1 \r\n)\r\nprint(\"loaded model\")\r\nllm = HuggingFacePipeline(pipeline=generate_text)\r\n```",
"I see `PyTorch Version (GPU?) - xxx (False)` - maybe your bitsandbytes install is broken due to CUDA / GPU hardware not properly detected. Are you able to run `!nvidia-smi` inside your notebook and what is its outcome?",
"So it doesn't work without GPU? only on CPU for eg?",
"Yes bnb does not work on CPU, you need to have access to a GPU. You can for instance use free-tier google colab instances that provides a decent 16GB NVIDIA T4 GPU",
"What do you mean by bnb? You mean mistral doesn't work on CPU?",
"I meant `bitsandbytes`, i.e. all quantization features such as `load_in_8bit` or `load_in_4bit` ",
"okay thanks! can mistral work without quantisation code? i mean, we just want to run the inference\r\n",
"On CPU yes, but it might be slow, please consider using Mistral-7b on a free tier Google colab instance using bitsandbytes 4bit\r\n\r\n```python\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\r\n\r\nmodel_id = \"ybelkada/Mistral-7B-v0.1-bf16-sharded\"\r\nmodel = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True)\r\n```\r\n\r\nI advise to use https://huggingface.co/ybelkada/Mistral-7B-v0.1-bf16-sharded as the weights are sharded with smaller shards (~2GB) otherwise it will lead to CPU OOM when trying to load mistral weights on google colab",
"Thanks for this code, but can you point me to a tutorial post which does inference from mistral for cpu only? We are using custom machines with a limited scalability so OOM should not be a problem",
"For CPU only \r\n\r\n```python\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\r\n\r\nmodel_id = \"ybelkada/Mistral-7B-v0.1-bf16-sharded\"\r\nmodel = AutoModelForCausalLM.from_pretrained(model_id, low_cpu_mem_usage=True)\r\n```\r\n\r\nShould do the trick, if you want to load the model in bfloat16:\r\n\r\n```python\r\nimport torch\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\r\n\r\nmodel_id = \"ybelkada/Mistral-7B-v0.1-bf16-sharded\"\r\nmodel = AutoModelForCausalLM.from_pretrained(model_id, low_cpu_mem_usage=True, torch_dtype=torch.bfloat16)\r\n```",
"Hey @younesbelkada , I tried with GPU but got this error\r\n\r\nRuntimeError: Failed to import transformers.models.mistral.modeling_mistral because of the following error (look up to see its traceback):\r\nFailed to import transformers.integrations.peft because of the following error (look up to see its traceback):\r\ncannot import name 'dispatch_model' from 'accelerate' (unknown location)\r\n\r\ntransformers = 4.36.dev0 , 4.35, 4.35.2\r\naccelerate = 0.25.0.dev0\r\nbitsandbytes = 0.41.2.post2\r\npython = 3.10.6",
"Hello @Abhaycnvrg \r\n`dispatch_model` should be still in accelerate init: https://github.com/huggingface/accelerate/blob/main/src/accelerate/__init__.py#L8 \r\nCan you share how did you installed accelerate?\r\nCan you also try to uninstall accelerate and re-install it ? `pip uninstall accelerate && pip install -U accelerate`\r\nCould you also share the full error traceback in case the error still persists? ",
"get the same error by following these commands\r\npip uninstall accelerate && pip install -U accelerate",
"my python version is 3.10.6\r\nis that the cause of the problem?\r\nalso, can you suggest me a container image with both python 3.9.16 and cuda installed in it, so that i can test with GPU?\r\n@younesbelkada and @amyeroberts ",
"which is the torch and torchvision version i need to use with mistral and GPU?",
"Also, can you suggest which container image (from nvidia or docker hub) should I use for running this?\r\nare these ones\r\nhttps://docs.nvidia.com/deeplearning/frameworks/pytorch-release-notes/rel-23-03.html#rel-23-03\r\nor \r\nhttps://docs.nvidia.com/deeplearning/frameworks/pytorch-release-notes/rel-23-07.html#rel-23-07\r\nokay?",
"@Abhaycnvrg transformers officially supports python 3.8 and above. You can find images by search docker hub - the hugging face one for pytorch-gpu [is here](https://hub.docker.com/r/huggingface/transformers-pytorch-gpu). The compatible versions of library packages can be found in [setup.py](https://github.com/huggingface/transformers/blob/main/setup.py). Running `pip install transformers` will find and install the compatible packages - and warn you if this isn't possible based on different libraries' requirements. You don't need `torchvision` for mistral - it's an LLM. ",
"Thanks @amyeroberts , so if i do the following: -\r\n1. REQUIREMENTS FILE :\r\npython == 3.9.16 / transformers == 4.35.0 / accelerate 0.25.dev0 (from source) / bitsandbytes 0.41.1\r\n2. container image: https://hub.docker.com/r/huggingface/transformers-pytorch-gpu\r\n3. and this code here\r\n```\r\nfrom torch import cuda, bfloat16\r\nimport transformers\r\nfrom transformers import StoppingCriteria, StoppingCriteriaList\r\nimport torch\r\nfrom langchain.document_loaders import UnstructuredFileLoader\r\nfrom langchain.chains.summarize import load_summarize_chain\r\nfrom langchain.chains.question_answering import load_qa_chain\r\nfrom langchain.llms import HuggingFacePipeline\r\nfrom transformers import AutoTokenizer, AutoModelForCausalLM\r\nfrom langchain import PromptTemplate\r\nfrom langchain.text_splitter import RecursiveCharacterTextSplitter\r\nimport accelerate\r\nimport bitsandbytes\r\n#base_model_id = \"mistralai/Mistral-7B-Instruct-v0.1\"\r\n#baseline = AutoModelForCausalLM.from_pretrained(base_model_id, device_map=\"auto\")\r\n#tokenizer = AutoTokenizer.from_pretrained(base_model_id)\r\nprint(\"loaded all packages\")\r\ndevice = f'cuda:{cuda.current_device()}' if cuda.is_available() else 'cpu'\r\nprint(\"Printing Device...\")\r\nprint(device)\r\nprint(\"loading model....\")\r\nmodel_id = \"mistralai/Mistral-7B-Instruct-v0.1\"\r\nmodel_id = \"ybelkada/Mistral-7B-v0.1-bf16-sharded\"\r\nmodel_config = transformers.AutoConfig.from_pretrained(\r\n model_id,\r\n use_auth_token=hf_auth#,\r\n# load_in_8bit=False \r\n)\r\nbnb_config = transformers.BitsAndBytesConfig(load_in_4bit = True,\r\nbnb_4bit_quant_tyoe = 'nf4',\r\nbnb_4bit_use_double_quant=True,\r\nbnb_4bit_compute_dtype=torch.bfloat16\r\n)\r\n\r\nmodel = transformers.AutoModelForCausalLM.from_pretrained(\r\n model_id,\r\n trust_remote_code=True,\r\n config=model_config,\r\n quantization_config=bnb_config,\r\n# load_in_8bit=False,\r\n device_map='auto',\r\n use_auth_token=hf_auth,\r\n offload_folder=\"save_folder\"\r\n)\r\n# Load model directly\r\ntokenizer = transformers.AutoTokenizer.from_pretrained(model_id, use_auth_token=hf_auth)\r\ngenerate_text = transformers.pipeline(\r\n model=model, tokenizer=tokenizer,\r\n return_full_text=True, \r\n task='text-generation',\r\n temperature=0.1, \r\n max_new_tokens=4096, \r\n repetition_penalty=1.1 \r\n)\r\nprint(\"loaded model\")\r\nllm = HuggingFacePipeline(pipeline=generate_text)\r\n```\r\nit should work?",
"As you're running from the dev version of accelerate I can't guarantee that it will be compatible with all of the other packages. Why not run and find out? 🤷♀️ ",
"Sure we will try but if you have any other recommended configs, please do send",
"hi @jitender-cnvrg @Abhaycnvrg \r\nThanks a lot for iterating, loading a model in 4bit / 8bit should work out of the box on a simple Free-tier Google colab instance. Make sure to select T4 on runtime type. I made a quick example here: https://colab.research.google.com/drive/1zia3Q9FXhNHOhdwA9p8zD4qgPEWkZvHl?usp=sharing and made sure it works."
] | 1,699 | 1,703 | 1,703 | NONE | null | ### System Info
transformers versions: 4.30, 4.31, 4.34, 4.35
Python versions: 3.11.1, 3.11.5, 3.8
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
# loading packages
from torch import cuda, bfloat16
import transformers
from transformers import StoppingCriteria, StoppingCriteriaList
import torch
from langchain.document_loaders import UnstructuredFileLoader
from langchain.chains.summarize import load_summarize_chain
from langchain.chains.question_answering import load_qa_chain
from langchain.llms import HuggingFacePipeline
from transformers import AutoTokenizer, AutoModelForCausalLM
from langchain import PromptTemplate
from langchain.text_splitter import RecursiveCharacterTextSplitter
import accelerate
base_model_id = "mistralai/Mistral-7B-Instruct-v0.1"
baseline = AutoModelForCausalLM.from_pretrained(base_model_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
print("loaded all packages")
device = f'cuda:{cuda.current_device()}' if cuda.is_available() else 'cpu'
print("Printing Device...")
print(device)
print("loading model....")
model_id = "mistralai/Mistral-7B-Instruct-v0.1"
model_config = transformers.AutoConfig.from_pretrained(
model_id,
use_auth_token=hf_auth#,
# load_in_8bit=False
)
bnb_config = transformers.BitsAndBytesConfig(load_in_4bit = True,
bnb_4bit_quant_type = 'nf4',
bnb_4bit_use_double_quant=True,
bnb_4bit_compute_dtype=bfloat16
)
model = transformers.AutoModelForCausalLM.from_pretrained(
model_id,
trust_remote_code=True,
config=model_config,
quantization_config=bnb_config,
# load_in_8bit=False,
device_map='auto',
use_auth_token=hf_auth,
offload_folder="save_folder"
)
# Load model directly
tokenizer = transformers.AutoTokenizer.from_pretrained(model_id, use_auth_token=hf_auth)
generate_text = transformers.pipeline(
model=model, tokenizer=tokenizer,
return_full_text=True,
task='text-generation',
temperature=0.1,
max_new_tokens=4096,
repetition_penalty=1.1
)
print("loaded model")
llm = HuggingFacePipeline(pipeline=generate_text)
```
Run this
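Before running it, a quick environment sanity check can rule out a broken bitsandbytes/accelerate install (these helpers are the ones the maintainers point to in the comments on this issue):

```python
# Both helpers must return True; bitsandbytes additionally needs a visible CUDA GPU,
# otherwise load_in_4bit / load_in_8bit fails with the Accelerate import error.
from transformers.utils.import_utils import is_accelerate_available, is_bitsandbytes_available

print(is_accelerate_available())    # expected: True
print(is_bitsandbytes_available())  # expected: True
```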
### Expected behavior
We should get the inference output. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27376/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27376/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27375 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27375/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27375/comments | https://api.github.com/repos/huggingface/transformers/issues/27375/events | https://github.com/huggingface/transformers/pull/27375 | 1,984,071,968 | PR_kwDOCUB6oc5e88pf | 27,375 | Update the TF pin for 2.15 | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Seems to work locally! Marking this as ready for review.",
"@ydshieh I didn't know that! It seems to be caused by one of the dependencies being slightly behind - 2.14 is the most recent version, and 2.15 is in RC.\r\n\r\nI'm not sure what to do about it, though. Since local tests pass, though, we can probably assume that 2.15 is okay?",
"> we can probably assume that 2.15 is okay?\r\n\r\nAs the CI won't run with 2.15 (with and after this PR), **it's not a blocking factor to merge this PR.**\r\n\r\nMy comment above is just to say we probably need to check what happens and decide if to make effort to have 2.15 installed for CI. If TF 2.15 works will be relevant only at that point.\r\n\r\n",
"Understood - I suspect the issue is that one of the dependencies (I think maybe `tf2onnx`?) has a version limit for TF. Whenever they update that cap, our CI will probably suddenly move to 2.14. Either way, I'll merge this PR for now, since local tests for me suggest 2.14 and 2.15 are working fine."
] | 1,699 | 1,700 | 1,700 | MEMBER | null | This PR updates the TF pin to allow TF 2.15. Leaving it as a draft for now while I make sure this doesn't break anything! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27375/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27375/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27375",
"html_url": "https://github.com/huggingface/transformers/pull/27375",
"diff_url": "https://github.com/huggingface/transformers/pull/27375.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27375.patch",
"merged_at": 1700142464000
} |
https://api.github.com/repos/huggingface/transformers/issues/27374 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27374/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27374/comments | https://api.github.com/repos/huggingface/transformers/issues/27374/events | https://github.com/huggingface/transformers/pull/27374 | 1,984,063,275 | PR_kwDOCUB6oc5e86sl | 27,374 | translate debugging.md to chinese | {
"login": "jiaqiw09",
"id": 60021713,
"node_id": "MDQ6VXNlcjYwMDIxNzEz",
"avatar_url": "https://avatars.githubusercontent.com/u/60021713?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiaqiw09",
"html_url": "https://github.com/jiaqiw09",
"followers_url": "https://api.github.com/users/jiaqiw09/followers",
"following_url": "https://api.github.com/users/jiaqiw09/following{/other_user}",
"gists_url": "https://api.github.com/users/jiaqiw09/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiaqiw09/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiaqiw09/subscriptions",
"organizations_url": "https://api.github.com/users/jiaqiw09/orgs",
"repos_url": "https://api.github.com/users/jiaqiw09/repos",
"events_url": "https://api.github.com/users/jiaqiw09/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiaqiw09/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@stevhliu\r\n\r\nhi, here is another pr for debugging doc. I think I can finish this section before next week.\r\n\r\nBest",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27374). All of your documentation changes will be reflected on that endpoint."
] | 1,699 | 1,699 | 1,699 | CONTRIBUTOR | null | # What does this PR do?
Part of #26803
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? _not necessary_
## Who can review?
@stevhliu | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27374/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27374/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27374",
"html_url": "https://github.com/huggingface/transformers/pull/27374",
"diff_url": "https://github.com/huggingface/transformers/pull/27374.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27374.patch",
"merged_at": 1699481046000
} |
https://api.github.com/repos/huggingface/transformers/issues/27373 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27373/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27373/comments | https://api.github.com/repos/huggingface/transformers/issues/27373/events | https://github.com/huggingface/transformers/issues/27373 | 1,983,968,769 | I_kwDOCUB6oc52QPYB | 27,373 | When using 4.35 to load llama2 and flash attention2 for training, the memory usage has nearly quadrupled | {
"login": "Trangle",
"id": 3235116,
"node_id": "MDQ6VXNlcjMyMzUxMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/3235116?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Trangle",
"html_url": "https://github.com/Trangle",
"followers_url": "https://api.github.com/users/Trangle/followers",
"following_url": "https://api.github.com/users/Trangle/following{/other_user}",
"gists_url": "https://api.github.com/users/Trangle/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Trangle/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Trangle/subscriptions",
"organizations_url": "https://api.github.com/users/Trangle/orgs",
"repos_url": "https://api.github.com/users/Trangle/repos",
"events_url": "https://api.github.com/users/Trangle/events{/privacy}",
"received_events_url": "https://api.github.com/users/Trangle/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @Trangle, thanks for raising this issue! \r\n\r\nFor the diff highlighted in the issue, this shouldn't affect the memory consumption of the model. This is just a change in type annotation. \r\n\r\nCould you provide some more details about the setup and what's being observed? Specifically: \r\n* What hardware are you using?\r\n* Is 4x memory consumption the limit of the GPU i.e. are you hitting OOM before seeing the max load? \r\n* At what line of code is this explosion in memory being observed? e.g. when the model's loading? On the forward pass? \r\n\r\nWe have many requests for help, which we can only attend at a decent pace if you help us too. The linked to script is large and has a lot of logic. Could you provide a code snippet we can run to reproduce the error with the minimal amount of code? ",
"> Hi @Trangle, thanks for raising this issue!\r\n> \r\n> For the diff highlighted in the issue, this shouldn't affect the memory consumption of the model. This is just a change in type annotation.\r\n> \r\n> Could you provide some more details about the setup and what's being observed? Specifically:\r\n> \r\n> * What hardware are you using?\r\n> * Is 4x memory consumption the limit of the GPU i.e. are you hitting OOM before seeing the max load?\r\n> * At what line of code is this explosion in memory being observed? e.g. when the model's loading? On the forward pass?\r\n> \r\n> We have many requests for help, which we can only attend at a decent pace if you help us too. The linked to script is large and has a lot of logic. Could you provide a code snippet we can run to reproduce the error with the minimal amount of code?\r\n\r\nI use 8 * A100-80G, with seq_length=4096,flash-attention2\r\n\r\nI'm not sure which part caused the surge in graphics memory and caused OOM, but this issue was resolved by downgrading 4.34.1",
"> I use 8 * A100-80G, with seq_length=4096,flash-attention2\r\n\r\nThanks!\r\n\r\n> I'm not sure which part caused the surge in graphics memory and caused OOM, but this issue was resolved by downgrading 4.34.1\r\n\r\nIt's OK to not know what caused it (this is what the issues are for! :) ) but as mentioned before we have very many requests and need you to help us too if we are to be able to address them at a good pace. \r\n\r\nCould you provide a minimal i.e. as few lines of code as possible, snippet which we can run to replicate the issue? ",
"hi bro, do you have training problems? I am training llama, with 4.35 loss will be significantly higher than 4.33!",
"> hi bro, do you have training problems? I am training llama, with 4.35 loss will be significantly higher than 4.33!\n\nyes I do",
"Wondering that if it is the change of the logic in line 520~522\r\n\r\nfrom\r\n\r\n``` python\r\nquery_states = query_states.to(torch.float16)\r\nkey_states = key_states.to(torch.float16)\r\nvalue_states = value_states.to(torch.float16)\r\n```\r\n\r\nto\r\n\r\n``` python\r\nquery_states = query_states.to(target_dtype)\r\nkey_states = key_states.to(target_dtype)\r\nvalue_states = value_states.to(target_dtype)\r\n```\r\n\r\nAnother possible logical estimate that may be relevant is the change in the \"attention_mask\" section, starting from 4.35, to: \"_prepare_4d_causal_attention_mask\"",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"I have encountered the exact same issue with transformers + flash-attn. Any version >= 4.35 would get OOM while the exact same code with transformers < 4.35 won't.",
"@jedyang97 Thanks for confirming the. Could you provide a minimal reproducer which can replicate this issue? "
] | 1,699 | 1,704 | 1,703 | NONE | null | ### System Info
transformers version: 4.35.0
Platform: Ubuntu22.04
Python version: 3.11
Huggingface_hub version: 0.17.3
Safetensors version: 0.4.0
Accelerate version: 0.24.1
Accelerate config: not found
PyTorch version (GPU?): 2.1.0+cu121 (True)
Tensorflow version (GPU?): not installed (NA)
Flax version (CPU?/GPU?/TPU?): not installed (NA)
Jax version: not installed
JaxLib version: not installed
Using GPU in script?: Yes
Using distributed or parallel set-up in script?: No
### Who can help?
When switched to 4.34, the memory usage decreases by nearly four times
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
For example: https://github.com/lm-sys/FastChat/blob/main/scripts/train_vicuna_7b.sh
or any training process using 4.35
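A minimal sketch of the model-loading path such a training run goes through (the checkpoint name and dtype are assumptions; the report is about Llama-2 training with Flash Attention 2 at sequence length 4096, as described in the comments on this issue):

```python
import torch
from transformers import AutoModelForCausalLM

# Checkpoint name is an assumption; any Llama-2 checkpoint reproduces the load path.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    torch_dtype=torch.bfloat16,
    use_flash_attention_2=True,  # Flash Attention 2, as in the report (transformers 4.34/4.35 API)
)
```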
### Expected behavior
Code does not run out of CUDA memory. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27373/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27373/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27372 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27372/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27372/comments | https://api.github.com/repos/huggingface/transformers/issues/27372/events | https://github.com/huggingface/transformers/pull/27372 | 1,983,911,859 | PR_kwDOCUB6oc5e8ZuJ | 27,372 | Fix tiny model script: not using `from_pt=True` | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,699 | 1,699 | 1,699 | COLLABORATOR | null | # What does this PR do?
After #27064, the checkpoint is saved in `safetensors` format. If we specify `from_pretrained("...", from_pt=True)` while the checkpoint is in `safetensors` format, we get an error like `invalid load key *\xa0* (or *\xd8* etc.)`.
This PR removes the usage of `from_pt=True` and therefore fixes one issue in the failing CI (the one that checks the script works).
### Code snippet
```python
from transformers import BertModel, TFBertModel
ckpt = "hf-internal-testing/tiny-random-BertModel"
bert_pt = BertModel.from_pretrained(ckpt)
bert_pt.save_pretrained("my-bert")
# this works
bert_tf = TFBertModel.from_pretrained("my-bert")
# this fails
bert_tf = TFBertModel.from_pretrained("my-bert", from_pt=True)
print(bert_tf)
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27372/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27372/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27372",
"html_url": "https://github.com/huggingface/transformers/pull/27372",
"diff_url": "https://github.com/huggingface/transformers/pull/27372.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27372.patch",
"merged_at": 1699460157000
} |
https://api.github.com/repos/huggingface/transformers/issues/27371 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27371/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27371/comments | https://api.github.com/repos/huggingface/transformers/issues/27371/events | https://github.com/huggingface/transformers/issues/27371 | 1,983,861,267 | I_kwDOCUB6oc52P1IT | 27,371 | Atrribute Error: 'AlignConfig' object has no attribute 'encoder', 'PoolFormerConfig' object has no attribute 'encoder'. | {
"login": "PriyaBSavithiri",
"id": 104089347,
"node_id": "U_kgDOBjRHAw",
"avatar_url": "https://avatars.githubusercontent.com/u/104089347?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PriyaBSavithiri",
"html_url": "https://github.com/PriyaBSavithiri",
"followers_url": "https://api.github.com/users/PriyaBSavithiri/followers",
"following_url": "https://api.github.com/users/PriyaBSavithiri/following{/other_user}",
"gists_url": "https://api.github.com/users/PriyaBSavithiri/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PriyaBSavithiri/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PriyaBSavithiri/subscriptions",
"organizations_url": "https://api.github.com/users/PriyaBSavithiri/orgs",
"repos_url": "https://api.github.com/users/PriyaBSavithiri/repos",
"events_url": "https://api.github.com/users/PriyaBSavithiri/events{/privacy}",
"received_events_url": "https://api.github.com/users/PriyaBSavithiri/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @PriyaBSavithiri, thanks for opening an issue! \r\n\r\nAs noted in the warning message, Hugging Face Benchmarking utilis are deprecated and we are no longer actively maintaining them. To benchmark models, please use external benchmarking libraries. ",
"Thanks for the update @amyeroberts . I understand the deprecation of Hugging Face Benchmarking utilities.\r\n\r\nI'm curious if there are any community members who are still using or have information on alternative benchmarking tools. Is there a discussion or resources we can tap into regarding this issue?\r\n\r\n",
"@PriyaBSavithiri I don't know of any particular members of the community who are using these tools. I'd suggest searching past issues to see if there's any relevant information. There's also the [HF discord server](https://discord.com/invite/hugging-face-879548962464493619) and [forum](https://discuss.huggingface.co/) where you can post questions. \r\n\r\nOf course, other people might find this issue and comment here too :) ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,699 | 1,702 | 1,702 | NONE | null | Hi! I am using python==3.10, torch==1.13.0+cpu, transformers==4.35.0. I am trying to (cpu)benchmark the transformer models in pytorch framework using the command:
`python run_benchmark.py --models kakaobrain/align-base --batch_sizes 1 --sequence_lengths 384`
When I execute the command, I get the following attribute error:
```
FutureWarning: The class <class 'transformers.benchmark.benchmark.PyTorchBenchmark'> is deprecated. Hugging Face Benchmarking utils are deprecated in general and it is advised to use external Benchmarking libraries to benchmark Transformer models.
  warnings.warn(
1 / 1
'AlignConfig' object has no attribute 'encoder'
'AlignConfig' object has no attribute 'encoder'
Traceback (most recent call last):
  File "/home/priya/priya/transformers/examples/pytorch/benchmarking/run_benchmark.py", line 50, in <module>
    main()
  File "/home/priya/priya/transformers/examples/pytorch/benchmarking/run_benchmark.py", line 46, in main
    benchmark.run()
  File "/home/priya/miniconda3/envs/pyo/lib/python3.10/site-packages/transformers/benchmark/benchmark_utils.py", line 710, in run
    memory, inference_summary = self.inference_memory(model_name, batch_size, sequence_length)
ValueError: too many values to unpack (expected 2)
```
I have encountered the same issue while trying to install the transformers package using pip as well as from the source code.
And the same issue persists in many transformer models. Here are a few of them:
1. kakaobrain/align-base
2. BAAI/AltCLIP
3. SenseTime/deformable-detr
4. sail/poolformer_s12
etc.,
Am I missing something?
Any assistance on this issue is greatly appreciated, thankyou! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27371/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27371/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27370 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27370/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27370/comments | https://api.github.com/repos/huggingface/transformers/issues/27370/events | https://github.com/huggingface/transformers/issues/27370 | 1,983,745,146 | I_kwDOCUB6oc52PYx6 | 27,370 | Missing generation token for zephyr-beta | {
"login": "dt-ahmed-touila",
"id": 28984325,
"node_id": "MDQ6VXNlcjI4OTg0MzI1",
"avatar_url": "https://avatars.githubusercontent.com/u/28984325?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dt-ahmed-touila",
"html_url": "https://github.com/dt-ahmed-touila",
"followers_url": "https://api.github.com/users/dt-ahmed-touila/followers",
"following_url": "https://api.github.com/users/dt-ahmed-touila/following{/other_user}",
"gists_url": "https://api.github.com/users/dt-ahmed-touila/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dt-ahmed-touila/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dt-ahmed-touila/subscriptions",
"organizations_url": "https://api.github.com/users/dt-ahmed-touila/orgs",
"repos_url": "https://api.github.com/users/dt-ahmed-touila/repos",
"events_url": "https://api.github.com/users/dt-ahmed-touila/events{/privacy}",
"received_events_url": "https://api.github.com/users/dt-ahmed-touila/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @Rocketknight1 ",
"cc @lewtun to this one, I think he wrote the chat template for Zephyr-beta!",
"Wait, no, my mistake! The issue is just that your version of `transformers` is slightly out of date @dt-ahmed-touila, and `add_generation_prompt` was only added in 4.35. Please `pip install --upgrade transformers` to resolve the issue! ",
"Thank you @Rocketknight1 ! Updating to `transformers-4.35.0` solved this. "
] | 1,699 | 1,699 | 1,699 | NONE | null | ### System Info
transformers version: 4.34.0
tokenizers version: 0.14.1
Platform: macOS 13.5.1
Python version: 3.8.10
huggingface-hub version: 0.17.3
PyTorch version(GPU?): 2.0.1 (False)
Tensorflow version (GPU?): not installed (NA)
Flax version (CPU?/GPU?/TPU?): not installed (NA)
Jax version: not installed
JaxLib version: not installed
Using GPU in script?: N
Using distributed or parallel set-up in script?: N
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceH4/zephyr-7b-beta")
tokenizer.apply_chat_template([{"role": "user", "content": "What is life?"}], tokenize=False, add_generation_prompt=True)
```
Output: `<|user|>\nWhat is life?</s>\n`
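For reference, the resolution in the comments above was simply a version bump: `add_generation_prompt` only exists from v4.35 on, and on older versions it appears to be silently ignored (as in the output above). A quick check, sketched here:

```python
from packaging import version
import transformers

# add_generation_prompt was only added to apply_chat_template in v4.35; on 4.34 the
# keyword has no effect, which is why the trailing <|assistant|> turn is missing above.
assert version.parse(transformers.__version__) >= version.parse("4.35.0")
```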
### Expected behavior
`<|user|>\nWhat is life?</s>\n<|assistant|>` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27370/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27370/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27369 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27369/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27369/comments | https://api.github.com/repos/huggingface/transformers/issues/27369/events | https://github.com/huggingface/transformers/pull/27369 | 1,983,707,931 | PR_kwDOCUB6oc5e7tnO | 27,369 | Solve dimension mismatch error between input_embeds and embed_pos | {
"login": "MorenoLaQuatra",
"id": 10062811,
"node_id": "MDQ6VXNlcjEwMDYyODEx",
"avatar_url": "https://avatars.githubusercontent.com/u/10062811?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MorenoLaQuatra",
"html_url": "https://github.com/MorenoLaQuatra",
"followers_url": "https://api.github.com/users/MorenoLaQuatra/followers",
"following_url": "https://api.github.com/users/MorenoLaQuatra/following{/other_user}",
"gists_url": "https://api.github.com/users/MorenoLaQuatra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MorenoLaQuatra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MorenoLaQuatra/subscriptions",
"organizations_url": "https://api.github.com/users/MorenoLaQuatra/orgs",
"repos_url": "https://api.github.com/users/MorenoLaQuatra/repos",
"events_url": "https://api.github.com/users/MorenoLaQuatra/events{/privacy}",
"received_events_url": "https://api.github.com/users/MorenoLaQuatra/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"It's literally 1-line change to avoid padding to 30s each time, can it be merged? @sanchit-gandhi",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,699 | 1,703 | 1,703 | CONTRIBUTOR | null | When generating with WhisperForConditionalGeneration and specifying max_length, padding and truncation, a mismatch can occur between the input length after preprocessing and the size of the positional embedding tensor.
This minimal change ensures the positional embeddings fit the input dimensions, avoiding the size mismatch.
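For context, the change from the linked issue (#27368) amounts to slicing the positional-embedding table to the actual input length inside `WhisperEncoder.forward`; a toy, self-contained illustration of the shape fix:

```python
import torch

# Toy illustration of the fix: slice the positional-embedding table (1500 x d_model)
# to the actual number of encoder frames instead of adding the whole table.
d_model, max_positions, num_frames = 4, 1500, 250
embed_positions = torch.nn.Embedding(max_positions, d_model)
inputs_embeds = torch.randn(1, num_frames, d_model)

embed_pos = embed_positions.weight[: inputs_embeds.size(1), :]  # (250, d_model)
hidden_states = inputs_embeds + embed_pos                       # shapes now match
```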
# What does this PR do?
Fix positional embeddings mismatch with inputs not having the default max length (Whisper)
Fixes #27368
## Who can review?
@sanchit-gandhi
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27369/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27369/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27369",
"html_url": "https://github.com/huggingface/transformers/pull/27369",
"diff_url": "https://github.com/huggingface/transformers/pull/27369.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27369.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27368 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27368/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27368/comments | https://api.github.com/repos/huggingface/transformers/issues/27368/events | https://github.com/huggingface/transformers/issues/27368 | 1,983,665,546 | I_kwDOCUB6oc52PFWK | 27,368 | `input_embeds` - `embed_pos` dimension mismatch when using `max_length` in Whisper (v3) model | {
"login": "MorenoLaQuatra",
"id": 10062811,
"node_id": "MDQ6VXNlcjEwMDYyODEx",
"avatar_url": "https://avatars.githubusercontent.com/u/10062811?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MorenoLaQuatra",
"html_url": "https://github.com/MorenoLaQuatra",
"followers_url": "https://api.github.com/users/MorenoLaQuatra/followers",
"following_url": "https://api.github.com/users/MorenoLaQuatra/following{/other_user}",
"gists_url": "https://api.github.com/users/MorenoLaQuatra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MorenoLaQuatra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MorenoLaQuatra/subscriptions",
"organizations_url": "https://api.github.com/users/MorenoLaQuatra/orgs",
"repos_url": "https://api.github.com/users/MorenoLaQuatra/repos",
"events_url": "https://api.github.com/users/MorenoLaQuatra/events{/privacy}",
"received_events_url": "https://api.github.com/users/MorenoLaQuatra/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, sorry for the late reply, this is a duplicate of #25744! 🤗 ",
"The Whisper model is trained on a context of fixed size (30 seconds = 1500 spectrogram frames = default max spectrogram length). It will give you gibberish if you go beyond this, since the embeddings will be randomly initialised. If you go below this, you get sub-optimal outputs, since the model deviates from how it was trained. \r\n\r\nThus, we follow the original OpenAI codebase, and always fix the max spectrogram length to 1500 frames, and expect this in the embeddings.",
"Got it, thanks for the explaination, I was not aware of the problem. Indeed I feel a little bit \"strange\" the need to pad to 30s but as @sanchit-gandhi said, it may lead to worse results. \r\n\r\nI'm going to close the issue, however, do you have any evidence of the degraded performance for <30s unpadded audio?",
"If someone is actually asking, those are some results on a subset of 1000 samples taken from CV 13 test set in English. This is the `openai/whisper-large-v3` model. without any special text normalization (just to have an idea). \r\n\r\n*No explicit set of language=\"en\" in the generate function*:\r\n- Max Length: 10 seconds, WER: 31.67%\r\n- Max Length: 20 seconds, WER: 29.48%\r\n- Max Length: 30 seconds, WER: 24.94%\r\n\r\n*Explicitly setting language='en' in the generate function*:\r\n- Max Length: 10 seconds, WER: 18.54%\r\n- Max Length: 20 seconds, WER: 17.69%\r\n- Max Length: 30 seconds, WER: 17.47%",
"That's super interesting that the language detection of `large-v3` is so off! I wonder whether general inference pipelines would benefit from having a more accurate language detection as a pre-processing stage, then passing the detected language and audio to Whisper for transcription? E.g. we could use https://huggingface.co/facebook/mms-lid-126\r\n\r\nInteresting to see a near 1% WER degradation for a smaller context window -> this does suggest that we should keep it pinned to 30s to prevent 'silent errors' creeping in if users aren't aware the model has a fixed context length.\r\n\r\nThanks so much for the results @MorenoLaQuatra! A very interesting discussion :)",
"Thank you for stimulating the discussion. Language induction is indeed something I'm investigating in my research 🚀"
] | 1,699 | 1,701 | 1,701 | CONTRIBUTOR | null | ### System Info
- `transformers` version: 4.35.0
- Platform: Linux-6.2.0-36-generic-x86_64-with-glibc2.35
- Python version: 3.10.9
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.4.0
- Accelerate version: 0.24.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.0+cu118 (True)
- Tensorflow version (GPU?): 2.13.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@sanchit-gandhi
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Basically, in the Whisper model, I think the positional embeddings do not take into account the maximum length set externally.
Simply execute the following script:
```python
import torch
from transformers import WhisperProcessor, WhisperForConditionalGeneration
from datasets import load_dataset
model_tag = "openai/whisper-large-v3"
device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32
# load model and processor
processor = WhisperProcessor.from_pretrained("openai/whisper-large-v3")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v3", torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True)
model.config.forced_decoder_ids = None
model = model.to(device)
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
sample = ds[0]["audio"]
input_features = processor(sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt", return_attention_mask=True, padding="max_length", truncation=True, max_length=5 * sample["sampling_rate"])
item = {
"input_features": input_features.input_features.squeeze(0).to(torch_dtype),
"attention_mask": input_features.attention_mask.squeeze(0).to(torch_dtype),
}
# move to device
item = {k: v.to(device) for k, v in item.items()}
# generate token ids
predicted_ids = model.generate(
input_features=item["input_features"].unsqueeze(0),
attention_mask=item["attention_mask"].unsqueeze(0),
)
# decode token ids to text
transcription = processor.batch_decode(predicted_ids, skip_special_tokens=False)
print(transcription)
```
It returns the following error:
`RuntimeError: The size of tensor a (250) must match the size of tensor b (1500) at non-singleton dimension 1`
How to solve:
Replace [this line](https://github.com/huggingface/transformers/blob/eb30a49b2028f2411514c10e432792ad581fc08b/src/transformers/models/whisper/modeling_whisper.py#L1123C22-L1123C22) with this:
```python
embed_pos = self.embed_positions.weight[:inputs_embeds.size(1), :]
```
I don't know if it is general enough, though.
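For reference, the behaviour that matches how the model was trained (see the discussion in the comments) is to keep the default 30-second padding instead of truncating to 5 seconds. A minimal sketch of that call, reusing the variables from the script above:
```python
# Let the feature extractor pad/truncate to the fixed 30 s context (3000 frames) Whisper was trained on.
input_features = processor(
    sample["array"],
    sampling_rate=sample["sampling_rate"],
    return_tensors="pt",
    return_attention_mask=True,
)
```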
### Expected behavior
Should do the model inference and print the transcription. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27368/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27368/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27367 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27367/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27367/comments | https://api.github.com/repos/huggingface/transformers/issues/27367/events | https://github.com/huggingface/transformers/pull/27367 | 1,983,665,458 | PR_kwDOCUB6oc5e7kTr | 27,367 | test | {
"login": "ready-research",
"id": 72916209,
"node_id": "MDQ6VXNlcjcyOTE2MjA5",
"avatar_url": "https://avatars.githubusercontent.com/u/72916209?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ready-research",
"html_url": "https://github.com/ready-research",
"followers_url": "https://api.github.com/users/ready-research/followers",
"following_url": "https://api.github.com/users/ready-research/following{/other_user}",
"gists_url": "https://api.github.com/users/ready-research/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ready-research/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ready-research/subscriptions",
"organizations_url": "https://api.github.com/users/ready-research/orgs",
"repos_url": "https://api.github.com/users/ready-research/repos",
"events_url": "https://api.github.com/users/ready-research/events{/privacy}",
"received_events_url": "https://api.github.com/users/ready-research/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"my mistake"
] | 1,699 | 1,699 | 1,699 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27367/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27367/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27367",
"html_url": "https://github.com/huggingface/transformers/pull/27367",
"diff_url": "https://github.com/huggingface/transformers/pull/27367.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27367.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27366 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27366/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27366/comments | https://api.github.com/repos/huggingface/transformers/issues/27366/events | https://github.com/huggingface/transformers/pull/27366 | 1,983,641,131 | PR_kwDOCUB6oc5e7e8X | 27,366 | Put doctest options back to `pyproject.toml` | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Need to check run results: https://github.com/huggingface/transformers/actions/runs/6799119065",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,699 | 1,699 | 1,699 | COLLABORATOR | null | # What does this PR do?
Put doctest options back to `pyproject.toml`.
Fix #27345
Checked with a run https://github.com/huggingface/transformers/actions/runs/6800607292. The list of failing doctests is the same as before, so we are good with this change. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27366/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27366/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27366",
"html_url": "https://github.com/huggingface/transformers/pull/27366",
"diff_url": "https://github.com/huggingface/transformers/pull/27366.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27366.patch",
"merged_at": 1699527020000
} |
https://api.github.com/repos/huggingface/transformers/issues/27365 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27365/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27365/comments | https://api.github.com/repos/huggingface/transformers/issues/27365/events | https://github.com/huggingface/transformers/issues/27365 | 1,983,612,421 | I_kwDOCUB6oc52O4YF | 27,365 | dev-ai | {
"login": "devgupta6",
"id": 60378190,
"node_id": "MDQ6VXNlcjYwMzc4MTkw",
"avatar_url": "https://avatars.githubusercontent.com/u/60378190?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/devgupta6",
"html_url": "https://github.com/devgupta6",
"followers_url": "https://api.github.com/users/devgupta6/followers",
"following_url": "https://api.github.com/users/devgupta6/following{/other_user}",
"gists_url": "https://api.github.com/users/devgupta6/gists{/gist_id}",
"starred_url": "https://api.github.com/users/devgupta6/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/devgupta6/subscriptions",
"organizations_url": "https://api.github.com/users/devgupta6/orgs",
"repos_url": "https://api.github.com/users/devgupta6/repos",
"events_url": "https://api.github.com/users/devgupta6/events{/privacy}",
"received_events_url": "https://api.github.com/users/devgupta6/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | open | false | null | [] | [
"Hi @devgupta6, thanks for opening an issue! \r\n\r\nThe easiest and recommended way to make a model available in `transformers` is to add the modeling code directly on the hub: https://huggingface.co/docs/transformers/custom_models. Here is a more general guide on adding models: https://huggingface.co/docs/transformers/add_new_model",
"Hi @amyeroberts \r\nI had registered my LLM model using automodel like this - AutoModelForCausalLM.register(CustomAIConfig, CustomAI) \r\nbut it is showing the error - ---------------------------------------------------------------------------\r\nImportError Traceback (most recent call last)\r\n<ipython-input-17-298cb7a13739> in <cell line: 2>()\r\n 1 # Load model directly\r\n----> 2 from transformers import CustomAI\r\n 3 model = CustomAI.from_pretrained(\"RANITBAG/gan\")\r\n\r\nImportError: cannot import name 'CustomAI' from 'transformers' (/usr/local/lib/python3.10/dist-packages/transformers/__init__.py)\r\n\r\n---------------------------------------------------------------------------\r\nNOTE: If your import is failing due to a missing package, you can\r\nmanually install dependencies using either !pip or !apt.\r\n\r\nTo view examples of installing some common dependencies, click the\r\n\"Open Examples\" button below.\r\n--------------------------------------------------------------------------- .\r\n\r\nI have tried all sorts of things but it would be of great help if you could assist me with this. \r\n\r\n\r\n",
"Could you link to your model on the hub? Without seeing the code I'm not able to see what the issue might be. ",
"Yes, sure. https://github.com/devgupta6/dev-ai this is the model link\r\n",
"@devgupta6 Please read the documentation pages I sent over as these contain all the information you should need. The model at the moment is just a torch implementation and doesn't have any of the necessary adaptations for the transformers library. For example, your model needs to inherit from `PretrainedModel`. We can help with bugs and issues but we can't write your code for you. "
] | 1,699 | 1,699 | null | NONE | null | ### Model description
It is a generative artificial intelligence model. I have the architecture ready, but I am facing problems integrating it with the `transformers` library. It would be great if you could provide me assistance with this.
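To make the question concrete, below is a minimal sketch of what wiring a custom causal LM into the Auto classes usually looks like; every name here is a hypothetical placeholder rather than the actual dev-ai code, and the model/config bodies are deliberately toy-sized.
```python
import torch.nn as nn
from transformers import AutoConfig, AutoModelForCausalLM, PretrainedConfig, PreTrainedModel


class CustomAIConfig(PretrainedConfig):
    model_type = "custom-ai"  # must be a unique string, stored in config.json

    def __init__(self, hidden_size=256, vocab_size=32000, **kwargs):
        self.hidden_size = hidden_size
        self.vocab_size = vocab_size
        super().__init__(**kwargs)


class CustomAI(PreTrainedModel):
    config_class = CustomAIConfig  # ties the model class to its config class

    def __init__(self, config):
        super().__init__(config)
        self.lm_head = nn.Linear(config.hidden_size, config.vocab_size)

    def forward(self, hidden_states):
        return self.lm_head(hidden_states)


# Register locally so the Auto classes can resolve the custom model type.
AutoConfig.register("custom-ai", CustomAIConfig)
AutoModelForCausalLM.register(CustomAIConfig, CustomAI)
```
To load such a model from the Hub on another machine, the custom code also has to be importable there or pushed alongside the weights (e.g. via `register_for_auto_class` / `trust_remote_code`), as described in the custom-models guide linked in the comments.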
### Open source status
- [X] The model implementation is available
- [ ] The model weights are available
### Provide useful links for the implementation
_No response_ | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27365/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27365/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27364 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27364/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27364/comments | https://api.github.com/repos/huggingface/transformers/issues/27364/events | https://github.com/huggingface/transformers/pull/27364 | 1,983,589,963 | PR_kwDOCUB6oc5e7TqH | 27,364 | Add Flash Attention 2 support to Bark | {
"login": "ylacombe",
"id": 52246514,
"node_id": "MDQ6VXNlcjUyMjQ2NTE0",
"avatar_url": "https://avatars.githubusercontent.com/u/52246514?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ylacombe",
"html_url": "https://github.com/ylacombe",
"followers_url": "https://api.github.com/users/ylacombe/followers",
"following_url": "https://api.github.com/users/ylacombe/following{/other_user}",
"gists_url": "https://api.github.com/users/ylacombe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ylacombe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ylacombe/subscriptions",
"organizations_url": "https://api.github.com/users/ylacombe/orgs",
"repos_url": "https://api.github.com/users/ylacombe/repos",
"events_url": "https://api.github.com/users/ylacombe/events{/privacy}",
"received_events_url": "https://api.github.com/users/ylacombe/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for the quick review, I've addressed your comments :hugs: ",
"Merging ! thanks for the quick reviews!"
] | 1,699 | 1,699 | 1,699 | COLLABORATOR | null | # What does this PR do?
Following a recent series of PRs and issues to improve Bark, this PR adds Flash Attention 2 (FA2) support to Bark. Bark's self-attention class supports both causal and non-causal attention, but otherwise the changes are minimal.
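As a usage sketch (the flag name below matches the `transformers` releases around this PR; newer versions expose the same thing via `attn_implementation="flash_attention_2"` instead):
```python
import torch
from transformers import BarkModel

# FA2 requires half precision on GPU; suno/bark-small is used here just as a small example checkpoint.
model = BarkModel.from_pretrained(
    "suno/bark-small",
    torch_dtype=torch.float16,
    use_flash_attention_2=True,
).to("cuda")
```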
I've also taken the opportunity to switch to `_prepare_4d_attention_mask` instead of manually creating the 4d attention mask.
Benchmarks are currently running to measure the speed/memory gains!
cc @sanchit-gandhi and @amyeroberts | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27364/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27364/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27364",
"html_url": "https://github.com/huggingface/transformers/pull/27364",
"diff_url": "https://github.com/huggingface/transformers/pull/27364.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27364.patch",
"merged_at": 1699463195000
} |
https://api.github.com/repos/huggingface/transformers/issues/27363 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27363/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27363/comments | https://api.github.com/repos/huggingface/transformers/issues/27363/events | https://github.com/huggingface/transformers/issues/27363 | 1,983,556,766 | I_kwDOCUB6oc52Oqye | 27,363 | Whisper Large v3 Flax | {
"login": "peregilk",
"id": 9079808,
"node_id": "MDQ6VXNlcjkwNzk4MDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/9079808?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/peregilk",
"html_url": "https://github.com/peregilk",
"followers_url": "https://api.github.com/users/peregilk/followers",
"following_url": "https://api.github.com/users/peregilk/following{/other_user}",
"gists_url": "https://api.github.com/users/peregilk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/peregilk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/peregilk/subscriptions",
"organizations_url": "https://api.github.com/users/peregilk/orgs",
"repos_url": "https://api.github.com/users/peregilk/repos",
"events_url": "https://api.github.com/users/peregilk/events{/privacy}",
"received_events_url": "https://api.github.com/users/peregilk/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,699 | 1,699 | 1,699 | CONTRIBUTOR | null | ### System Info
Latest, installed from source.
### Who can help?
@sanchit-gandhi
`num_mel_bins` is hardcoded in the [FlaxWhisperPreTrainedModel](https://github.com/huggingface/transformers/blob/b6dbfee0a21d333447b47887dbe2cb87720ebfd0/src/transformers/models/whisper/modeling_flax_whisper.py#L870) class.
Since `num_mel_bins` has changed from 80 to 128 in large-v3, this should probably be read either from `num_mel_bins` in `config.json` or from `feature_size` in `preprocessor_config.json`.
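A rough sketch of what the fix could look like, using attribute names from `WhisperConfig` (where exactly this lands in `modeling_flax_whisper.py` is left open):
```python
# Derive the dummy input shape from the config instead of hardcoding 80 mel bins.
def _input_shape_from_config(config):
    # num_mel_bins is 128 for large-v3 and 80 for earlier checkpoints;
    # the encoder expects 2 * max_source_positions (= 3000) spectrogram frames.
    return (1, config.num_mel_bins, 2 * config.max_source_positions)
```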
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
This is one of the issues that blocks using this class for training based on Large v3 in Flax.
### Expected behavior
The input feature size should be taken from the config file.
"url": "https://api.github.com/repos/huggingface/transformers/issues/27363/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27363/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27362 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27362/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27362/comments | https://api.github.com/repos/huggingface/transformers/issues/27362/events | https://github.com/huggingface/transformers/issues/27362 | 1,983,515,090 | I_kwDOCUB6oc52OgnS | 27,362 | Mismatched dimensions error when using beam_size != 1 and return_timestamps="word" for AutomaticSpeechRecognitionPipeline with Whisper | {
"login": "nestormh",
"id": 4995707,
"node_id": "MDQ6VXNlcjQ5OTU3MDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4995707?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nestormh",
"html_url": "https://github.com/nestormh",
"followers_url": "https://api.github.com/users/nestormh/followers",
"following_url": "https://api.github.com/users/nestormh/following{/other_user}",
"gists_url": "https://api.github.com/users/nestormh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nestormh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nestormh/subscriptions",
"organizations_url": "https://api.github.com/users/nestormh/orgs",
"repos_url": "https://api.github.com/users/nestormh/repos",
"events_url": "https://api.github.com/users/nestormh/events{/privacy}",
"received_events_url": "https://api.github.com/users/nestormh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @sanchit-gandhi and @ylacombe ",
"Taking a look, thanks for taking the time to describe the issue! The code is not working though!",
"I believe this PR is taking care of this: #26699\r\n\r\n(#28007 refers to the same issue)",
"Thanks!"
] | 1,699 | 1,703 | 1,703 | NONE | null | ### System Info
- `transformers` version: 4.33.3
- Platform: Linux-5.15.0-88-generic-x86_64-with-glibc2.31
- Python version: 3.9.18
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.3.3
- Accelerate version: 0.23.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@sanchit-gandhi
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I have the following code:
```
model_name_or_path = "openai/whisper-small"
LANGUAGE = "French"
TASK = "transcribe"
CHUNK_LENGTH_S = 30 # 30 seconds chunks
SAMPLE_RATE = 16_000
model = WhisperForConditionalGeneration.from_pretrained(
pretrained_model_name_or_path=model_name_or_path,
load_in_8bit=True,
device_map="cuda:0",
)
feature_extractor = WhisperFeatureExtractor.from_pretrained(
model_name_or_path,
)
tokenizer = WhisperTokenizer.from_pretrained(
model_name_or_path,
language=LANGUAGE,
task=TASK,
)
processor = WhisperProcessor.from_pretrained(
model_name_or_path,
language=LANGUAGE,
task=TASK,
)
pipeline_generator = AutomaticSpeechRecognitionPipeline(
model=model,
tokenizer=tokenizer,
feature_extractor=feature_extractor,
chunk_length_s=CHUNK_LENGTH_S,
stride_length_s=5
)
pipeline_generator.model.config.forced_decoder_ids = (
pipeline_generator.tokenizer.get_decoder_prompt_ids(
language=LANGUAGE, task=TASK
)
)
forced_decoder_ids = processor.get_decoder_prompt_ids(
language=LANGUAGE, task=TASK
)
model.config.use_cache = True
model.eval()
# `inputs` is the audio to transcribe (array, file path, or dict), loaded elsewhere
transcript = pipeline_generator(
inputs=inputs,
return_timestamps="word",
generate_kwargs={
"num_beams": 5,
},
)
```
After executing the code, I receive the following error:
```
File "/mnt/disk2/nestor/batvoice/dev/bv-etl-transcribe/env/lib/python3.9/site-packages/transformers/pipelines/automatic_speech_recognition.py", line 356, in __call__
return super().__call__(inputs, **kwargs)
File "/mnt/disk2/nestor/batvoice/dev/bv-etl-transcribe/env/lib/python3.9/site-packages/transformers/pipelines/base.py", line 1132, in __call__
return next(
File "/mnt/disk2/nestor/batvoice/dev/bv-etl-transcribe/env/lib/python3.9/site-packages/transformers/pipelines/pt_utils.py", line 124, in __next__
item = next(self.iterator)
File "/mnt/disk2/nestor/batvoice/dev/bv-etl-transcribe/env/lib/python3.9/site-packages/transformers/pipelines/pt_utils.py", line 266, in __next__
processed = self.infer(next(self.iterator), **self.params)
File "/mnt/disk2/nestor/batvoice/dev/bv-etl-transcribe/env/lib/python3.9/site-packages/transformers/pipelines/base.py", line 1046, in forward
model_outputs = self._forward(model_inputs, **forward_params)
File "/mnt/disk2/nestor/batvoice/dev/bv-etl-transcribe/env/lib/python3.9/site-packages/transformers/pipelines/automatic_speech_recognition.py", line 551, in _forward
tokens = self.model.generate(
File "/mnt/disk2/nestor/batvoice/dev/bv-etl-transcribe/env/lib/python3.9/site-packages/transformers/models/whisper/modeling_whisper.py", line 1768, in generate
outputs["token_timestamps"] = self._extract_token_timestamps(outputs, generation_config.alignment_heads)
File "/mnt/disk2/nestor/batvoice/dev/bv-etl-transcribe/env/lib/python3.9/site-packages/transformers/models/whisper/modeling_whisper.py", line 1834, in _extract_token_timestamps
timestamps[batch_idx, 1:] = torch.tensor(jump_times)
RuntimeError: The expanded size of the tensor (105) must match the existing size (135) at non-singleton dimension 0. Target sizes: [105]. Tensor sizes: [135]
```
### Expected behavior
When I change the parameter num_beams to 1, it works perfectly.
Is there any incompatibility between these two parameters? I've looked in the documentation and I haven't been able to find any mention of one.
The problem seems to be in this function (transformers/models/whisper/modeling_whisper.py):
```
def _extract_token_timestamps(self, generate_outputs, alignment_heads, time_precision=0.02):
"""
Calculates token-level timestamps using the encoder-decoder cross-attentions and dynamic time-warping (DTW) to
map each output token to a position in the input audio.
Returns:
tensor containing the timestamps in seconds for each predicted token
"""
# Create a list with `decoder_layers` elements, each a tensor of shape
# (batch size, attention_heads, output length, input length).
cross_attentions = []
for i in range(self.config.decoder_layers):
cross_attentions.append(torch.cat([x[i] for x in generate_outputs.cross_attentions], dim=2))
# Select specific cross-attention layers and heads. This is a tensor
# of shape (batch size, num selected, output length, input length).
weights = torch.stack([cross_attentions[l][:, h] for l, h in alignment_heads])
weights = weights.permute([1, 0, 2, 3])
# Normalize and smoothen the weights.
std, mean = torch.std_mean(weights, dim=-2, keepdim=True, unbiased=False)
weights = (weights - mean) / std
weights = _median_filter(weights, self.config.median_filter_width)
# Average the different cross-attention heads.
matrix = weights.mean(dim=1)
timestamps = torch.zeros_like(generate_outputs.sequences, dtype=torch.float32)
# Perform dynamic time warping on each element of the batch.
for batch_idx in range(timestamps.shape[0]):
text_indices, time_indices = _dynamic_time_warping(-matrix[batch_idx].double().cpu().numpy())
jumps = np.pad(np.diff(text_indices), (1, 0), constant_values=1).astype(bool)
jump_times = time_indices[jumps] * time_precision
timestamps[batch_idx, 1:] = torch.tensor(jump_times)
return timestamps
```
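Until this is fixed, the workaround consistent with the observation above is to keep greedy decoding whenever word-level timestamps are requested, e.g.:
```python
# Workaround sketch: request word timestamps only with num_beams=1 (greedy search),
# and run beam search separately if the transcript itself needs it.
transcript = pipeline_generator(
    inputs=inputs,
    return_timestamps="word",
    generate_kwargs={"num_beams": 1},
)
```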
Thank you in advance. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27362/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27362/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27361 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27361/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27361/comments | https://api.github.com/repos/huggingface/transformers/issues/27361/events | https://github.com/huggingface/transformers/issues/27361 | 1,983,425,371 | I_kwDOCUB6oc52OKtb | 27,361 | Add how to preprocess mask for finetuning with SAM | {
"login": "rwood-97",
"id": 72076688,
"node_id": "MDQ6VXNlcjcyMDc2Njg4",
"avatar_url": "https://avatars.githubusercontent.com/u/72076688?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rwood-97",
"html_url": "https://github.com/rwood-97",
"followers_url": "https://api.github.com/users/rwood-97/followers",
"following_url": "https://api.github.com/users/rwood-97/following{/other_user}",
"gists_url": "https://api.github.com/users/rwood-97/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rwood-97/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rwood-97/subscriptions",
"organizations_url": "https://api.github.com/users/rwood-97/orgs",
"repos_url": "https://api.github.com/users/rwood-97/repos",
"events_url": "https://api.github.com/users/rwood-97/events{/privacy}",
"received_events_url": "https://api.github.com/users/rwood-97/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
},
{
"id": 5769473378,
"node_id": "LA_kwDOCUB6oc8AAAABV-MtYg",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Vision",
"name": "Vision",
"color": "C079EF",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Hi @rwood-97, thanks for raising this issue! \r\n\r\nAgreed - being able to pass in the masks to the image processor would be ideal! Feel free to ping me on a PR for review if you'd like to open one :) "
] | 1,699 | 1,704 | 1,704 | CONTRIBUTOR | null | ### Feature request
The [SAM image processor](https://github.com/huggingface/transformers/blob/main/src/transformers/models/sam/image_processing_sam.py) takes images as input and resizes them so that the longest edge is 1024 (using default values). This is the size expected as input for the SAM model.
For inference, this works fine as only the images need resizing, but for fine-tuning as per [this tutorial](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/SAM/Fine_tune_SAM_(segment_anything)_on_a_custom_dataset.ipynb), you need to resize both your images and your masks, as the SAM model produces `pred_masks` of size 256x256. If I don't resize my masks I get `ground truth has different shape (torch.Size([2, 1, 768, 1024])) from input (torch.Size([2, 1, 256, 256]))` when trying to calculate the loss.
To fix this, I've currently written a resize and pad function into my code:
```
import numpy as np
from PIL import Image

def resize_mask(image):
longest_edge = 256
# get new size
w, h = image.size
scale = longest_edge * 1.0 / max(h, w)
new_h, new_w = h * scale, w * scale
new_h = int(new_h + 0.5)
new_w = int(new_w + 0.5)
resized_image = image.resize((new_w, new_h), resample=Image.Resampling.BILINEAR)
return resized_image
def pad_mask(image):
pad_height = 256 - image.height
pad_width = 256 - image.width
padding = ((0, pad_height), (0, pad_width))
padded_image = np.pad(image, padding, mode="constant")
return padded_image
def process_mask(image):
resized_mask = resize_mask(image)
padded_mask = pad_mask(resized_mask)
return padded_mask
```
and then have added this to my definition of SAMDataset:
```
from torch.utils.data import Dataset

class SAMDataset(Dataset):
def __init__(self, dataset, processor, transform = None):
self.dataset = dataset
self.processor = processor
self.transform = transform
def __len__(self):
return len(self.dataset)
def __getitem__(self, idx):
item = self.dataset[idx]
if self.transform:
image = self.transform(item["pixel_values"])
else:
image = item["pixel_values"]
# get bounding box prompt
padded_mask = process_mask(item["label"])
prompt = get_bounding_box(padded_mask)
# prepare image and prompt for the model
inputs = self.processor(image, input_boxes=[[prompt]], return_tensors="pt")
# remove batch dimension which the processor adds by default
inputs = {k:v.squeeze(0) for k,v in inputs.items()}
# add ground truth segmentation
inputs["ground_truth_mask"] = padded_mask
return inputs
```
This seems to work fine.
What I think would be good is to allow input of masks in the SAM image processor. For example, the [Segformer image processor](https://github.com/huggingface/transformers/blob/v4.35.0/src/transformers/models/segformer/image_processing_segformer.py#L305) takes images and masks as inputs and resizes both to the size expected by the Segformer model.
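A rough sketch of how that could look from the user side, mirroring the Segformer processor; note that the `segmentation_maps` argument below does not exist in the SAM processor today, it is exactly what is being requested:
```python
# Hypothetical call signature for the feature request: the processor would resize/pad
# the mask to 256x256 so it lines up with the model's pred_masks.
inputs = processor(
    images=image,
    segmentation_maps=mask,
    input_boxes=[[prompt]],
    return_tensors="pt",
)
```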
I have also seen there is a `post_process_masks` method in the SAM image processor, but I am unsure how to use it in the tutorial I'm following. If you think this is a better approach than what I am suggesting, could you please explain where I would add it in the code from the tutorial notebook?
### Motivation
Easier fine tuning of SAM model.
### Your contribution
I could try to write a PR for this and/or make a PR to update the [notebook](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/SAM/Fine_tune_SAM_(segment_anything)_on_a_custom_dataset.ipynb) instead. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27361/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27361/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27360 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27360/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27360/comments | https://api.github.com/repos/huggingface/transformers/issues/27360/events | https://github.com/huggingface/transformers/pull/27360 | 1,983,416,711 | PR_kwDOCUB6oc5e6sy5 | 27,360 | [Flax Whisper] large-v3 compatibility | {
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,699 | 1,699 | 1,699 | CONTRIBUTOR | null | # What does this PR do?
Fixes #27363. Whisper large-v3 uses 128 log-mel bins, instead of 80 for previous Whisper checkpoints. This changes the shape of the input tensors to the model. For the JAX trace to work when initialising the weights, we need to set the input shape correctly based on the model dims. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27360/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27360/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27360",
"html_url": "https://github.com/huggingface/transformers/pull/27360",
"diff_url": "https://github.com/huggingface/transformers/pull/27360.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27360.patch",
"merged_at": 1699456298000
} |
https://api.github.com/repos/huggingface/transformers/issues/27359 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27359/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27359/comments | https://api.github.com/repos/huggingface/transformers/issues/27359/events | https://github.com/huggingface/transformers/pull/27359 | 1,982,952,332 | PR_kwDOCUB6oc5e5Gcs | 27,359 | [`CodeLlamaTokenizer`] Nit, update __init__ to make sure the AddedTokens are not normalized because they are special | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,699 | 1,699 | 1,699 | COLLABORATOR | null | # What does this PR do?
Bridges the gap between the slow and fast versions, following the updates in #26570 (similar updates were done to Llama).
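A small illustrative sketch of the kind of change described; the token string and kwarg below are my guess at one of the affected fill-in-the-middle markers, not the exact diff:
```python
from transformers import AddedToken, CodeLlamaTokenizer

# The special marker is constructed with normalized=False so the slow tokenizer
# treats it exactly like the fast one does.
tokenizer = CodeLlamaTokenizer.from_pretrained(
    "codellama/CodeLlama-7b-hf",
    prefix_token=AddedToken("▁<PRE>", normalized=False),
)
```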
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27359/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27359/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27359",
"html_url": "https://github.com/huggingface/transformers/pull/27359",
"diff_url": "https://github.com/huggingface/transformers/pull/27359.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27359.patch",
"merged_at": 1699521311000
} |
https://api.github.com/repos/huggingface/transformers/issues/27358 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27358/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27358/comments | https://api.github.com/repos/huggingface/transformers/issues/27358/events | https://github.com/huggingface/transformers/pull/27358 | 1,982,911,577 | PR_kwDOCUB6oc5e49mD | 27,358 | Smangrul/fix failing ds ci tests | {
"login": "pacman100",
"id": 13534540,
"node_id": "MDQ6VXNlcjEzNTM0NTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pacman100",
"html_url": "https://github.com/pacman100",
"followers_url": "https://api.github.com/users/pacman100/followers",
"following_url": "https://api.github.com/users/pacman100/following{/other_user}",
"gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pacman100/subscriptions",
"organizations_url": "https://api.github.com/users/pacman100/orgs",
"repos_url": "https://api.github.com/users/pacman100/repos",
"events_url": "https://api.github.com/users/pacman100/events{/privacy}",
"received_events_url": "https://api.github.com/users/pacman100/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,699 | 1,699 | 1,699 | CONTRIBUTOR | null | # What does this PR do?
1. Currently 2 DeepSpeed tests are failing with the error `E AssertionError: False is not true : [zero3] /tmp/tmp5ddklgb9/checkpoint-5/pytorch_model.bin is not found`. This is because of the recent switch to `safetensors` as the default serialization format. This PR fixes the affected tests (a sketch of the kind of check involved is below).
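A rough sketch of the kind of check the updated tests presumably need; the constant names come from `transformers.utils`, but treat this as an illustration rather than the exact diff:
```python
import os

from transformers.utils import SAFE_WEIGHTS_NAME, WEIGHTS_NAME  # "model.safetensors", "pytorch_model.bin"


def checkpoint_has_weights(checkpoint_dir):
    # Accept either serialization format now that safetensors is the default.
    return any(
        os.path.isfile(os.path.join(checkpoint_dir, name))
        for name in (SAFE_WEIGHTS_NAME, WEIGHTS_NAME)
    )
```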
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27358/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27358/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27358",
"html_url": "https://github.com/huggingface/transformers/pull/27358",
"diff_url": "https://github.com/huggingface/transformers/pull/27358.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27358.patch",
"merged_at": 1699510644000
} |
https://api.github.com/repos/huggingface/transformers/issues/27357 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27357/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27357/comments | https://api.github.com/repos/huggingface/transformers/issues/27357/events | https://github.com/huggingface/transformers/pull/27357 | 1,982,722,290 | PR_kwDOCUB6oc5e4T4n | 27,357 | Replaced residual + hidden_states with torch.add(residual, hidden_states) to improve memory allocation. | {
"login": "philrwebb",
"id": 6821955,
"node_id": "MDQ6VXNlcjY4MjE5NTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6821955?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/philrwebb",
"html_url": "https://github.com/philrwebb",
"followers_url": "https://api.github.com/users/philrwebb/followers",
"following_url": "https://api.github.com/users/philrwebb/following{/other_user}",
"gists_url": "https://api.github.com/users/philrwebb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/philrwebb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/philrwebb/subscriptions",
"organizations_url": "https://api.github.com/users/philrwebb/orgs",
"repos_url": "https://api.github.com/users/philrwebb/repos",
"events_url": "https://api.github.com/users/philrwebb/events{/privacy}",
"received_events_url": "https://api.github.com/users/philrwebb/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @philrwebb - thanks for opening a PR! \r\n\r\nCould you remove all of the formatting changes (splitting across lines) from the diff so that we can more easily review and assess the proposed changes? ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,699 | 1,702 | 1,702 | NONE | null | Replaced `tensor1 + tensor2` with `torch.add(tensor1, tensor2)` because the latter only allocates memory for the output tensor. The tensors in question were `hidden_states` and `residual`. This was blowing up a Google Colab TPU with an out-of-memory error.
# What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27357/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27357/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27357",
"html_url": "https://github.com/huggingface/transformers/pull/27357",
"diff_url": "https://github.com/huggingface/transformers/pull/27357.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27357.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27356 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27356/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27356/comments | https://api.github.com/repos/huggingface/transformers/issues/27356/events | https://github.com/huggingface/transformers/issues/27356 | 1,982,571,554 | I_kwDOCUB6oc52K6Qi | 27,356 | Oneformer model exception on forward pass | {
"login": "nickponline",
"id": 590151,
"node_id": "MDQ6VXNlcjU5MDE1MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/590151?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nickponline",
"html_url": "https://github.com/nickponline",
"followers_url": "https://api.github.com/users/nickponline/followers",
"following_url": "https://api.github.com/users/nickponline/following{/other_user}",
"gists_url": "https://api.github.com/users/nickponline/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nickponline/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nickponline/subscriptions",
"organizations_url": "https://api.github.com/users/nickponline/orgs",
"repos_url": "https://api.github.com/users/nickponline/repos",
"events_url": "https://api.github.com/users/nickponline/events{/privacy}",
"received_events_url": "https://api.github.com/users/nickponline/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
] | [
"@amyeroberts do you know if I'm missing something. Thank for the help",
"Hi @nickponline I finally had the time to look into OneFormer.\r\n\r\nUploaded a demo notebook regarding fine-tuning here: https://github.com/NielsRogge/Transformers-Tutorials/blob/master/OneFormer/Fine_tune_OneFormer_for_semantic_segmentation.ipynb. Hope that helps!",
"Thank you that is v. helpful!\r\n\r\nOn Sun, Nov 12, 2023 at 1:01 PM NielsRogge ***@***.***> wrote:\r\n\r\n> Hi @nickponline <https://github.com/nickponline> I finally had the time\r\n> to look into OneFormer.\r\n>\r\n> Uploaded a demo notebook regarding fine-tuning here:\r\n> https://github.com/NielsRogge/Transformers-Tutorials/blob/master/OneFormer/Fine_tune_OneFormer_for_semantic_segmentation.ipynb.\r\n> Hope that helps!\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/transformers/issues/27356#issuecomment-1807238164>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AAEQCR24IYPIVUI4PMXJG7DYEE2KXAVCNFSM6AAAAAA7CDZEJ6VHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMYTQMBXGIZTQMJWGQ>\r\n> .\r\n> You are receiving this because you were mentioned.Message ID:\r\n> ***@***.***>\r\n>\r\n"
] | 1,699 | 1,707 | null | NONE | null | ### System Info
- `transformers` version: 4.35.0.dev0
- Platform: macOS-13.4-arm64-arm-64bit
- Python version: 3.10.6
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.2
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0 (False)
- Tensorflow version (GPU?): 2.13.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.7.2 (cpu)
- Jax version: 0.4.14
- JaxLib version: 0.4.14
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@amyeroberts @NielsRogge @praeclarumjj3
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction


```
import torch
from PIL import Image
from transformers import AutoModelForUniversalSegmentation, AutoProcessor

image = Image.open('image.jpg').convert('RGB')
mask = Image.open('mask.png').convert('L')
processor = AutoProcessor.from_pretrained("shi-labs/oneformer_coco_swin_large")
semantic_inputs = processor(images=image, segmentation_maps=mask, task_inputs=["semantic"], return_tensors="pt")
processor.tokenizer.batch_decode(semantic_inputs.task_inputs)
model = AutoModelForUniversalSegmentation.from_pretrained("shi-labs/oneformer_coco_swin_large")
with torch.no_grad():
outputs = model(**semantic_inputs)
semantic_segmentation = processor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
```
I get the exception:
```
texts = ["a semantic photo"] * self.num_text
TypeError: can't multiply sequence by non-int of type 'NoneType'
```
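For what it is worth, a hedged guess at what is missing, based on the fine-tuning setups I have seen referenced: the text/contrastive branch that produces `text_queries` only seems to be built when the model is configured for training, and the processor needs `num_text` set explicitly. Both values below are assumptions, not verified numbers:
```python
from transformers import AutoModelForUniversalSegmentation, AutoProcessor

# Hypothetical setup: num_text is a placeholder value, and is_training is assumed to be
# the config flag that instantiates the text mapper (so text_queries is not None).
processor = AutoProcessor.from_pretrained(
    "shi-labs/oneformer_coco_swin_large", num_text=134
)
model = AutoModelForUniversalSegmentation.from_pretrained(
    "shi-labs/oneformer_coco_swin_large", is_training=True
)
```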
### Expected behavior
It should succeed and return the loss.
Also, when I modify the code to add `num_text=1`, it still fails:
```
image = Image.open('image.jpg').convert('RGB')
mask = Image.open('mask.png').convert('L')
processor = AutoProcessor.from_pretrained("shi-labs/oneformer_coco_swin_large", num_text=1)
semantic_inputs = processor(images=image, segmentation_maps=mask, task_inputs=["semantic"], return_tensors="pt")
processor.tokenizer.batch_decode(semantic_inputs.task_inputs)
model = AutoModelForUniversalSegmentation.from_pretrained("shi-labs/oneformer_coco_swin_large")
with torch.no_grad():
outputs = model(**semantic_inputs)
semantic_segmentation = processor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
```
I still get an exception, but a different one:
```
text_queries = nn.functional.normalize(text_queries.flatten(1), dim=-1)
AttributeError: 'NoneType' object has no attribute 'flatten'
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27356/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27356/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27355 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27355/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27355/comments | https://api.github.com/repos/huggingface/transformers/issues/27355/events | https://github.com/huggingface/transformers/pull/27355 | 1,982,569,641 | PR_kwDOCUB6oc5e3yuH | 27,355 | Update deprecated torch.range in tests/models/ibert/test_modeling_ibert.py | {
"login": "kit1980",
"id": 420184,
"node_id": "MDQ6VXNlcjQyMDE4NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/420184?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kit1980",
"html_url": "https://github.com/kit1980",
"followers_url": "https://api.github.com/users/kit1980/followers",
"following_url": "https://api.github.com/users/kit1980/following{/other_user}",
"gists_url": "https://api.github.com/users/kit1980/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kit1980/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kit1980/subscriptions",
"organizations_url": "https://api.github.com/users/kit1980/orgs",
"repos_url": "https://api.github.com/users/kit1980/repos",
"events_url": "https://api.github.com/users/kit1980/events{/privacy}",
"received_events_url": "https://api.github.com/users/kit1980/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27355). All of your documentation changes will be reflected on that endpoint.",
"@ydshieh I've removed the commented out line."
] | 1,699 | 1,699 | 1,699 | CONTRIBUTOR | null | # What does this PR do?
`torch.range` has been deprecated since PyTorch 0.4 and will be removed in a future PyTorch release.
This PR updates it to `torch.arange`.
Fixed with TorchFix https://github.com/pytorch/test-infra/tree/main/tools/torchfix
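For readers unfamiliar with the deprecation, here is a quick illustration of the behavioural difference (a generic sketch, not the exact diff in this PR — note that `torch.range` includes its end point while `torch.arange` excludes it, so a mechanical rename can be off by one):
```python
import torch

# Deprecated: torch.range includes the end point and returns floats by default.
# torch.range(0, 4)  ->  tensor([0., 1., 2., 3., 4.])   (emits a deprecation warning)

# Replacement: torch.arange excludes the end point, so the bound is shifted by one step.
print(torch.arange(0, 5))                      # tensor([0, 1, 2, 3, 4])
print(torch.arange(0, 5, dtype=torch.float))   # tensor([0., 1., 2., 3., 4.]) if floats are needed
```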
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
cc @ydshieh
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27355/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27355/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27355",
"html_url": "https://github.com/huggingface/transformers/pull/27355",
"diff_url": "https://github.com/huggingface/transformers/pull/27355.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27355.patch",
"merged_at": 1699473516000
} |
https://api.github.com/repos/huggingface/transformers/issues/27354 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27354/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27354/comments | https://api.github.com/repos/huggingface/transformers/issues/27354/events | https://github.com/huggingface/transformers/pull/27354 | 1,982,227,194 | PR_kwDOCUB6oc5e2nKm | 27,354 | Remove unused param from example script tests | {
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,699 | 1,699 | 1,699 | CONTRIBUTOR | null | # What does this PR do?
Removes a very old and unused param in the example script tests (`--fp16`) that was never really *part* of the example scripts; it was probably just a bad copy/paste when the tests were written.
Fixes # (issue)
A few tests were warning about unexpected args/params being passed to them.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@amyeroberts
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27354/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27354/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27354",
"html_url": "https://github.com/huggingface/transformers/pull/27354",
"diff_url": "https://github.com/huggingface/transformers/pull/27354.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27354.patch",
"merged_at": 1699452452000
} |
https://api.github.com/repos/huggingface/transformers/issues/27353 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27353/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27353/comments | https://api.github.com/repos/huggingface/transformers/issues/27353/events | https://github.com/huggingface/transformers/pull/27353 | 1,982,146,075 | PR_kwDOCUB6oc5e2VI7 | 27,353 | Fix example tests from failing | {
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"> inputs to the accelerate tests to align with the ones in the pytorch tests\r\n\r\nJust wondering where the is target of alignment (i.e. the pytorch tests). Otherwise, looks good to me as I trust you that the update makes sense (due to the change in `accelerate`)!\r\n",
"@ydshieh for how many steps that example script takes. For the torch ones it’s 10, here it’s 2 :)"
] | 1,699 | 1,699 | 1,699 | CONTRIBUTOR | null | # What does this PR do?
This PR tweaks the inputs to the Accelerate example tests to align them with the ones in the PyTorch tests; the mismatch was causing test failures. (Plus some fixes on Accelerate main.)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ydshieh @amyeroberts | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27353/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27353/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27353",
"html_url": "https://github.com/huggingface/transformers/pull/27353",
"diff_url": "https://github.com/huggingface/transformers/pull/27353.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27353.patch",
"merged_at": 1699447522000
} |
https://api.github.com/repos/huggingface/transformers/issues/27352 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27352/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27352/comments | https://api.github.com/repos/huggingface/transformers/issues/27352/events | https://github.com/huggingface/transformers/pull/27352 | 1,982,011,555 | PR_kwDOCUB6oc5e13oC | 27,352 | Adds dvclive callback | {
"login": "dberenbaum",
"id": 2308172,
"node_id": "MDQ6VXNlcjIzMDgxNzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2308172?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dberenbaum",
"html_url": "https://github.com/dberenbaum",
"followers_url": "https://api.github.com/users/dberenbaum/followers",
"following_url": "https://api.github.com/users/dberenbaum/following{/other_user}",
"gists_url": "https://api.github.com/users/dberenbaum/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dberenbaum/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dberenbaum/subscriptions",
"organizations_url": "https://api.github.com/users/dberenbaum/orgs",
"repos_url": "https://api.github.com/users/dberenbaum/repos",
"events_url": "https://api.github.com/users/dberenbaum/events{/privacy}",
"received_events_url": "https://api.github.com/users/dberenbaum/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27352). All of your documentation changes will be reflected on that endpoint.",
"> There was a recent update which addressed the currently (unrelated) failing tests. Could you rebase on main to resolve these?\r\n\r\nDone\r\n\r\n> The integrations are maintained by their contributors. If you're happy to be pinged if there's issues then we're happy to merge :)\r\n\r\nSounds good, thanks for the quick approval!",
"@dberenbaum There were a few more unexpected failures because of new packages releases. This should now be resolved on main. Could you rebase on main again?",
"Rebased and checks passing now! ",
"FYI @dberenbaum we're seeing a failure on our nightlies after this PR (showed up after the Accelerate PR since we run transformers tests in it and dvc got added to the test requirements)\r\n\r\n```\r\nFAILED tests/trainer/test_trainer.py::TrainerIntegrationPrerunTest::test_reduce_lr_on_plateau - dvclive.error.InvalidDataTypeError: Data 'learning_rate' has not supported type <class 'list'>\r\n```\r\n"
] | 1,699 | 1,700 | 1,699 | CONTRIBUTOR | null | # What does this PR do?
Adds a [DVCLive](https://dvc.org/doc/dvclive) callback. DVCLive is a logger for [DVC](https://dvc.org), which versions data, models, metrics, etc. and keeps them connected to your Git repo. They can then be visualized in several different interfaces. For example, here is the output of [example code](https://github.com/iterative/dvclive/blob/main/examples/DVCLive-HuggingFace.ipynb) in the DVC extension for VS Code:
<img width="1440" alt="Screenshot 2023-11-06 at 4 19 11 PM" src="https://github.com/huggingface/transformers/assets/2308172/7f8a1ae0-58e6-438b-a4b1-7254709a9de7">
<img width="1440" alt="Screenshot 2023-11-06 at 4 19 28 PM" src="https://github.com/huggingface/transformers/assets/2308172/b46b9e31-7d61-4921-bafb-ec2f8986824f">
There is already a [DVCLive callback](https://dvc.org/doc/dvclive/ml-frameworks/huggingface) inside `dvclive`, and this PR migrates the callback from `dvclive` to `transformers` (the callback inside `dvclive` will be deprecated in the next major release of `dvclive`).
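For context, a minimal usage sketch is below. The exact wiring (enabling the integration via `report_to="dvclive"` once `dvclive` is installed) is an assumption based on how the other logger callbacks work — see the documentation added in this PR for the authoritative API. The model and dataset are placeholders chosen purely for illustration:
```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Tiny model/dataset purely for illustration.
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
dataset = load_dataset("imdb", split="train[:64]").map(
    lambda batch: tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128),
    batched=True,
)
model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")

args = TrainingArguments(
    output_dir="out",
    report_to="dvclive",  # assumed integration key; metrics/params then appear in DVC (e.g. the VS Code views above)
    logging_steps=5,
    max_steps=20,
)
Trainer(model=model, args=args, train_dataset=dataset).train()
```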
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? (I don't see any tests for any of the callbacks but please let me know if I missed them somewhere)
## Who can review?
@sgugger It looks like you are the most relevant reviewer. Could you PTAL or redirect me if someone else should review it?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27352/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27352/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27352",
"html_url": "https://github.com/huggingface/transformers/pull/27352",
"diff_url": "https://github.com/huggingface/transformers/pull/27352.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27352.patch",
"merged_at": 1699532372000
} |
https://api.github.com/repos/huggingface/transformers/issues/27351 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27351/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27351/comments | https://api.github.com/repos/huggingface/transformers/issues/27351/events | https://github.com/huggingface/transformers/pull/27351 | 1,981,876,099 | PR_kwDOCUB6oc5e1aCc | 27,351 | 🚨🚨 Fix beam score calculation issue for decoder-only models | {
"login": "VsonicV",
"id": 23429580,
"node_id": "MDQ6VXNlcjIzNDI5NTgw",
"avatar_url": "https://avatars.githubusercontent.com/u/23429580?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VsonicV",
"html_url": "https://github.com/VsonicV",
"followers_url": "https://api.github.com/users/VsonicV/followers",
"following_url": "https://api.github.com/users/VsonicV/following{/other_user}",
"gists_url": "https://api.github.com/users/VsonicV/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VsonicV/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VsonicV/subscriptions",
"organizations_url": "https://api.github.com/users/VsonicV/orgs",
"repos_url": "https://api.github.com/users/VsonicV/repos",
"events_url": "https://api.github.com/users/VsonicV/events{/privacy}",
"received_events_url": "https://api.github.com/users/VsonicV/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@gante This commit is only for fixing `beam_search`. If you think the fix is good to go, I can also apply the same fix to `beam_sample`, `group_beam_search` and `constrained_beam_search`.",
"@gante I think the current tests regarding `beam_search` are using the results generated by previous \"buggy\" version, so the new `beam_search` cannot pass the test `test_eos_token_id_int_and_list_beam_search`, which uses the decoder-only GPT-2. We need to update the relevant tests as well.",
"@VsonicV the \"setup and quality\" CI can be fixed by running `make fixup` on your local `transformers` root folder and committing the changes!",
"@gante Thanks for the suggestion! I have fixed the code quality issue using `make fixup`, and updated the relevant test `test_eos_token_id_int_and_list_beam_search` with new `expectation` value. Both checks pass now. However, there is still one check failure caused by `test_run_image_classification_no_trainer` and `test_run_ner_no_trainer`, which should be irrelevant to these commits regarding beam search. Do you have any clue about how to fix it?",
"@VsonicV perfect! The failing CI is indeed unrelated (fix: https://github.com/huggingface/transformers/pull/27353), the tests should pass after it gets merged.\r\n\r\nTo keep the consistency of this fix throughout beam methods, I'd like to request you to:\r\n1. Also apply this change to other beam methods in Pytorch :)\r\n2. Add 🚨🚨 to the PR title, as this is a (correct but also) breaking change\r\n3. (optional, only if you're comfortable with it, as the fix is slightly different) Apply this change to beam methods in TF and JAX\r\n\r\nAfter 1. and 2. is done, I'll tag a core maintainer to greenlight the merge!",
"@gante Sure! I will work on 1 and 2 in the next 2 days. Will try to do 3 after that.",
"The PR causing the CI to fail was merged, and I was informed that current PRs will need to be rebased to pass CI 🤗 ",
"@gante Item 1 and 2 are done! I have applied the fix to all beam related methods: `beam_sample`, `group_beam_search` and `constrained_beam_search`. I have rebased the PR and all relevant tests have passed.\r\n\r\nRegarding the remaining check failures, the recent merge only fixes the check failure caused by `test_run_image_classification_no_trainer`, but not for `test_run_ner_no_trainer`. According to the error message `AssertionError: 0.5109489440917969 not less than 0.5`, the checking threshold for `self.assertLess(result[\"train_loss\"], 0.5)` in `test_run_ner_no_trainer` needs to be adjusted as well. Moreover, one new check failure is caused by `test_cached_model_has_minimum_calls_to_head` and `test_cached_tokenizer_has_minimum_calls_to_head`, which are unrelated to the commits in this PR (we only see this after the most recent rebase).",
"@VsonicV yes, we are still having some CI failures (unrelated to this PR) 😭 ",
"@gante Tried rebasing once more, all the previous check failures are gone, but got one new CI failure caused by `test_assisted_decoding_sample`, which should again be unrelated to this PR.",
"@ArthurZucker I have rebased this PR with all your recently added test skips, the CI failures caused by `test_assisted_decoding_sample` still persist for `blenderbot`, same failures also happened for `pegasus` and `umt5` in my previous tries, would you mind adding skips of `test_assisted_decoding_sample` for `blenderbot`, `pegasus` and `umt5` as well? Thank you!",
"Yeah I'll skip this test for everyone this is getting annoying! 😅 #27511 was merged",
"Tagging @amyeroberts for a final check",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27351). All of your documentation changes will be reflected on that endpoint.",
"@gante @amyeroberts All your suggested changes have been added and committed. All the tests have passed now (finally!). Should be ready for merge.",
"@VsonicV Thank you for iterating with us and making `transformers` better 💛\r\n\r\nAnd sorry for all the failing CI, you've caught an unfortunate series of failures 😬 ",
"@gante No problem! Regarding the fix of TF and JAX version, I have looked at the relevant codes briefly, and I think I can fix them. I will try to submit another PR fixing both TF and JAX later this week.",
"> @gante No problem! Regarding the fix of TF and JAX version, I have looked at the relevant codes briefly, and I think I can fix them. I will try to submit another PR fixing both TF and JAX later this week.\r\n\r\n@gante Sorry about the delay in the next steps. I had a severe flu last week and just recovered. Will start working on the remaining fixes.",
"@gante @amyeroberts All follow-up tasks have been completed, in three new PRs:\r\n1. #27808 further fixes some remaining issues in the Pytorch version.\r\n2. #27814 fixes the Tensorflow version.\r\n3. #27816 fixes the JAX version.\r\n\r\nAll three PRs have passed the CI checks. Ready for your review @gante .",
"@gante Hi, I noticed that in the recent release notes of v4.36.0, only this PR is listed in \"Beam score calculation for decoder-only models\" section under \"Breaking changes\". Should we also add the 3 follow-up PRs ( #27808 #27814 #27816 ) under that section? It would be more clear for people to check all the changes relevant to this breaking change. Thanks.",
"@VsonicV updated the release notes for future reference 👍 Thank you for your suggestion "
] | 1,699 | 1,704 | 1,700 | CONTRIBUTOR | null | # What does this PR do?
This PR fixes issue #26624. In the original implementation of beam search, the beam score for decoder-only models is normalized by the total length of both the prompt and the generated sequence. However, the length of the prompt should not be included in the normalization step. This issue causes an unexpected bias towards generating shorter sequences.
This is a simple, quick fix: it adds an optional parameter `decoder_prompt_len`, which stores the length of the prompt in the decoder, to `BeamSearchScorer.process()`, `BeamSearchScorer.finalize()` and `BeamHypotheses.add()`. Since the new parameter is optional with a default value of 0, any existing calls to these functions that do not specify `decoder_prompt_len` still work the same way as before, avoiding any unexpected incompatibility. The corner case in which the very first generated token happens to be the eos_token (empty generation) is considered and handled.
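To make the change concrete, here is a simplified sketch of the new normalization (the real logic lives in `BeamHypotheses.add()` and the `BeamSearchScorer` methods; only the division is shown here):
```python
def beam_score(sum_logprobs: float, sequence_len: int, length_penalty: float,
               decoder_prompt_len: int = 0) -> float:
    # Before: normalize by the full sequence length (prompt + generated tokens).
    # After: normalize only by the number of generated tokens; with the default
    # decoder_prompt_len=0, existing callers keep the old behaviour.
    generated_len = sequence_len - decoder_prompt_len
    return sum_logprobs / (generated_len ** length_penalty)
```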
Fixes #26624
Note: There are three follow-up PRs that complement this fix:
1. #27808 further fixes some remaining issues in the Pytorch version.
2. #27814 fixes the Tensorflow version.
3. #27816 fixes the JAX version.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@gante
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27351/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27351/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27351",
"html_url": "https://github.com/huggingface/transformers/pull/27351",
"diff_url": "https://github.com/huggingface/transformers/pull/27351.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27351.patch",
"merged_at": 1700052555000
} |
https://api.github.com/repos/huggingface/transformers/issues/27349 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27349/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27349/comments | https://api.github.com/repos/huggingface/transformers/issues/27349/events | https://github.com/huggingface/transformers/pull/27349 | 1,981,791,058 | PR_kwDOCUB6oc5e1HXe | 27,349 | [`Whisper`] Nit converting the tokenizer | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Sure, I can add one tomorrow 😉 "
] | 1,699 | 1,699 | 1,699 | COLLABORATOR | null | # What does this PR do?
Use `nospeech` instead of `nocaption` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27349/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27349/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27349",
"html_url": "https://github.com/huggingface/transformers/pull/27349",
"diff_url": "https://github.com/huggingface/transformers/pull/27349.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27349.patch",
"merged_at": 1699379006000
} |
https://api.github.com/repos/huggingface/transformers/issues/27348 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27348/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27348/comments | https://api.github.com/repos/huggingface/transformers/issues/27348/events | https://github.com/huggingface/transformers/pull/27348 | 1,981,731,728 | PR_kwDOCUB6oc5e06bs | 27,348 | Remove padding_masks from `gpt_bigcode`. | {
"login": "susnato",
"id": 56069179,
"node_id": "MDQ6VXNlcjU2MDY5MTc5",
"avatar_url": "https://avatars.githubusercontent.com/u/56069179?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/susnato",
"html_url": "https://github.com/susnato",
"followers_url": "https://api.github.com/users/susnato/followers",
"following_url": "https://api.github.com/users/susnato/following{/other_user}",
"gists_url": "https://api.github.com/users/susnato/gists{/gist_id}",
"starred_url": "https://api.github.com/users/susnato/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/susnato/subscriptions",
"organizations_url": "https://api.github.com/users/susnato/orgs",
"repos_url": "https://api.github.com/users/susnato/repos",
"events_url": "https://api.github.com/users/susnato/events{/privacy}",
"received_events_url": "https://api.github.com/users/susnato/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27348). All of your documentation changes will be reflected on that endpoint."
] | 1,699 | 1,699 | 1,699 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR removes `padding_masks` from GPT-BigCode, as [discussed here](https://github.com/huggingface/transformers/pull/26486#issuecomment-1798466670).
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
cc: @amyeroberts
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27348/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27348/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27348",
"html_url": "https://github.com/huggingface/transformers/pull/27348",
"diff_url": "https://github.com/huggingface/transformers/pull/27348.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27348.patch",
"merged_at": 1699377883000
} |
https://api.github.com/repos/huggingface/transformers/issues/27347 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27347/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27347/comments | https://api.github.com/repos/huggingface/transformers/issues/27347/events | https://github.com/huggingface/transformers/pull/27347 | 1,981,692,353 | PR_kwDOCUB6oc5e0xxU | 27,347 | Resolve AttributeError by utilizing device calculation at the start of the forward function | {
"login": "folbaeni",
"id": 46280006,
"node_id": "MDQ6VXNlcjQ2MjgwMDA2",
"avatar_url": "https://avatars.githubusercontent.com/u/46280006?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/folbaeni",
"html_url": "https://github.com/folbaeni",
"followers_url": "https://api.github.com/users/folbaeni/followers",
"following_url": "https://api.github.com/users/folbaeni/following{/other_user}",
"gists_url": "https://api.github.com/users/folbaeni/gists{/gist_id}",
"starred_url": "https://api.github.com/users/folbaeni/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/folbaeni/subscriptions",
"organizations_url": "https://api.github.com/users/folbaeni/orgs",
"repos_url": "https://api.github.com/users/folbaeni/repos",
"events_url": "https://api.github.com/users/folbaeni/events{/privacy}",
"received_events_url": "https://api.github.com/users/folbaeni/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27347). All of your documentation changes will be reflected on that endpoint."
] | 1,699 | 1,699 | 1,699 | CONTRIBUTOR | null | # What does this PR do?
This commit addresses the 'NoneType' object AttributeError in the IdeficsModel forward function. Previously, the `device` attribute was accessed directly from `input_ids`, which may be `None`, resulting in a potential 'NoneType' error. Now, the device computed at the beginning of the forward function is used consistently throughout, ensuring `image_hidden_states` is derived on the correct device. This modification ensures the correct device attribution for `image_encoder_embeddings` in the IdeficsModel forward pass and enables smoother processing and compatibility.
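In short, the fix follows the common pattern of resolving the device once from whichever input is present and reusing it; a simplified illustration (not the exact IdeficsModel code) is:
```python
import torch

def resolve_device(input_ids=None, inputs_embeds=None) -> torch.device:
    # Pick the device from whichever input was provided, once, at the start of forward,
    # instead of reading input_ids.device later when input_ids may be None.
    if input_ids is not None:
        return input_ids.device
    if inputs_embeds is not None:
        return inputs_embeds.device
    raise ValueError("Either input_ids or inputs_embeds must be provided")

# Later in the forward pass, reuse that device, e.g.:
# image_hidden_states = image_encoder_embeddings.to(device)
```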
Fixes #27343
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@ArthurZucker @younesbelkada @amyeroberts
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27347/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27347/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27347",
"html_url": "https://github.com/huggingface/transformers/pull/27347",
"diff_url": "https://github.com/huggingface/transformers/pull/27347.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27347.patch",
"merged_at": 1699374375000
} |
https://api.github.com/repos/huggingface/transformers/issues/27346 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27346/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27346/comments | https://api.github.com/repos/huggingface/transformers/issues/27346/events | https://github.com/huggingface/transformers/pull/27346 | 1,981,687,995 | PR_kwDOCUB6oc5e0w0a | 27,346 | Fix `Kosmos-2` device issue | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Merge this now, but feel free to drop your insights if any @muellerzr 🙏 ",
"Hey @ydshieh, I had to do the same thing for [fuyu](https://github.com/huggingface/transformers/pull/26949#discussion_r1369058114) when I added the `device_map`. There was no way to make sure that those two embeddings are on the same device without specifying a very big module. I guess it is okay for now as the change is minimal but I was thinking about creating in the future a `no_split_layers` arg if it becomes too difficult + ugly to deal with. ",
"Thank you @SunMarc !"
] | 1,699 | 1,699 | 1,699 | COLLABORATOR | null | # What does this PR do?
Fix device issues in Kosmos-2
Fix #27301 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27346/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27346/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27346",
"html_url": "https://github.com/huggingface/transformers/pull/27346",
"diff_url": "https://github.com/huggingface/transformers/pull/27346.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27346.patch",
"merged_at": 1699449285000
} |
https://api.github.com/repos/huggingface/transformers/issues/27345 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27345/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27345/comments | https://api.github.com/repos/huggingface/transformers/issues/27345/events | https://github.com/huggingface/transformers/issues/27345 | 1,981,637,965 | I_kwDOCUB6oc52HWVN | 27,345 | pyproject pytest config | {
"login": "diegovalenzuelaiturra",
"id": 34951638,
"node_id": "MDQ6VXNlcjM0OTUxNjM4",
"avatar_url": "https://avatars.githubusercontent.com/u/34951638?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/diegovalenzuelaiturra",
"html_url": "https://github.com/diegovalenzuelaiturra",
"followers_url": "https://api.github.com/users/diegovalenzuelaiturra/followers",
"following_url": "https://api.github.com/users/diegovalenzuelaiturra/following{/other_user}",
"gists_url": "https://api.github.com/users/diegovalenzuelaiturra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/diegovalenzuelaiturra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/diegovalenzuelaiturra/subscriptions",
"organizations_url": "https://api.github.com/users/diegovalenzuelaiturra/orgs",
"repos_url": "https://api.github.com/users/diegovalenzuelaiturra/repos",
"events_url": "https://api.github.com/users/diegovalenzuelaiturra/events{/privacy}",
"received_events_url": "https://api.github.com/users/diegovalenzuelaiturra/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
] | [
"cc @ydshieh ",
"@diegovalenzuelaiturra Thank you a lot!\r\n\r\nI opened a draft PR #27366: we need to check if everything still run as expected. ",
"Reference: https://docs.pytest.org/en/stable/reference/customize.html"
] | 1,699 | 1,699 | 1,699 | NONE | null | probably
`[tool.pytest.ini_options]` should be used instead of `[tool.pytest]`, since pytest only reads `pyproject.toml` configuration from the `[tool.pytest.ini_options]` table
https://github.com/huggingface/transformers/blob/88832c01c8a962b653874c4ce4ed8df5783ac5cd/pyproject.toml#L21-L23 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27345/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27345/timeline | completed | null | null |