url (stringlengths 62-66) | repository_url (stringclasses 1 value) | labels_url (stringlengths 76-80) | comments_url (stringlengths 71-75) | events_url (stringlengths 69-73) | html_url (stringlengths 50-56) | id (int64 377M-2.15B) | node_id (stringlengths 18-32) | number (int64 1-29.2k) | title (stringlengths 1-487) | user (dict) | labels (list) | state (stringclasses 2 values) | locked (bool, 2 classes) | assignee (dict) | assignees (list) | comments (sequence) | created_at (int64 1.54k-1.71k) | updated_at (int64 1.54k-1.71k) | closed_at (int64 1.54k-1.71k ⌀) | author_association (stringclasses 4 values) | active_lock_reason (stringclasses 2 values) | body (stringlengths 0-234k ⌀) | reactions (dict) | timeline_url (stringlengths 71-75) | state_reason (stringclasses 3 values) | draft (bool, 2 classes) | pull_request (dict) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/27042 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27042/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27042/comments | https://api.github.com/repos/huggingface/transformers/issues/27042/events | https://github.com/huggingface/transformers/pull/27042 | 1,959,493,179 | PR_kwDOCUB6oc5dp0V7 | 27,042 | Fix slack report failing for doctest | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Maybe we can send two reports instead of truncating? WDYT?",
"The problem (we currently have) is the category `API Examples failures` (modeling file, not md file) itself is too long (due to the failures from some tensorflow files), and itself has to be truncated already.\r\n\r\nSo sending 2 reports separately (for each category) won't work without truncating.\r\n\r\nHowever if you mean: 2 reprots, but each can be truncated (if necessary) --> OK for me :-)",
"I meant that after truncating the first report, you could take the truncated text and send it over in a second report as a follow up. This way we would have all the information held in the reports",
"I made the `API Examples failures` and `MD Examples failures:` as independent slack `block`. Each report could be truncated.\r\n(The limit of `3000` characters is for a single block)\r\n\r\nSee the following screenshot: (here there is no truncation, as I just ran 4 tests to see the output)\r\n\r\n<img width=\"433\" alt=\"Screenshot 2023-10-27 105256\" src=\"https://github.com/huggingface/transformers/assets/2521628/dbd666eb-fcc3-4a5e-aa2c-bcb08d3e55f4\">\r\n\r\n(I didn't try to customize the `The following examples had failures` part for each category however - to keep it simple)"
] | 1,698 | 1,698 | 1,698 | COLLABORATOR | null | # What does this PR do?
By truncating the long reports.
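As a rough sketch of that truncation (the function name and the exact cutoff handling here are illustrative; only the 3,000-character single-block limit comes from the discussion above):

```python
MAX_BLOCK_CHARS = 3000  # Slack's limit for a single text block


def truncate_report(text: str, notice: str = "\n[Truncated]") -> str:
    """Trim a failure report so it fits into one Slack block."""
    if len(text) <= MAX_BLOCK_CHARS:
        return text
    return text[: MAX_BLOCK_CHARS - len(notice)] + notice
```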
The failures of the (rather many) TF models are not related to `torch 2.2`. Previously we didn't see those failures, as `Blip2` causes OOM, and later tests may get more GPU memory because of this. (This is what I observed by re-running the doctest with different commits/torch versions.)
Still need to figure out if there is a way to avoid this. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27042/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27042/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27042",
"html_url": "https://github.com/huggingface/transformers/pull/27042",
"diff_url": "https://github.com/huggingface/transformers/pull/27042.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27042.patch",
"merged_at": 1698659304000
} |
https://api.github.com/repos/huggingface/transformers/issues/27041 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27041/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27041/comments | https://api.github.com/repos/huggingface/transformers/issues/27041/events | https://github.com/huggingface/transformers/pull/27041 | 1,959,352,891 | PR_kwDOCUB6oc5dpWDK | 27,041 | Fix sliding_window hasattr in Mistral | {
"login": "IlyaGusev",
"id": 2670295,
"node_id": "MDQ6VXNlcjI2NzAyOTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2670295?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/IlyaGusev",
"html_url": "https://github.com/IlyaGusev",
"followers_url": "https://api.github.com/users/IlyaGusev/followers",
"following_url": "https://api.github.com/users/IlyaGusev/following{/other_user}",
"gists_url": "https://api.github.com/users/IlyaGusev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/IlyaGusev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/IlyaGusev/subscriptions",
"organizations_url": "https://api.github.com/users/IlyaGusev/orgs",
"repos_url": "https://api.github.com/users/IlyaGusev/repos",
"events_url": "https://api.github.com/users/IlyaGusev/events{/privacy}",
"received_events_url": "https://api.github.com/users/IlyaGusev/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"One more possible way to fix it is to use `getattr(self.config, \"sliding_window\", None)` instead.",
"Just to better understand, how can `sliding_window` be set to `None`? It always defaults to a \"non-None\" value here: https://github.com/huggingface/transformers/blob/9da451713d5feac451505c09337ca07ff6d0dba0/src/transformers/models/mistral/configuration_mistral.py#L121",
"@patrickvonplaten correct, it should never be `None` indeed - in any case the condition `hasattr(config, \"sliding_window\")` can be removed I think",
"So this way if someone wants to disable sliding windows, they should either delete the attribute or set it to None.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"I think it is still relevant and should be merged.",
"Done, I've rebased.",
"Thanks for the fix! "
] | 1,698 | 1,701 | 1,701 | CONTRIBUTOR | null | # What does this PR do?
It fixes a bug where the result of `hasattr` was checked with `is not None`. `hasattr` returns `True` or `False`, so the condition was always true without this fix.
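As a quick sketch (illustrative only, not the actual diff), compare the always-true check with the `getattr`-based alternative suggested in the comments above; the `config` object below is a stand-in, not a real Mistral config:

```python
from types import SimpleNamespace

config = SimpleNamespace(sliding_window=4096)  # stand-in for a Mistral config

# Buggy pattern: hasattr() returns True or False, never None, so this is always truthy
always_true = hasattr(config, "sliding_window") is not None

# Alternative from the discussion: inspect the attribute's value instead
uses_sliding_window = getattr(config, "sliding_window", None) is not None

print(always_true, uses_sliding_window)
```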
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@ArthurZucker @younesbelkada or anyone else. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27041/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27041/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27041",
"html_url": "https://github.com/huggingface/transformers/pull/27041",
"diff_url": "https://github.com/huggingface/transformers/pull/27041.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27041.patch",
"merged_at": 1701012517000
} |
https://api.github.com/repos/huggingface/transformers/issues/27040 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27040/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27040/comments | https://api.github.com/repos/huggingface/transformers/issues/27040/events | https://github.com/huggingface/transformers/issues/27040 | 1,959,336,705 | I_kwDOCUB6oc50yRsB | 27,040 | if i use cache in gpt2 model from transformers , the logits are different if i do a forward pass from scratch | {
"login": "juanKersul",
"id": 126629823,
"node_id": "U_kgDOB4w3vw",
"avatar_url": "https://avatars.githubusercontent.com/u/126629823?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/juanKersul",
"html_url": "https://github.com/juanKersul",
"followers_url": "https://api.github.com/users/juanKersul/followers",
"following_url": "https://api.github.com/users/juanKersul/following{/other_user}",
"gists_url": "https://api.github.com/users/juanKersul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/juanKersul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/juanKersul/subscriptions",
"organizations_url": "https://api.github.com/users/juanKersul/orgs",
"repos_url": "https://api.github.com/users/juanKersul/repos",
"events_url": "https://api.github.com/users/juanKersul/events{/privacy}",
"received_events_url": "https://api.github.com/users/juanKersul/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, thanks for the issue!\r\nI think that this is related to: https://github.com/huggingface/transformers/issues/25420#issuecomment-1775317535 and is expected - probably @gante can confirm if I am mistaken or no 🙏 Thanks!",
"@younesbelkada yes, it's the same expected behavior! \r\n\r\n@juanKersul I recommend reading the comment linked above if you'd like to understand why this difference exists :)",
"@gante @younesbelkada thanks for the answer\r\ndo you know how big the error can be in this model?",
"Mmm for `gpt2` you need to make sur you pass the position ids otherwise they are not created. See #21080 as well seems more like a duplicate than linked to the KV Cache this time",
"@ArthurZucker I don't think the position IDs are a problem in the specific example above -- for batches with a single row without padding, when `position_ids` are not passed, they are correctly inferred in [this line](https://github.com/huggingface/transformers/blob/9286f0ac3939a7081773fc66480f651a7d6a8404/src/transformers/models/gpt2/modeling_gpt2.py#L800) (which is present in most, if not all models)",
"> do you know how big the error can be in this model?\r\n\r\n@juanKersul it is model and input-dependent, but as a rule of thumb, it is imperceptible in FP32, and quite small in 16-bit (but big enough to occasionally result in slightly different generated text)",
"Ah right no padding so no problem \r\n",
"@gante If I use multiple rows without using padding , do I have to do anything else with the positions ids?",
"No, multiple rows without padding is also okay :) \r\n\r\nWith padding, you must explicitly build the position ids (e.g. from the attention mask), otherwise you will get a performance drop"
] | 1,698 | 1,698 | 1,698 | NONE | null | ### System Info
- `transformers` version: 4.33.1
- Platform: Linux-5.15.0-87-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.3
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: YES
- Using distributed or parallel set-up in script?: NO
### Who can help?
@ArthurZucker
@younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
just run the code
```python
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel
torch.set_default_device("cuda")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()
model.to("cuda")
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
seq = torch.tensor([1, 2, 3, 4, 5])
original_out = model(input_ids=seq).logits
seq2 = torch.tensor([1, 2, 3])
key_values = model(input_ids=seq2, use_cache=True).past_key_values
new_seq = torch.tensor([4, 5])
magic = model(input_ids=new_seq, past_key_values=key_values).logits
print(torch.equal(original_out[-1, :], magic[-1, :]))
```
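Continuing the script above, a tolerance-based comparison of the same tensors (the tolerance values here are arbitrary, chosen only for illustration):

```python
# allclose tolerates the small numerical differences that the exact-equality check above rejects
print(torch.allclose(original_out[-1, :], magic[-1, :], atol=1e-5, rtol=1e-4))
```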
### Expected behavior
I expected it to return `True`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27040/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27040/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27039 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27039/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27039/comments | https://api.github.com/repos/huggingface/transformers/issues/27039/events | https://github.com/huggingface/transformers/pull/27039 | 1,959,306,238 | PR_kwDOCUB6oc5dpL0D | 27,039 | Extended semantic segmentation to image segmentation | {
"login": "merveenoyan",
"id": 53175384,
"node_id": "MDQ6VXNlcjUzMTc1Mzg0",
"avatar_url": "https://avatars.githubusercontent.com/u/53175384?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/merveenoyan",
"html_url": "https://github.com/merveenoyan",
"followers_url": "https://api.github.com/users/merveenoyan/followers",
"following_url": "https://api.github.com/users/merveenoyan/following{/other_user}",
"gists_url": "https://api.github.com/users/merveenoyan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/merveenoyan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/merveenoyan/subscriptions",
"organizations_url": "https://api.github.com/users/merveenoyan/orgs",
"repos_url": "https://api.github.com/users/merveenoyan/repos",
"events_url": "https://api.github.com/users/merveenoyan/events{/privacy}",
"received_events_url": "https://api.github.com/users/merveenoyan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I addressed all the comments. Sorry I deprioritized it shortly.",
"@MKhalusova can you merch if you approve? 👉 👈 🥹 ",
"doc tests is still looking for `semantic_segmentation.md` although I've put a redirection 🧐 \r\n<img width=\"591\" alt=\"Screenshot 2023-11-20 at 17 42 47\" src=\"https://github.com/huggingface/transformers/assets/53175384/49f5eab6-4198-44fc-ab90-7dd2909c60d1\">\r\n",
"We need green CI checks before we can merge. It looks like the `utils/check_doctest_list.py` check is failing. You can run it locally to debug and fix. ",
"@MKhalusova it's my fault, I suggested to rename the file for consistency. I opened [this PR](https://github.com/merveenoyan/transformers/pull/1/files) to @merveenoyan's repo, which fixes the problem locally. It also fixes an issue with `check_task_guides.py`. Please, let me know if there's a better way to deal with this kind of situation :)\r\n\r\nI also saw a `semantic_segmentation.md` in the `ko` translation, but refrained from renaming it because there's no `_redirect.yml` for Korean. Should we add it?",
"@pcuenca Generally, we tend to avoid renaming/removing files. I have no experience safely doing so, perhaps @stevhliu could advise? ",
"It's not a big deal, we can just go back to @merveenoyan's version before the rename if that's simpler.",
"Regarding redirects. For example, `perf_infer_gpu_many: perf_infer_gpu_one`. There is a chicken-egg situation because hf.co/docs uses [_redirects.yml that is on main currently](https://github.com/huggingface/transformers/blob/main/docs/source/en/_redirects.yml). Unless, this PR gets merged hf.co/docs for now will not redirect`perf_infer_gpu_many: perf_infer_gpu_one`",
"@MKhalusova according to Mishig's response we need to merge before it turns red and then it will be green, so maybe you can make the call in this case. ",
"There are two different things here.\r\n\r\n- The redirection works for me in the docs generated for the PR, these both work:\r\nhttps://moon-ci-docs.huggingface.co/docs/transformers/pr_27039/en/tasks/semantic_segmentation (old)\r\nhttps://moon-ci-docs.huggingface.co/docs/transformers/pr_27039/en/tasks/image_segmentation (new)\r\n\r\n- There are other CI scripts that fail because of references to the old name, as shown in this PR: https://github.com/merveenoyan/transformers/pull/1/files\r\nThese tests won't become green if we merge.\r\n\r\nGiven the increased complexity and that @MKhalusova said we generally try to avoid renames, I'd suggest we remove the rename and keep the same filename it had before. Sorry for introducing noise!",
"Can this be merged by someone with write access?",
"I can merge it :) "
] | 1,698 | 1,700 | 1,700 | CONTRIBUTOR | null | This PR extends semantic segmentation guide to cover two other segmentation types (except for the big fine-tuning part) and compares them. cc @NielsRogge as discussed | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27039/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27039/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27039",
"html_url": "https://github.com/huggingface/transformers/pull/27039",
"diff_url": "https://github.com/huggingface/transformers/pull/27039.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27039.patch",
"merged_at": 1700755101000
} |
https://api.github.com/repos/huggingface/transformers/issues/27038 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27038/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27038/comments | https://api.github.com/repos/huggingface/transformers/issues/27038/events | https://github.com/huggingface/transformers/pull/27038 | 1,959,054,014 | PR_kwDOCUB6oc5doVIc | 27,038 | Hashlib specification | {
"login": "DueViktor",
"id": 66885944,
"node_id": "MDQ6VXNlcjY2ODg1OTQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/66885944?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DueViktor",
"html_url": "https://github.com/DueViktor",
"followers_url": "https://api.github.com/users/DueViktor/followers",
"following_url": "https://api.github.com/users/DueViktor/following{/other_user}",
"gists_url": "https://api.github.com/users/DueViktor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DueViktor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DueViktor/subscriptions",
"organizations_url": "https://api.github.com/users/DueViktor/orgs",
"repos_url": "https://api.github.com/users/DueViktor/repos",
"events_url": "https://api.github.com/users/DueViktor/events{/privacy}",
"received_events_url": "https://api.github.com/users/DueViktor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@ArthurZucker, perhaps you can review this? ",
"Hey! As this might affect other HF repos we want to take our time and find a solution for all of them 🤗 gotta be aligned with other library maintainers! FYI @Wauplin ",
"\r\n> Hey! As this might affect other HF repos we want to take our time and find a solution for all of them 🤗 gotta be aligned with other library maintainers! FYI @Wauplin\r\n\r\nThat's fair. In case you need some references, similar changes have been included in other repos:\r\n- https://github.com/mlflow/mlflow/issues/10106#issuecomment-1778790113\r\n- https://github.com/mlflow/mlflow/pull/9961\r\n- https://github.com/streamlit/streamlit/issues/7120\r\n- https://github.com/streamlit/streamlit/issues/7526",
"Thanks a lot @DueViktor for raising the question :pray: I opened a PR in `huggingface_hub` which is the underlying library common to the other HF-libraries (transformers, diffusers, datasets,.): https://github.com/huggingface/huggingface_hub/pull/1782. My plan is to release that and then integrate with dependent libraries once adopted.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Closing this PR without merging. This has been handled in a separate PR (see https://github.com/huggingface/transformers/issues/27034 for full context)."
] | 1,698 | 1,700 | 1,700 | NONE | null | # What does this PR do?
The following PR addresses the issue raised in #27034.
Fixes # (issue)
Close #27034
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27038/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27038/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27038",
"html_url": "https://github.com/huggingface/transformers/pull/27038",
"diff_url": "https://github.com/huggingface/transformers/pull/27038.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27038.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27037 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27037/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27037/comments | https://api.github.com/repos/huggingface/transformers/issues/27037/events | https://github.com/huggingface/transformers/pull/27037 | 1,959,053,071 | PR_kwDOCUB6oc5doU60 | 27,037 | Safe import of rgb_to_id from FE modules | {
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,698 | 1,698 | 1,698 | COLLABORATOR | null | # What does this PR do?
Small change which makes `rgb_to_id` importable from the feature extraction modules. Previously this was removed and moved to the `image_transforms` module. However, this was a breaking change and should have been more safely handled with a warning telling users how to update their code.
This was flagged in this issue: https://huggingface.co/facebook/detr-resnet-50-panoptic/discussions/6#6537952c25f780fed22f6285
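With this change both import paths work; a quick sketch of the two (the DETR feature-extraction module is the one used in the linked discussion):

```python
# New location, introduced when the helper was moved
from transformers.image_transforms import rgb_to_id

# Old location, importable again after this PR (with a warning pointing users to the new path)
from transformers.models.detr.feature_extraction_detr import rgb_to_id
```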
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27037/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27037/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27037",
"html_url": "https://github.com/huggingface/transformers/pull/27037",
"diff_url": "https://github.com/huggingface/transformers/pull/27037.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27037.patch",
"merged_at": 1698151217000
} |
https://api.github.com/repos/huggingface/transformers/issues/27036 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27036/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27036/comments | https://api.github.com/repos/huggingface/transformers/issues/27036/events | https://github.com/huggingface/transformers/issues/27036 | 1,958,959,843 | I_kwDOCUB6oc50w1rj | 27,036 | cannot import name 'SeamlessM4TModel' from 'transformers' | {
"login": "druskacik",
"id": 47897205,
"node_id": "MDQ6VXNlcjQ3ODk3MjA1",
"avatar_url": "https://avatars.githubusercontent.com/u/47897205?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/druskacik",
"html_url": "https://github.com/druskacik",
"followers_url": "https://api.github.com/users/druskacik/followers",
"following_url": "https://api.github.com/users/druskacik/following{/other_user}",
"gists_url": "https://api.github.com/users/druskacik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/druskacik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/druskacik/subscriptions",
"organizations_url": "https://api.github.com/users/druskacik/orgs",
"repos_url": "https://api.github.com/users/druskacik/repos",
"events_url": "https://api.github.com/users/druskacik/events{/privacy}",
"received_events_url": "https://api.github.com/users/druskacik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey, that is expected as it is not part of the latest release. Make sure to install using `pip install git+https://github.com/huggingface/transformers.git` "
] | 1,698 | 1,698 | 1,698 | NONE | null | ### System Info
transformers 4.34.1
python 3.10.12
Windows
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I am trying to use the HuggingFace implementation of the SeamlessM4T model available here: https://huggingface.co/facebook/hf-seamless-m4t-medium .
However, importing `SeamlessM4TModel` from `transformers` raises an `ImportError`:
```python
from transformers import AutoProcessor, SeamlessM4TModel
>>> ImportError Traceback (most recent call last)
Cell In[5], line 1
----> 1 from transformers import AutoProcessor, SeamlessM4TModel
ImportError: cannot import name 'SeamlessM4TModel' from 'transformers' (C:\Users\rober\.conda\envs\product_scanner\lib\site-packages\transformers\__init__.py)
```
I use the latest `transformers` version, `4.34.1`.
### Expected behavior
Expected behaviour is to import the `SeamlessM4TModel` without an error. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27036/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27036/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27035 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27035/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27035/comments | https://api.github.com/repos/huggingface/transformers/issues/27035/events | https://github.com/huggingface/transformers/issues/27035 | 1,958,916,307 | I_kwDOCUB6oc50wrDT | 27,035 | Add JinaBert Model | {
"login": "Jackmin801",
"id": 56836461,
"node_id": "MDQ6VXNlcjU2ODM2NDYx",
"avatar_url": "https://avatars.githubusercontent.com/u/56836461?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Jackmin801",
"html_url": "https://github.com/Jackmin801",
"followers_url": "https://api.github.com/users/Jackmin801/followers",
"following_url": "https://api.github.com/users/Jackmin801/following{/other_user}",
"gists_url": "https://api.github.com/users/Jackmin801/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Jackmin801/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jackmin801/subscriptions",
"organizations_url": "https://api.github.com/users/Jackmin801/orgs",
"repos_url": "https://api.github.com/users/Jackmin801/repos",
"events_url": "https://api.github.com/users/Jackmin801/events{/privacy}",
"received_events_url": "https://api.github.com/users/Jackmin801/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | open | false | null | [] | [
"I will be working on this issue, feel free to assign it to me :)",
"any update on this? ",
"Hey! I think the model authors like it as is, and the model is available in [TEI](https://github.com/huggingface/text-embeddings-inference) 🥳 "
] | 1,698 | 1,702 | null | CONTRIBUTOR | null | ### Model description
Jina AI has just released open-source embedding models that can handle 8k sequences on the Hugging Face Hub:
- https://huggingface.co/jinaai/jina-embeddings-v2-base-en
- https://huggingface.co/jinaai/jina-embeddings-v2-small-en
These models, however, currently require the `trust_remote_code` flag, as they reference a custom model implementation specified at https://huggingface.co/jinaai/jina-embedding-v2/tree/main. It should be relatively simple to upstream the model implementation, as it already works when `trust_remote_code` is passed.
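Until then, loading them goes through the custom-code path, roughly:

```python
from transformers import AutoModel

# trust_remote_code is required because the checkpoint references the custom implementation above
model = AutoModel.from_pretrained("jinaai/jina-embeddings-v2-base-en", trust_remote_code=True)
```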
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
https://huggingface.co/jinaai/jina-embedding-v2/tree/main | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27035/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27035/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27034 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27034/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27034/comments | https://api.github.com/repos/huggingface/transformers/issues/27034/events | https://github.com/huggingface/transformers/issues/27034 | 1,958,847,865 | I_kwDOCUB6oc50waV5 | 27,034 | Hashlib usage is underspecified | {
"login": "DueViktor",
"id": 66885944,
"node_id": "MDQ6VXNlcjY2ODg1OTQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/66885944?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DueViktor",
"html_url": "https://github.com/DueViktor",
"followers_url": "https://api.github.com/users/DueViktor/followers",
"following_url": "https://api.github.com/users/DueViktor/following{/other_user}",
"gists_url": "https://api.github.com/users/DueViktor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DueViktor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DueViktor/subscriptions",
"organizations_url": "https://api.github.com/users/DueViktor/orgs",
"repos_url": "https://api.github.com/users/DueViktor/repos",
"events_url": "https://api.github.com/users/DueViktor/events{/privacy}",
"received_events_url": "https://api.github.com/users/DueViktor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey! Thanks for reporting I'll see if this relevant for us 🤗 ",
"Great @ArthurZucker. The pull request have passed all tests already and are ready to merge. \r\nNo behaviour is changed. \r\n\r\nMy guess is that pretty much all federal systems in the world would have this issue.\r\n> Federal Information Processing Standards (FIPS) 140-2 is a mandatory standard for the protection of sensitive or valuable data within Federal systems. - https://www.wolfssl.com/license/fips/",
"Hey @DueViktor! Coming back to you about this request. We've finally specified hashlib usage in [huggingface_hub](https://github.com/huggingface/huggingface_hub/pull/1782), [transformers](https://github.com/huggingface/transformers/pull/27483), [datasets](https://github.com/huggingface/datasets/pull/6414) and [diffusers](https://github.com/huggingface/diffusers/pull/5790). Everything's merged now so I'll close this issue. Thanks again for the heads up!",
"Hi @Wauplin! Thanks so much for the update and for addressing the hashlib usage across all those libraries. Appreciate your team's prompt action on this matter. Keep up the fantastic work!"
] | 1,698 | 1,700 | 1,700 | NONE | null | ### Feature request
Starting with Python 3.9, hashlib constructors accept a `usedforsecurity` argument:
> Changed in version 3.9: All hashlib constructors take a keyword-only argument usedforsecurity with default value True. A false value allows the use of insecure and blocked hashing algorithms in restricted environments. False indicates that the hashing algorithm is not used in a security context, e.g. as a non-cryptographic one-way compression function.
`transformers` uses hashing in many cases where the purpose is indeed _not_ security. This should be specified in the code.
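A minimal sketch of what such a specification could look like (the helper name and structure are an assumption, not the final implementation):

```python
import hashlib
import sys


def md5_not_for_security(data: bytes = b""):
    """MD5 used for non-security purposes (e.g. cache keys), flagged as such on Python 3.9+."""
    if sys.version_info >= (3, 9):
        return hashlib.md5(data, usedforsecurity=False)
    return hashlib.md5(data)
```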
### Motivation
Transformers uses MD5 from hashlib, which is not a secure algorithm, but does not specify that it is used for purposes other than security. This causes issues for organisations following certain security standards; FIPS compliance is one example.
### Your contribution
I will attach a PR specifying the usage of hashlib algorithms. Since `usedforsecurity` is only available from Python 3.9+ and transformers supports 3.6+, I'll add functionality to detect the Python version and adjust the kwargs based on that. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27034/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27034/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27033 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27033/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27033/comments | https://api.github.com/repos/huggingface/transformers/issues/27033/events | https://github.com/huggingface/transformers/pull/27033 | 1,958,743,958 | PR_kwDOCUB6oc5dnSHq | 27,033 | Fix LlamaDynamicNTKScalingRotaryEmbedding cache | {
"login": "i4never",
"id": 10850020,
"node_id": "MDQ6VXNlcjEwODUwMDIw",
"avatar_url": "https://avatars.githubusercontent.com/u/10850020?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/i4never",
"html_url": "https://github.com/i4never",
"followers_url": "https://api.github.com/users/i4never/followers",
"following_url": "https://api.github.com/users/i4never/following{/other_user}",
"gists_url": "https://api.github.com/users/i4never/gists{/gist_id}",
"starred_url": "https://api.github.com/users/i4never/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/i4never/subscriptions",
"organizations_url": "https://api.github.com/users/i4never/orgs",
"repos_url": "https://api.github.com/users/i4never/repos",
"events_url": "https://api.github.com/users/i4never/events{/privacy}",
"received_events_url": "https://api.github.com/users/i4never/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Retriggering the test may fix the test timeout.",
"Yep, rebasing on main should also help!",
"Ouch 😅 Also a 10x increase in time seems like a lot ",
"Each forward will take several millseconds more. If we cache cos/sin in first layer and pass it to remains (as [llama in tgi](https://github.com/huggingface/text-generation-inference/blob/96a982ad8fc232479384476b1596a880697cc1d0/server/text_generation_server/models/custom_modeling/flash_llama_modeling.py#L443-L460) doing), more time could be saved.",
"Yeah for sure, but that is a bit too big of a change ",
"Then only the issue of throughput remains. \r\n\r\nI have no obvious suspect of what could be causing it. I have two suggestions: \r\n1. as a sanity check, confirm that there is no throughput degradation as you increase the sequence length (and not as you reduce it). The changes you introduced should cause no changes here.\r\n2. Run your whole test script in a loop a few times, and discard the first 2 iterations. A major part of the slowdown could be due to GPU initialization time.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,698 | 1,703 | 1,703 | NONE | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #25306. The cached position embeddings will only be used when the cached length is `model.config.max_position_embeddings` and the requested `seq_len` is smaller than or equal to it.
It's a little slower but correct. The sin/cos values could be calculated in the first decoder layer and passed to the others (that's what [TGI flash_llama_modeling](https://github.com/huggingface/text-generation-inference/blob/f9910d13e296989f41e714c43eb60ce051359db3/server/text_generation_server/models/custom_modeling/flash_llama_modeling.py#L443-L460) does)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. #25306
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@gante
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
## Test
```python
import time
import random
import torch
from transformers.models.llama.modeling_llama import LlamaRotaryEmbedding
class LlamaDynamicNTKScalingRotaryEmbedding(LlamaRotaryEmbedding):
"""LlamaRotaryEmbedding extended with Dynamic NTK scaling. Credits to the Reddit users /u/bloc97 and /u/emozilla"""
def __init__(self, dim, max_position_embeddings=2048, base=10000, device=None, scaling_factor=1.0):
self.scaling_factor = scaling_factor
super().__init__(dim, max_position_embeddings, base, device)
def _set_cos_sin_cache(self, seq_len, device, dtype):
if seq_len <= self.max_position_embeddings:
seq_len = self.max_position_embeddings
self.max_seq_len_cached = seq_len
if seq_len > self.max_position_embeddings:
base = self.base * (
(self.scaling_factor * seq_len / self.max_position_embeddings) - (self.scaling_factor - 1)
) ** (self.dim / (self.dim - 2))
else:
base = self.base
inv_freq = 1.0 / (base ** (torch.arange(0, self.dim, 2).float().to(device) / self.dim))
t = torch.arange(self.max_seq_len_cached, device=device, dtype=inv_freq.dtype)
freqs = torch.einsum("i,j->ij", t, inv_freq)
# Different from paper, but it uses a different permutation in order to obtain the same calculation
emb = torch.cat((freqs, freqs), dim=-1)
self.register_buffer("cos_cached", emb.cos()[None, None, :, :].to(dtype), persistent=False)
self.register_buffer("sin_cached", emb.sin()[None, None, :, :].to(dtype), persistent=False)
def forward(self, x, seq_len=None):
# x: [bs, num_attention_heads, seq_len, head_size]
if not (seq_len <= self.max_seq_len_cached == self.max_position_embeddings):
self._set_cos_sin_cache(seq_len=seq_len, device=x.device, dtype=x.dtype)
return (
self.cos_cached[:, :, :seq_len, ...].to(dtype=x.dtype),
self.sin_cached[:, :, :seq_len, ...].to(dtype=x.dtype),
)
class LlamaDynamicNTKScalingRotaryEmbeddingOld(LlamaRotaryEmbedding):
"""LlamaRotaryEmbedding extended with Dynamic NTK scaling. Credits to the Reddit users /u/bloc97 and /u/emozilla"""
def __init__(self, dim, max_position_embeddings=2048, base=10000, device=None, scaling_factor=1.0):
self.scaling_factor = scaling_factor
super().__init__(dim, max_position_embeddings, base, device)
def _set_cos_sin_cache(self, seq_len, device, dtype):
self.max_seq_len_cached = seq_len
if seq_len > self.max_position_embeddings:
base = self.base * (
(self.scaling_factor * seq_len / self.max_position_embeddings) - (self.scaling_factor - 1)
) ** (self.dim / (self.dim - 2))
inv_freq = 1.0 / (base ** (torch.arange(0, self.dim, 2).float().to(device) / self.dim))
self.register_buffer("inv_freq", inv_freq, persistent=False)
t = torch.arange(self.max_seq_len_cached, device=device, dtype=self.inv_freq.dtype)
freqs = torch.einsum("i,j->ij", t, self.inv_freq)
# Different from paper, but it uses a different permutation in order to obtain the same calculation
emb = torch.cat((freqs, freqs), dim=-1)
self.register_buffer("cos_cached", emb.cos()[None, None, :, :].to(dtype), persistent=False)
self.register_buffer("sin_cached", emb.sin()[None, None, :, :].to(dtype), persistent=False)
rope_ntk_new = LlamaDynamicNTKScalingRotaryEmbedding(128)
rope_ntk_old = LlamaDynamicNTKScalingRotaryEmbeddingOld(128)
x = torch.rand(1, 9, 256)
seq_lengths = [128, 512, 1024, 2048, 4096, 8192, 4096, 2048, 1024, 512, 128]
print('=' * 32)
print('Result by new version')
result_by_rope_ntk_new = [rope_ntk_new(x, seq_length) for seq_length in seq_lengths]
for i in range(len(seq_lengths) // 2):
cos_before = result_by_rope_ntk_new[i][0]
cos_after = result_by_rope_ntk_new[len(seq_lengths) - i - 1][0]
sin_before = result_by_rope_ntk_new[i][1]
sin_after = result_by_rope_ntk_new[len(seq_lengths) - i - 1][1]
print(f"length {seq_lengths[i]} {seq_lengths[len(seq_lengths) - i - 1]}")
print(f"\tcos is equal: {torch.equal(cos_before, cos_after)}")
print(f"\tsin is equal: {torch.equal(sin_before, sin_after)}")
print('=' * 32)
print('Result by old version')
result_by_rope_ntk_old = [rope_ntk_old(x, seq_length) for seq_length in seq_lengths]
for i in range(len(seq_lengths) // 2):
cos_before = result_by_rope_ntk_old[i][0]
cos_after = result_by_rope_ntk_old[len(seq_lengths) - i - 1][0]
sin_before = result_by_rope_ntk_old[i][1]
sin_after = result_by_rope_ntk_old[len(seq_lengths) - i - 1][1]
print(f"length {seq_lengths[i]} {seq_lengths[len(seq_lengths) - i - 1]}")
print(f"\tcos is equal: {torch.equal(cos_before, cos_after)}")
print(f"\tsin is equal: {torch.equal(sin_before, sin_after)}")
seq_lengths = [random.randint(1, 32768) for _ in range(10000)]
start = time.time()
for seq_length in seq_lengths:
rope_ntk_new(x, seq_length)
end = time.time()
print(f"{end - start:.3f}s total, {(end - start) / 10000:.5f}s each")
start = time.time()
for seq_length in seq_lengths:
rope_ntk_old(x, seq_length)
end = time.time()
print(f"{end - start:.3f}s total, {(end - start) / 10000:.5f}s each")
```
Output:
```shell
================================
Result by new version
length 128 128
cos is equal: True
sin is equal: True
length 512 512
cos is equal: True
sin is equal: True
length 1024 1024
cos is equal: True
sin is equal: True
length 2048 2048
cos is equal: True
sin is equal: True
length 4096 4096
cos is equal: True
sin is equal: True
================================
Result by old version
length 128 128
cos is equal: False
sin is equal: False
length 512 512
cos is equal: False
sin is equal: False
length 1024 1024
cos is equal: False
sin is equal: False
length 2048 2048
cos is equal: False
sin is equal: False
length 4096 4096
cos is equal: False
sin is equal: False
7.573s total, 0.00076s each
0.101s total, 0.00001s each
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27033/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27033/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27033",
"html_url": "https://github.com/huggingface/transformers/pull/27033",
"diff_url": "https://github.com/huggingface/transformers/pull/27033.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27033.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27032 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27032/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27032/comments | https://api.github.com/repos/huggingface/transformers/issues/27032/events | https://github.com/huggingface/transformers/pull/27032 | 1,958,661,931 | PR_kwDOCUB6oc5dnAjD | 27,032 | Added cache_block_outputs option to enable GPTQ for non-regular models | {
"login": "AlexKoff88",
"id": 25342812,
"node_id": "MDQ6VXNlcjI1MzQyODEy",
"avatar_url": "https://avatars.githubusercontent.com/u/25342812?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AlexKoff88",
"html_url": "https://github.com/AlexKoff88",
"followers_url": "https://api.github.com/users/AlexKoff88/followers",
"following_url": "https://api.github.com/users/AlexKoff88/following{/other_user}",
"gists_url": "https://api.github.com/users/AlexKoff88/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AlexKoff88/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AlexKoff88/subscriptions",
"organizations_url": "https://api.github.com/users/AlexKoff88/orgs",
"repos_url": "https://api.github.com/users/AlexKoff88/repos",
"events_url": "https://api.github.com/users/AlexKoff88/events{/privacy}",
"received_events_url": "https://api.github.com/users/AlexKoff88/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@AlexKoff88 please run `make style` to fix the tests. ",
"> @AlexKoff88 please run `make style` to fix the tests.\r\n\r\nFixed",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27032). All of your documentation changes will be reflected on that endpoint."
] | 1,698 | 1,698 | 1,698 | CONTRIBUTOR | null | cache_block_outputs enables the collection of the block output to speed up GPTQ process. However, it does not work for some models such as ChatGLM where the LayerNorm is the first layer in the block.
Just compare:
OPT structure:
model.decoder.layers.0.self_attn
model.decoder.layers.0.self_attn.k_proj
model.decoder.layers.0.self_attn.v_proj
model.decoder.layers.0.self_attn.q_proj
model.decoder.layers.0.self_attn.out_proj
model.decoder.layers.0.activation_fn
model.decoder.layers.0.self_attn_layer_norm
model.decoder.layers.0.fc1
model.decoder.layers.0.fc2
model.decoder.layers.0.final_layer_norm
ChatGLM structure:
transformer.encoder.layers.0
transformer.encoder.layers.0.input_layernorm
transformer.encoder.layers.0.self_attention
transformer.encoder.layers.0.self_attention.query_key_value
transformer.encoder.layers.0.self_attention.core_attention
transformer.encoder.layers.0.self_attention.core_attention.attention_dropout
transformer.encoder.layers.0.self_attention.dense
transformer.encoder.layers.0.post_attention_layernorm
transformer.encoder.layers.0.mlp
transformer.encoder.layers.0.mlp.dense_h_to_4h
transformer.encoder.layers.0.mlp.dense_4h_to_h
The solution is to disable self-attention block output caching and collect the quantized block's inputs starting from the beginning of the model. It slows down the optimization a bit but works more stably.
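Assuming the option is exposed on `GPTQConfig` as this PR describes (the model id and the remaining arguments below are illustrative), usage could look like:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

model_id = "THUDM/chatglm3-6b"  # illustrative; any ChatGLM-style checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

# cache_block_outputs=False: collect each block's quantization inputs from the start of the
# model instead of reusing the previous block's cached outputs
gptq_config = GPTQConfig(bits=4, dataset="c4", tokenizer=tokenizer, cache_block_outputs=False)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", quantization_config=gptq_config, trust_remote_code=True
)
```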
Related PR to Optimum: https://github.com/huggingface/optimum/pull/1479 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27032/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27032/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27032",
"html_url": "https://github.com/huggingface/transformers/pull/27032",
"diff_url": "https://github.com/huggingface/transformers/pull/27032.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27032.patch",
"merged_at": 1698849439000
} |
https://api.github.com/repos/huggingface/transformers/issues/27031 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27031/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27031/comments | https://api.github.com/repos/huggingface/transformers/issues/27031/events | https://github.com/huggingface/transformers/issues/27031 | 1,958,413,037 | I_kwDOCUB6oc50uwLt | 27,031 | Microsoft's GLIP Grounding Language Image Pretraining | {
"login": "ethansmith2000",
"id": 98723285,
"node_id": "U_kgDOBeJl1Q",
"avatar_url": "https://avatars.githubusercontent.com/u/98723285?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ethansmith2000",
"html_url": "https://github.com/ethansmith2000",
"followers_url": "https://api.github.com/users/ethansmith2000/followers",
"following_url": "https://api.github.com/users/ethansmith2000/following{/other_user}",
"gists_url": "https://api.github.com/users/ethansmith2000/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ethansmith2000/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ethansmith2000/subscriptions",
"organizations_url": "https://api.github.com/users/ethansmith2000/orgs",
"repos_url": "https://api.github.com/users/ethansmith2000/repos",
"events_url": "https://api.github.com/users/ethansmith2000/events{/privacy}",
"received_events_url": "https://api.github.com/users/ethansmith2000/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | open | false | null | [] | [
"@ArthurZucker or @ethansmith2000, is anyone working on this?",
"cc @amyeroberts and @rafaelpadilla WDYT",
"Recently we have added Kosmos2 that also perform object detection with visual grounding.\r\n\r\nIMO, having more models tackling the same problem is super valuable for the community. As GLIPv2 paper has ~110 citations (google scholars) and the repo +1.6k starts, I think it is relevant :) \r\n\r\nAdded: If we decide to have this model, there're 2 versions: GLIP and GLIPv2. I think the referenced repo implements to GLIPv2, right? So, if that's the case, maybe we should call it GLIPv2 (?)",
"> Recently we have added Kosmos2 that also perform object detection with visual grounding.\r\n> \r\n> IMO, having more models tackling the same problem is super valuable for the community. As GLIPv2 paper has ~110 citations (google scholars) and the repo +1.6k starts, I think it is relevant :)\r\n> \r\n> Added: If we decide to have this model, there're 2 versions: GLIP and GLIPv2. I think the referenced repo implements to GLIPv2, right? So, if that's the case, maybe we should call it GLIPv2 (?)\r\n\r\nYeap, the referenced repo implements `GLIPv2` and I agree on calling it `GLIPv2`. I can start working on this one since the other model I was working (`GroundingDINO`) will likely require little modification to be merged. \r\n\r\nI have a few other models in mind that I want to add to transformers most open-set object detection or segmentation ones. Maybe instead of only having an object detection benchmark we could create an open-set benchmark as well as more and more of these models are added, what do you think?",
"@EduardoPach Great! You're very welcome to tackle adding GLIP (v2). I can see GroundingDINO is still waiting approval but happy if you don't mind handling to two PRs. \r\n\r\nIf you have other models in mind please do open new-model issues for them! This helps us keep track of what's being added and what's popular in the community. \r\n\r\nWe can certainly expand to including other benchmarks once we have several models which tackle the same task :) ",
"Hey @EduardoPach,\r\nI think it a good idea to benchmark all vision models, so the community can easily compare them and pick the best model for their needs. \r\nWe have a [leaderboard](https://huggingface.co/spaces/hf-vision/object_detection_leaderboard) to evaluate object detection models. We would like to extend this for other tasks, like segmentation, image classification, etc.\r\n",
"> Hey @EduardoPach,\n> \n> I think it a good idea to benchmark all vision models, so the community can easily compare them and pick the best model for their needs. \n> \n> We have a [leaderboard](https://huggingface.co/spaces/hf-vision/object_detection_leaderboard) to evaluate object detection models. We would like to extend this for other tasks, like segmentation, image classification, etc.\n> \n> \n\nI knew the object detection. Not sure though if it would be the same thing for open-set object detection. How can I help out with the other benchmarks? Not sure as well if here is the best place to discuss this haha, maybe discord?",
"@EduardoPach What I'd suggest is opening an issue so that people can easily find and track any discussions there. Ideally one for each task. If there's already a collection e.g. >= 5 checkpoints that you think can be evaluated on a certain task, then I think we can start discussing what the leaderboard would look like "
] | 1,698 | 1,699 | null | NONE | null | ### Model description
Combines best practices of CLIP and object detectors.
Allows for localization and grounding of text and image content.
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
github https://github.com/microsoft/GLIP
weights on hf https://huggingface.co/GLIPModel/GLIP/tree/main
colab demo https://colab.research.google.com/drive/12x7v-_miN7-SRiziK3Cx4ffJzstBJNqb?usp=sharing | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27031/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27031/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27029 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27029/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27029/comments | https://api.github.com/repos/huggingface/transformers/issues/27029/events | https://github.com/huggingface/transformers/pull/27029 | 1,957,942,118 | PR_kwDOCUB6oc5dkmAu | 27,029 | [docstring] Fix docstring for ErnieConfig, ErnieMConfig | {
"login": "Sparty",
"id": 3923604,
"node_id": "MDQ6VXNlcjM5MjM2MDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/3923604?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Sparty",
"html_url": "https://github.com/Sparty",
"followers_url": "https://api.github.com/users/Sparty/followers",
"following_url": "https://api.github.com/users/Sparty/following{/other_user}",
"gists_url": "https://api.github.com/users/Sparty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Sparty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sparty/subscriptions",
"organizations_url": "https://api.github.com/users/Sparty/orgs",
"repos_url": "https://api.github.com/users/Sparty/repos",
"events_url": "https://api.github.com/users/Sparty/events{/privacy}",
"received_events_url": "https://api.github.com/users/Sparty/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Feel free to ping @ydshieh when this is ready for a review! ",
"@Sparty Once ready for a review, could you remove the Draft mode. Thanks",
"I removed the draft mode. Thank you @ydshieh and @ArthurZucker ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27029). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"I'd want to merge, but we need this to be rebased @Sparty if you can run something like `git pull upstream main` and push 🤗 ",
"I have rebased. @ydshieh @ArthurZucker Could you merge this?",
"Thank you 🤗 "
] | 1,698 | 1,704 | 1,704 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes https://github.com/huggingface/transformers/issues/26638
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
--> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27029/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27029/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27029",
"html_url": "https://github.com/huggingface/transformers/pull/27029",
"diff_url": "https://github.com/huggingface/transformers/pull/27029.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27029.patch",
"merged_at": 1704907239000
} |
https://api.github.com/repos/huggingface/transformers/issues/27028 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27028/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27028/comments | https://api.github.com/repos/huggingface/transformers/issues/27028/events | https://github.com/huggingface/transformers/pull/27028 | 1,957,817,264 | PR_kwDOCUB6oc5dkKja | 27,028 | Fix little typo | {
"login": "mertyyanik",
"id": 32648818,
"node_id": "MDQ6VXNlcjMyNjQ4ODE4",
"avatar_url": "https://avatars.githubusercontent.com/u/32648818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mertyyanik",
"html_url": "https://github.com/mertyyanik",
"followers_url": "https://api.github.com/users/mertyyanik/followers",
"following_url": "https://api.github.com/users/mertyyanik/following{/other_user}",
"gists_url": "https://api.github.com/users/mertyyanik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mertyyanik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mertyyanik/subscriptions",
"organizations_url": "https://api.github.com/users/mertyyanik/orgs",
"repos_url": "https://api.github.com/users/mertyyanik/repos",
"events_url": "https://api.github.com/users/mertyyanik/events{/privacy}",
"received_events_url": "https://api.github.com/users/mertyyanik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27028). All of your documentation changes will be reflected on that endpoint."
] | 1,698 | 1,698 | 1,698 | CONTRIBUTOR | null | Fixed little typo in Readme.md | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27028/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27028/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27028",
"html_url": "https://github.com/huggingface/transformers/pull/27028",
"diff_url": "https://github.com/huggingface/transformers/pull/27028.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27028.patch",
"merged_at": 1698100603000
} |
https://api.github.com/repos/huggingface/transformers/issues/27027 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27027/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27027/comments | https://api.github.com/repos/huggingface/transformers/issues/27027/events | https://github.com/huggingface/transformers/issues/27027 | 1,957,782,554 | I_kwDOCUB6oc50sWQa | 27,027 | Count of tokens seen during training in Trainer | {
"login": "jpgard",
"id": 7265452,
"node_id": "MDQ6VXNlcjcyNjU0NTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/7265452?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jpgard",
"html_url": "https://github.com/jpgard",
"followers_url": "https://api.github.com/users/jpgard/followers",
"following_url": "https://api.github.com/users/jpgard/following{/other_user}",
"gists_url": "https://api.github.com/users/jpgard/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jpgard/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jpgard/subscriptions",
"organizations_url": "https://api.github.com/users/jpgard/orgs",
"repos_url": "https://api.github.com/users/jpgard/repos",
"events_url": "https://api.github.com/users/jpgard/events{/privacy}",
"received_events_url": "https://api.github.com/users/jpgard/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | closed | false | {
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
] | [
"+1 \r\nI think we need this feature ",
"cc @muellerzr seems nice if we can make it efficient! ",
"Is the `tokens_per_second` we already have as part of https://github.com/huggingface/transformers/pull/25858 enough? Otherwise we can definitely add it :) ",
"Yeah, tokens/sec doesn't cut it for many use cases (although it is still very useful!!) -- similar to how tracking steps/sec doesn't obviate the need for a global step count.\r\n\r\nIf you can add it that would be amazing, I am sure this would be a useful feature to almost anyone training a language model. And I think there are some subtleties to how to make it work right in a distributed setting that you would probably be much better at handling....",
"agree tokens/sec/gpu is useful, but it fails to track `pad` tokens and if we were to do `SFTTrainer` with packing set to False, this number can be way off. So, we need a feature that tracks actual tokens seen.",
"thanks @muellerzr !!"
] | 1,698 | 1,700 | 1,699 | NONE | null | ### Feature request
The `Trainer` API should track and log the number of tokens seen during training.
While it sometimes could (maybe?) be possible to back out the number of tokens seen from the FLOS, or by iterating over the whole dataset, it would make a lot of sense for the Trainer API to track the number of tokens seen (and it shouldn't be necessary to completely iterate over a model's training loop just to compute the count of tokens, which is the only current implementation of any token-related metric in Trainer, [`Trainer.num_tokens()`](https://github.com/huggingface/transformers/blob/acc394c4f5e1283c19783581790b3dc3105a3697/src/transformers/trainer.py#L1180)).
This can't currently be implemented in a CallBack, because callbacks don't have access to the training data (only the trainer state).
### Motivation
Number of tokens seen is an essential metric tracked in nearly every LLM training run. It is widely considered one of the fundamental drivers of model quality (the number of tokens seen during training is reported for nearly every major LLM release). It seems that any language model developer using Hugging Face would like to know this metric for their training runs -- it may be even more important and useful than the FLOS, and perhaps as important as the number of gradient steps.
In any case, it's an extremely useful number to have, and it must be tracked during training as the model consumes examples.
### Your contribution
I'm willing to contribute this but would like some guidance on the overall design first.
In particular, here's what I think a reasonable implementation would include:
- Add a `global_tokens_seen` or similar to the `TrainerState`. This would add only a single integer value to the `TrainerState`.
- Increment this during `Trainer._inner_training_loop()`
- Probably add this information to the logging outputs
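As a rough illustration of the counting part of this proposal, here is a hedged interim sketch that approximates the count today by subclassing `Trainer`. The attention-mask heuristic, the attribute name `tokens_seen`, and the absence of cross-rank aggregation are assumptions for the example, not a proposed final design:

```python
# Hedged interim sketch: count non-padding tokens per training step by subclassing Trainer.
# This is a per-process count only; a real implementation would aggregate across ranks
# and live in TrainerState (e.g. as a `global_tokens_seen` field).
from transformers import Trainer


class TokenCountingTrainer(Trainer):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.tokens_seen = 0  # hypothetical counter, not an existing Trainer attribute

    def training_step(self, model, inputs):
        if "attention_mask" in inputs:
            # count only real (non-padding) tokens
            self.tokens_seen += int(inputs["attention_mask"].sum().item())
        elif "input_ids" in inputs:
            # fall back to counting every position, padding included
            self.tokens_seen += int(inputs["input_ids"].numel())
        return super().training_step(model, inputs)
```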
What do the folks at HF think about that? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27027/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27027/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27026 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27026/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27026/comments | https://api.github.com/repos/huggingface/transformers/issues/27026/events | https://github.com/huggingface/transformers/pull/27026 | 1,957,726,911 | PR_kwDOCUB6oc5dj2lH | 27,026 | 🌐 [i18n-ZH] Translate create_a_model.md into Chinese | {
"login": "yyLeaves",
"id": 76979429,
"node_id": "MDQ6VXNlcjc2OTc5NDI5",
"avatar_url": "https://avatars.githubusercontent.com/u/76979429?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yyLeaves",
"html_url": "https://github.com/yyLeaves",
"followers_url": "https://api.github.com/users/yyLeaves/followers",
"following_url": "https://api.github.com/users/yyLeaves/following{/other_user}",
"gists_url": "https://api.github.com/users/yyLeaves/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yyLeaves/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yyLeaves/subscriptions",
"organizations_url": "https://api.github.com/users/yyLeaves/orgs",
"repos_url": "https://api.github.com/users/yyLeaves/repos",
"events_url": "https://api.github.com/users/yyLeaves/events{/privacy}",
"received_events_url": "https://api.github.com/users/yyLeaves/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27026). All of your documentation changes will be reflected on that endpoint."
] | 1,698 | 1,698 | 1,698 | CONTRIBUTOR | null | # What does this PR do?
Translate create_a_model.md into Chinese
part of #20095
## Who can review?
Documentation: @stevhliu
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27026/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27026/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27026",
"html_url": "https://github.com/huggingface/transformers/pull/27026",
"diff_url": "https://github.com/huggingface/transformers/pull/27026.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27026.patch",
"merged_at": 1698101082000
} |
https://api.github.com/repos/huggingface/transformers/issues/27025 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27025/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27025/comments | https://api.github.com/repos/huggingface/transformers/issues/27025/events | https://github.com/huggingface/transformers/pull/27025 | 1,957,460,984 | PR_kwDOCUB6oc5di9Gx | 27,025 | Have seq2seq just use gather | {
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<s>@ArthurZucker any thoughts on what could fix this? I saw negligible time differences between main and my branch when running locally on CPU (~72s)</s>\r\n\r\nLooks like it's all passing now!",
"@amyeroberts:\r\n\r\n> Am I right in understanding this should only be applied to cases when evaluating generations from seq2seq models and the generation config specifies num_return_sequences > 1?\r\nCorrect, otherwise we will drop samples. Technically we can avoid this entirely I think by just using `gather`, and the test seems to show that will indeed work fine. As a result, I'll simplify this to just use `.gather`()\r\n\r\n> What happens and what should happen if I call evaluate with a generation config with num_return_sequences > 1 and then call a second time with num_return_sequences==1?\r\n\r\nPer your recommendation of the test, I tried this, and it worked as it should (because `gather_for_metrics` doesn't do anything for a bs of 1 really).\r\n\r\n",
"@amyeroberts:\r\n\r\n> Why does using gather_metrics drop samples?\r\n\r\nIt's some logic in Accelerate, `gather_for_metrics` will drop samples if we think they've been duplicated (such as filling in the last batch of data if data has been duplicated to make DDP efficient), however here is an edge case where just using `gather` is better\r\n\r\n> Is this only true for Seq2SeqTrainer. If not, why not just use gather everywhere?\r\n\r\nYes, just Seq2Seq. Otherwise `gather_for_metrics` should be used always"
] | 1,698 | 1,699 | 1,699 | CONTRIBUTOR | null | # What does this PR do?
In the case of using `Seq2Seq`, we don't want `gather_for_metrics` to use its magic and we just want to do `.gather()` (since otherwise samples will be dropped, as accelerate removes "duplicates" based on the batch size, which leads to a bug).
This PR sets a new `gather_function` in the `Trainer` which by default is `gather_for_metrics`, but if a particular `Trainer` needs to modify it (such as `Seq2SeqTrainer`), then it can be specified.
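To make the shape of the change concrete, a minimal sketch of the pattern is below. The tiny classes are invented stand-ins for illustration; only the `gather_function` idea mirrors this PR, and the exact implementation in `Trainer` may differ:

```python
# Minimal sketch of a swappable gather function, using accelerate's real gather utilities.
# TinyTrainer / TinySeq2SeqTrainer are illustrative stand-ins, not the actual Trainer classes.
from accelerate import Accelerator


class TinyTrainer:
    def __init__(self, accelerator: Accelerator):
        self.accelerator = accelerator
        # default: drop padded "duplicate" samples so metric counts match the dataset size
        self.gather_function = accelerator.gather_for_metrics


class TinySeq2SeqTrainer(TinyTrainer):
    def __init__(self, accelerator: Accelerator):
        super().__init__(accelerator)
        # for generation with num_return_sequences > 1, plain gather keeps every sequence
        self.gather_function = accelerator.gather
```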
Fixes https://github.com/huggingface/transformers/issues/25231
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker @younesbelkada
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27025/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27025/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27025",
"html_url": "https://github.com/huggingface/transformers/pull/27025",
"diff_url": "https://github.com/huggingface/transformers/pull/27025.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27025.patch",
"merged_at": 1699991685000
} |
https://api.github.com/repos/huggingface/transformers/issues/27024 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27024/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27024/comments | https://api.github.com/repos/huggingface/transformers/issues/27024/events | https://github.com/huggingface/transformers/pull/27024 | 1,957,354,636 | PR_kwDOCUB6oc5dil_p | 27,024 | add info on TRL docs | {
"login": "lvwerra",
"id": 8264887,
"node_id": "MDQ6VXNlcjgyNjQ4ODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8264887?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lvwerra",
"html_url": "https://github.com/lvwerra",
"followers_url": "https://api.github.com/users/lvwerra/followers",
"following_url": "https://api.github.com/users/lvwerra/following{/other_user}",
"gists_url": "https://api.github.com/users/lvwerra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lvwerra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lvwerra/subscriptions",
"organizations_url": "https://api.github.com/users/lvwerra/orgs",
"repos_url": "https://api.github.com/users/lvwerra/repos",
"events_url": "https://api.github.com/users/lvwerra/events{/privacy}",
"received_events_url": "https://api.github.com/users/lvwerra/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Feel free to merge once green!"
] | 1,698 | 1,698 | 1,698 | MEMBER | null | # What does this PR do?
As discussed offline, this adds a link to the `SFTTrainer` in `trl` in the `Trainer` section of the docs.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27024/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27024/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27024",
"html_url": "https://github.com/huggingface/transformers/pull/27024",
"diff_url": "https://github.com/huggingface/transformers/pull/27024.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27024.patch",
"merged_at": 1698152161000
} |
https://api.github.com/repos/huggingface/transformers/issues/27023 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27023/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27023/comments | https://api.github.com/repos/huggingface/transformers/issues/27023/events | https://github.com/huggingface/transformers/issues/27023 | 1,957,263,727 | I_kwDOCUB6oc50qXlv | 27,023 | transformers Trainer has no attribute 'deepspeed_plugin' | {
"login": "shibing624",
"id": 10249622,
"node_id": "MDQ6VXNlcjEwMjQ5NjIy",
"avatar_url": "https://avatars.githubusercontent.com/u/10249622?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shibing624",
"html_url": "https://github.com/shibing624",
"followers_url": "https://api.github.com/users/shibing624/followers",
"following_url": "https://api.github.com/users/shibing624/following{/other_user}",
"gists_url": "https://api.github.com/users/shibing624/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shibing624/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shibing624/subscriptions",
"organizations_url": "https://api.github.com/users/shibing624/orgs",
"repos_url": "https://api.github.com/users/shibing624/repos",
"events_url": "https://api.github.com/users/shibing624/events{/privacy}",
"received_events_url": "https://api.github.com/users/shibing624/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@shibing624 we need more info. What is your `accelerate env`? How are you running the script exactly? Please provide us with these critical details for us to reproduce",
"accelerate env:\r\n```\r\n- `Accelerate` version: 0.23.0\r\n- Platform: Linux-5.4.119-1-tlinux4-0009.3-x86_64-with-glibc2.17\r\n- Python version: 3.10.11\r\n- Numpy version: 1.24.4\r\n- PyTorch version (GPU?): 2.0.1+cu117 (True)\r\n- PyTorch XPU available: False\r\n- PyTorch NPU available: False\r\n- System RAM: 1006.96 GB\r\n- GPU type: A100-SXM4-40GB\r\n- `Accelerate` default config:\r\n - compute_environment: LOCAL_MACHINE\r\n - distributed_type: MULTI_GPU\r\n - mixed_precision: fp16\r\n - use_cpu: False\r\n - debug: False\r\n - num_processes: 8\r\n - machine_rank: 0\r\n - num_machines: 1\r\n - gpu_ids: all\r\n - rdzv_backend: static\r\n - same_network: True\r\n - main_training_function: main\r\n - downcast_bf16: no\r\n - tpu_use_cluster: False\r\n - tpu_use_sudo: False\r\n - tpu_env: []\r\n\r\n```\r\n\r\nds_report:\r\n```\r\nDeepSpeed C++/CUDA extension op report\r\n--------------------------------------------------\r\nNOTE: Ops not installed will be just-in-time (JIT) compiled at\r\n runtime if needed. Op compatibility means that your system\r\n meet the required dependencies to JIT install the op.\r\n--------------------------------------------------\r\nJIT compiled ops requires ninja\r\nninja .................. [OKAY]\r\n--------------------------------------------------\r\nop name ................ installed .. compatible\r\n--------------------------------------------------\r\n [WARNING] async_io requires the dev libaio .so object and headers but these were not found.\r\n [WARNING] async_io: please install the libaio-dev package with apt\r\n [WARNING] If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.\r\nasync_io ............... [NO] ....... [NO]\r\nfused_adam ............. [NO] ....... [OKAY]\r\ncpu_adam ............... [NO] ....... [OKAY]\r\ncpu_adagrad ............ [NO] ....... [OKAY]\r\ncpu_lion ............... [NO] ....... [OKAY]\r\n [WARNING] Please specify the CUTLASS repo directory as environment variable $CUTLASS_PATH\r\nevoformer_attn ......... [NO] ....... [NO]\r\nfused_lamb ............. [NO] ....... [OKAY]\r\nfused_lion ............. [NO] ....... [OKAY]\r\nquantizer .............. [NO] ....... [OKAY]\r\nrandom_ltd ............. [NO] ....... [OKAY]\r\n [WARNING] sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.0\r\n [WARNING] using untested triton version (2.0.0), only 1.0.0 is known to be compatible\r\nsparse_attn ............ [NO] ....... [NO]\r\nspatial_inference ...... [NO] ....... [OKAY]\r\ntransformer ............ [NO] ....... [OKAY]\r\nstochastic_transformer . [NO] ....... [OKAY]\r\ntransformer_inference .. [NO] ....... [OKAY]\r\n--------------------------------------------------\r\nDeepSpeed general environment info:\r\ntorch install path ............... ['/apdcephfs_teg_2/share_1367250/flemingxu/miniconda3/envs/py3.10/lib/python3.10/site-packages/torch']\r\ntorch version .................... 2.0.1+cu117\r\ndeepspeed install path ........... ['/apdcephfs_teg_2/share_1367250/flemingxu/miniconda3/envs/py3.10/lib/python3.10/site-packages/deepspeed']\r\ndeepspeed info ................... 0.11.1, unknown, unknown\r\ntorch cuda version ............... 11.7\r\ntorch hip version ................ None\r\nnvcc version ..................... 11.7\r\ndeepspeed wheel compiled w. ...... torch 2.0, cuda 11.7\r\nshared memory (/dev/shm) size .... 
503.48 GB\r\n```\r\n\r\nmy training arguments is :\r\n```\r\n@dataclass\r\nclass PeftArguments(TrainingArguments):\r\n use_peft: bool = field(default=True, metadata={\"help\": \"Whether to use peft\"})\r\n target_modules: Optional[str] = field(default=\"all\")\r\n lora_rank: Optional[int] = field(default=8)\r\n lora_dropout: Optional[float] = field(default=0.05)\r\n lora_alpha: Optional[float] = field(default=32.0)\r\n modules_to_save: Optional[str] = field(default=None)\r\n peft_path: Optional[str] = field(default=None, metadata={\"help\": \"The path to the peft model\"})\r\n qlora: bool = field(default=False, metadata={\"help\": \"Whether to use qlora\"})\r\n load_in_kbits: Optional[int] = field(default=None, metadata={\"help\": \"Kbits to train the model, value is 4, 8\"})\r\n model_max_length: int = field(\r\n default=512,\r\n metadata={\"help\": \"Maximum sequence length. suggest value is 8192 * 4, 8192 * 2, 8192, 4096, 2048, 1024, 512\"}\r\n )\r\n```",
"i am try to fix it, and i add deepspeed_plugin and debug to PeftArguments, the bug fixed.\r\n\r\n```\r\n deepspeed_plugin: Optional[str] = field(default=None)\r\n debug: Optional[str] = field(\r\n default=\"\",\r\n metadata={\r\n \"help\": (\r\n \"Whether or not to enable debug mode. default is '', \"\r\n \"`underflow_overflow` (Detect underflow and overflow in activations and weights), \"\r\n )\r\n },\r\n )\r\n```\r\n\r\nmy repo fix the bug: https://github.com/shibing624/MedicalGPT/commit/b08be905d7f2b98a3e62c57bb6e8b345c0611805 \r\n\r\nreason:\r\nthe bug is here: https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L1174C30-L1174C30\r\n\r\ni do not use deepspeed, bug it run here. i do not know how to fix the original transformers trainer.py ",
"The true solution is we should set this to `None` by default in `TrainingArguments`, there's a method for making it a hidden attribute, I'll look into this unless you want to @shibing624 "
] | 1,698 | 1,698 | 1,698 | NONE | null | ### System Info
```
Traceback (most recent call last):
File "/apdcephfs_teg_2/share_1367250/flemingxu/MedicalGPT/supervised_finetuning.py", line 1307, in <module>
main()
File "/apdcephfs_teg_2/share_1367250/flemingxu/MedicalGPT/supervised_finetuning.py", line 1248, in main
trainer = SavePeftModelTrainer(
File "/apdcephfs_teg_2/share_1367250/flemingxu/miniconda3/envs/py3.10/lib/python3.10/site-packages/transformers/trainer.py", line 335, in __init__
self.create_accelerator_and_postprocess()
File "/apdcephfs_teg_2/share_1367250/flemingxu/miniconda3/envs/py3.10/lib/python3.10/site-packages/transformers/trainer.py", line 3853, in create_accelerator_and_postprocess
deepspeed_plugin=self.args.deepspeed_plugin,
AttributeError: 'PeftArguments' object has no attribute 'deepspeed_plugin'
(py3.10)
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
transformers==4.35.0.dev0 , run https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_clm.py with llama2 model
### Expected behavior
success | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27023/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27023/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27022 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27022/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27022/comments | https://api.github.com/repos/huggingface/transformers/issues/27022/events | https://github.com/huggingface/transformers/pull/27022 | 1,957,224,838 | PR_kwDOCUB6oc5diJuN | 27,022 | Save TB logs as part of push_to_hub | {
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks @ArthurZucker, done (and added a test!)",
"Thanks for the ping @LysandreJik! I'm currently reviewing it. Most probably good but I want to make a few tests first with the blob patterns (I found them a bit un-intuitive TBH). Will update soon :)",
"@muellerzr Unfortunately, `ignore_patterns=[\"_*\", \"[!runs]**/*\"],` will not work as expected. I've made some tests with [fnmatch](https://docs.python.org/3/library/fnmatch.html) and TIL that even tough it claims to provide support for _\"Unix shell-style wildcards\"_, I feel it's not the case.\r\n\r\nEspecially:\r\n- `*` matches everything no matter if there is a `/` in it. Both `README.md` and `subfolder/README.md` are matched.\r\n- `\"[!runs]*\"` matches anything that starts with a characters that is not a \"r\", a \"u\", a \"n\" or a \"s\". Meaning `runs/foobar`, `nqwerty/foobar`, `sqwerty/foobar/...`, etc. are all matching\r\n- => So `\"[!runs]**/*\"` matches any folder starting with r/u/n/s which is not the expected result\r\n\r\n\r\n\r\n> Note that the filename separator ('/' on Unix) is not special to this module. \r\n\r\n---\r\n\r\nI've tried to define a good set of `allow_pattern` + `ignore_pattern` that would achieve what you want to do but unfortunately I don't think it's possible. I feel bad about it because I introduce this [`fnmatch`](https://docs.python.org/3/library/fnmatch.html) module thinking it was the good one to use but since it doesn't implement the same specs as the Unix pattern it kinda useless. I expected that the pattern would follow the same rules as described when doing `man 7 glob`.\r\n\r\n\r\n**EDIT:** just realized that `ignore_patterns = [\"_*\", \"[!r]*/*\", \"?[!u]*/*\", \"??[!n]*/*\", \"???[!s]*/*\"]` would work but it feels so stupid...",
"(here is my test script if you want to play with it. At first I thought it would be better to include any `*.tfevents.*` file, no matter the folder hence the tests. But doesn't seem possible to do anyway\r\n\r\n```py\r\nfrom fnmatch import fnmatch\r\n\r\npaths = [\r\n \"readme.md\",\r\n \"subfolder/readme.md\",\r\n \"folder/readme.md\",\r\n \"_private.json\",\r\n \"subfolder/_private.json\",\r\n \"foo.tfevents.bar\",\r\n \"subfolder/foo.tfevents.bar\",\r\n \"runs/subfolder/foo.tfevents.bar\",\r\n \"not/subfolder/foo.tfevents.bar\",\r\n]\r\n\r\n# allow_patterns = [\"*.tfevents.*\", \"*[!\\]*\"]\r\nallow_patterns = [\"*\"]\r\n\r\nignore_patterns = [\"_*\", \"[!r]*/*\", \"?[!u]*/*\", \"??[!n]*/*\", \"???[!s]*/*\"]\r\n\r\n# pattern = '**/*[!tfevents]*'\r\n# pattern = '[!runs]**/*'\r\n\r\ndef is_included(path):\r\n if any(fnmatch(path, r) for r in ignore_patterns):\r\n return False\r\n if any(fnmatch(path, r) for r in allow_patterns):\r\n return True\r\n return False\r\n\r\nfor path in paths:\r\n print(f\"{path:<40}\", is_included(path))\r\n```\r\n\r\n)",
"Actually looking back at the implementation before https://github.com/huggingface/transformers/pull/25095, do we really need to exclude every folder -so the current `ignore_patterns=\"**/*\"- ? I don't see anywhere where `Repository` used to exclude some folders. If it was in a `.gitignore`, I would be keen to see it.\r\n\r\nI already thought about the fact that we should respect the `.gitignore` file (if any) when using `upload_folder`. It is not very straightforward but even a subset of the `.gitignore`'s spec would be good. (only thinking out loud here, let's fix the PR independently of that)",
"> we should respect the `.gitignore` file (if any) when using `upload_folder`\r\n\r\nOr maybe server-side support (in case the .gitignore is uploaded first, of course). I don't know if it's the case server-side already or not, to be honest."
] | 1,698 | 1,698 | 1,698 | CONTRIBUTOR | null | # What does this PR do?
This PR brings back the default capability of pushing tensorboard logs as part of `.push_to_hub()` by modifying the `glob` exclusion filter to specifically ignore intermediate checkpoints.
You can see an example here where logs were successfully uploaded: https://huggingface.co/muellerzr/sequence_classification/tree/main
Was removed in https://github.com/huggingface/transformers/pull/25095, likely because we didn't think about the fact that users would want to push their logs with tensorboard
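For context, the behaviour being restored is roughly the following. The patterns shown are illustrative assumptions (the PR tweaks Trainer's internal glob filter, whose exact patterns may differ), and the repo id and paths are placeholders:

```python
# Hedged sketch of the intended behaviour: upload the Trainer output directory,
# skipping intermediate checkpoint folders while keeping TensorBoard event files under runs/.
# The ignore patterns here are an assumption, not the exact filter used by Trainer.push_to_hub.
from huggingface_hub import upload_folder

upload_folder(
    repo_id="username/my-model",      # placeholder repo id
    folder_path="./outputs",          # the Trainer output_dir
    commit_message="Upload model and TensorBoard logs",
    ignore_patterns=["checkpoint-*/**", "_*"],  # drop intermediate checkpoints and temp files
)
```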
Fixes # (issue)
fixes https://github.com/huggingface/transformers/issues/26321
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker @LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27022/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27022/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27022",
"html_url": "https://github.com/huggingface/transformers/pull/27022",
"diff_url": "https://github.com/huggingface/transformers/pull/27022.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27022.patch",
"merged_at": 1698336799000
} |
https://api.github.com/repos/huggingface/transformers/issues/27021 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27021/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27021/comments | https://api.github.com/repos/huggingface/transformers/issues/27021/events | https://github.com/huggingface/transformers/pull/27021 | 1,957,163,540 | PR_kwDOCUB6oc5dh8TF | 27,021 | Add NMS utilities | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,698 | 1,701 | 1,701 | CONTRIBUTOR | null | # What does this PR do?
As the Mask R-CNN PR (#25348) is pretty big, it's split into smaller PRs.
This PR adds the utilities for NMS (non-maximum suppression), a well-known method in computer vision to remove near-duplicate bounding boxes. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27021/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27021/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27021",
"html_url": "https://github.com/huggingface/transformers/pull/27021",
"diff_url": "https://github.com/huggingface/transformers/pull/27021.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27021.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27020 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27020/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27020/comments | https://api.github.com/repos/huggingface/transformers/issues/27020/events | https://github.com/huggingface/transformers/pull/27020 | 1,957,132,714 | PR_kwDOCUB6oc5dh1eR | 27,020 | [`core`] Refactor of `gradient_checkpointing` | {
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Can you add a test to make sure setting and unsetting both work as expected (specifically for the fix we are implementing in TRL)",
"+1",
"Ran some training tests with PEFT + GC using this branch and everything seem to pass! Merging once the CI is green",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27020). All of your documentation changes will be reflected on that endpoint."
] | 1,698 | 1,698 | 1,698 | CONTRIBUTOR | null | # What does this PR do?
Alternative to https://github.com/huggingface/transformers/pull/26917
This way we make `set_gradient_checkpointing` more modular, as requested by some users - e.g. https://github.com/huggingface/transformers/issues/21381#issuecomment-1757690624
Fixes some issues with DDP such as: https://github.com/huggingface/trl/issues/835
Also removed GC support from `TFSwin` as in theory `gradient_checkpointing` is used only for PT models.
Also added a CI test for that.
For users that want to use `gradient_checkpointing` with `use_reentrant=False`:
```python
...
model.enable_gradient_checkpointing(gradient_checkpointing_kwargs={"use_reentrant": False})
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27020/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27020/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27020",
"html_url": "https://github.com/huggingface/transformers/pull/27020",
"diff_url": "https://github.com/huggingface/transformers/pull/27020.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27020.patch",
"merged_at": 1698228975000
} |
https://api.github.com/repos/huggingface/transformers/issues/27019 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27019/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27019/comments | https://api.github.com/repos/huggingface/transformers/issues/27019/events | https://github.com/huggingface/transformers/issues/27019 | 1,957,132,020 | I_kwDOCUB6oc50p3b0 | 27,019 | issue when training Falcon 7b with deepspeed due to cache | {
"login": "Epliz",
"id": 63452361,
"node_id": "MDQ6VXNlcjYzNDUyMzYx",
"avatar_url": "https://avatars.githubusercontent.com/u/63452361?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Epliz",
"html_url": "https://github.com/Epliz",
"followers_url": "https://api.github.com/users/Epliz/followers",
"following_url": "https://api.github.com/users/Epliz/following{/other_user}",
"gists_url": "https://api.github.com/users/Epliz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Epliz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Epliz/subscriptions",
"organizations_url": "https://api.github.com/users/Epliz/orgs",
"repos_url": "https://api.github.com/users/Epliz/repos",
"events_url": "https://api.github.com/users/Epliz/events{/privacy}",
"received_events_url": "https://api.github.com/users/Epliz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"never mind; looks like it has been fixed in main"
] | 1,698 | 1,698 | 1,698 | NONE | null | ### System Info
- `transformers` version: 4.34.1
- Platform: Linux-4.18.0-477.13.1.el8_8.x86_64-x86_64-with-glibc2.2.5
- Python version: 3.8.16
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.4.0
- Accelerate version: 0.23.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: 4x nvidia A10
- Using distributed or parallel set-up in script?: DeepSpeed
### Who can help?
@ArthurZucker , @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I am trying to fine-tune Falcon 7b with deepspeed Zero 3, cpu offloading and gradient checkpointing.
I get the following error:
```
File "<venv>/lib64/python3.8/site-packages/transformers/models/falcon/modeling_falcon.py", line 1279, in forward
transformer_outputs = self.transformer(
File "<venv>/lib64/python3.8/site-packages/torch/nn/modules/module.py", line 1538, in _call_impl
result = forward_call(*args, **kwargs)
File "<venv>/lib64/python3.8/site-packages/transformers/models/falcon/modeling_falcon.py", line 1189, in forward
presents = self._convert_cache_to_standard_format(presents, batch_size)
File "<venv>/lib64/python3.8/site-packages/transformers/models/falcon/modeling_falcon.py", line 954, in _convert_cache_to_standard_format
batch_size_times_num_heads, kv_length, head_dim = past_key_value[0][0].shape
IndexError: tuple index out of range
```
However, I see that use_cache should have been set to False according to
```
use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`...
```
But the Falcon modeling code at https://github.com/huggingface/transformers/blob/244a53e0f6a8d95d429559cfc49a07a4e85cc680/src/transformers/models/falcon/modeling_falcon.py#L1193 is written in a way that it will still enter the body of that `if` even though `use_cache` was disabled, because `use_cache` is only set to `False` for gradient checkpointing AFTER `presents` has already been set to `()`.
The solution is probably to set `use_cache` to `False` earlier if needed, e.g. directly after L1069 of that file.
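Until such a reordering lands, a user-side workaround is to switch the cache off explicitly before training. The sketch below is illustrative only: the checkpoint name is a placeholder and it assumes the training setup reads `use_cache` from the model config:

```python
# Hedged workaround sketch: disable the KV cache up front so the `presents` branch
# is never taken when training with gradient checkpointing.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-7b")  # placeholder checkpoint
model.config.use_cache = False          # forward() then never builds `presents`
model.gradient_checkpointing_enable()   # safe now that the cache path is skipped
```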
### Expected behavior
It should work. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27019/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27019/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27018 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27018/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27018/comments | https://api.github.com/repos/huggingface/transformers/issues/27018/events | https://github.com/huggingface/transformers/pull/27018 | 1,957,126,357 | PR_kwDOCUB6oc5dh0EB | 27,018 | [`SeamlessM4T`] fix copies with NLLB MoE int8 | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,698 | 1,698 | 1,698 | COLLABORATOR | null | # What does this PR do?
Fixes the main CI | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27018/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27018/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27018",
"html_url": "https://github.com/huggingface/transformers/pull/27018",
"diff_url": "https://github.com/huggingface/transformers/pull/27018.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27018.patch",
"merged_at": 1698067506000
} |
https://api.github.com/repos/huggingface/transformers/issues/27017 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27017/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27017/comments | https://api.github.com/repos/huggingface/transformers/issues/27017/events | https://github.com/huggingface/transformers/pull/27017 | 1,957,062,014 | PR_kwDOCUB6oc5dhl-0 | 27,017 | Add CED model for audio classification | {
"login": "jimbozhang",
"id": 1777456,
"node_id": "MDQ6VXNlcjE3Nzc0NTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1777456?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jimbozhang",
"html_url": "https://github.com/jimbozhang",
"followers_url": "https://api.github.com/users/jimbozhang/followers",
"following_url": "https://api.github.com/users/jimbozhang/following{/other_user}",
"gists_url": "https://api.github.com/users/jimbozhang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jimbozhang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jimbozhang/subscriptions",
"organizations_url": "https://api.github.com/users/jimbozhang/orgs",
"repos_url": "https://api.github.com/users/jimbozhang/repos",
"events_url": "https://api.github.com/users/jimbozhang/events{/privacy}",
"received_events_url": "https://api.github.com/users/jimbozhang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi there~ 👋\r\n\r\nI submitted this pull request a little while ago, and I was wondering if someone could take a look at it. I haven't received any feedback yet. I'm looking forward to hearing from you whenever you have a chance. 😊\r\n\r\nBTW, I'm not sure why `ExamplesTestsNoTrainer.test_run_image_classification_no_trainer` [failed in CircleCI test](https://app.circleci.com/pipelines/github/huggingface/transformers/77686/workflows/9d6c662b-2d36-433a-9f92-2f189388b7dc/jobs/989529/parallel-runs/0/steps/0-116)."
] | 1,698 | 1,700 | 1,700 | NONE | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Add a new model for audio classification: CED.
CED models are simple ViT-Transformer-based models, proposed in [CED: Consistent ensemble distillation for audio tagging](https://arxiv.org/abs/2308.11957). Notable differences from other available models (such as [AST](https://huggingface.co/docs/transformers/model_doc/audio-spectrogram-transformer)) include:
1. Simplification for finetuning: Batchnormalization of Mel-Spectrograms. During finetuning one does not need to first compute mean/variance over the dataset, which is common for AST.
1. Support for variable length inputs. Most other models use a static time-frequency position embedding, which hinders the model's generalization to segments shorter than 10s. Many previous transformers simply pad their input to 10s in order to avoid the performance impact, which in turn slows down training/inference drastically.
1. Training/Inference speedup: 64-dimensional mel-filterbanks and 16x16 patches without overlap, leading to 248 patches from a 10s spectrogram. In comparison, AST uses 128 mel-filterbanks with 16x16 (10x10 overlap) convolution, leading to 1212 patches during training/inference. CED-Tiny runs on a common CPU as fast as a comparable MobileNetV3.
1. Performance: CED-Mini with 10M parameters outperforms the majority of previous approaches (~80M).
Here is a [notebook](https://colab.research.google.com/drive/1MOdY7t3qSN6ZTDtYqrkp991nYH6KqLS-) that illustrates how to use CED to detect audio events.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Models:
- speech models: @sanchit-gandhi
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27017/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27017/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27017",
"html_url": "https://github.com/huggingface/transformers/pull/27017",
"diff_url": "https://github.com/huggingface/transformers/pull/27017.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27017.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27016 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27016/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27016/comments | https://api.github.com/repos/huggingface/transformers/issues/27016/events | https://github.com/huggingface/transformers/issues/27016 | 1,956,917,546 | I_kwDOCUB6oc50pDEq | 27,016 | CodeLLama generates very different outputs with flash_attention_2 | {
"login": "ikergarcia1996",
"id": 18737249,
"node_id": "MDQ6VXNlcjE4NzM3MjQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/18737249?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ikergarcia1996",
"html_url": "https://github.com/ikergarcia1996",
"followers_url": "https://api.github.com/users/ikergarcia1996/followers",
"following_url": "https://api.github.com/users/ikergarcia1996/following{/other_user}",
"gists_url": "https://api.github.com/users/ikergarcia1996/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ikergarcia1996/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ikergarcia1996/subscriptions",
"organizations_url": "https://api.github.com/users/ikergarcia1996/orgs",
"repos_url": "https://api.github.com/users/ikergarcia1996/repos",
"events_url": "https://api.github.com/users/ikergarcia1996/events{/privacy}",
"received_events_url": "https://api.github.com/users/ikergarcia1996/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @younesbelkada pretty sure we have logits tests and generation for Llama but not CodeLlama",
"@ArthurZucker @younesbelkada It seems that the issue is resolved after updating Flash Attention from version `2.0.8` to `2.3.2`. ",
"Very nice @ikergarcia1996 ! Indeed, we have a similar issue here: https://github.com/huggingface/transformers/issues/26697"
] | 1,698 | 1,698 | 1,698 | CONTRIBUTOR | null | ### System Info
- `transformers` version: 4.35.0.dev0
- Platform: Linux-4.18.0-477.10.1.el8_8.x86_64-x86_64-with-glibc2.28
- Python version: 3.9.7
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- Accelerate version: 0.23.0
- Accelerate config:
- compute_environment: LOCAL_MACHINE
- distributed_type: NO
- mixed_precision: bf16
- use_cpu: False
- debug: False
- num_processes: 1
- machine_rank: 0
- num_machines: 1
- rdzv_backend: static
- same_network: False
- main_training_function: main
- downcast_bf16: False
- tpu_use_cluster: False
- tpu_use_sudo: False
- PyTorch version (GPU?): 2.1.0+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes 1xNvidia A100
- Using distributed or parallel set-up in script?: No
### Who can help?
@muellerzr @pacman100 @ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
First, we load codellama 7B with and without Flash Attention. I use `torch_dtype=torch.bfloat16`
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, AutoConfig
import torch
model_weights_name_or_path = "codellama/CodeLlama-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_weights_name_or_path)
tokenizer.pad_token_id = tokenizer.unk_token_id
config = AutoConfig.from_pretrained(
model_weights_name_or_path,
trust_remote_code=False,
# pretraining_tp=1, Setting pretraining_tp=1 doesn't solve the issue
)
model = AutoModelForCausalLM.from_pretrained(
pretrained_model_name_or_path=model_weights_name_or_path,
device_map=None,
max_memory=None,
quantization_config=None,
torch_dtype=torch.bfloat16,
config=config,
trust_remote_code=True,
use_flash_attention_2=False
).to("cuda")
model_flash = AutoModelForCausalLM.from_pretrained(
pretrained_model_name_or_path=model_weights_name_or_path,
device_map=None,
max_memory=None,
quantization_config=None,
torch_dtype=torch.bfloat16,
config=config,
trust_remote_code=True,
use_flash_attention_2=True
).to("cuda")
model_flash.eval()
model.eval()
```
Prepare the input. I am using a NER example from: https://github.com/hitz-zentroa/GoLLIE/
```python
prompt = '''
# The following lines describe the task definition
@dataclass
class PrivateSpaceCompany(Entity):
"""Refers to private companies primarily focused on space exploration, transportation,
satellite launch, or space-based services. These are non-governmental entities that have
a commercial interest in space activities."""
span: str # Such as: "Blue origin", "Boeing", "Northrop Grumman", "Arianespace"
@dataclass
class PublicSpaceCompany(Entity):
"""Refers to governmental entities or agencies that are primarily focused on space
exploration, research, transportation, satellite launch, or other space-based services.
These entities are state-owned and operated and are generally funded through public funds.
"""
span: str # Such as "ESA", "ISRO", "CNSA"
@dataclass
class Planet(Entity):
"""Refers to celestial bodies that orbit a star. Planets are large enough
to have cleared their orbits of other debris and have a nearly round shape
due to their self-gravity."""
span: str # Such as: "Earth", "Jupiter", "Venus", "Mercury", "Saturn"
@dataclass
class Launcher(Entity):
"""Refers to a vehicle designed primarily to transport payloads from the Earth's
surface to space. Launchers can carry various payloads, including satellites,
crewed spacecraft, and cargo, into various orbits or even beyond Earth's orbit.
They are usually multi-stage vehicles that use rocket engines for propulsion."""
span: str # Such as: "Sturn V", "Atlas V", "Soyuz", "Ariane 5"
# This is the text to analyze
text = "SpaceX is colaborating with NASA in the mission to bring humans to Mars using their new Starship rocket."
# The annotation instances that take place in the text above are listed here
result ='''.strip()
model_input = tokenizer(prompt, add_special_tokens=True, return_tensors="pt")
```
Finally, we can run both models with the same input
```python
model_flash_ouput = model_flash.generate(
**model_input.to(model_flash.device),
max_new_tokens=128,
do_sample=False,
min_new_tokens=0,
num_beams=1,
num_return_sequences=1,
)
model_ouput = model.generate(
**model_input.to(model.device),
max_new_tokens=128,
do_sample=False,
min_new_tokens=0,
num_beams=1,
num_return_sequences=1,
)
print(tokenizer.batch_decode(model_flash_ouput)[0])
print(tokenizer.batch_decode(model_ouput)[0])
```
The result (I skip the input tokens for brevity) is:
```python
# FLASH ATTENTION 2 MODEL
result = [ <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE> <PRE>
# BASELINE MODEL
result = [
PrivateSpaceCompany(span="SpaceX"),
PublicSpaceCompany(span="NASA"),
Planet(span="Mars"),
Launcher(span="Starship"),
]
```
### Expected behavior
Both models should produce the same result. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27016/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27016/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27015 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27015/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27015/comments | https://api.github.com/repos/huggingface/transformers/issues/27015/events | https://github.com/huggingface/transformers/issues/27015 | 1,956,883,565 | I_kwDOCUB6oc50o6xt | 27,015 | transformers==4.34.1 have error for load chatglm model | {
"login": "Lzhang-hub",
"id": 57925599,
"node_id": "MDQ6VXNlcjU3OTI1NTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/57925599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Lzhang-hub",
"html_url": "https://github.com/Lzhang-hub",
"followers_url": "https://api.github.com/users/Lzhang-hub/followers",
"following_url": "https://api.github.com/users/Lzhang-hub/following{/other_user}",
"gists_url": "https://api.github.com/users/Lzhang-hub/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Lzhang-hub/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lzhang-hub/subscriptions",
"organizations_url": "https://api.github.com/users/Lzhang-hub/orgs",
"repos_url": "https://api.github.com/users/Lzhang-hub/repos",
"events_url": "https://api.github.com/users/Lzhang-hub/events{/privacy}",
"received_events_url": "https://api.github.com/users/Lzhang-hub/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey! This uses `trust_remote_code = True` and has custom code on the hub! Recommend you to open an issue on this repo! ",
"transformers==4.33.3 is work well\n\n---- Replied Message ----\n| From | ***@***.***> |\n| Date | 10/23/2023 19:10 |\n| To | huggingface/transformers ***@***.***> |\n| Cc | Lzhang-hub ***@***.***>,\nAuthor ***@***.***> |\n| Subject | Re: [huggingface/transformers] transformers==4.34.1 have error for load chatglm model (Issue #27015) |\n\nHey! This uses trust_remote_code = True and has custom code on the hub! Recommend you to open an issue on this repo!\n\n—\nReply to this email directly, view it on GitHub, or unsubscribe.\nYou are receiving this because you authored the thread.Message ID: ***@***.***>"
] | 1,698 | 1,699 | 1,699 | NONE | null | ### System Info
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.34.1
- Platform: Linux-4.19.96-x86_64-with-glibc2.31
- Python version: 3.10.10
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.4.0
- Accelerate version: 0.23.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
load chatglm model failed:
```
File "/root/.cache/huggingface/modules/transformers_modules/tokenization_chatglm.py", line 112, in get_vocab
vocab = {self._convert_id_to_token(i): i for i in range(self.vocab_size)}
File "/root/.cache/huggingface/modules/transformers_modules/tokenization_chatglm.py", line 108, in vocab_size
return self.tokenizer.n_words
AttributeError: 'ChatGLMTokenizer' object has no attribute 'tokenizer'. Did you mean: 'tokenize'?
```
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True)  # chatglm ships custom tokenizer code on the Hub
### Expected behavior
The tokenizer should load correctly. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27015/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27015/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27014 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27014/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27014/comments | https://api.github.com/repos/huggingface/transformers/issues/27014/events | https://github.com/huggingface/transformers/issues/27014 | 1,956,864,654 | I_kwDOCUB6oc50o2KO | 27,014 | Multi GPU Whisper Training Error! | {
"login": "xyx361100238",
"id": 19569322,
"node_id": "MDQ6VXNlcjE5NTY5MzIy",
"avatar_url": "https://avatars.githubusercontent.com/u/19569322?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xyx361100238",
"html_url": "https://github.com/xyx361100238",
"followers_url": "https://api.github.com/users/xyx361100238/followers",
"following_url": "https://api.github.com/users/xyx361100238/following{/other_user}",
"gists_url": "https://api.github.com/users/xyx361100238/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xyx361100238/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xyx361100238/subscriptions",
"organizations_url": "https://api.github.com/users/xyx361100238/orgs",
"repos_url": "https://api.github.com/users/xyx361100238/repos",
"events_url": "https://api.github.com/users/xyx361100238/events{/privacy}",
"received_events_url": "https://api.github.com/users/xyx361100238/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"param:\r\nCUDA_VISIBLE_DEVICES=0,1 torchrun --nproc_per_node=2 run_speech_recognition_seq2seq.py \\\r\n\t--model_name_or_path=${modFile} \\\r\n\t--data_list=${listPath} \\\r\n\t--do_train \\\r\n\t--language=${lang} \\\r\n\t--task=\"transcribe\" \\\r\n\t--output_dir=${outPath} \\\r\n\t--per_device_train_batch_size=\"16\" \\\r\n\t--per_device_eval_batch_size=\"16\" \\\r\n\t--auto_find_batch_size=\"True\" \\\r\n\t--logging_steps=\"5\" \\\r\n\t--learning_rate=\"1e-5\" \\\r\n\t--warmup_steps=\"500\" \\\r\n\t--num_train_epochs=50 \\\r\n\t--evaluation_strategy=\"steps\" \\\r\n\t--eval_steps=\"5000\" \\\r\n\t--save_strategy=\"steps\" \\\r\n\t--save_steps=\"5000\" \\\r\n\t--save_total_limit=10 \\\r\n\t--load_best_model_at_end=\"True\" \\\r\n\t--generation_max_length=\"225\" \\\r\n\t--preprocessing_num_workers=\"32\" \\\r\n\t--length_column_name=\"input_length\" \\\r\n\t--max_duration_in_seconds=\"30\" \\\r\n\t--audio_column_name=\"audio\" \\\r\n\t--text_column_name=\"sentence\" \\\r\n\t--freeze_feature_encoder=\"False\" \\\r\n\t--gradient_checkpointing \\\r\n\t--fp16 \\\r\n\t--group_by_length \\\r\n\t--cache_dir=\"/nasStore/000-CacheData/transfomer_whisper_map_cache/\" \\\r\n\t--overwrite_output_dir \\\r\n\t--predict_with_generate \\\r\n\t--optim=\"adamw_torch\"",
"The failure is occurring in a multi-GPU set-up since there is a time-out limit of 30 minutes when running multiple processes in PyTorch.\r\n\r\nFor large datasets being used for multi-GPU training it is advised to run the pre-processing ahead of time on a **single machine** by appending the `--preprocessing_only` flag to your command:\r\nhttps://github.com/huggingface/transformers/blob/d2a980ec74db8bc3ea0104e126cbf5d4b1f0e73b/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py#L512\r\n\r\nYou can then remove the flag, and launch with a multi-GPU set-up as you have done before. The dataset will be loaded straight from cache, so pre-processing won't be performed a second time.\r\n\r\nOtherwise, if you want all GPUs to be waiting while you pre-process the dataset, you can use your original command, but with the `--ddp_timeout` flag set to the number of seconds for the multi-GPU timeout in seconds. You can set it to something large like 10800 (3 hours). But this is slightly wasteful since we don't need to use any GPUs for the pre-processing!",
"thanks!I've already done this according to document"
] | 1,698 | 1,700 | 1,700 | NONE | null | ### System Info
- `transformers` version: 4.28.0.dev0
- Platform: Linux-5.4.0-165-generic-x86_64-with-debian-bullseye-sid
- Python version: 3.7.16
- Huggingface_hub version: 0.13.2
- PyTorch version (GPU?): 1.13.1+cu116 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@sanchit-gandhi @pacman100
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I used the official script (with a modified data-loading step) to fine-tune on my own dataset, and it works on a single GPU. Following [multi GPU train](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition#multi-gpu-whisper-training), attempting to launch with torch.distributed.launch / torch.distributed.run / torchrun results in an error; the log file is attached below:
[run.log](https://github.com/huggingface/transformers/files/13069037/run.log)
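For reference, a hedged sketch in Python of raising the distributed timeout discussed in the comment above (the output directory and timeout value are placeholders, not values from this report):
```python
from transformers import Seq2SeqTrainingArguments

# a sketch with assumed values: raise the DDP timeout so the non-zero ranks do
# not time out while the dataset pre-processing runs on a single process
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-finetuned",
    ddp_timeout=10800,  # seconds; the default is 1800 (30 minutes)
)
```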
### Expected behavior
The ‘preprocess train dataset’ step should not be limited to 30 minutes, so that multi-GPU training can run to completion. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27014/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27014/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27013 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27013/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27013/comments | https://api.github.com/repos/huggingface/transformers/issues/27013/events | https://github.com/huggingface/transformers/pull/27013 | 1,956,848,715 | PR_kwDOCUB6oc5dg3jo | 27,013 | skip two tests | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27013). All of your documentation changes will be reflected on that endpoint."
] | 1,698 | 1,698 | 1,698 | COLLABORATOR | null | # What does this PR do?
Skipping them for now | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27013/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27013/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27013",
"html_url": "https://github.com/huggingface/transformers/pull/27013",
"diff_url": "https://github.com/huggingface/transformers/pull/27013.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27013.patch",
"merged_at": 1698058326000
} |
https://api.github.com/repos/huggingface/transformers/issues/27012 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27012/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27012/comments | https://api.github.com/repos/huggingface/transformers/issues/27012/events | https://github.com/huggingface/transformers/pull/27012 | 1,956,837,036 | PR_kwDOCUB6oc5dg0_8 | 27,012 | [`NLLB-MoE`] Fix NLLB MoE 4bit inference | {
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks!"
] | 1,698 | 1,698 | 1,698 | CONTRIBUTOR | null | # What does this PR do?
Fixes: https://github.com/huggingface/transformers/issues/26898
The hidden states get silently cast to `uint8`, leading to the error described in #26898.
The check `and self.fc2.weight.dtype != torch.int8` is not sufficient to cover 4-bit models: for these models the weights are stored in `uint8`, so adding an extra condition for 4-bit models fixes the inference issue.
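For illustration, a minimal sketch of the kind of dtype guard described above (written as a standalone helper; the actual diff lives inside the NLLB-MoE feed-forward block and may differ in detail):
```python
import torch


def maybe_cast_hidden_states(hidden_states: torch.Tensor, fc2_weight: torch.Tensor) -> torch.Tensor:
    """Only cast the hidden states to the fc2 weight dtype when fc2 is *not*
    quantized (int8 for 8-bit weights, uint8 for 4-bit bitsandbytes weights)."""
    if (
        isinstance(fc2_weight, torch.Tensor)
        and hidden_states.dtype != fc2_weight.dtype
        and fc2_weight.dtype not in (torch.int8, torch.uint8)
    ):
        hidden_states = hidden_states.to(fc2_weight.dtype)
    return hidden_states
```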
cc @ArthurZucker
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27012/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27012/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27012",
"html_url": "https://github.com/huggingface/transformers/pull/27012",
"diff_url": "https://github.com/huggingface/transformers/pull/27012.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27012.patch",
"merged_at": 1698065663000
} |
https://api.github.com/repos/huggingface/transformers/issues/27011 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27011/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27011/comments | https://api.github.com/repos/huggingface/transformers/issues/27011/events | https://github.com/huggingface/transformers/issues/27011 | 1,956,828,739 | I_kwDOCUB6oc50otZD | 27,011 | Add Model Support for xLSTM | {
"login": "stefan-it",
"id": 20651387,
"node_id": "MDQ6VXNlcjIwNjUxMzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stefan-it",
"html_url": "https://github.com/stefan-it",
"followers_url": "https://api.github.com/users/stefan-it/followers",
"following_url": "https://api.github.com/users/stefan-it/following{/other_user}",
"gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions",
"organizations_url": "https://api.github.com/users/stefan-it/orgs",
"repos_url": "https://api.github.com/users/stefan-it/repos",
"events_url": "https://api.github.com/users/stefan-it/events{/privacy}",
"received_events_url": "https://api.github.com/users/stefan-it/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | open | false | null | [] | [
"Sounds like a money grab. If it is something useful, he should have chosen the academic path or at least filing patent.\r\n\r\nThis way of boldly claiming success via non-serious media channels is highly unprofessional. It smells like publicity is more relevant than results which further supports motivations like funding/personal gains/politics.",
"If I understood it correctly, a patent is on its way, and at least a paper about xLSTM will be published in less than 6 month.",
"I have some doubts if this is planned as an open source model. ",
"I have some doubts if that thing is actually \"there\" or even remotely competitive with something like GPT4, llama2 (not even thining about GPT5, lamma3 which are obviously in the making)."
] | 1,698 | 1,706 | null | COLLABORATOR | null | ### Model description
Inspired by [recent rumors](https://www.youtube.com/watch?v=hwIt7ezy6t8) about xLSTM - a hidden successor to LSTM - by Sepp Hochreiter, this issue tracks an open-source implementation and the addition of xLSTM to the Transformers library.
### Open source status
- [ ] The model implementation is available
- [ ] The model weights are available
### Provide useful links for the implementation
At the moment, no public paper or implementation exists.
There are only rumors that xLSTM surpasses GPT-2 on various (small) downstream datasets.
A good overview is the [xLSTM Resources](https://github.com/AI-Guru/xlstm-resources) repository from @AI-Guru. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27011/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27011/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27010 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27010/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27010/comments | https://api.github.com/repos/huggingface/transformers/issues/27010/events | https://github.com/huggingface/transformers/pull/27010 | 1,956,807,565 | PR_kwDOCUB6oc5dgujT | 27,010 | Limit to inferior fsspec version | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,698 | 1,698 | 1,698 | MEMBER | null | The newly released fsspec version breaks the implementation in transformers' CI, see the following error:
```
"/home/circleci/.pyenv/versions/3.8.12/lib/python3.8/site-packages/datasets/builder.py", line 1173, in as_dataset
raise NotImplementedError(f"Loading a dataset cached in a {type(self._fs).__name__} is not supported.")
NotImplementedError: Loading a dataset cached in a LocalFileSystem is not supported.
/home/circleci/transformers/src/transformers/models/beit/modeling_beit.py:737: UnexpectedException
```
Additionally:
https://github.com/huggingface/datasets/pull/6331 and https://github.com/huggingface/huggingface_hub/pull/1773 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27010/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27010/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27010",
"html_url": "https://github.com/huggingface/transformers/pull/27010",
"diff_url": "https://github.com/huggingface/transformers/pull/27010.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27010.patch",
"merged_at": 1698057261000
} |
https://api.github.com/repos/huggingface/transformers/issues/27009 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27009/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27009/comments | https://api.github.com/repos/huggingface/transformers/issues/27009/events | https://github.com/huggingface/transformers/issues/27009 | 1,956,733,962 | I_kwDOCUB6oc50oWQK | 27,009 | Unable to load FuyuProcessor, FuyuForCausalLM from transformers | {
"login": "akshaytheau",
"id": 15026338,
"node_id": "MDQ6VXNlcjE1MDI2MzM4",
"avatar_url": "https://avatars.githubusercontent.com/u/15026338?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/akshaytheau",
"html_url": "https://github.com/akshaytheau",
"followers_url": "https://api.github.com/users/akshaytheau/followers",
"following_url": "https://api.github.com/users/akshaytheau/following{/other_user}",
"gists_url": "https://api.github.com/users/akshaytheau/gists{/gist_id}",
"starred_url": "https://api.github.com/users/akshaytheau/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/akshaytheau/subscriptions",
"organizations_url": "https://api.github.com/users/akshaytheau/orgs",
"repos_url": "https://api.github.com/users/akshaytheau/repos",
"events_url": "https://api.github.com/users/akshaytheau/events{/privacy}",
"received_events_url": "https://api.github.com/users/akshaytheau/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey! That's is expected, you need to use the `main` branch and install from source as t is not part of the latest release. \r\n`pip install git+https://github.com/huggingface/transformers`",
"Thanks it worked.",
"it didnt work for me, i m trying to run it through colab",
"You need to reload the kernel",
"> it didnt work for me, i m trying to run it through colab\r\n\r\nTry install `torch` and `pillow`, it works for me.\r\n```\r\npip install torch pillow\r\n```"
] | 1,698 | 1,698 | 1,698 | NONE | null | ### System Info
I tried running the code on Google Colab. I am using the latest transformers version, 4.34.1.
@ArthurZucker
### Who can help?
@ArthurZucker
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Try importing FuyuProcessor and FuyuForCausalLM from transformers.
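For reference, a minimal sketch of the import once transformers is installed from source (per the comments above, Fuyu is only on `main`: `pip install git+https://github.com/huggingface/transformers`); the `adept/fuyu-8b` checkpoint id is the public Fuyu repo and is used here only for illustration:
```python
# requires an install from source, since Fuyu is not part of the 4.34.x release:
#   pip install git+https://github.com/huggingface/transformers
from transformers import FuyuForCausalLM, FuyuProcessor

processor = FuyuProcessor.from_pretrained("adept/fuyu-8b")
model = FuyuForCausalLM.from_pretrained("adept/fuyu-8b")
```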
### Expected behavior
They should be imported properly. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27009/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27009/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27008 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27008/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27008/comments | https://api.github.com/repos/huggingface/transformers/issues/27008/events | https://github.com/huggingface/transformers/issues/27008 | 1,956,687,476 | I_kwDOCUB6oc50oK50 | 27,008 | 'Owlv2Processor' object has no attribute 'post_process' | {
"login": "dinhanhx",
"id": 38489776,
"node_id": "MDQ6VXNlcjM4NDg5Nzc2",
"avatar_url": "https://avatars.githubusercontent.com/u/38489776?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dinhanhx",
"html_url": "https://github.com/dinhanhx",
"followers_url": "https://api.github.com/users/dinhanhx/followers",
"following_url": "https://api.github.com/users/dinhanhx/following{/other_user}",
"gists_url": "https://api.github.com/users/dinhanhx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dinhanhx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dinhanhx/subscriptions",
"organizations_url": "https://api.github.com/users/dinhanhx/orgs",
"repos_url": "https://api.github.com/users/dinhanhx/repos",
"events_url": "https://api.github.com/users/dinhanhx/events{/privacy}",
"received_events_url": "https://api.github.com/users/dinhanhx/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi,\r\n\r\nThanks for flagging (note that you can also open a discussion on the huggingface model repo). The model cards are now updated, one needs to use `post_process_object_detection` instead of `post_process`.",
"I understand. Thanks for the help."
] | 1,698 | 1,698 | 1,698 | NONE | null | ### System Info
- `transformers` version: 4.35.0.dev0
- Platform: Linux-5.15.133+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.3
- Accelerate version: 0.22.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.0 (True)
- Tensorflow version (GPU?): 2.12.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.7.2 (gpu)
- Jax version: 0.4.13
- JaxLib version: 0.4.13
- Using GPU in script?: NVIDIA P-100
- Using distributed or parallel set-up in script?: Non
### Who can help?
@NielsRogge
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run the example code from the Hugging Face model repo: https://huggingface.co/google/owlv2-base-patch16
Kaggle notebook: https://www.kaggle.com/inhanhv/google-owlv2
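For context, a hedged sketch of the post-processing call that the processor does expose (adapted from the OWLv2 model-card style example; the checkpoint id follows the repo linked above, and the threshold/image URL are illustrative):
```python
import requests
import torch
from PIL import Image
from transformers import Owlv2ForObjectDetection, Owlv2Processor

processor = Owlv2Processor.from_pretrained("google/owlv2-base-patch16")
model = Owlv2ForObjectDetection.from_pretrained("google/owlv2-base-patch16")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = [["a photo of a cat", "a photo of a dog"]]
inputs = processor(text=texts, images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# `post_process` is not available; use `post_process_object_detection` instead
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(
    outputs=outputs, threshold=0.1, target_sizes=target_sizes
)
```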
### Expected behavior
The 'Owlv2Processor' object should have a 'post_process' attribute, or the model-card example should be updated to use the available post-processing method. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27008/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27008/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27007 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27007/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27007/comments | https://api.github.com/repos/huggingface/transformers/issues/27007/events | https://github.com/huggingface/transformers/pull/27007 | 1,956,685,062 | PR_kwDOCUB6oc5dgUNf | 27,007 | Fuyu: improve image processing | {
"login": "molbap",
"id": 39954772,
"node_id": "MDQ6VXNlcjM5OTU0Nzcy",
"avatar_url": "https://avatars.githubusercontent.com/u/39954772?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/molbap",
"html_url": "https://github.com/molbap",
"followers_url": "https://api.github.com/users/molbap/followers",
"following_url": "https://api.github.com/users/molbap/following{/other_user}",
"gists_url": "https://api.github.com/users/molbap/gists{/gist_id}",
"starred_url": "https://api.github.com/users/molbap/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/molbap/subscriptions",
"organizations_url": "https://api.github.com/users/molbap/orgs",
"repos_url": "https://api.github.com/users/molbap/repos",
"events_url": "https://api.github.com/users/molbap/events{/privacy}",
"received_events_url": "https://api.github.com/users/molbap/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"This version of the processor now correctly supports batching, dtype casting, and the left-padded batch generation yields the same results as single-input generation. \r\n\r\n```python\r\nfrom PIL import Image\r\nimport requests\r\nimport io\r\nfrom transformers import FuyuForCausalLM, FuyuProcessor, FuyuImageProcessor, AutoTokenizer\r\nfrom PIL import Image\r\n\r\npretrained_path = \"adept/fuyu-8b\"\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(pretrained_path, pad_token_id=0)\r\nimage_processor = FuyuImageProcessor()\r\nprocessor = FuyuProcessor(image_processor=image_processor, tokenizer=tokenizer)\r\n\r\ntext_prompt = \"Answer the following DocVQA question based on the image. \\n Which is the metro in California that has a good job Outlook?\"\r\njobs_image_url = \"https://huggingface.co/datasets/hf-internal-testing/fixtures-captioning/resolve/main/jobs.png\"\r\njobs_image_pil = Image.open(io.BytesIO(requests.get(jobs_image_url).content))\r\n\r\nsecond_text_prompt = \"Answer the following DocVQA question based on the image. \\n What if the maximum male life expectancy?\"\r\nchart_image_url = \"https://huggingface.co/datasets/hf-internal-testing/fixtures-captioning/resolve/main/chart.png\"\r\nchart_image_pil = Image.open(io.BytesIO(requests.get(chart_image_url).content))\r\n\r\nthird_text_prompt = \"Answer the following DocVQA question based on the image. \\n What sport is that?\"\r\nskate_image_url = \"https://huggingface.co/datasets/hf-internal-testing/fixtures-captioning/resolve/main/skateboard.png\"\r\nskate_image_pil = Image.open(io.BytesIO(requests.get(skate_image_url).content))\r\n\r\nfourth_text_prompt = \"Answer the following DocVQA question based on the image. \\n What was the fair amount of paid vacation days in the United Kingdom?\"\r\nvacations_image_url = \"https://huggingface.co/datasets/hf-internal-testing/fixtures-captioning/resolve/main/vacation_days_hr.png\"\r\nvacations_image_pil = Image.open(io.BytesIO(requests.get(vacations_image_url).content)).convert('RGB')\r\n\r\ntexts = [text_prompt, second_text_prompt, third_text_prompt, fourth_text_prompt]\r\nimages = [jobs_image_pil, chart_image_pil, skate_image_pil, vacations_image_pil]\r\n\r\nmodel_inputs = processor(text=texts, images=images).to('cuda')\r\n\r\n\r\nmodel = FuyuForCausalLM.from_pretrained(pretrained_path, device_map='auto')\r\n\r\ngeneration = processor.tokenizer.batch_decode(model.generate(\r\n **model_inputs, max_new_tokens=10)[:, -10:], skip_special_tokens=True)\r\n\r\nsingle_generations = ['Los Angeles', '80.7',\r\n 'skateboarding', '28']\r\n\r\n\r\nfor single_generation, batched_generation in zip(single_generations, generation):\r\n answer = batched_generation.split('\\x04 ', 1)[1] if '\\x04' in batched_generation else ''\r\n assert (single_generation == answer)\r\n\r\n```",
"I think current version of image processing and tokenization does not support the usage sample code in the original release, right?\r\n\r\n```\r\nfrom transformers import FuyuProcessor, FuyuForCausalLM\r\nfrom PIL import Image\r\n\r\n# load model and processor\r\nmodel_id = \"adept/fuyu-8b\"\r\nprocessor = FuyuProcessor.from_pretrained(model_id)\r\nmodel = FuyuForCausalLM.from_pretrained(model_id, device_map=\"cuda:0\")\r\n\r\n# prepare inputs for the model\r\ntext_prompt = \"Generate a coco-style caption.\\n\"\r\nimage_path = \"bus.png\" # https://huggingface.co/adept-hf-collab/fuyu-8b/blob/main/bus.png\r\nimage = Image.open(image_path)\r\n\r\ninputs = processor(text=text_prompt, images=image, return_tensors=\"pt\")\r\nfor k, v in inputs.items():\r\n inputs[k] = v.to(\"cuda:0\")\r\n\r\n# autoregressively generate text\r\ngeneration_output = model.generate(**inputs, max_new_tokens=7)\r\ngeneration_text = processor.batch_decode(generation_output[:, -7:], skip_special_tokens=True)\r\nassert generation_text == ['A bus parked on the side of a road.']\r\n\r\n```\r\n\r\nI am trying to run the above code, and the error occurs that the inputs['image_patches'] is now a list and cannot be put to device.\r\n\r\nI suggested that either you can also support this type of processing or you can directly update the sample code on the huggingface release page in [link](https://huggingface.co/adept/fuyu-8b)",
"Hi,\r\n\r\nI've updated the code snippet on the model card, it works for me as expected (note that you need to install Transformers from the main branch: pip install -q git+https://github.com/huggingface/transformers.git)"
] | 1,698 | 1,699 | 1,698 | CONTRIBUTOR | null | # What does this PR do?
This PR aims at aligning the FuyuImageProcessor class with other vision/language models within transformers. The Fuyu model expects a tensor of token ids, a tensor of patch embeddings, and an indexing tensor indicating where to put rows of patch embeddings into the token embeddings, separated by the input ids. Currently the image processor does not separate the steps necessary to achieve this output in the Processor, and it limits inference to batches of size 1. The PR also aims at improving readability and code quality of the processor to possibly enable pipelining later on.
Pending tasks:
- [x] Return a `BatchFeature` with arbitrary batch size
- [x] add `do_rescale`, `do_normalize`, `do_pad` arguments in the `ImageProcessor` constructor
- [x] align patch-ification methods to ViTMAE and possibly pix2struct
- [x] rework and refactor method `process_images_for_model_input`, currently hard to read
- [x] test long images, stretched images, usual processor edge cases
- [x] test images and no text, text and no image in Processor class leveraging tokenizer + ImageProcessor
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27007/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27007/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27007",
"html_url": "https://github.com/huggingface/transformers/pull/27007",
"diff_url": "https://github.com/huggingface/transformers/pull/27007.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27007.patch",
"merged_at": 1698924341000
} |
https://api.github.com/repos/huggingface/transformers/issues/27006 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27006/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27006/comments | https://api.github.com/repos/huggingface/transformers/issues/27006/events | https://github.com/huggingface/transformers/pull/27006 | 1,956,631,891 | PR_kwDOCUB6oc5dgIyS | 27,006 | Enable ProtST on ESM | {
"login": "jiqing-feng",
"id": 107918818,
"node_id": "U_kgDOBm614g",
"avatar_url": "https://avatars.githubusercontent.com/u/107918818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiqing-feng",
"html_url": "https://github.com/jiqing-feng",
"followers_url": "https://api.github.com/users/jiqing-feng/followers",
"following_url": "https://api.github.com/users/jiqing-feng/following{/other_user}",
"gists_url": "https://api.github.com/users/jiqing-feng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiqing-feng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiqing-feng/subscriptions",
"organizations_url": "https://api.github.com/users/jiqing-feng/orgs",
"repos_url": "https://api.github.com/users/jiqing-feng/repos",
"events_url": "https://api.github.com/users/jiqing-feng/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiqing-feng/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @Rocketknight1 ",
"Hi @jiqing-feng! The ProtST paper looks good but I don't think we should add classes to ESM like this - it will make the ESM codebase very messy. Instead, why not add ProtST as a custom model, and just copy the classes you need from ESM? There are instructions here: https://huggingface.co/docs/transformers/custom_models",
"Hi @Rocketknight1 , we would like to discuss some more details with you to see if there could be any second thought on ProtST integration to HF. ProtST is a multi-modality model based on ESM and PubMedBERT for protein science. \r\n\r\nAs far as we understand, HF does not support multi-modal interface explicitly and directly. People interested in multi-modality learning using LMs can only clone a repo and merge codes from different models by themselves. The main difficulty we believe is separate use of tokenizers for different modality and a lack of an integrating model interface as a general entry for different modality data. \r\n\r\nHowever, multi-modality has already become the main stream for almost all domains and it may be an interesting idea to have people pile up these existing single-modal models from HF with an easy-to-use interface. \r\n\r\nThe public released repo for ProtST is based on TorchDrug, which we strongly agree that is not suitable to be integrated. We have modified the codebase to be under HF. It's still a private undergoing repo, but **we would really like to share with you about it if you feel interested**. \r\n\r\nOverall, we believe supporting multi-modality is very important for HF's development. It's OK no matter whether ProtST would be viewed as appropriate or not to be integrated into the main repo of HF due to its specific application. But we would really like to **push forward the multi-modality part in HF to make the community better**:)\r\n\r\n",
"Hi @KatarinaYuan, yes, we agree! The model looks really good and we'd love to have it in the Hugging Face Hub, but there is no need to modify the ESM class code. You can just upload the model as a checkpoint with custom code, without needing to change anything in the `transformers` codebase! Several other important Bio-ML models on the Hugging Face Hub already do this, such as [Nucleotide Transformer V2](https://huggingface.co/InstaDeepAI/nucleotide-transformer-v2-50m-multi-species).\r\n\r\nWe agree that multi-modal models are important, and we'd be happy to help by answering any questions during the port, or by promoting the model on Twitter after it's added to the Hub.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,698 | 1,701 | 1,701 | CONTRIBUTOR | null | This PR enables [ProtST](https://arxiv.org/pdf/2301.12040.pdf) on ESM | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27006/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27006/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27006",
"html_url": "https://github.com/huggingface/transformers/pull/27006",
"diff_url": "https://github.com/huggingface/transformers/pull/27006.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27006.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27005 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27005/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27005/comments | https://api.github.com/repos/huggingface/transformers/issues/27005/events | https://github.com/huggingface/transformers/issues/27005 | 1,956,626,913 | I_kwDOCUB6oc50n8Hh | 27,005 | Fine tune decoder-only transformers in seq2seq manner | {
"login": "YerongLi",
"id": 13112023,
"node_id": "MDQ6VXNlcjEzMTEyMDIz",
"avatar_url": "https://avatars.githubusercontent.com/u/13112023?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YerongLi",
"html_url": "https://github.com/YerongLi",
"followers_url": "https://api.github.com/users/YerongLi/followers",
"following_url": "https://api.github.com/users/YerongLi/following{/other_user}",
"gists_url": "https://api.github.com/users/YerongLi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YerongLi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YerongLi/subscriptions",
"organizations_url": "https://api.github.com/users/YerongLi/orgs",
"repos_url": "https://api.github.com/users/YerongLi/repos",
"events_url": "https://api.github.com/users/YerongLi/events{/privacy}",
"received_events_url": "https://api.github.com/users/YerongLi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey 🤗 thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discuss.huggingface.co/) instead? I'm sure the community will be of help!\r\n\r\nThanks!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,698 | 1,701 | 1,701 | NONE | null | ### Feature request
https://github.com/huggingface/transformers/issues/1464
This post discusses fine-tuning GPT2.
With GPT2/LLaMA, by default, we need to feed the whole `[prompt label]` sequence to the model (`model([prompt label])`) during fine-tuning, compute the cross-entropy loss on the label part, and read the predictions from `model().logits`.
**Is there a way to input only the prompt and do the fine-tuning in a seq2seq manner** (`model(prompt)`), i.e. minimize the loss -log p(y|x)?
Getting the features of `model(prompt)` rather than `model([prompt label])` is the whole point; a minimal label-masking sketch is shown below.
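A common way to get this behavior with a decoder-only model is to still feed the concatenated `[prompt label]` sequence but mask the prompt positions out of the loss with `-100`, so the cross-entropy is only computed on the label tokens. The snippet below is a minimal sketch of that idea; the `gpt2` checkpoint and the simple concatenation are illustrative assumptions, not the only way to do it.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative checkpoint; any causal LM works the same way.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Translate to French: Hello, how are you?"
label = " Bonjour, comment allez-vous ?"

prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
label_ids = tokenizer(label, return_tensors="pt").input_ids

# Feed [prompt label] to the model, but compute the loss only on the label part.
input_ids = torch.cat([prompt_ids, label_ids], dim=-1)
labels = input_ids.clone()
labels[:, : prompt_ids.shape[-1]] = -100  # positions set to -100 are ignored by the loss

outputs = model(input_ids=input_ids, labels=labels)
print(outputs.loss)  # cross-entropy over the label tokens only, i.e. -log p(y|x)
```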
### Motivation
A seq2seq-equivalent fine-tuning workflow for decoder-only transformers.
### Your contribution
I could submit a PR, using this discussion as guidance. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27005/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27005/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27004 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27004/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27004/comments | https://api.github.com/repos/huggingface/transformers/issues/27004/events | https://github.com/huggingface/transformers/issues/27004 | 1,956,604,930 | I_kwDOCUB6oc50n2wC | 27,004 | pip install transformers==4.34.1 not usable on SDXL automatic1111 - conflict with tokenizer and huggingface_hub | {
"login": "DennisBalkan",
"id": 144777586,
"node_id": "U_kgDOCKEhcg",
"avatar_url": "https://avatars.githubusercontent.com/u/144777586?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DennisBalkan",
"html_url": "https://github.com/DennisBalkan",
"followers_url": "https://api.github.com/users/DennisBalkan/followers",
"following_url": "https://api.github.com/users/DennisBalkan/following{/other_user}",
"gists_url": "https://api.github.com/users/DennisBalkan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DennisBalkan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DennisBalkan/subscriptions",
"organizations_url": "https://api.github.com/users/DennisBalkan/orgs",
"repos_url": "https://api.github.com/users/DennisBalkan/repos",
"events_url": "https://api.github.com/users/DennisBalkan/events{/privacy}",
"received_events_url": "https://api.github.com/users/DennisBalkan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, sorry I have no idea how to reproduce your issue or use this library. A snippet or a traceback would bel helpful! 🤗 ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,698 | 1,701 | 1,701 | NONE | null | ### System Info
pip install transformers==4.34.1 is not usable on SDXL automatic1111 - it conflicts with tokenizers and huggingface_hub. It is a very long test; try the update yourself and you will see all of the conflict steps.
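For reference, one quick way to surface the conflicts being described (a sketch, assuming a notebook-style environment) is to run the upgrade and then let pip report any broken requirements:

```python
# Hypothetical reproduction sketch: upgrade, then ask pip to list dependency conflicts.
!pip install transformers==4.34.1
!pip check  # reports incompatible pins such as tokenizers / huggingface_hub, if any
```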
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
updating without conflicts
### Expected behavior
updating without conflicts | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27004/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27004/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27003 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27003/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27003/comments | https://api.github.com/repos/huggingface/transformers/issues/27003/events | https://github.com/huggingface/transformers/pull/27003 | 1,956,576,357 | PR_kwDOCUB6oc5df9D6 | 27,003 | [OWLv2] Add method | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27003). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,698 | 1,701 | 1,701 | CONTRIBUTOR | null | # What does this PR do?
This PR adds a new `image_guided_detection_v2` method which, unlike the original method, leverages the objectness head to get the top predicted object in the query image.
It also fixes the documentation example, which used bad threshold values.
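For context, a rough usage sketch of image-guided detection with OWLv2 is shown below; it calls the existing `image_guided_detection` method as a stand-in, since the exact signature of the new `image_guided_detection_v2` variant is defined by this PR and may still change. The checkpoint and threshold values are illustrative assumptions.

```python
import requests
import torch
from PIL import Image
from transformers import Owlv2Processor, Owlv2ForObjectDetection

processor = Owlv2Processor.from_pretrained("google/owlv2-base-patch16-ensemble")
model = Owlv2ForObjectDetection.from_pretrained("google/owlv2-base-patch16-ensemble")

image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)
query_image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000001675.jpg", stream=True).raw)

inputs = processor(images=image, query_images=query_image, return_tensors="pt")
with torch.no_grad():
    # The PR proposes an objectness-head based `image_guided_detection_v2`;
    # here the existing method is used as a placeholder.
    outputs = model.image_guided_detection(**inputs)

results = processor.post_process_image_guided_detection(
    outputs=outputs, threshold=0.9, nms_threshold=0.3, target_sizes=torch.tensor([image.size[::-1]])
)
```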
Fixes #26920 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27003/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27003/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27003",
"html_url": "https://github.com/huggingface/transformers/pull/27003",
"diff_url": "https://github.com/huggingface/transformers/pull/27003.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27003.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27002 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27002/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27002/comments | https://api.github.com/repos/huggingface/transformers/issues/27002/events | https://github.com/huggingface/transformers/issues/27002 | 1,956,560,026 | I_kwDOCUB6oc50nrya | 27,002 | Flash_attn not able to import with transformer==4.34.1 | {
"login": "Girrajjangid",
"id": 34280160,
"node_id": "MDQ6VXNlcjM0MjgwMTYw",
"avatar_url": "https://avatars.githubusercontent.com/u/34280160?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Girrajjangid",
"html_url": "https://github.com/Girrajjangid",
"followers_url": "https://api.github.com/users/Girrajjangid/followers",
"following_url": "https://api.github.com/users/Girrajjangid/following{/other_user}",
"gists_url": "https://api.github.com/users/Girrajjangid/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Girrajjangid/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Girrajjangid/subscriptions",
"organizations_url": "https://api.github.com/users/Girrajjangid/orgs",
"repos_url": "https://api.github.com/users/Girrajjangid/repos",
"events_url": "https://api.github.com/users/Girrajjangid/events{/privacy}",
"received_events_url": "https://api.github.com/users/Girrajjangid/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @Girrajjangid \r\nThanks for the issue, I think that you might be having a package conflict, can you make sure to have flash attention version greater than 2.0?\r\nAlso, you might consider switching to main branch to avoid this issue e.g. this fix: https://github.com/huggingface/transformers/pull/26785 ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,698 | 1,701 | 1,701 | NONE | null | ### System Info
!pip -q install auto-gptq==0.4.2
!pip -q install optimum==1.13.2
!pip -q install bitsandbytes==0.41.1
!pip -q install accelerate==0.23.0
!pip -q install transformers==4.34.1
!pip -q install mlflow==2.7.0
--------------------------------------------------------
- `transformers` version: 4.34.1
- Platform: Linux-5.15.0-1045-aws-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.2
- Accelerate version: 0.23.0
- Accelerate config: not found
- PyTorch version (GPU?): 1.13.1+cu117 (True)
- Tensorflow version (GPU?): 2.11.1 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in a script?: No
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
import transformers
from huggingface_hub import snapshot_download

model_name_or_path = "TheBloke/Mistral-7B-Instruct-v0.1-GPTQ"
revision = "main"
model_path = snapshot_download(repo_id = model_name_or_path, revision = revision)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_path, use_fast=True)
model = transformers.AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True, device_map="auto").eval()
```
# Error that I got:
```python
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-33e11ec6-0ebd-4ce4-bae6-a63d862b86d4/lib/python3.10/site-packages/transformers/utils/import_utils.py:1282, in _LazyModule._get_module(self, module_name)
1281 try:
-> 1282 return importlib.import_module("." + module_name, self.__name__)
1283 except Exception as e:
File /usr/lib/python3.10/importlib/__init__.py:126, in import_module(name, package)
125 level += 1
--> 126 return _bootstrap._gcd_import(name[level:], package, level)
File <frozen importlib._bootstrap>:1050, in _gcd_import(name, package, level)
File <frozen importlib._bootstrap>:1027, in _find_and_load(name, import_)
File <frozen importlib._bootstrap>:1006, in _find_and_load_unlocked(name, import_)
File <frozen importlib._bootstrap>:688, in _load_unlocked(spec)
File <frozen importlib._bootstrap_external>:883, in exec_module(self, module)
File <frozen importlib._bootstrap>:241, in _call_with_frames_removed(f, *args, **kwds)
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-33e11ec6-0ebd-4ce4-bae6-a63d862b86d4/lib/python3.10/site-packages/transformers/models/mistral/modeling_mistral.py:45
44 if is_flash_attn_available():
---> 45 from flash_attn import flash_attn_func, flash_attn_varlen_func
46 from flash_attn.bert_padding import index_first_axis, pad_input, unpad_input # noqa
ImportError: cannot import name 'flash_attn_func' from 'flash_attn' (/databricks/python/lib/python3.10/site-packages/flash_attn/__init__.py)
The above exception was the direct cause of the following exception:
RuntimeError Traceback (most recent call last)
File <command-236141353542329>, line 8
5 model_path = snapshot_download(repo_id = model_name_or_path, revision = revision)
7 tokenizer = transformers.AutoTokenizer.from_pretrained(model_path, use_fast=True)
----> 8 model = transformers.AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True, device_map="auto").eval()
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-33e11ec6-0ebd-4ce4-bae6-a63d862b86d4/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:564, in _BaseAutoModelClass.from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
560 return model_class.from_pretrained(
561 pretrained_model_name_or_path, *model_args, config=config, **hub_kwargs, **kwargs
562 )
563 elif type(config) in cls._model_mapping.keys():
--> 564 model_class = _get_model_class(config, cls._model_mapping)
565 return model_class.from_pretrained(
566 pretrained_model_name_or_path, *model_args, config=config, **hub_kwargs, **kwargs
567 )
568 raise ValueError(
569 f"Unrecognized configuration class {config.__class__} for this kind of AutoModel: {cls.__name__}.\n"
570 f"Model type should be one of {', '.join(c.__name__ for c in cls._model_mapping.keys())}."
571 )
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-33e11ec6-0ebd-4ce4-bae6-a63d862b86d4/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:387, in _get_model_class(config, model_mapping)
386 def _get_model_class(config, model_mapping):
--> 387 supported_models = model_mapping[type(config)]
388 if not isinstance(supported_models, (list, tuple)):
389 return supported_models
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-33e11ec6-0ebd-4ce4-bae6-a63d862b86d4/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:739, in _LazyAutoMapping.__getitem__(self, key)
737 if model_type in self._model_mapping:
738 model_name = self._model_mapping[model_type]
--> 739 return self._load_attr_from_module(model_type, model_name)
741 # Maybe there was several model types associated with this config.
742 model_types = [k for k, v in self._config_mapping.items() if v == key.__name__]
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-33e11ec6-0ebd-4ce4-bae6-a63d862b86d4/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:753, in _LazyAutoMapping._load_attr_from_module(self, model_type, attr)
751 if module_name not in self._modules:
752 self._modules[module_name] = importlib.import_module(f".{module_name}", "transformers.models")
--> 753 return getattribute_from_module(self._modules[module_name], attr)
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-33e11ec6-0ebd-4ce4-bae6-a63d862b86d4/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:697, in getattribute_from_module(module, attr)
695 if isinstance(attr, tuple):
696 return tuple(getattribute_from_module(module, a) for a in attr)
--> 697 if hasattr(module, attr):
698 return getattr(module, attr)
699 # Some of the mappings have entries model_type -> object of another model type. In that case we try to grab the
700 # object at the top level.
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-33e11ec6-0ebd-4ce4-bae6-a63d862b86d4/lib/python3.10/site-packages/transformers/utils/import_utils.py:1272, in _LazyModule.__getattr__(self, name)
1270 value = self._get_module(name)
1271 elif name in self._class_to_module.keys():
-> 1272 module = self._get_module(self._class_to_module[name])
1273 value = getattr(module, name)
1274 else:
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-33e11ec6-0ebd-4ce4-bae6-a63d862b86d4/lib/python3.10/site-packages/transformers/utils/import_utils.py:1284, in _LazyModule._get_module(self, module_name)
1282 return importlib.import_module("." + module_name, self.__name__)
1283 except Exception as e:
-> 1284 raise RuntimeError(
1285 f"Failed to import {self.__name__}.{module_name} because of the following error (look up to see its"
1286 f" traceback):\n{e}"
1287 ) from e
RuntimeError: Failed to import transformers.models.mistral.modeling_mistral because of the following error (look up to see its traceback):
cannot import name 'flash_attn_func' from 'flash_attn' (/databricks/python/lib/python3.10/site-packages/flash_attn/__init__.py)
```
### Expected behavior
The same code works fine on Colab. The difference is that on Colab there is no `flash_attn`, whereas on my cluster `flash_attn==1.0.7` is installed.
The same code also works fine when I install:
!pip -q install transformers==4.34.0 # downgrade.
!pip -q install torch==2.1.0 # upgrade.
!pip -q install flash-attn==2.3.2 --no-build-isolation # upgrade.
But it doesn't work when I create a serving endpoint ([failed during container image build]) with this error (a possible workaround is sketched after the log below):
```txt
#11 0.283 channels:
#11 0.283 - conda-forge
#11 0.283 dependencies:
#11 0.283 - python=3.10.12
#11 0.283 - pip<=22.2.2
#11 0.283 - pip:
#11 0.283 - auto-gptq==0.4.2
#11 0.283 - bitsandbytes==0.41.1
#11 0.283 - boto3==1.24.28
#11 0.283 - cloudpickle==2.0.0
#11 0.283 - configparser==5.2.0
#11 0.283 - defusedxml==0.7.1
#11 0.283 - flash-attn==2.3.2
#11 0.283 - google-cloud-storage==2.10.0
#11 0.283 - gunicorn==20.1.0
#11 0.283 - ipython==8.10.0
#11 0.283 - numpy==1.21.5
#11 0.283 - optimum==1.13.2
#11 0.283 - packaging==21.3
#11 0.283 - pandas==1.4.4
#11 0.283 - protobuf==3.19.4
#11 0.283 - pyarrow==8.0.0
#11 0.283 - pyyaml==6.0
#11 0.283 - requests==2.28.1
#11 0.283 - scipy==1.9.1
#11 0.283 - sentencepiece==0.1.99
#11 0.283 - soundfile==0.12.1
#11 0.283 - tensorflow==2.11.1
#11 0.283 - auto-gptq==0.4.2
#11 0.283 - optimum==1.13.2
#11 0.283 - bitsandbytes==0.41.1
#11 0.283 - accelerate==0.23.0
#11 0.283 - transformers==4.34.0
#11 0.283 - mlflow==2.7.0
#11 0.283 - torch==2.1.0
#11 0.283 name: mlflow-env
#11 0.616 Collecting package metadata (repodata.json): ...working... done
#11 55.01 Solving environment: ...working... done
#11 58.25
#11 58.25
#11 58.25 ==> WARNING: A newer version of conda exists. <==
#11 58.25 current version: 4.10.3
#11 58.25 latest version: 23.9.0
#11 58.25
#11 58.25 Please update conda by running
#11 58.25
#11 58.25 $ conda update -n base -c defaults conda
#11 58.25
#11 58.25
#11 58.26
#11 58.26 Downloading and Extracting Packages
#11 58.26
tzdata-2023c | 115 KB | | 0%
tzdata-2023c | 115 KB | #3 | 14%
tzdata-2023c | 115 KB | ########## | 100%
#11 58.46
libsqlite-3.43.2 | 820 KB | | 0%
libsqlite-3.43.2 | 820 KB | ########## | 100%
#11 58.55
pip-22.2.2 | 1.5 MB | | 0%
pip-22.2.2 | 1.5 MB | ########## | 100%
pip-22.2.2 | 1.5 MB | ########## | 100%
#11 58.90
libgomp-13.2.0 | 411 KB | | 0%
libgomp-13.2.0 | 411 KB | ########## | 100%
#11 58.95
xz-5.2.6 | 409 KB | | 0%
xz-5.2.6 | 409 KB | ########## | 100%
xz-5.2.6 | 409 KB | ########## | 100%
#11 59.08
wheel-0.41.2 | 56 KB | | 0%
wheel-0.41.2 | 56 KB | ########## | 100%
#11 59.15
libzlib-1.2.13 | 60 KB | | 0%
libzlib-1.2.13 | 60 KB | ########## | 100%
#11 59.21
bzip2-1.0.8 | 484 KB | | 0%
bzip2-1.0.8 | 484 KB | ########## | 100%
bzip2-1.0.8 | 484 KB | ########## | 100%
#11 59.32
libffi-3.4.2 | 57 KB | | 0%
libffi-3.4.2 | 57 KB | ########## | 100%
#11 59.38
ncurses-6.4 | 860 KB | | 0%
ncurses-6.4 | 860 KB | ########## | 100%
ncurses-6.4 | 860 KB | ########## | 100%
#11 59.64
openssl-3.1.3 | 2.5 MB | | 0%
openssl-3.1.3 | 2.5 MB | ########## | 100%
openssl-3.1.3 | 2.5 MB | ########## | 100%
#11 59.77
tk-8.6.13 | 3.1 MB | | 0%
tk-8.6.13 | 3.1 MB | ########## | 100%
tk-8.6.13 | 3.1 MB | ########## | 100%
#11 59.90
libuuid-2.38.1 | 33 KB | | 0%
libuuid-2.38.1 | 33 KB | ########## | 100%
#11 59.96
ca-certificates-2023 | 146 KB | | 0%
ca-certificates-2023 | 146 KB | ########## | 100%
#11 60.01
ld_impl_linux-64-2.4 | 688 KB | | 0%
ld_impl_linux-64-2.4 | 688 KB | ########## | 100%
#11 60.09
libgcc-ng-13.2.0 | 753 KB | | 0%
libgcc-ng-13.2.0 | 753 KB | ########## | 100%
#11 60.16
libnsl-2.0.1 | 33 KB | | 0%
libnsl-2.0.1 | 33 KB | ########## | 100%
#11 60.21
readline-8.2 | 275 KB | | 0%
readline-8.2 | 275 KB | ########## | 100%
#11 60.26
python-3.10.12 | 24.4 MB | | 0%
python-3.10.12 | 24.4 MB | ##9 | 29%
python-3.10.12 | 24.4 MB | #######3 | 73%
python-3.10.12 | 24.4 MB | ########## | 100%
#11 60.95
_libgcc_mutex-0.1 | 3 KB | | 0%
_libgcc_mutex-0.1 | 3 KB | ########## | 100%
#11 60.99
setuptools-68.2.2 | 454 KB | | 0%
setuptools-68.2.2 | 454 KB | ########## | 100%
#11 61.07
_openmp_mutex-4.5 | 23 KB | | 0%
_openmp_mutex-4.5 | 23 KB | ########## | 100%
#11 61.12 Preparing transaction: ...working... done
#11 61.33 Verifying transaction: ...working... done
#11 62.26 Executing transaction: ...working... done
#11 64.36 Installing pip dependencies: ...working... Pip subprocess error:
#11 68.27 error: subprocess-exited-with-error
#11 68.27
#11 68.27 × python setup.py egg_info did not run successfully.
#11 68.27 │ exit code: 1
#11 68.27 ╰─> [6 lines of output]
#11 68.27 Traceback (most recent call last):
#11 68.27 File "<string>", line 2, in <module>
#11 68.27 File "<pip-setuptools-caller>", line 34, in <module>
#11 68.27 File "/tmp/pip-install-xbx5x5ho/flash-attn_d51c6af2cbbd43a7ba38c9152f326ee4/setup.py", line 8, in <module>
#11 68.27 from packaging.version import parse, Version
#11 68.27 ModuleNotFoundError: No module named 'packaging'
#11 68.27 [end of output]
#11 68.27
#11 68.27 note: This error originates from a subprocess, and is likely not a problem with pip.
#11 68.27 error: metadata-generation-failed
#11 68.27
#11 68.27 × Encountered error while generating package metadata.
#11 68.27 ╰─> See above for output.
#11 68.27
#11 68.27 note: This is an issue with the package mentioned above, not pip.
#11 68.27 hint: See above for details.
#11 68.27
#11 68.27 Ran pip subprocess with arguments:
#11 68.27 ['/opt/conda/envs/mlflow-env/bin/python', '-m', 'pip', 'install', '-U', '-r', '/model/condaenv.s6k7436y.requirements.txt']
#11 68.27 Pip subprocess output:
#11 68.27 Collecting auto-gptq==0.4.2
#11 68.27 Downloading auto_gptq-0.4.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.8 MB)
#11 68.27 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.8/1.8 MB 31.7 MB/s eta 0:00:00
#11 68.27 Collecting bitsandbytes==0.41.1
#11 68.27 Downloading bitsandbytes-0.41.1-py3-none-any.whl (92.6 MB)
#11 68.27 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 92.6/92.6 MB 19.4 MB/s eta 0:00:00
#11 68.27 Collecting boto3==1.24.28
#11 68.27 Downloading boto3-1.24.28-py3-none-any.whl (132 kB)
#11 68.27 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 132.5/132.5 kB 25.5 MB/s eta 0:00:00
#11 68.27 Collecting cloudpickle==2.0.0
#11 68.27 Downloading cloudpickle-2.0.0-py3-none-any.whl (25 kB)
#11 68.27 Collecting configparser==5.2.0
#11 68.27 Downloading configparser-5.2.0-py3-none-any.whl (19 kB)
#11 68.27 Collecting defusedxml==0.7.1
#11 68.27 Downloading defusedxml-0.7.1-py2.py3-none-any.whl (25 kB)
#11 68.27 Collecting flash-attn==2.3.2
#11 68.27 Downloading flash_attn-2.3.2.tar.gz (2.3 MB)
```
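The `ModuleNotFoundError: No module named 'packaging'` in the log comes from flash-attn's `setup.py` importing `packaging` before any build dependencies exist in the fresh conda env. A possible workaround, as a sketch only and assuming the serving image allows a pre-install step before the requirements file is resolved, is to install the build-time dependencies first and then install flash-attn without build isolation:

```python
# Hypothetical pre-install step for the container image (notebook-style syntax,
# matching the cluster setup above); versions are illustrative assumptions.
!pip install packaging ninja wheel torch==2.1.0
!pip install flash-attn==2.3.2 --no-build-isolation
```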
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27002/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27002/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27001 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27001/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27001/comments | https://api.github.com/repos/huggingface/transformers/issues/27001/events | https://github.com/huggingface/transformers/pull/27001 | 1,956,504,278 | PR_kwDOCUB6oc5dftiZ | 27,001 | [Fuyu] Add tests | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"> Thanks for adding this!\r\n> \r\n> Improvements on the modeling file are great. For the tests, lets coordinate with @molbap to figure the best sequence of merges as there's a few pending PRs relating to Fuyu that are due to be merged e.g. #27133 #27007 #27083 and might affect this PR\r\n\r\n@NielsRogge, I think once you remove the extra file `src/transformers/models/fuyu/test.py`, and when CI green, we can request a final review.",
"We have to put `src/transformers/models/fuyu/modeling_fuyu.py` in `utils/slow_documentation_tests.txt` to avoid timeout on CircleCI"
] | 1,698 | 1,700 | 1,700 | CONTRIBUTOR | null | # What does this PR do?
Fuyu currently is not being tested. This PR makes sure the model is tested.
To do:
- [ ] the model should ideally work out-of-the-box with the `image-to-text` and `vqa` pipelines, so processor + image processor should be battle tested to make sure they fit the API. => can be addressed in a separate PR
For the latter, the following should ideally work without any additional code:
```python
import requests
from PIL import Image
from transformers import FuyuProcessor, FuyuForCausalLM

processor = FuyuProcessor.from_pretrained("adept/fuyu-8b")
model = FuyuForCausalLM.from_pretrained("adept/fuyu-8b")

# Placeholder inputs so the snippet is runnable; any image/prompt pair works.
image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)
text = "Generate a coco-style caption.\n"

inputs = processor(images=image, text=text, return_tensors="pt")
outputs = model.generate(**inputs)
predictions = processor.batch_decode(outputs, skip_special_tokens=True)
```
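If the processor ends up fitting the pipeline API, the same flow would ideally reduce to something like the sketch below; this is hypothetical, and whether `image-to-text` actually accepts Fuyu checkpoints is exactly what the added tests need to establish.

```python
from transformers import pipeline

# Hypothetical usage; assumes the Fuyu processor/image processor are pipeline-compatible.
captioner = pipeline("image-to-text", model="adept/fuyu-8b")
print(captioner("http://images.cocodataset.org/val2017/000000039769.jpg"))
```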
cc @molbap | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27001/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27001/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27001",
"html_url": "https://github.com/huggingface/transformers/pull/27001",
"diff_url": "https://github.com/huggingface/transformers/pull/27001.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27001.patch",
"merged_at": 1700037185000
} |
https://api.github.com/repos/huggingface/transformers/issues/27000 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27000/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27000/comments | https://api.github.com/repos/huggingface/transformers/issues/27000/events | https://github.com/huggingface/transformers/issues/27000 | 1,956,450,175 | I_kwDOCUB6oc50nQ9_ | 27,000 | Unrecognized tensor type ID: AutocastCUDA | {
"login": "fancyerii",
"id": 5372812,
"node_id": "MDQ6VXNlcjUzNzI4MTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/5372812?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fancyerii",
"html_url": "https://github.com/fancyerii",
"followers_url": "https://api.github.com/users/fancyerii/followers",
"following_url": "https://api.github.com/users/fancyerii/following{/other_user}",
"gists_url": "https://api.github.com/users/fancyerii/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fancyerii/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fancyerii/subscriptions",
"organizations_url": "https://api.github.com/users/fancyerii/orgs",
"repos_url": "https://api.github.com/users/fancyerii/repos",
"events_url": "https://api.github.com/users/fancyerii/events{/privacy}",
"received_events_url": "https://api.github.com/users/fancyerii/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @younesbelkada \r\n",
"> ### System Info\r\n> * `transformers` version: 4.34.1\r\n> * Platform: Linux-5.4.0-153-generic-x86_64-with-glibc2.31\r\n> * Python version: 3.9.18\r\n> * Huggingface_hub version: 0.17.3\r\n> * Safetensors version: 0.4.0\r\n> * Accelerate version: 0.23.0\r\n> * Accelerate config: - compute_environment: LOCAL_MACHINE\r\n> - distributed_type: MULTI_GPU\r\n> - mixed_precision: fp16\r\n> - use_cpu: False\r\n> - debug: False\r\n> - num_processes: 2\r\n> - machine_rank: 0\r\n> - num_machines: 2\r\n> - gpu_ids: 0\r\n> - main_process_ip: 10.8.0.7\r\n> - main_process_port: 29500\r\n> - rdzv_backend: c10d\r\n> - same_network: False\r\n> - main_training_function: main\r\n> - downcast_bf16: no\r\n> - tpu_use_cluster: False\r\n> - tpu_use_sudo: False\r\n> - tpu_env: []\r\n> * PyTorch version (GPU?): 2.1.0+cu118 (True)\r\n> * Tensorflow version (GPU?): not installed (NA)\r\n> * Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n> * Jax version: not installed\r\n> * JaxLib version: not installed\r\n> * Using GPU in script?:\r\n> * Using distributed or parallel set-up in script?:\r\n> \r\n> ### Who can help?\r\n> I have similar problem when I try [this tutorial](https://colab.research.google.com/drive/1_TIrmuKOFhuRRiTWN94iLKUFu6ZX4ceb?usp=sharing#scrollTo=vT0XjNc2jYKy).\r\n> \r\n> File \"/home/lili/miniconda3/envs/py39_torch21/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 1527, in _call_impl return forward_call(*args, **kwargs) File \"/home/lili/miniconda3/envs/py39_torch21/lib/python3.9/site-packages/peft/tuners/lora.py\", line 1255, in forward result = self.quant_linear_module(x) File \"/home/lili/miniconda3/envs/py39_torch21/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 1518, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File \"/home/lili/miniconda3/envs/py39_torch21/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 1527, in _call_impl return forward_call(*args, **kwargs) File \"/home/lili/miniconda3/envs/py39_torch21/lib/python3.9/site-packages/auto_gptq/nn_modules/qlinear/qlinear_cuda_old.py\", line 221, in forward self.autogptq_cuda.vecquant4matmul_old(x, self.qweight, out, self.scales.float(), self.qzeros, self.group_size) RuntimeError: Unrecognized tensor type ID: AutocastCUDA\r\n> \r\n> Hardware details Nvidia A100 40GB\r\n> \r\n> Software ubuntu 20.04 torch=2.1.0+cu118 auto-gptq=0.4.2+cu117 accelerate=0.23.0\r\n> \r\n> I have found related issue in [here](https://github.com/PanQiWei/AutoGPTQ/issues/374) but found no answer.\r\n> \r\n> ### Information\r\n> * [ ] The official example scripts\r\n> * [ ] My own modified scripts\r\n> \r\n> ### Tasks\r\n> * [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\r\n> * [ ] My own task or dataset (give details below)\r\n> \r\n> ### Reproduction\r\n> run https://colab.research.google.com/drive/1_TIrmuKOFhuRRiTWN94iLKUFu6ZX4ceb?usp=sharing#scrollTo=vT0XjNc2jYKy\r\n> \r\n> ### Expected behavior\r\n> no error.\r\n\r\n@fancyerii Not a good solution, but can you try downgrading the torch version to 2.0.1+cu118",
"Also seems to be an autoGPTQ issue rather than a transformers one! ",
"Closing as duplicate of https://github.com/PanQiWei/AutoGPTQ/issues/374"
] | 1,698 | 1,698 | 1,698 | NONE | null | ### System Info
- `transformers` version: 4.34.1
- Platform: Linux-5.4.0-153-generic-x86_64-with-glibc2.31
- Python version: 3.9.18
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.4.0
- Accelerate version: 0.23.0
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: MULTI_GPU
- mixed_precision: fp16
- use_cpu: False
- debug: False
- num_processes: 2
- machine_rank: 0
- num_machines: 2
- gpu_ids: 0
- main_process_ip: 10.8.0.7
- main_process_port: 29500
- rdzv_backend: c10d
- same_network: False
- main_training_function: main
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- PyTorch version (GPU?): 2.1.0+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
I have a similar problem when I try [this tutorial](https://colab.research.google.com/drive/1_TIrmuKOFhuRRiTWN94iLKUFu6ZX4ceb?usp=sharing#scrollTo=vT0XjNc2jYKy).
File "/home/lili/miniconda3/envs/py39_torch21/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/home/lili/miniconda3/envs/py39_torch21/lib/python3.9/site-packages/peft/tuners/lora.py", line 1255, in forward
result = self.quant_linear_module(x)
File "/home/lili/miniconda3/envs/py39_torch21/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/lili/miniconda3/envs/py39_torch21/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/home/lili/miniconda3/envs/py39_torch21/lib/python3.9/site-packages/auto_gptq/nn_modules/qlinear/qlinear_cuda_old.py", line 221, in forward
self.autogptq_cuda.vecquant4matmul_old(x, self.qweight, out, self.scales.float(), self.qzeros, self.group_size)
RuntimeError: Unrecognized tensor type ID: AutocastCUDA
Hardware details
Nvidia A100 40GB
Software
ubuntu 20.04 torch=2.1.0+cu118 auto-gptq=0.4.2+cu117 accelerate=0.23.0
I have found a related issue [here](https://github.com/PanQiWei/AutoGPTQ/issues/374) but found no answer.
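For completeness, the workaround suggested in the comments (pinning torch back to 2.0.1+cu118 so the old auto-gptq CUDA kernel is not hit through the autocast dispatch key) would look roughly like this in a notebook environment; the index URL and companion pins are assumptions and have not been verified against this exact setup.

```python
# Hypothetical environment pin based on the suggestion in the comments above.
!pip install "torch==2.0.1+cu118" --index-url https://download.pytorch.org/whl/cu118
!pip install auto-gptq==0.4.2 accelerate==0.23.0 transformers==4.34.1 peft
```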
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
run https://colab.research.google.com/drive/1_TIrmuKOFhuRRiTWN94iLKUFu6ZX4ceb?usp=sharing#scrollTo=vT0XjNc2jYKy
### Expected behavior
no error. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27000/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27000/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26999 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26999/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26999/comments | https://api.github.com/repos/huggingface/transformers/issues/26999/events | https://github.com/huggingface/transformers/issues/26999 | 1,956,387,117 | I_kwDOCUB6oc50nBkt | 26,999 | IndexError: list index out of range in encode function inside SentenceTransformer.py | {
"login": "mali404",
"id": 112294417,
"node_id": "U_kgDOBrF6EQ",
"avatar_url": "https://avatars.githubusercontent.com/u/112294417?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mali404",
"html_url": "https://github.com/mali404",
"followers_url": "https://api.github.com/users/mali404/followers",
"following_url": "https://api.github.com/users/mali404/following{/other_user}",
"gists_url": "https://api.github.com/users/mali404/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mali404/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mali404/subscriptions",
"organizations_url": "https://api.github.com/users/mali404/orgs",
"repos_url": "https://api.github.com/users/mali404/repos",
"events_url": "https://api.github.com/users/mali404/events{/privacy}",
"received_events_url": "https://api.github.com/users/mali404/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey! Not sure I understand if this is related to `transformers` or langchain. If it's indeed a bug in transformers could you share a minimal reproducer? Isolating the bug",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,698 | 1,701 | 1,701 | NONE | null | ### System Info
- `transformers` version: 4.32.1
- Platform: Windows-10-10.0.22621-SP0
- Python version: 3.11.4
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.2
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@ArthurZucker
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Steps to reproduce:
1.
```python
#####BGE_LARGE####
#https://github.com/hwchase17/chroma-langchain/blob/master/persistent-qa.ipynb
from langchain.vectorstores import Chroma
from langchain.embeddings import HuggingFaceBgeEmbeddings
persist_directory = r'C:\Study\XXX\Code\Data\chroma\bge_large'
#defining embedding model
model_name = "BAAI/bge-large-en-v1.5"
model_kwargs = {'device': 'cuda'}
encode_kwargs = {'normalize_embeddings': True} # set True to compute cosine similarity
embedding_function = HuggingFaceBgeEmbeddings(
model_name=model_name,
cache_folder=r'C:\Study\XXX\Code\Data\chroma\models_cached',
model_kwargs=model_kwargs,
encode_kwargs=encode_kwargs
)
#creating vector embeddings and storing them locally
# `chunks_redone` is my list of pre-split text chunks (definition not shown here)
vectordb = Chroma.from_texts(texts=chunks_redone, embedding=embedding_function, collection_name='iit_Bulletin', persist_directory=persist_directory)
```
2. The default setting is batch_size = 32. With that, the following error occurred: **IndexError:** list index out of range at line 192 of SentenceTransformer.py:
```python
all_embeddings = [all_embeddings[idx] for idx in np.argsort(length_sorted_idx)]
```
3. This error happens because the all_embeddings list ends up with several times fewer elements than the length_sorted_idx list.
The error was removed by changing batch_size from 32 to 1: in two cases the IndexError appeared at the point where len(all_embeddings) was approximately equal to len(length_sorted_idx)/batch_size, and changing the batch size to 1 ensured that len(all_embeddings) = len(length_sorted_idx)/batch_size (the workaround is sketched below).
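For reference, the batch size used by `encode` can be passed through `encode_kwargs`, so the workaround above amounts to a one-line change in the setup from step 1 (a sketch; the persist directory and cache folder from the original snippet are omitted):

```python
from langchain.embeddings import HuggingFaceBgeEmbeddings

embedding_function = HuggingFaceBgeEmbeddings(
    model_name="BAAI/bge-large-en-v1.5",
    model_kwargs={"device": "cuda"},
    # encode_kwargs are forwarded to SentenceTransformer.encode();
    # batch_size=1 avoids the IndexError described above.
    encode_kwargs={"normalize_embeddings": True, "batch_size": 1},
)
```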
### Expected behavior
1. I would expect the **_encode_** function to handle batches correctly when batch_size > 1, especially while storing the embeddings into the all_embeddings list.
2. Disclaimer: I have only shared the _symptoms_ of the matter. I could not figure out its _root cause_. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26999/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26999/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26998 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26998/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26998/comments | https://api.github.com/repos/huggingface/transformers/issues/26998/events | https://github.com/huggingface/transformers/pull/26998 | 1,956,189,003 | PR_kwDOCUB6oc5depZW | 26,998 | I added new doc string into the MusicGenAttention class. | {
"login": "hi-sushanta",
"id": 93595990,
"node_id": "U_kgDOBZQpVg",
"avatar_url": "https://avatars.githubusercontent.com/u/93595990?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hi-sushanta",
"html_url": "https://github.com/hi-sushanta",
"followers_url": "https://api.github.com/users/hi-sushanta/followers",
"following_url": "https://api.github.com/users/hi-sushanta/following{/other_user}",
"gists_url": "https://api.github.com/users/hi-sushanta/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hi-sushanta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hi-sushanta/subscriptions",
"organizations_url": "https://api.github.com/users/hi-sushanta/orgs",
"repos_url": "https://api.github.com/users/hi-sushanta/repos",
"events_url": "https://api.github.com/users/hi-sushanta/events{/privacy}",
"received_events_url": "https://api.github.com/users/hi-sushanta/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"i m ok",
"Hey! Thanks for the feedback. But the docstring is not copied content.",
"What I mean is that the reason why the CI are broken is because the module to which you are adding docstring uses the `# Copied from` statement, which enforces that the entire module is copied from the source! ",
"Would it be okay to close this pull request now?\r\n\r\n",
"Of course! 🤗 "
] | 1,698 | 1,698 | 1,698 | CONTRIBUTOR | null | I added a comprehensive and informative docstring to the MusicGenAttention class. It describes each of the parameters in the class in detail.
And now see what my docstring looks like.

## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26998/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26998/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26998",
"html_url": "https://github.com/huggingface/transformers/pull/26998",
"diff_url": "https://github.com/huggingface/transformers/pull/26998.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26998.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26997 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26997/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26997/comments | https://api.github.com/repos/huggingface/transformers/issues/26997/events | https://github.com/huggingface/transformers/pull/26997 | 1,956,105,018 | PR_kwDOCUB6oc5deW6t | 26,997 | Fuyu Finetuning Example | {
"login": "ncoop57",
"id": 7613470,
"node_id": "MDQ6VXNlcjc2MTM0NzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/7613470?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ncoop57",
"html_url": "https://github.com/ncoop57",
"followers_url": "https://api.github.com/users/ncoop57/followers",
"following_url": "https://api.github.com/users/ncoop57/following{/other_user}",
"gists_url": "https://api.github.com/users/ncoop57/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ncoop57/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ncoop57/subscriptions",
"organizations_url": "https://api.github.com/users/ncoop57/orgs",
"repos_url": "https://api.github.com/users/ncoop57/repos",
"events_url": "https://api.github.com/users/ncoop57/events{/privacy}",
"received_events_url": "https://api.github.com/users/ncoop57/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"The issue I am getting right now is that I am unable to pad the image patches correct it seems for batch training. The image processor doesn't seem to be doing it correct.",
"I thought it would be as simple as modifying variable_size var to false: https://github.com/huggingface/transformers/blob/main/src/transformers/models/fuyu/processing_fuyu.py#L508C1-L508C37\r\n\r\nAnd that seems to patch the image patches correct, but the `image_patches_indices` var doesn't seem to be updated properly",
"> The issue I am getting right now is that I am unable to pad the image patches correct it seems for batch training. The image processor doesn't seem to be doing it correct.\r\n\r\nYeah, the current image processor seems only outputs batch-size=1 result",
"Hey! Thanks a lot for wanting to contribute! We usually share links to training scripts like in the ressource section (like [ this one](https://twitter.com/m_olbap/status/1715362812757999991)), but we don't add them to the repo for every new model! @molbap is working on a fix for the image processor see #27007",
"Heyya @ArthurZucker ! In that case, should I close this PR or should this PR be the first of multi-modal model training examples? Seems like the space is getting super interesting and so it might be nice for people to see example scripts in the official repo for huggingface",
"Hi Is this working now?\r\n",
"This is working for me, but only with batch size 1. Using 4-bit Lora training this does fit on a 24GB GPU.\r\n\r\nUsing higher than batch 1 gives the error I reported in: #27255\r\n\r\n\r\n\r\n",
"Hi @ncoop57! It was mentioned before not sure if this PR should be merged here but it's a great resource. We don't have official training scripts, but we updated the `FuyuProcessor` has been updated with batching + attention masks with left-padding, see https://github.com/huggingface/transformers/blob/88832c01c8a962b653874c4ce4ed8df5783ac5cd/src/transformers/models/fuyu/processing_fuyu.py#L335C1-L383C1. Is that factored in for your loss calculation? I'm not 100% sure about the code but I think it would need to be adjusted a bit, such as \r\n\r\n```python\r\ndef collate_fn(examples):\r\n texts = [e[text_column_name] for e in examples]\r\n images = [e[image_column_name] for e in examples]\r\n output = processor(\r\n text=texts,\r\n images=images,\r\n padding=\"max_length\",\r\n truncation=True\r\n )\r\n first_non_padded = (output[\"input_ids\"] != tokenizer.pad_token_id).nonzero(as_tuple=False)\r\n first_non_padded_indices = first_non_padded[:, 1].view(-1, 1)\r\n labels = torch.full_like(output[\"input_ids\"], -100)\r\n for i, index in enumerate(first_non_padded_indices):\r\n labels[i, index:] = output[\"input_ids\"][i, index:]\r\n output[\"labels\"] = labels\r\n return output\r\n\r\n```\r\nlet me know what you think!",
"@molbap @ArthurZucker I would be pro adding a finetuning script for Fuyu to our examples because we yet have an example for this modality / task. ",
"Super happy to see more excitement around this example script! Thanks for the loss pointer, I'm looking into it. I had the same error as @Carolinabanana mentioned but only when using the Trainer object, not when doing it myself. I did try to account for padding/image patches because I don't think we want to train on the prediction of the image patch indices, but I did forget to take into account the left padding!",
"The script doesn't support VQA, does it?",
"Can it fit in a 80GB card with bfloat16 for tuning? I faced a CUDA memory issue when running `run_fuyu_no_trainer.py` on a single A100 card.",
"Can anyone provide a train script? Thanks!\r\n",
"Hi @ncoop57 Any update on the script and readiness for PR review? Already looks very good! Let us know if you need any help adding it to the library",
"AH sadly haven't had a chance to look back into this. Luckily Ill be off all next week so gonna add this to my list to complete and will reach out with help that I need, ty!",
"Thanks @ncoop57 ! I would love an example training script to try finetuning Fuyu-8B for a domain specific image captioning dataset ",
"@amyeroberts think I got it fully in and ready for review. @molbap if you have some time, I'd love for you to checkout how I am handling the left padding:\r\n\r\n```python\r\ndef collate_fn(examples):\r\n texts = [e[text_column_name] for e in examples]\r\n images = [e[image_column_name] for e in examples]\r\n output = processor(\r\n text=texts, images=images, padding=\"max_length\", truncation=True\r\n )\r\n position = (output[\"input_ids\"] == tokenizer.vocab[\"<s>\"]).nonzero(\r\n as_tuple=True\r\n )[0][\r\n 0\r\n ] # This gets the index of the first '1' in the tensor by passing the left padding\r\n output[\"labels\"] = torch.full_like(\r\n output[\"input_ids\"], -100\r\n ) # This creates a tensor filled with -100\r\n output[\"labels\"][position:] = output[\"input_ids\"][position:]\r\n return output\r\n```\r\n\r\nLet me know what else is needed to get this merged in 🤓 ",
"I wasn't able to get the Trainer working so right now this is just with a standard training loop and uses accelerate to handle all the FSDP stuff",
"@ncoop57 Great! For the quality checks, make sure toe run `make fixup` and resolve any issues it flags locally and push those changes. This should make the CI green. For example, I can see from [the recent CI](https://app.circleci.com/pipelines/github/huggingface/transformers/82174/workflows/f3b64855-fc57-4c9d-b74f-d9d6c8c03241/jobs/1056941) theres a few objects imported by never used in the script. ",
"@amyeroberts looks like I got all the CI stuff working 🤓 lemme know if any other stuff are needed",
"maybe one tweak would be renaming the folder to \"visual-language-modeling\"?",
"@amyeroberts any updates on this?",
"Awesome, thanks for the review and recommendations! 🤓 I've been working on finetuning using the https://huggingface.co/datasets/HuggingFaceM4/WebSight dataset, so once I have the model trained I'll make sure to add it here and to the README!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,698 | 1,708 | null | CONTRIBUTOR | null | # What does this PR do?
This PR is about contributing an example for finetuning the recent Adept Fuyu model architecture. It adds the ability to calculate the loss for the model, as well as a new folder for image-text examples. It is currently in DRAFT mode as it still has a bug due to the data collator padding and lacks an easy way of performing evaluation. I'm opening the PR in this state so that I can get help.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@molbap would love some help since you initially created the model!
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
Library:
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26997/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26997/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26997",
"html_url": "https://github.com/huggingface/transformers/pull/26997",
"diff_url": "https://github.com/huggingface/transformers/pull/26997.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26997.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26996 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26996/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26996/comments | https://api.github.com/repos/huggingface/transformers/issues/26996/events | https://github.com/huggingface/transformers/pull/26996 | 1,956,047,300 | PR_kwDOCUB6oc5deLiq | 26,996 | Nits in Llama2 docstring | {
"login": "osanseviero",
"id": 7246357,
"node_id": "MDQ6VXNlcjcyNDYzNTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/osanseviero",
"html_url": "https://github.com/osanseviero",
"followers_url": "https://api.github.com/users/osanseviero/followers",
"following_url": "https://api.github.com/users/osanseviero/following{/other_user}",
"gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}",
"starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions",
"organizations_url": "https://api.github.com/users/osanseviero/orgs",
"repos_url": "https://api.github.com/users/osanseviero/repos",
"events_url": "https://api.github.com/users/osanseviero/events{/privacy}",
"received_events_url": "https://api.github.com/users/osanseviero/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,698 | 1,698 | 1,698 | MEMBER | null | This PR fixes minor typos and a broken docstring format (`torch_dtype = 'float16'` was not closed correctly) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26996/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26996/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26996",
"html_url": "https://github.com/huggingface/transformers/pull/26996",
"diff_url": "https://github.com/huggingface/transformers/pull/26996.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26996.patch",
"merged_at": 1698063599000
} |
https://api.github.com/repos/huggingface/transformers/issues/26995 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26995/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26995/comments | https://api.github.com/repos/huggingface/transformers/issues/26995/events | https://github.com/huggingface/transformers/pull/26995 | 1,955,864,967 | PR_kwDOCUB6oc5ddoB7 | 26,995 | python falcon doc-string example typo | {
"login": "SoyGema",
"id": 24204714,
"node_id": "MDQ6VXNlcjI0MjA0NzE0",
"avatar_url": "https://avatars.githubusercontent.com/u/24204714?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SoyGema",
"html_url": "https://github.com/SoyGema",
"followers_url": "https://api.github.com/users/SoyGema/followers",
"following_url": "https://api.github.com/users/SoyGema/following{/other_user}",
"gists_url": "https://api.github.com/users/SoyGema/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SoyGema/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SoyGema/subscriptions",
"organizations_url": "https://api.github.com/users/SoyGema/orgs",
"repos_url": "https://api.github.com/users/SoyGema/repos",
"events_url": "https://api.github.com/users/SoyGema/events{/privacy}",
"received_events_url": "https://api.github.com/users/SoyGema/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26995). All of your documentation changes will be reflected on that endpoint."
] | 1,697 | 1,698 | 1,698 | CONTRIBUTOR | null | # What does this PR do?
<!--
-->
<!-- Remove if not applicable -->
Fix a small typo.
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@amyeroberts @ArthurZucker
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26995/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26995/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26995",
"html_url": "https://github.com/huggingface/transformers/pull/26995",
"diff_url": "https://github.com/huggingface/transformers/pull/26995.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26995.patch",
"merged_at": 1698058295000
} |
https://api.github.com/repos/huggingface/transformers/issues/26994 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26994/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26994/comments | https://api.github.com/repos/huggingface/transformers/issues/26994/events | https://github.com/huggingface/transformers/issues/26994 | 1,955,846,981 | I_kwDOCUB6oc50k9tF | 26,994 | The current architecture does not support Flash Attention 2.0 - distilgpt2, gpt2-medium | {
"login": "unnir",
"id": 1344378,
"node_id": "MDQ6VXNlcjEzNDQzNzg=",
"avatar_url": "https://avatars.githubusercontent.com/u/1344378?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/unnir",
"html_url": "https://github.com/unnir",
"followers_url": "https://api.github.com/users/unnir/followers",
"following_url": "https://api.github.com/users/unnir/following{/other_user}",
"gists_url": "https://api.github.com/users/unnir/gists{/gist_id}",
"starred_url": "https://api.github.com/users/unnir/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/unnir/subscriptions",
"organizations_url": "https://api.github.com/users/unnir/orgs",
"repos_url": "https://api.github.com/users/unnir/repos",
"events_url": "https://api.github.com/users/unnir/events{/privacy}",
"received_events_url": "https://api.github.com/users/unnir/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | [
"cc @younesbelkada let's add them to the #26350 ? "
] | 1,697 | 1,698 | null | NONE | null | Could you please add the flash attention 2.0 support for these models: distilgpt2, gpt2-medium? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26994/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26994/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26993 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26993/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26993/comments | https://api.github.com/repos/huggingface/transformers/issues/26993/events | https://github.com/huggingface/transformers/issues/26993 | 1,955,835,723 | I_kwDOCUB6oc50k69L | 26,993 | Memory leak on trainer's evaluation | {
"login": "omermazig",
"id": 95534441,
"node_id": "U_kgDOBbG9aQ",
"avatar_url": "https://avatars.githubusercontent.com/u/95534441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/omermazig",
"html_url": "https://github.com/omermazig",
"followers_url": "https://api.github.com/users/omermazig/followers",
"following_url": "https://api.github.com/users/omermazig/following{/other_user}",
"gists_url": "https://api.github.com/users/omermazig/gists{/gist_id}",
"starred_url": "https://api.github.com/users/omermazig/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omermazig/subscriptions",
"organizations_url": "https://api.github.com/users/omermazig/orgs",
"repos_url": "https://api.github.com/users/omermazig/repos",
"events_url": "https://api.github.com/users/omermazig/events{/privacy}",
"received_events_url": "https://api.github.com/users/omermazig/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@omermazig is the notebook 1:1 that one, bar your dataset? Do you experience the memory leak occurring in that notebook as well? When testing locally, I couldn't indentify any memory leaks in the basic scripts that we have (like `run_glue.py`)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Sorry for the late response. Apparently this assumption:\r\n\r\n> (I'm getting the generic \"RuntimeError: Failed to load video after 10 retries\", but I see the memory is full so I'm pretty sure that's the problem)\r\n\r\nwas wrong, and after I stopped working with a dataset from google drive, and copy it to the machine before training (instead of working with the path to google drive), this ceases to happen."
] | 1,697 | 1,700 | 1,700 | NONE | null | ### System Info
I run a training process based on [this](https://github.com/huggingface/notebooks/blob/main/examples/video_classification.ipynb) notebook, and I use a dataset of 8000 training videos.
My inference process is memory-heavy. My val dataset's clip sampler is "random_multi", which means that for every video I take 5 clips of it, pass them to the model, and aggregate the results together for classification.
My problem is that there seems to be a memory leak in the evaluation process. I run on colab, and I can see that at each epoch, the memory usage stays the same throughout the epoch, goes up on evaluation, and stays there for the next epoch, and so on. So eventually after 2 or 3 epochs I'm running out of memory (I'm getting the generic "RuntimeError: Failed to load video after 10 retries", but I see the memory is full so I'm pretty sure that's the problem).
Is there an explanation for it? Is there something being saved **in memory** between epochs and if so, is there a way to release it? What should I do?
Thank you!
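(One thing that could be tried — a sketch under the assumption that the growth comes from cached CUDA tensors, not something from the original notebook; the callback name is made up:)

```python
import gc
import torch
from transformers import TrainerCallback

class FreeMemoryCallback(TrainerCallback):
    def on_evaluate(self, args, state, control, **kwargs):
        # Collect unreachable Python objects, then return cached CUDA blocks
        # to the driver so the next epoch starts from a lower memory baseline.
        gc.collect()
        if torch.cuda.is_available():
            torch.cuda.empty_cache()

# trainer = Trainer(..., callbacks=[FreeMemoryCallback()])
```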
### Who can help?
@muellerzr and @pacman100
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Not really. If it's needed, I can supply my own notebook.
### Expected behavior
The memory usage should stay the same between epochs, and the memory needed for the evaluation process at the end of each epoch should be freed by the time the next epoch starts. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26993/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26993/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26992 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26992/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26992/comments | https://api.github.com/repos/huggingface/transformers/issues/26992/events | https://github.com/huggingface/transformers/issues/26992 | 1,955,799,138 | I_kwDOCUB6oc50kyBi | 26,992 | Can't load wangchanberta (Camembert model) for train text classification model | {
"login": "wannaphong",
"id": 8536487,
"node_id": "MDQ6VXNlcjg1MzY0ODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8536487?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wannaphong",
"html_url": "https://github.com/wannaphong",
"followers_url": "https://api.github.com/users/wannaphong/followers",
"following_url": "https://api.github.com/users/wannaphong/following{/other_user}",
"gists_url": "https://api.github.com/users/wannaphong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wannaphong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wannaphong/subscriptions",
"organizations_url": "https://api.github.com/users/wannaphong/orgs",
"repos_url": "https://api.github.com/users/wannaphong/repos",
"events_url": "https://api.github.com/users/wannaphong/events{/privacy}",
"received_events_url": "https://api.github.com/users/wannaphong/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi,\r\n\r\nThis is not a bug, you're not preparing any `labels` for the model (the dataset only consists of `input_ids` and `attention_mask`).",
"@NielsRogge I was fixed the notebook but It still has the error.\r\n\r\n```\r\nRuntimeError: CUDA error: device-side assert triggered\r\nCUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.\r\nFor debugging consider passing CUDA_LAUNCH_BLOCKING=1.\r\nCompile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.\r\n```\r\n\r\nColab: https://colab.research.google.com/drive/1TthDy7Li1veLHAVpuSlpMkGLsK8JK2i3?usp=sharing",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,697 | 1,701 | 1,701 | NONE | null | ### System Info
- `transformers` version: 4.34.1
- Platform: Linux-5.15.120+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.4.0
- Accelerate version: 0.23.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0+cu118 (True)
- Tensorflow version (GPU?): 2.13.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.7.4 (gpu)
- Jax version: 0.4.16
- JaxLib version: 0.4.16
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@gante @Rocketknight1 @ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I followed [https://huggingface.co/docs/transformers/tasks/sequence_classification](https://huggingface.co/docs/transformers/tasks/sequence_classification). I changed the model and dataset to train a text classification model with [airesearch/wangchanberta-base-att-spm-uncased](https://huggingface.co/airesearch/wangchanberta-base-att-spm-uncased). It can't train. It says
```
You're using a CamembertTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
[<ipython-input-18-3435b262f1ae>](https://localhost:8080/#) in <cell line: 1>()
----> 1 trainer.train()
3 frames
[/usr/local/lib/python3.10/dist-packages/transformers/trainer.py](https://localhost:8080/#) in compute_loss(self, model, inputs, return_outputs)
2816 else:
2817 if isinstance(outputs, dict) and "loss" not in outputs:
-> 2818 raise ValueError(
2819 "The model did not return a loss from the inputs, only the following keys: "
2820 f"{','.join(outputs.keys())}. For reference, the inputs it received are {','.join(inputs.keys())}."
ValueError: The model did not return a loss from the inputs, only the following keys: logits. For reference, the inputs it received are input_ids,attention_mask.
```
Google colab: [https://colab.research.google.com/drive/1TthDy7Li1veLHAVpuSlpMkGLsK8JK2i3?usp=sharing](https://colab.research.google.com/drive/1TthDy7Li1veLHAVpuSlpMkGLsK8JK2i3?usp=sharing)
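(As a reference, a minimal sketch — not the original notebook code — of how the label column could be carried into the tokenized dataset so the model receives `labels` and returns a loss; the `text`/`label` column names and `num_labels=2` are assumptions. The later CUDA device-side assert is often a sign of label ids falling outside `num_labels`.)

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "airesearch/wangchanberta-base-att-spm-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=2)

def preprocess(examples):
    # Tokenize the text and keep the label column as `labels`,
    # so the Trainer's forward pass can compute a loss.
    batch = tokenizer(examples["text"], truncation=True)
    batch["labels"] = examples["label"]
    return batch

# tokenized_dataset = dataset.map(preprocess, batched=True)
```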
### Expected behavior
It should be able to train. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26992/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26992/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26991 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26991/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26991/comments | https://api.github.com/repos/huggingface/transformers/issues/26991/events | https://github.com/huggingface/transformers/issues/26991 | 1,955,790,844 | I_kwDOCUB6oc50kv_8 | 26,991 | PreTrainedTokenizerFast v4.34.1 fails on `self._tokenizer.get_added_tokens_decoder()` | {
"login": "programjames",
"id": 35083764,
"node_id": "MDQ6VXNlcjM1MDgzNzY0",
"avatar_url": "https://avatars.githubusercontent.com/u/35083764?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/programjames",
"html_url": "https://github.com/programjames",
"followers_url": "https://api.github.com/users/programjames/followers",
"following_url": "https://api.github.com/users/programjames/following{/other_user}",
"gists_url": "https://api.github.com/users/programjames/gists{/gist_id}",
"starred_url": "https://api.github.com/users/programjames/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/programjames/subscriptions",
"organizations_url": "https://api.github.com/users/programjames/orgs",
"repos_url": "https://api.github.com/users/programjames/repos",
"events_url": "https://api.github.com/users/programjames/events{/privacy}",
"received_events_url": "https://api.github.com/users/programjames/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey! Could you share the tokenizer file? I cannot reproduce this one my side otherwise.\r\nHere is what I used:\r\n```python \r\n>>> !wget https://huggingface.co/RWKV/rwkv-4-3b-pile/raw/main/tokenizer.json\r\n>>> PreTrainedTokenizerFast(tokenizer_file='tokenizer.json')\r\n```",
"I'm using [https://github.com/BlinkDL/RWKV-LM/blob/main/RWKV-v4/20B_tokenizer.json](https://github.com/BlinkDL/RWKV-LM/blob/main/RWKV-v4/20B_tokenizer.json).",
"Still can't reproduce this as it's working fine for me. Make sure you have `tokenizers==0.14.1` :\r\n```python \r\n>>> !wget https://raw.githubusercontent.com/BlinkDL/RWKV-LM/main/RWKV-v4/20B_tokenizer.json\r\n>>> from transformers import PreTrainedTokenizerFast\r\n>>> tokenizer = PreTrainedTokenizerFast(tokenizer_file = \"tokenizer.json\")\r\n>>> tokenizer\r\nPreTrainedTokenizerFast(name_or_path='', vocab_size=50254, model_max_length=1000000000000000019884624838656, is_fast=True, padding_side='right', truncation_side='right', special_tokens={}, clean_up_tokenization_spaces=True), added_tokens_decoder={\r\n\t0: AddedToken(\"<|endoftext|>\", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),\r\n\t1: AddedToken(\"<|padding|>\", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),\r\n\t50254: AddedToken(\" \", rstrip=False, lstrip=False, single_word=False, normalized=True, special=False),\r\n\t50255: AddedToken(\" \", rstrip=False, lstrip=False, single_word=False, normalized=True, special=False),\r\n\t50256: AddedToken(\" \", rstrip=False, lstrip=False, single_word=False, normalized=True, special=False),\r\n\t50257: AddedToken(\" \", rstrip=False, lstrip=False, single_word=False, normalized=True, special=False),\r\n\t50258: AddedToken(\" \", rstrip=False, lstrip=False, single_word=False, normalized=True, special=False),\r\n\t50259: AddedToken(\" \", rstrip=False, lstrip=False, single_word=False, normalized=True, special=False),\r\n\t50260: AddedToken(\" \", rstrip=False, lstrip=False, single_word=False, normalized=True, special=False),\r\n\t50261: AddedToken(\" \", rstrip=False, lstrip=False, single_word=False, normalized=True, special=False),\r\n\t50262: AddedToken(\" \", rstrip=False, lstrip=False, single_word=False, normalized=True, special=False),\r\n\t50263: AddedToken(\" \", rstrip=False, lstrip=False, single_word=False, normalized=True, special=False),\r\n\t50264: AddedToken(\" \", rstrip=False, lstrip=False, single_word=False, normalized=True, special=False),\r\n\t50265: AddedToken(\" \", rstrip=False, lstrip=False, single_word=False, normalized=True, special=False),\r\n\t50266: AddedToken(\" \", rstrip=False, lstrip=False, single_word=False, normalized=True, special=False),\r\n\t50267: AddedToken(\" \", rstrip=False, lstrip=False, single_word=False, normalized=True, special=False),\r\n\t50268: AddedToken(\" \", rstrip=False, lstrip=False, single_word=False, normalized=True, special=False),\r\n\t50269: AddedToken(\" \", rstrip=False, lstrip=False, single_word=False, normalized=True, special=False),\r\n\t50270: AddedToken(\" \", rstrip=False, lstrip=False, single_word=False, normalized=True, special=False),\r\n\t50271: AddedToken(\" \", rstrip=False, lstrip=False, single_word=False, normalized=True, special=False),\r\n\t50272: AddedToken(\" \", rstrip=False, lstrip=False, single_word=False, normalized=True, special=False),\r\n\t50273: AddedToken(\" \", rstrip=False, lstrip=False, single_word=False, normalized=True, special=False),\r\n\t50274: AddedToken(\" \", rstrip=False, lstrip=False, single_word=False, normalized=True, special=False),\r\n\t50275: AddedToken(\" \", rstrip=False, lstrip=False, single_word=False, normalized=True, special=False),\r\n\t50276: AddedToken(\" \", rstrip=False, lstrip=False, single_word=False, normalized=True, special=False),\r\n}\r\n```",
"I'm getting the same error, and am also able to reproduce the error from the example provided by OP. I have `tokenizers==0.14.1` and `transformers==4.35.0`. Also, dropping to `transformers==4.34.0` did fix the issue for me as well.",
"@shreyasgm are you using the same checkpoints? \r\nThat's very strange I ran this last month, re-ran it today and I do not have an issue. Can you make sure your intepreter has the correct tokenizers and transformers version by priting it and provide a reproducible snippet / link to a colab?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,697 | 1,703 | 1,703 | NONE | null | ### System Info
System info:
```
- Platform: Linux-REDACTED
- Python version: 3.11.REDACTED
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.4.0
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
Note: No GPU.
Here's the stacktrace.
```
Traceback (most recent call last):
File "/home/james/Documents/ml/rwkv-steering/rwkv_model/tokenizer.py", line 5, in <module>
MyTokenizer = PreTrainedTokenizerFast(tokenizer_file=os.path.join(my_dir, "rwkv-3b", "20B_tokenizer.json"))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/james/miniconda3/envs/rwkv-steering/lib/python3.11/site-packages/transformers/tokenization_utils_fast.py", line 167, in __init__
encoder = list(self.added_tokens_encoder.keys()) + [str(token) for token in tokens_to_add]
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/james/miniconda3/envs/rwkv-steering/lib/python3.11/site-packages/transformers/tokenization_utils_fast.py", line 226, in added_tokens_encoder
return {k.content: v for v, k in sorted(self.added_tokens_decoder.items(), key=lambda item: item[0])}
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/james/miniconda3/envs/rwkv-steering/lib/python3.11/site-packages/transformers/tokenization_utils_fast.py", line 236, in added_tokens_decoder
return self._tokenizer.get_added_tokens_decoder()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'tokenizers.Tokenizer' object has no attribute 'get_added_tokens_decoder'
```
Dropping down to v4.34.0 succeeded, so this is in the latest version. Also, I likely won't be replying to comments.
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Just this line of code:
```python3
from transformers import PreTrainedTokenizerFast
PreTrainedTokenizerFast(tokenizer_file="rwkv-3b/20B_tokenizer.json")
```
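(For reference, a two-line check — an addition for illustration, not part of the original report — to confirm which `transformers`/`tokenizers` versions the interpreter actually picks up, since the failure appears to be version-dependent:)

```python
import tokenizers, transformers
# Print the versions the running interpreter resolves, to rule out a mixed environment.
print(transformers.__version__, tokenizers.__version__)
```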
### Expected behavior
It shouldn't throw an error. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26991/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26991/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26990 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26990/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26990/comments | https://api.github.com/repos/huggingface/transformers/issues/26990/events | https://github.com/huggingface/transformers/issues/26990 | 1,955,610,578 | I_kwDOCUB6oc50kD_S | 26,990 | [Efficiency] The llama model with flash attention is slower than that without flash attention | {
"login": "KexinFeng",
"id": 23562091,
"node_id": "MDQ6VXNlcjIzNTYyMDkx",
"avatar_url": "https://avatars.githubusercontent.com/u/23562091?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KexinFeng",
"html_url": "https://github.com/KexinFeng",
"followers_url": "https://api.github.com/users/KexinFeng/followers",
"following_url": "https://api.github.com/users/KexinFeng/following{/other_user}",
"gists_url": "https://api.github.com/users/KexinFeng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KexinFeng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KexinFeng/subscriptions",
"organizations_url": "https://api.github.com/users/KexinFeng/orgs",
"repos_url": "https://api.github.com/users/KexinFeng/repos",
"events_url": "https://api.github.com/users/KexinFeng/events{/privacy}",
"received_events_url": "https://api.github.com/users/KexinFeng/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @KexinFeng \r\nThanks for the issue, usually the speedup is quite considerable for a large sequence length. Can you try out your experiment with for example seq_len=2048? Also make sure to use a batch size that is divisble by 2",
"@younesbelkada Thanks for pointing out the sequence length. Indeed, at seq_len=3500, the flash_attention gains speed up. However, it is not significant compared to non-flash attention.\r\n\r\n```\r\nInput_length = 3500\r\nbatch_size = 4\r\nMax_gen_token = [300, 100, 50, 20]\r\n```\r\n\r\nCorresponding to each max_gen_token:\r\n\r\n`flash_attn=True`\r\n```\r\ntoken_latency = 33.9 ms/token, 39.7 ms/token, 49.3 ms/token, 78.8 ms/token \r\n```\r\n`flash_attn = False`\r\n```\r\ntoken_latency = 28.8 ms/token, 39.9 ms/token, 57.3 ms/token, 110 ms/token \r\n```\r\n\r\nI thought the expected behaviour should be that the flash_attention should be purely faster than non-flash attention. What factor contributed the overhead to the flash_attention compared to non-flash attention?\r\n\r\nFrom the benchmark above, it seems that as gen_token gets longer, the flash_attention is slower. This means that this overhead contributed to the flash_attention only is induced at every decoding step. So the speed up gained at the prefill step is gradually overridden by such overhead as decoding steps proceed.",
"If you are passing the attention mask to the model, I think the `pad` and `unpad` operation add a non negligeable overhead",
"@ArthurZucker Yes, indeed, I fed the attention mask into the model, with a lot of 0 entries (corresponding to the PAD token). Thanks for this insight. But is there any plan of removing this overhead? It seems to me that flash_attention algorithm in principle doesn't necesarily require the `pad` and `unpad` operation. Currently, it looks that the advantage of flash_attention over non flash one is not clear.",
"Hi @KexinFeng \r\nAs stated by @ArthurZucker adding padd tokens in the sequence length adds a considerable overhead in FA modules. The expected speedups and best scenarios on when to use FA-2 are clearly stated in this section of the docs: https://huggingface.co/docs/transformers/perf_infer_gpu_one#expected-speedups",
"@younesbelkada Thank you for pointing this document to me! Indeed, the issue I brought up here has been documented there. What's more, the document also shows the data of how the speedup depends on prompt max length, which is also very helpful. \r\n\r\nHowever regarding the solution proposed in the document, \r\n\r\n> To overcome this, one should use Flash Attention without padding tokens in the sequence for training (e.g., by packing a dataset, i.e., concatenating sequences until reaching the maximum sequence length. An example is provided [here](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_clm.py#L516).\r\n\r\nit doesn't seem to be applicable on model inference and serving scenario, which is where this issue originates. Especially with dynamically batching inference, this packing of dataset doesn't work. It seems to me that padding is unavoidable in the inference scenarios. A possible way to avoid it is to switch the flash attention kernal to something like var_len_single_query_attention (already exists in the flash attention repo), where the input is flattened into 1D tensor.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,697 | 1,701 | 1,701 | NONE | null | ### System Info
The test ran with this fix applied: https://github.com/huggingface/transformers/pull/26984
```
- `transformers` version: 4.34.0
- Platform: Linux-5.15.0-1045-aws-x86_64-with-glibc2.31
- Python version: 3.9.18
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.4.0
- Accelerate version: 0.23.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
### Who can help?
@ArthurZucker and @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
The model loading:
```python
def get_model_tokenizer(model_id, flash_attn=False):
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model_id_or_path = "huggyllama/llama-7b"
model = AutoModelForCausalLM.from_pretrained(
model_id_or_path, device_map='auto' if device.type == 'cuda' else 'cpu',
use_flash_attention_2=flash_attn)
lm_block = HuggingfaceBlock(model)
tokenizer = AutoTokenizer.from_pretrained(model_id_or_path,
padding_side='left')
tokenizer.pad_token = "[PAD]"
return lm_block, tokenizer
```
Input_length = 760
batch_size = 13
Max_gen_token = [300, 100, 50, 20]
When `flash_attn == True`:
```
token_latency: [18.3 ms/token, 20.7 ms/token, 26.4 ms/token , 44.1 ms/token ]
```
When `flash_attn == False`:
```
token_latency: [14.1 ms/token, 17.8 ms/token, 24.3 ms/token , 44.2 ms/token ]
```
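(For reference, a sketch of how per-token latency could be measured for such a comparison — the helper below is an assumption for illustration using greedy `generate`, not the benchmark code actually used above; it assumes the model sits on a CUDA device:)

```python
import time
import torch

def per_token_latency(model, tokenizer, prompts, max_new_tokens):
    # Left-padded batch so all sequences start generating from the same position.
    inputs = tokenizer(prompts, return_tensors="pt", padding=True).to(model.device)
    torch.cuda.synchronize()
    start = time.perf_counter()
    out = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)
    torch.cuda.synchronize()
    elapsed = time.perf_counter() - start
    generated = out.shape[1] - inputs["input_ids"].shape[1]
    return elapsed / generated  # seconds per decoding step
```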
### Expected behavior
Flash attention should accelerate the inference. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26990/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26990/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26989 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26989/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26989/comments | https://api.github.com/repos/huggingface/transformers/issues/26989/events | https://github.com/huggingface/transformers/issues/26989 | 1,955,609,713 | I_kwDOCUB6oc50kDxx | 26,989 | Loading LongT5 model with `device_map` causes some layers to not be loaded | {
"login": "GiantTreeLP",
"id": 2050966,
"node_id": "MDQ6VXNlcjIwNTA5NjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/2050966?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/GiantTreeLP",
"html_url": "https://github.com/GiantTreeLP",
"followers_url": "https://api.github.com/users/GiantTreeLP/followers",
"following_url": "https://api.github.com/users/GiantTreeLP/following{/other_user}",
"gists_url": "https://api.github.com/users/GiantTreeLP/gists{/gist_id}",
"starred_url": "https://api.github.com/users/GiantTreeLP/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GiantTreeLP/subscriptions",
"organizations_url": "https://api.github.com/users/GiantTreeLP/orgs",
"repos_url": "https://api.github.com/users/GiantTreeLP/repos",
"events_url": "https://api.github.com/users/GiantTreeLP/events{/privacy}",
"received_events_url": "https://api.github.com/users/GiantTreeLP/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Hey! Before going further could you make sur to run this code on the latest release of transfomers? (4.34.1) this was probably fixed!",
"Hi,\r\nI have updated my issue text. I have added the actual code to test the issue and the actual as well as expected output.",
"Thanks, @SunMarc let's try this maybe with a public model and see if we can reproduce this anyway",
"Sorry for the delay @GiantTreeLP, this happens because you are using safetensors files + `device_map`. This [PR](https://github.com/huggingface/transformers/pull/27204/files) makes sure that the weights will be tied if `self.config.tie_word_embeddings` is `True` (set to `True` by default when creating a config). However, there is a good chance that in your config, it is set to False if you finetuned a model similar to`Stancld/longt5-tglobal-large-16384-pubmed-3k_steps`. \r\nTo solve this, you can do the following: \r\n\r\n```py\r\ninput_ids = tokenizer(LONG_ARTICLE, return_tensors=\"pt\").input_ids.to(\"cuda\")\r\nconfig = AutoConfig.from_pretrained(\"Stancld/longt5-tglobal-large-16384-pubmed-3k_steps\", tie_word_embeddings=True)\r\nmodel = LongT5ForConditionalGeneration.from_pretrained(\"Stancld/longt5-tglobal-large-16384-pubmed-3k_steps\", config=config, device_map=0).to(\"cuda\")\r\n```\r\n\r\n",
"Hi @SunMarc,\r\n\r\nI have tried to use `tie_word_embeddings=True` with and without `device_map=0`.\r\nWithout setting a `device_map`, the model gets loaded but produces wrong results, even whilst not warning about initialized weights.\r\n\r\nSo your provided solution does not fix my issue.\r\n\r\nFor LongT5, the model contains a `shared.weight` layer/weight that is used by both, the encoder and the decoder embeddings.\r\nWhilst debugging I found an inconsistency:\r\nWhen trying to determine the tied parameters, without a `device_map`, the following code runs and returns a list of lists with an entry containing three elements (shared.weight, encoder.embed_tokens.weight and decoder.embed_tokens.weight):\r\nhttps://github.com/huggingface/transformers/blob/514de24abfd4416aeba6a6455ad5920f57f3567d/src/transformers/modeling_utils.py#L3662-L3669\r\n\r\nIn contrast with `device_map` set, the `find_tied_parameters` function is called which does not return the same list but an empty list:\r\nhttps://github.com/huggingface/transformers/blob/514de24abfd4416aeba6a6455ad5920f57f3567d/src/transformers/modeling_utils.py#L3672\r\n\r\nThis causes the warning to be displayed.\r\nAs far as I can tell, this is only regarding the symptom but not the actual cause of the untied weights.\r\n\r\nThe code that is supposed to move the missing weights back from the meta device to the cpu does not move back the shared weights, those get left on the meta device when initializing:\r\nhttps://github.com/huggingface/transformers/blob/514de24abfd4416aeba6a6455ad5920f57f3567d/src/transformers/modeling_utils.py#L3693-L3722\r\n\r\nThe safetensors stored model does, however, contain all the necessary weights.\r\n\r\nI will look further into why the weights don't get properly loaded/tied in the next days.",
"Hi @GiantTreeLP, were you able to fix your problem ? ",
"Hi @SunMarc, I haven't had much time in December and January to investigate this issue further.\r\n\r\nAs I am not using the `device_map` argument this issue doesn't affect any of my projects.\r\nI haven't found anything else that is worth mentioning.\r\n\r\nDue to other projects and interests, I probably won't have the time to investigate this issue further.\r\nI also believe this issue is above my current skill level to solve.\r\n\r\nI can close this issue, if necessary but it's not fixed.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,697 | 1,708 | null | NONE | null | ### System Info
- `transformers` version: 4.34.1
- Platform: Windows-10-10.0.22621-SP0
- Python version: 3.11.5
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.3.3
- Accelerate version: 0.23.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Trying to load a [LongT5](https://huggingface.co/docs/transformers/model_doc/longt5) model using `AutoModelForSeq2SeqLM.from_pretrained` and passing `device_map` causes the model to be loaded with the following error message:
```
Some weights of LongT5ForConditionalGeneration were not initialized from the model checkpoint at ./ and are newly initialized: ['encoder.embed_tokens.weight', 'decoder.embed_tokens.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```
Running inference then results in bogus output.
If the `device_map` parameter is not passed, the model is loaded correctly.
Code to load the model:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, AutoConfig
config = AutoConfig.from_pretrained("./")
tokenizer = AutoTokenizer.from_pretrained("./")
model = AutoModelForSeq2SeqLM.from_pretrained("./", config=config, device_map=0)
tokenized_input = tokenizer.encode(
"Van der Valk ist eine britische Fernseh-Krimiserie aus den Jahren 1972 bis 1977 und wieder 1991/1992 mit <hl>Barry Foster<hl> in der Rolle des Piet van der Valk.",
return_tensors="pt")
tokenized_input = tokenized_input.to(model.device)
generated_tokens = model.generate(tokenized_input, max_length=1024)
decoded_tokens = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
print(decoded_tokens)
```
The model has been fine-tuned on a QG dataset in the German language but that does not change the issue.
Output:
```
Some weights of LongT5ForConditionalGeneration were not initialized from the model checkpoint at ./ and are newly initialized: ['decoder.embed_tokens.weight', 'encoder.embed_tokens.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
G:\dev\anaconda3\envs\transformers-26989\Lib\site-packages\transformers\generation\utils.py:1421: UserWarning: You have modified the pretrained model configuration to control generation. This is a deprecated strategy to control generation and will be removed soon, in a future version. Please use and modify the model generation configuration (see https://huggingface.co/docs/transformers/generation_strategies#default-text-generation-configuration )
warnings.warn(
G:\dev\anaconda3\envs\transformers-26989\Lib\site-packages\transformers\models\longt5\modeling_longt5.py:171: UserWarning: An output with one or more elements was resized since it had shape [42], which does not match the required output shape [1, 42]. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at C:\cb\pytorch_1000000000000\work\aten\src\ATen\native\Resize.cpp:35.)
true_block_ends = torch.logical_and(block_ends, block_ids >= 0)
G:\dev\anaconda3\envs\transformers-26989\Lib\site-packages\transformers\models\longt5\modeling_longt5.py:147: UserWarning: An output with one or more elements was resized since it had shape [1, 1, 128, 1], which does not match the required output shape [1, 1, 128, 384]. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at C:\cb\pytorch_1000000000000\work\aten\src\ATen\native\Resize.cpp:35.)
local_attention_mask = torch.logical_and(_blocked_attention_mask, _3blocked_attention_mask)
G:\dev\anaconda3\envs\transformers-26989\Lib\site-packages\transformers\modeling_utils.py:859: FutureWarning: The `device` argument is deprecated and will be removed in v5 of Transformers.
warnings.warn(
['ssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssss.']
```
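(A hypothetical check, not part of the original report: verifying whether the shared embedding actually ends up tied to the encoder/decoder embeddings after loading — if the pointers differ, the freshly initialized weights would explain the garbage output.)

```python
# `model` is the LongT5ForConditionalGeneration instance loaded above.
shared_ptr = model.shared.weight.data_ptr()
# Both should print True when the embeddings are properly tied.
print(shared_ptr == model.encoder.embed_tokens.weight.data_ptr())
print(shared_ptr == model.decoder.embed_tokens.weight.data_ptr())
```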
### Expected behavior
The model should be loaded to the device given the `device_map` parameter and all layers and weights should be loaded correctly.
Expected output:
```
G:\dev\anaconda3\envs\transformers-26989\Lib\site-packages\transformers\generation\utils.py:1421: UserWarning: You have modified the pretrained model configuration to control generation. This is a deprecated strategy to control generation and will be removed soon, in a future version. Please use and modify the model generation configuration (see https://huggingface.co/docs/transformers/generation_strategies#default-text-generation-configuration )
warnings.warn(
G:\dev\anaconda3\envs\transformers-26989\Lib\site-packages\transformers\models\longt5\modeling_longt5.py:171: UserWarning: An output with one or more elements was resized since it had shape [42], which does not match the required output shape [1, 42]. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at C:\cb\pytorch_1000000000000\work\aten\src\ATen\native\Resize.cpp:35.)
true_block_ends = torch.logical_and(block_ends, block_ids >= 0)
G:\dev\anaconda3\envs\transformers-26989\Lib\site-packages\transformers\models\longt5\modeling_longt5.py:147: UserWarning: An output with one or more elements was resized since it had shape [1, 1, 128, 1], which does not match the required output shape [1, 1, 128, 384]. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at C:\cb\pytorch_1000000000000\work\aten\src\ATen\native\Resize.cpp:35.)
local_attention_mask = torch.logical_and(_blocked_attention_mask, _3blocked_attention_mask)
G:\dev\anaconda3\envs\transformers-26989\Lib\site-packages\transformers\modeling_utils.py:859: FutureWarning: The `device` argument is deprecated and will be removed in v5 of Transformers.
warnings.warn(
['Wer ist der Producer von Van der Valk?']
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26989/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/26989/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26988 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26988/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26988/comments | https://api.github.com/repos/huggingface/transformers/issues/26988/events | https://github.com/huggingface/transformers/pull/26988 | 1,955,603,797 | PR_kwDOCUB6oc5dc10O | 26,988 | small typos found | {
"login": "rafaelpadilla",
"id": 31217453,
"node_id": "MDQ6VXNlcjMxMjE3NDUz",
"avatar_url": "https://avatars.githubusercontent.com/u/31217453?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rafaelpadilla",
"html_url": "https://github.com/rafaelpadilla",
"followers_url": "https://api.github.com/users/rafaelpadilla/followers",
"following_url": "https://api.github.com/users/rafaelpadilla/following{/other_user}",
"gists_url": "https://api.github.com/users/rafaelpadilla/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rafaelpadilla/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rafaelpadilla/subscriptions",
"organizations_url": "https://api.github.com/users/rafaelpadilla/orgs",
"repos_url": "https://api.github.com/users/rafaelpadilla/repos",
"events_url": "https://api.github.com/users/rafaelpadilla/events{/privacy}",
"received_events_url": "https://api.github.com/users/rafaelpadilla/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,697 | 1,698 | 1,698 | CONTRIBUTOR | null | just very small typos found
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@ArthurZucker , @amyeroberts
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26988/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26988/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26988",
"html_url": "https://github.com/huggingface/transformers/pull/26988",
"diff_url": "https://github.com/huggingface/transformers/pull/26988.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26988.patch",
"merged_at": 1698070120000
} |
https://api.github.com/repos/huggingface/transformers/issues/26987 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26987/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26987/comments | https://api.github.com/repos/huggingface/transformers/issues/26987/events | https://github.com/huggingface/transformers/pull/26987 | 1,955,580,477 | PR_kwDOCUB6oc5dcxGm | 26,987 | Added readme in Thai | {
"login": "grimreapermanasvi",
"id": 114558299,
"node_id": "U_kgDOBtQFWw",
"avatar_url": "https://avatars.githubusercontent.com/u/114558299?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/grimreapermanasvi",
"html_url": "https://github.com/grimreapermanasvi",
"followers_url": "https://api.github.com/users/grimreapermanasvi/followers",
"following_url": "https://api.github.com/users/grimreapermanasvi/following{/other_user}",
"gists_url": "https://api.github.com/users/grimreapermanasvi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/grimreapermanasvi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/grimreapermanasvi/subscriptions",
"organizations_url": "https://api.github.com/users/grimreapermanasvi/orgs",
"repos_url": "https://api.github.com/users/grimreapermanasvi/repos",
"events_url": "https://api.github.com/users/grimreapermanasvi/events{/privacy}",
"received_events_url": "https://api.github.com/users/grimreapermanasvi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @stevhliu ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26987). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,697 | 1,701 | 1,701 | NONE | null | READ ME in Thai language
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26987/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26987/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26987",
"html_url": "https://github.com/huggingface/transformers/pull/26987",
"diff_url": "https://github.com/huggingface/transformers/pull/26987.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26987.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26986 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26986/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26986/comments | https://api.github.com/repos/huggingface/transformers/issues/26986/events | https://github.com/huggingface/transformers/issues/26986 | 1,955,575,029 | I_kwDOCUB6oc50j7T1 | 26,986 | Resize failing with torch and np objects | {
"login": "rafaelpadilla",
"id": 31217453,
"node_id": "MDQ6VXNlcjMxMjE3NDUz",
"avatar_url": "https://avatars.githubusercontent.com/u/31217453?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rafaelpadilla",
"html_url": "https://github.com/rafaelpadilla",
"followers_url": "https://api.github.com/users/rafaelpadilla/followers",
"following_url": "https://api.github.com/users/rafaelpadilla/following{/other_user}",
"gists_url": "https://api.github.com/users/rafaelpadilla/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rafaelpadilla/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rafaelpadilla/subscriptions",
"organizations_url": "https://api.github.com/users/rafaelpadilla/orgs",
"repos_url": "https://api.github.com/users/rafaelpadilla/repos",
"events_url": "https://api.github.com/users/rafaelpadilla/events{/privacy}",
"received_events_url": "https://api.github.com/users/rafaelpadilla/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hei! This seems to be quite straightforward. I can work on this if help is needed.",
"Hi @daniilgaltsev ,\r\n\r\nThanks for your interest in fixing it. \r\n\r\nI had just spoken with @amyeroberts , one of our core maintainers, and she explained me that this is actually not a bug.\r\nIn fact, the image transformations functions like `resize`, `normalize`, `center_crop`, etc should work with numpy arrays. So, this is just a very small docstring and typing fix. \r\n\r\nIn [this line](https://github.com/huggingface/transformers/blob/main/src/transformers/image_transforms.py#L289), we just needed to replace\r\n```python\r\nimage (`PIL.Image.Image` or `np.ndarray` or `torch.Tensor`):\r\n```\r\nby\r\n```python\r\nimage (`np.ndarray`):\r\n```\r\nIf you still want to contribute, please, feel free to take it 🤗 \r\nAlso, please, tag your PR with `#Close #26986`."
] | 1,697 | 1,698 | 1,698 | CONTRIBUTOR | null | ### System Info
`transformers.__version__: '4.34.1'`
### Who can help?
@rafaelpadilla @amyeroberts
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
import torch
from transformers.image_transforms import resize
# dummy torch image
img = torch.zeros((3, 100,100), dtype=torch.uint8)
# any new size
new_size = (20, 20)
resized_img = resize(img, new_size)
>>> Traceback (most recent call last):
>>> File "<stdin>", line 1, in <module>
>>> File "/home/rafael/anaconda3/envs/hf/lib/python3.11/site-packages/transformers/image_transforms.py", line 326, in resize
>>> do_rescale = _rescale_for_pil_conversion(image)
>>> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>>> File "/home/rafael/anaconda3/envs/hf/lib/python3.11/site-packages/transformers/image_transforms.py", line 139, in _rescale_for_pil_conversion
>>> elif np.allclose(image, image.astype(int)):
>>> ^^^^^^^^^^^^
>>> AttributeError: 'Tensor' object has no attribute 'astype'. Did you mean: 'dtype'?
```
### Expected behavior
`resize()` should be flexible and accept any image type (`PIL.Image.Image` or `np.ndarray` or `torch.Tensor`) as in [here](https://github.com/huggingface/transformers/blob/main/src/transformers/image_transforms.py#L289).
However, it fails if the image is a torch tensor.
This happens because `_rescale_for_pil_conversion()` is called [here](https://github.com/huggingface/transformers/blob/main/src/transformers/image_transforms.py#L326) and assumes that the input `image` is an `np.ndarray`.
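In the meantime, a minimal workaround sketch (illustrative only; it assumes the channels-first `uint8` tensor from the reproduction above and simply hands `resize` the NumPy array it expects):
```python
import torch
from transformers.image_transforms import resize

img = torch.zeros((3, 100, 100), dtype=torch.uint8)
resized_img = resize(img.numpy(), (20, 20))  # convert the tensor to an np.ndarray before calling resize
```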
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26986/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26986/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26985 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26985/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26985/comments | https://api.github.com/repos/huggingface/transformers/issues/26985/events | https://github.com/huggingface/transformers/issues/26985 | 1,955,564,742 | I_kwDOCUB6oc50j4zG | 26,985 | Cannot enable gradient checkpointing for pre_trained models. | {
"login": "VINUK0",
"id": 58259367,
"node_id": "MDQ6VXNlcjU4MjU5MzY3",
"avatar_url": "https://avatars.githubusercontent.com/u/58259367?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VINUK0",
"html_url": "https://github.com/VINUK0",
"followers_url": "https://api.github.com/users/VINUK0/followers",
"following_url": "https://api.github.com/users/VINUK0/following{/other_user}",
"gists_url": "https://api.github.com/users/VINUK0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VINUK0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VINUK0/subscriptions",
"organizations_url": "https://api.github.com/users/VINUK0/orgs",
"repos_url": "https://api.github.com/users/VINUK0/repos",
"events_url": "https://api.github.com/users/VINUK0/events{/privacy}",
"received_events_url": "https://api.github.com/users/VINUK0/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi... this seems to be a good first bug. Please assign it to me.",
"Sure, feel free to open a PR and link it to this issue 😉 ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,697 | 1,701 | 1,701 | NONE | null | ### System Info
### Environment Number 1 (Google Colab)
*Transformers Version : 4.34.1*
*Pytorch Version : 2.0.1 (Latest)*
*Python Version : 3.10*
*Accelerate Version : 0.23.0*
*GPU Count : 1*
*GPU Name : Tesla T4*
### Environment Number 2 (Kaggle)
*Transformers Version : 4.34.1*
*Pytorch Version : 2.0.1 (Latest)*
*Python Version : 3.10*
*Accelerate Version : 0.23.0*
*GPU Count : 2*
*GPU Name : Tesla T4 (FSDP) and (DDP)*
### Environment Number 3 (AWS)
*Transformers Version : 4.34.1*
*Pytorch Version : 2.0.1 (Latest)*
*Python Version : 3.10*
*Accelerate Version : 0.23.0*
*GPU Count : 1*
*GPU Name : Tesla T4*
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
## The first code block below is the one causing the error. I cannot enable gradient_checkpointing for this model when calling `.from_pretrained`, but it is possible to enable gradient checkpointing when creating a new model from scratch using a config. I have also tested `GPTNeo` models and got the same results.
### This is the example that does not work.
```py
from accelerate import Accelerator
from transformers import AutoTokenizer, OPTForCausalLM
from transformers.optimization import Adafactor
from torch.utils.data import DataLoader
from Dataset import CustomDataset
accelerator = Accelerator()
device = accelerator.device
print(f'Currently using {device}')
tokenizer = AutoTokenizer.from_pretrained('facebook/opt-350m')
tokenizer.pad_token = tokenizer.eos_token
model = OPTForCausalLM.from_pretrained('facebook/opt-350m')
model.gradient_checkpointing_enable()
Train_DataLoader = DataLoader(CustomDataset('/content/HFA/data.db', tokenizer=tokenizer, dataset_line_max_length=1024), batch_size=3, shuffle=False)
optimizer = Adafactor(model.parameters(),
lr=7e-5,
clip_threshold=1.0,
decay_rate=-1.0,
weight_decay=0.05,
scale_parameter=False,
relative_step=False,
warmup_init=False)
model, optimizer, Train_DataLoader = accelerator.prepare(model, optimizer, Train_DataLoader)
for i in range(20):
for batch_idx, batch in enumerate(Train_DataLoader):
optimizer.zero_grad()
input_ids, attention_mask, labels = batch
sm_output = model(input_ids, attention_mask=attention_mask, labels=labels)
s_loss = sm_output[0]
print(f'Current Loss is {s_loss}\n', end='\r')
print(f'Total Batch {len(Train_DataLoader)}/{batch_idx}\n\n', end='\r')
accelerator.backward(s_loss)
optimizer.step()
print(f'Current Epoch is {i}\n\n\n', end='\r')
if (i + 1) % 2 == 0:
model.save_pretrained('/content/HFA/')
```
### This is the working code that lets me enable gradient checkpointing.
```py
from accelerate import Accelerator
from transformers import AutoModelForCausalLM, AutoTokenizer, OPTForCausalLM, OPTConfig
from transformers.optimization import Adafactor
from torch.utils.data import DataLoader
import torch.nn as nn
from Dataset import CustomDataset
import torch
accelerator = Accelerator()
device = accelerator.device
print(f'Currently using {device}')
tokenizer = AutoTokenizer.from_pretrained('facebook/opt-350m')
tokenizer.pad_token = tokenizer.eos_token
model_config = OPTConfig(vocab_size=50272, hidden_size=624, num_hidden_layers=24, ffn_dim=4096, max_position_embeddings=2048, do_layer_norm_before=False, word_embed_proj_dim=768, num_attention_heads=16, activation_function="silu", attention_dropout=0.0, layer_norm_elementwise_affine=True, layerdrop=0.0, init_std=0.02, dropout=0.1, enable_bias=True, eos_token_id=2, bos_token_id=2)
model = OPTForCausalLM(model_config)
model.gradient_checkpointing_enable()
Train_DataLoader = DataLoader(CustomDataset('/content/HFA/data.db', tokenizer=tokenizer, dataset_line_max_length=1024), batch_size=3, shuffle=False)
optimizer = Adafactor(model.parameters(),
lr=8e-5,
clip_threshold=1.0,
decay_rate=-1.0,
weight_decay=0.05,
scale_parameter=False,
relative_step=False,
warmup_init=False)
model, optimizer, Train_DataLoader = accelerator.prepare(model, optimizer, Train_DataLoader)
for i in range(20):
for batch_idx, batch in enumerate(Train_DataLoader):
optimizer.zero_grad()
input_ids, attention_mask, labels = batch
output = model(input_ids, attention_mask=attention_mask, labels=labels)
loss = output[0]
print(f'Current Loss is {loss}\n', end='\r')
print(f'Total Batch {len(Train_DataLoader)}/{batch_idx}\n\n', end='\r')
accelerator.backward(loss)
optimizer.step()
print(f'Current Epoch is {i}\n\n\n', end='\r')
if (i + 1) % 2 == 0:
model.save_pretrained('/content/HFA/')
```
### Expected behavior
*In both of the provided code examples the training process continues without problems, but when using the first one I cannot enable gradient checkpointing, while the second one allows me to enable gradient checkpointing.* | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26985/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26985/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26984 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26984/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26984/comments | https://api.github.com/repos/huggingface/transformers/issues/26984/events | https://github.com/huggingface/transformers/pull/26984 | 1,955,558,940 | PR_kwDOCUB6oc5dctBo | 26,984 | [fix] llama_dtype_fix triggered when flash attention is on | {
"login": "KexinFeng",
"id": 23562091,
"node_id": "MDQ6VXNlcjIzNTYyMDkx",
"avatar_url": "https://avatars.githubusercontent.com/u/23562091?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KexinFeng",
"html_url": "https://github.com/KexinFeng",
"followers_url": "https://api.github.com/users/KexinFeng/followers",
"following_url": "https://api.github.com/users/KexinFeng/following{/other_user}",
"gists_url": "https://api.github.com/users/KexinFeng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KexinFeng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KexinFeng/subscriptions",
"organizations_url": "https://api.github.com/users/KexinFeng/orgs",
"repos_url": "https://api.github.com/users/KexinFeng/repos",
"events_url": "https://api.github.com/users/KexinFeng/events{/privacy}",
"received_events_url": "https://api.github.com/users/KexinFeng/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Yes, #26846 should deal with this issue! Closing this PR - feel free to re-open if you think that's not the case"
] | 1,697 | 1,700 | 1,700 | NONE | null | # What does this PR do?
## Setup
```python
model_id_or_path = "huggyllama/llama-7b"
model = AutoModelForCausalLM.from_pretrained(
model_id_or_path, device_map='auto' if device.type == 'cuda' else 'cpu',
use_flash_attention_2=flash_attn)
```
## Fix
Inside `attn_output = self.o_proj(attn_output)`, the weight dtype is torch.float32, which does not match attn_output's dtype, torch.float16.
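A hedged sketch of the kind of cast this describes (not necessarily the exact patch; `attn_output` and `self.o_proj` follow the modeling code quoted above):
```python
# align the attention output dtype with the o_proj weight dtype before the projection
if attn_output.dtype != self.o_proj.weight.dtype:
    attn_output = attn_output.to(self.o_proj.weight.dtype)
attn_output = self.o_proj(attn_output)
```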
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker and @younesbelkada
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26984/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26984/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26984",
"html_url": "https://github.com/huggingface/transformers/pull/26984",
"diff_url": "https://github.com/huggingface/transformers/pull/26984.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26984.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26983 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26983/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26983/comments | https://api.github.com/repos/huggingface/transformers/issues/26983/events | https://github.com/huggingface/transformers/pull/26983 | 1,955,528,733 | PR_kwDOCUB6oc5dcnRC | 26,983 | Cross attention | {
"login": "archana53",
"id": 59822348,
"node_id": "MDQ6VXNlcjU5ODIyMzQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/59822348?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/archana53",
"html_url": "https://github.com/archana53",
"followers_url": "https://api.github.com/users/archana53/followers",
"following_url": "https://api.github.com/users/archana53/following{/other_user}",
"gists_url": "https://api.github.com/users/archana53/gists{/gist_id}",
"starred_url": "https://api.github.com/users/archana53/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/archana53/subscriptions",
"organizations_url": "https://api.github.com/users/archana53/orgs",
"repos_url": "https://api.github.com/users/archana53/repos",
"events_url": "https://api.github.com/users/archana53/events{/privacy}",
"received_events_url": "https://api.github.com/users/archana53/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,697 | 1,697 | 1,697 | NONE | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26983/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26983/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26983",
"html_url": "https://github.com/huggingface/transformers/pull/26983",
"diff_url": "https://github.com/huggingface/transformers/pull/26983.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26983.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26972 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26972/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26972/comments | https://api.github.com/repos/huggingface/transformers/issues/26972/events | https://github.com/huggingface/transformers/issues/26972 | 1,955,384,584 | I_kwDOCUB6oc50jM0I | 26,972 | UnboundLocalError: local variable 'active_adapters' referenced before assignment | {
"login": "AjayP13",
"id": 5404177,
"node_id": "MDQ6VXNlcjU0MDQxNzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5404177?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AjayP13",
"html_url": "https://github.com/AjayP13",
"followers_url": "https://api.github.com/users/AjayP13/followers",
"following_url": "https://api.github.com/users/AjayP13/following{/other_user}",
"gists_url": "https://api.github.com/users/AjayP13/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AjayP13/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AjayP13/subscriptions",
"organizations_url": "https://api.github.com/users/AjayP13/orgs",
"repos_url": "https://api.github.com/users/AjayP13/repos",
"events_url": "https://api.github.com/users/AjayP13/events{/privacy}",
"received_events_url": "https://api.github.com/users/AjayP13/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @AjayP13 \r\ncan you share a reproducible small snippet for the issue?",
"Any update?",
"hi @dsdanielpark @AjayP13 \r\nI would really appreciate if anyone can provide a simple reproducer of the bug, that way I can open a fix very quickly and resolve the issue. Thanks!",
"@younesbelkada, I think this bug was being caused by incorrect usage which was saving the adapter to disk, loading it with Transformers's `from_pretrained` and then again applying PeftModel's `from_pretrained` (leading to the adapter being loaded twice, leading to multiple adapters) and then doing `merge_and_unload` would cause this error. Going to close this as I don't think it's an issue that would occur if used in the correct way.",
"> @younesbelkada, I think this bug was being caused by incorrect usage which was saving the adapter to disk, loading it with Transformers's `from_pretrained` and then again applying PeftModel's `from_pretrained` (leading to the adapter being loaded twice, leading to multiple adapters) and then doing `merge_and_unload` would cause this error. Going to close this as I don't think it's an issue that would occur if used in the correct way.\r\n\r\n```python\r\ntrainer.save_model(saver_dir)\r\n```\r\n\r\nThis way, the adapter will be saved (without the base model).",
"> hi @dsdanielpark @AjayP13 I would really appreciate if anyone can provide a simple reproducer of the bug, that way I can open a fix very quickly and resolve the issue. Thanks!\r\n\r\n@younesbelkada @AjayP13 @dsdanielpark \r\n\r\n```python\r\ntrainer.train()\r\n\r\n# Save the adapter\r\ntrainer.save_model(saver_dir)\r\n\r\n# Retrieve the base model\r\nmodel = trainer.model # Please note that in this way, only the adapter will be returned without the base model\r\n\r\n# Loading the adapter\r\nmodel = PeftModel.from_pretrained(model, model_id=saver_dir, device_map=\"auto\")\r\n```\r\n\r\nIf you change the `model = trainer.model` to `model = trainer.model.base_model`, the error will be gone.",
"> @younesbelkada, I think this bug was being caused by incorrect usage which was saving the adapter to disk, loading it with Transformers's `from_pretrained` and then again applying PeftModel's `from_pretrained` (leading to the adapter being loaded twice, leading to multiple adapters) and then doing `merge_and_unload` would cause this error. Going to close this as I don't think it's an issue that would occur if used in the correct way.\r\n\r\nHi @AjayP13 ! I'm encountering the same bug and have the same incorrect usage that you described. However, I struggle to find the correct usage and would really appreciate if you could share your knowledge.\r\nThanks again, would appreciate any help and code examples of loading, merging the models and saving or pushing to hub.",
"@SuperBruceJia I have the same error ... basically I have my trainer which I save which saves the 360Mb adapters ... however, I then need to use a merged model to convert to gguf \r\nHow can I create a 13Gb model with my merged weights ?\r\nthe above does not show how to create a merged model that can be used for this purpose",
"> @SuperBruceJia I have the same error ... basically I have my trainer which I save which saves the 360Mb adapters ... however, I then need to use a merged model to convert to gguf How can I create a 13Gb model with my merged weights ? the above does not show how to create a merged model that can be used for this purpose\r\n\r\nCould you please try:\r\n\r\n```python\r\nsave_path = \"YOUR_SAVE_PATH\"\r\n\r\nmodel = trainer.model.base_model\r\nmodel.save_pretrained(save_path)\r\n```\r\n\r\nBest regards,\r\n\r\nShuyue\r\nJan 15th, 2024\r\n"
] | 1,697 | 1,705 | 1,698 | CONTRIBUTOR | null | ### System Info
Happens when doing `save_pretrained()` or `push_to_hub()` on a T5-small model with a single LoraConfig after doing `merge_and_unload()`.
This has now broken `merge_and_unload()` as you can't do anything with the model.
```
transformers==4.34.0
peft==0.5.0
```
### Who can help?
@younesbelkada
@patrickvonplaten
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python3
def active_adapters(self) -> List[str]:
"""
If you are not familiar with adapters and PEFT methods, we invite you to read more about them on the PEFT
official documentation: https://huggingface.co/docs/peft
Gets the current active adapters of the model. In case of multi-adapter inference (combining multiple adapters
for inference) returns the list of all active adapters so that users can deal with them accordingly.
For previous PEFT versions (that does not support multi-adapter inference), `module.active_adapter` will return
a single string.
"""
check_peft_version(min_version=MIN_PEFT_VERSION)
if not is_peft_available():
raise ImportError("PEFT is not available. Please install PEFT to use this function: `pip install peft`.")
if not self._hf_peft_config_loaded:
raise ValueError("No adapter loaded. Please load an adapter first.")
from peft.tuners.tuners_utils import BaseTunerLayer
for _, module in self.named_modules():
if isinstance(module, BaseTunerLayer):
active_adapters = module.active_adapter
break
# For previous PEFT versions
> if isinstance(active_adapters, str):
E UnboundLocalError: local variable 'active_adapters' referenced before assignment
```
```
train.py:415: in publish_to_hf_hub
model.save_pretrained(
.../lib/python3.10/site-packages/transformers/modeling_utils.py:2002: in save_pretrained
state_dict = model_to_save.get_adapter_state_dict()
.../lib/python3.10/site-packages/transformers/integrations/peft.py:415: in get_adapter_state_dict
adapter_name = self.active_adapter()
.../lib/python3.10/site-packages/transformers/integrations/peft.py:393: in active_adapter
return self.active_adapters()[0]
```
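For reference, a hedged sketch of the flow that avoids loading the adapter twice, pieced together from the comments above (`trainer`, the directories and the base checkpoint name are illustrative assumptions):
```python
from transformers import AutoModelForSeq2SeqLM
from peft import PeftModel

trainer.save_model("adapter_dir")                         # saves only the adapter weights
base = AutoModelForSeq2SeqLM.from_pretrained("t5-small")  # reload the plain base model
model = PeftModel.from_pretrained(base, "adapter_dir")    # attach the adapter once
merged = model.merge_and_unload()
merged.save_pretrained("merged_dir")                      # or merged.push_to_hub(...)
```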
### Expected behavior
N/A | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26972/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26972/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26971 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26971/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26971/comments | https://api.github.com/repos/huggingface/transformers/issues/26971/events | https://github.com/huggingface/transformers/issues/26971 | 1,955,300,523 | I_kwDOCUB6oc50i4Sr | 26,971 | NotImplementedError: Cannot copy out of meta tensor; no data! with Multi-node training | {
"login": "ari9dam",
"id": 14134882,
"node_id": "MDQ6VXNlcjE0MTM0ODgy",
"avatar_url": "https://avatars.githubusercontent.com/u/14134882?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ari9dam",
"html_url": "https://github.com/ari9dam",
"followers_url": "https://api.github.com/users/ari9dam/followers",
"following_url": "https://api.github.com/users/ari9dam/following{/other_user}",
"gists_url": "https://api.github.com/users/ari9dam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ari9dam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ari9dam/subscriptions",
"organizations_url": "https://api.github.com/users/ari9dam/orgs",
"repos_url": "https://api.github.com/users/ari9dam/repos",
"events_url": "https://api.github.com/users/ari9dam/events{/privacy}",
"received_events_url": "https://api.github.com/users/ari9dam/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Relevant: https://github.com/huggingface/transformers/pull/26631 @pacman100 ",
"```\r\ncompute_environment: LOCAL_MACHINE\r\ndistributed_type: FSDP\r\ndowncast_bf16: 'no'\r\nfsdp_config:\r\n fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP\r\n fsdp_backward_prefetch_policy: BACKWARD_PRE\r\n fsdp_forward_prefetch: false\r\n fsdp_offload_params: true\r\n fsdp_sharding_strategy: 1\r\n fsdp_state_dict_type: FULL_STATE_DICT\r\n fsdp_sync_module_states: true\r\n fsdp_transformer_layer_cls_to_wrap: MistralDecoderLayer\r\n fsdp_use_orig_params: true\r\nmain_training_function: main\r\nmixed_precision: bf16\r\nnum_machines: 2\r\nnum_processes: 16\r\nrdzv_backend: static\r\nsame_network: false\r\ntpu_env: []\r\ntpu_use_cluster: false\r\ntpu_use_sudo: false\r\nuse_cpu: false\r\n\r\n```",
"Hello @ari9dam,\n\nThe PR you tagged above should resolve this issue. Please recreate the FSDP config via `accelerate config` command and answer `False` for RAM efficient loading of the pretrained model.",
"Thank you that solved it. I've one more question: @pacman100 \r\n model = transformers.AutoModelForCausalLM.from_pretrained(\r\n model_args.model_name_or_path,\r\n cache_dir=training_args.cache_dir,\r\n use_flash_attention_2=True\r\n )\r\n\r\nshould I pass torch dtype here while loading the model? I'm using bf16 in accelerate config. I get warnings:\r\n\r\nYou are attempting to use Flash Attention 2.0 without specifying a torch dtype. This might lead to unexpected behaviour\r\nYou are attempting to use Flash Attention 2.0 with a model initialized on CPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.\r\n",
"also had this issue and fixed it by changing\r\n\r\n```\r\n if (\r\n is_deepspeed_zero3_enabled() and torch.distributed.is_initialized() and torch.distributed.get_rank() > 0\r\n ) or (is_fsdp_enabled() and not is_local_dist_rank_0()):\r\n map_location = \"meta\"\r\n```\r\nto\r\n```\r\n if (\r\n (is_deepspeed_zero3_enabled() or is_fsdp_enabled())\r\n and torch.distributed.is_initialized()\r\n and (torch.distributed.get_rank() % 8 != 0)\r\n ):\r\n map_location = \"meta\"\r\n```\r\nhere https://github.com/huggingface/transformers/blob/29e7a1e1834f331a4916853ecd58549ed78235d6/src/transformers/modeling_utils.py#L512\r\n(this is for 8 gpus per node; for 4 gpus per node should be 4 etc)"
] | 1,697 | 1,703 | 1,697 | NONE | null | ### System Info
<pre>
A100
Cuda 11.7
PyTorch 2.0.1
# This dependencies file is produced by 'conda export'
{
"channels": [
"pytorch",
"defaults"
],
"dependencies": [
"_libgcc_mutex=0.1=main",
"_openmp_mutex=5.1=1_gnu",
"ca-certificates=2023.01.10=h06a4308_0",
"ld_impl_linux-64=2.38=h1181459_1",
"libffi=3.4.4=h6a678d5_0",
"libgcc-ng=11.2.0=h1234567_1",
"libgomp=11.2.0=h1234567_1",
"libstdcxx-ng=11.2.0=h1234567_1",
"magma-cuda117=2.6.1=1",
"ncurses=6.4=h6a678d5_0",
"openssl=1.1.1t=h7f8727e_0",
"pip=23.0.1=py38h06a4308_0",
"python=3.8.16=h7a1cb2a_3",
"readline=8.2=h5eee18b_0",
"sqlite=3.41.2=h5eee18b_0",
"tk=8.6.12=h1ccaba5_0",
"xz=5.4.2=h5eee18b_0",
"zlib=1.2.13=h5eee18b_0",
{
"pip": [
"absl-py==2.0.0",
"accelerate==0.24.0.dev0",
"adal==1.2.7",
"aiofiles==23.1.0",
"aiohttp==3.8.4",
"aiosignal==1.3.1",
"altair==5.1.2",
"antlr4-python3-runtime==4.9.3",
"anyio==3.7.1",
"apex==0.1",
"applicationinsights==0.11.10",
"argcomplete==2.1.2",
"asttokens==2.4.0",
"async-timeout==4.0.2",
"attrs==23.1.0",
"azure-common==1.1.28",
"azure-core==1.26.4",
"azure-graphrbac==0.61.1",
"azure-identity==1.13.0",
"azure-mgmt-authorization==3.0.0",
"azure-mgmt-containerregistry==10.2.0",
"azure-mgmt-core==1.4.0",
"azure-mgmt-keyvault==10.3.0",
"azure-mgmt-resource==22.0.0",
"azure-mgmt-storage==21.0.0",
"azure-ml==0.0.1",
"azure-ml-component==0.9.18.post2",
"azure-storage-blob==12.13.0",
"azureml-automl-common-tools==1.51.0",
"azureml-automl-core==1.51.0.post1",
"azureml-contrib-services==1.51.0",
"azureml-core==1.51.0",
"azureml-dataprep==4.10.9",
"azureml-dataprep-native==38.0.0",
"azureml-dataprep-rslex==2.17.12",
"azureml-dataset-runtime==1.51.0",
"azureml-defaults==1.51.0",
"azureml-inference-server-http==0.8.4.1",
"azureml-mlflow==1.51.0",
"azureml-pipeline==1.51.0",
"azureml-pipeline-core==1.51.0",
"azureml-pipeline-steps==1.51.0",
"azureml-sdk==1.51.0",
"azureml-telemetry==1.51.0",
"azureml-train-automl-client==1.51.0.post1",
"azureml-train-core==1.51.0",
"azureml-train-restclients-hyperdrive==1.51.0",
"backcall==0.2.0",
"backports-tempfile==1.0",
"backports-weakref==1.0.post1",
"bcrypt==4.0.1",
"bytecode==0.15.1",
"cachetools==5.3.0",
"cerberus==1.3.4",
"certifi==2023.5.7",
"cffi==1.15.1",
"charset-normalizer==3.1.0",
"click==8.1.7",
"cloudpickle==2.2.1",
"cmake==3.26.3",
"coloredlogs==15.0.1",
"comm==0.1.4",
"contextlib2==21.6.0",
"coverage==6.3.1",
"cryptography==40.0.2",
"cycler==0.12.1",
"databricks-cli==0.18.0",
"datasets==2.14.5",
"debugpy==1.6.7.post1",
"decorator==5.1.1",
"deepspeed==0.9.1",
"dill==0.3.7",
"distro==1.8.0",
"docker==6.1.3",
"dotnetcore2==3.1.23",
"einops==0.7.0",
"entrypoints==0.4",
"evaluate==0.4.1",
"exceptiongroup==1.1.3",
"executing==2.0.0",
"fairscale==0.4.13",
"fastapi==0.104.0",
"ffmpy==0.3.1",
"filelock==3.12.0",
"flash-attn==2.3.2",
"flask==2.2.5",
"flask-cors==3.0.10",
"flatbuffers==23.5.9",
"fonttools==4.43.1",
"frozenlist==1.3.3",
"fsspec==2023.5.0",
"fusepy==3.0.1",
"gitdb==4.0.11",
"gitpython==3.1.40",
"google-api-core==2.11.0",
"google-auth==2.19.0",
"google-auth-oauthlib==0.4.6",
"googleapis-common-protos==1.59.0",
"gradio==3.23.0",
"grpcio==1.59.0",
"gunicorn==20.1.0",
"h11==0.14.0",
"h5py==3.8.0",
"hjson==3.1.0",
"horovod==0.24.2",
"httpcore==0.18.0",
"httpx==0.25.0",
"huggingface-hub==0.17.3",
"humanfriendly==10.0",
"idna==3.4",
"igraph==0.10.4",
"importlib-metadata==6.6.0",
"importlib-resources==6.1.0",
"inference-schema==1.5.1",
"inflector==3.1.0",
"iniconfig==2.0.0",
"intel-openmp==2021.4.0",
"ipykernel==6.25.2",
"ipython==8.12.3",
"isodate==0.6.1",
"itsdangerous==2.1.2",
"jedi==0.19.1",
"jeepney==0.8.0",
"jinja2==3.1.2",
"jmespath==1.0.1",
"joblib==1.3.2",
"jsonlines==4.0.0",
"jsonpickle==3.0.2",
"jsonschema==4.19.1",
"jsonschema-specifications==2023.7.1",
"jupyter-client==8.4.0",
"jupyter-core==5.4.0",
"kiwisolver==1.4.5",
"knack==0.10.1",
"lightning-utilities==0.8.0",
"linkify-it-py==2.0.2",
"lit==16.0.5",
"lxml==4.9.2",
"markdown==3.5",
"markdown-it-py==2.2.0",
"markdown2==2.4.10",
"markupsafe==2.1.2",
"matplotlib==3.5.3",
"matplotlib-inline==0.1.6",
"mdit-py-plugins==0.3.3",
"mdurl==0.1.2",
"mkl==2021.4.0",
"mkl-include==2021.4.0",
"mlflow-skinny==2.7.1",
"mpi4py==3.1.1",
"mpmath==1.3.0",
"msal==1.22.0",
"msal-extensions==1.0.0",
"msccl==2.3.0",
"msrest==0.7.1",
"msrestazure==0.6.4",
"multidict==6.0.4",
"multiprocess==0.70.15",
"ndg-httpsclient==0.5.1",
"nebulaml==0.16.2",
"nest-asyncio==1.5.6",
"networkx==3.1",
"ninja==1.10.2",
"nltk==3.8.1",
"numpy==1.22.2",
"oauthlib==3.2.2",
"omegaconf==2.3.0",
"onnx==1.14.0",
"onnxruntime-gpu==1.16.1",
"onnxruntime-training==1.14.1",
"opencensus==0.11.2",
"opencensus-context==0.1.3",
"opencensus-ext-azure==1.1.9",
"opencensus-ext-logging==0.1.1",
"orjson==3.9.9",
"packaging==23.0",
"pandas==2.0.3",
"paramiko==3.3.1",
"parso==0.8.3",
"pathspec==0.11.2",
"pexpect==4.8.0",
"pickleshare==0.7.5",
"pillow==9.5.0",
"pkginfo==1.9.6",
"pkgutil-resolve-name==1.3.10",
"platformdirs==3.11.0",
"pluggy==1.0.0",
"portalocker==2.7.0",
"prompt-toolkit==3.0.39",
"protobuf==3.20.3",
"psutil==5.8.0",
"ptyprocess==0.7.0",
"pure-eval==0.2.2",
"py==1.11.0",
"py-cpuinfo==5.0.0",
"py-spy==0.3.12",
"pyarrow==9.0.0",
"pyasn1==0.5.0",
"pyasn1-modules==0.3.0",
"pybind11==2.11.1",
"pycparser==2.21",
"pydantic==1.10.8",
"pydash==7.0.6",
"pydub==0.25.1",
"pygments==2.16.1",
"pyjwt==2.7.0",
"pynacl==1.5.0",
"pyopenssl==23.2.0",
"pyparsing==3.1.1",
"pysocks==1.7.1",
"pytest==7.1.0",
"pytest-mpi==0.6",
"python-dateutil==2.8.2",
"python-multipart==0.0.6",
"pytorch-lightning==1.9.3",
"pytz==2023.3.post1",
"pyyaml==6.0",
"pyzmq==25.1.1",
"referencing==0.30.2",
"regex==2023.10.3",
"requests==2.31.0",
"requests-oauthlib==1.3.1",
"responses==0.18.0",
"rouge-score==0.1.2",
"rpds-py==0.10.6",
"rsa==4.9",
"ruamel-yaml==0.17.16",
"ruamel-yaml-clib==0.2.8",
"safetensors==0.4.0",
"scipy==1.7.3",
"secretstorage==3.3.3",
"semantic-version==2.10.0",
"sentencepiece==0.1.99",
"setuptools==67.6.0",
"six==1.16.0",
"smmap==5.0.1",
"sniffio==1.3.0",
"sqlparse==0.4.4",
"stack-data==0.6.3",
"starlette==0.27.0",
"supervisor==4.2.5",
"svgwrite==1.4.3",
"sympy==1.12",
"tabulate==0.9.0",
"tbb==2021.9.0",
"tensorboard==2.11.2",
"tensorboard-data-server==0.6.1",
"tensorboard-plugin-wit==1.8.1",
"texttable==1.6.7",
"timm==0.9.7",
"tokenizers==0.14.1",
"toml==0.10.2",
"tomli==2.0.1",
"toolz==0.12.0",
"torch==2.0.1+cu117",
"torch-nebula==0.16.2",
"torch-ort==1.14.0",
"torch-tb-profiler==0.4.3",
"torchaudio==2.0.2+cu117",
"torchmetrics==0.11.3",
"torchsnapshot==0.1.0",
"torchvision==0.15.2+cu117",
"tornado==6.3.3",
"tqdm==4.62.3",
"traitlets==5.11.2",
"transformers==4.35.0.dev0",
"triton==2.0.0",
"tutel==0.1",
"typing-extensions==4.8.0",
"tzdata==2023.3",
"uc-micro-py==1.0.2",
"urllib3==1.26.16",
"uvicorn==0.23.2",
"wavedrom==2.0.3.post3",
"wcwidth==0.2.8",
"websocket-client==1.6.4",
"websockets==11.0.3",
"werkzeug==3.0.0",
"wheel==0.40.0",
"wrapt==1.12.1",
"xxhash==3.4.1",
"yarl==1.9.2",
"z3-solver==4.12.2.0",
"zipp==3.15.0"
]
}
],
"name": "ptca",
"prefix": "/opt/conda/envs/ptca"
}
</pre>
### Who can help?
@muellerz @pacman100
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
model = transformers.AutoModelForCausalLM.from_pretrained(
"mistralai/Mistral-7B-v0.1",
torch_dtype=torch.bfloat16,
use_flash_attention_2=True
)
trainer = Trainer(model=model,
tokenizer=tokenizer,
args=training_args,
compute_metrics = None,
**data_module)
trainer.train()
```
The training job works on A100 with 1 node and 8 GPUs. It fails when the job uses more than 1 node, with the error:
```
File "./trainer.py", line 206, in <module>
train()
File "./trainer.py", line 157, in train
model = transformers.AutoModelForCausalLM.from_pretrained(
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py", line 565, in from_pretrained
return model_class.from_pretrained(
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/transformers/modeling_utils.py", line 3333, in from_pretrained
) = cls._load_pretrained_model(
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/transformers/modeling_utils.py", line 3723, in _load_pretrained_model
new_error_msgs, offload_index, state_dict_index = _load_state_dict_into_meta_model(
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/transformers/modeling_utils.py", line 744, in _load_state_dict_into_meta_model
set_module_tensor_to_device(model, param_name, param_device, **set_module_kwargs)
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/accelerate/utils/modeling.py", line 317, in set_module_tensor_to_device
new_value = value.to(device)
NotImplementedError: Cannot copy out of meta tensor; no data!
```
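Based on the resolution in the comments above (recreate the FSDP config and answer `False` to the RAM-efficient loading prompt in `accelerate config`), a hedged sketch of the relevant config entries — the key name is taken from recent `accelerate` versions and may differ:
```yaml
fsdp_config:
  fsdp_cpu_ram_efficient_loading: false
  fsdp_sync_module_states: true
```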
### Expected behavior
No error | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26971/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26971/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26970 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26970/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26970/comments | https://api.github.com/repos/huggingface/transformers/issues/26970/events | https://github.com/huggingface/transformers/issues/26970 | 1,955,228,518 | I_kwDOCUB6oc50imtm | 26,970 | 'LlamaAWQForCausalLM' object has no attribute 'config' | {
"login": "OriginalGoku",
"id": 120199256,
"node_id": "U_kgDOByoYWA",
"avatar_url": "https://avatars.githubusercontent.com/u/120199256?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/OriginalGoku",
"html_url": "https://github.com/OriginalGoku",
"followers_url": "https://api.github.com/users/OriginalGoku/followers",
"following_url": "https://api.github.com/users/OriginalGoku/following{/other_user}",
"gists_url": "https://api.github.com/users/OriginalGoku/gists{/gist_id}",
"starred_url": "https://api.github.com/users/OriginalGoku/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/OriginalGoku/subscriptions",
"organizations_url": "https://api.github.com/users/OriginalGoku/orgs",
"repos_url": "https://api.github.com/users/OriginalGoku/repos",
"events_url": "https://api.github.com/users/OriginalGoku/events{/privacy}",
"received_events_url": "https://api.github.com/users/OriginalGoku/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @OriginalGoku \r\nIt seems you are using `from awq import AutoAWQForCausalLM` which is not an object from transformers, we will integrate soon AWQ in transformers cc @SunMarc for visibility",
"Hi @OriginalGoku , you could try doing\r\n`model=model.model,`",
"Hi @ptanov \r\nI did not understand your code",
"Hi everyone, \r\nNow we have integrated AWQ in transformers, you can directly use it via `AutoModelForCausalLM` interface, make sure to first `pip install -U transformers`. And check out this demo: https://colab.research.google.com/drive/1HzZH89yAXJaZgwJDhQj9LqSBux932BvY?usp=sharing for understanding how to use AWQ integration and this documentation section: https://huggingface.co/docs/transformers/main_classes/quantization#awq-integration for more details",
"> Hi @ptanov I did not understand your code\r\n\r\n@OriginalGoku, instead of\r\n\r\n```python\r\npipe = pipeline(\r\n \"text-generation\",\r\n model=model,\r\n tokenizer=tokenizer,\r\n max_new_tokens=512,\r\n do_sample=True,\r\n temperature=0.7,\r\n top_p=0.95,\r\n top_k=40,\r\n repetition_penalty=1.1\r\n)\r\n```\r\n\r\nwrite\r\n```python\r\npipe = pipeline(\r\n \"text-generation\",\r\n model=model.model,\r\n tokenizer=tokenizer,\r\n max_new_tokens=512,\r\n do_sample=True,\r\n temperature=0.7,\r\n top_p=0.95,\r\n top_k=40,\r\n repetition_penalty=1.1\r\n)\r\n```",
"> Hi everyone, Now we have integrated AWQ in transformers, you can directly use it via `AutoModelForCausalLM` interface, make sure to first `pip install -U transformers`. And check out this demo: https://colab.research.google.com/drive/1HzZH89yAXJaZgwJDhQj9LqSBux932BvY?usp=sharing for understanding how to use AWQ integration and this documentation section: https://huggingface.co/docs/transformers/main_classes/quantization#awq-integration for more details\r\n\r\nHi @younesbelkada is there any way to set `fuse_layers=True` (in `AutoAWQForCausalLM.from_quantized`)? [This option](https://github.com/casper-hansen/AutoAWQ#fused-modules) seems to improve overall performance of `autoawq` significantly.",
"Hi @ptanov \r\nYes I am working on it here: https://github.com/huggingface/transformers/pull/27411/ and indeed I can confirm the huge performance boost. For now it seems to work fine on Llama & Mistral checkpoints - it will require autoawq==0.1.7 (coming soon) - cc @casper-hansen for visibility",
"I have been working hard on making 0.1.7 ready! And it soon will be. After that, you will get the equivalent speedup straight from transformers - stay tuned ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"CLosing as #27411 has been merged!"
] | 1,697 | 1,702 | 1,702 | NONE | null | ### System Info
I am trying to run a CodeLlama model on Colab with a free GPU.
The code was copied from here:
[CodeLlama-7B-Instruct-AWQ](https://huggingface.co/TheBloke/CodeLlama-7B-Instruct-AWQ)
### Who can help?
@ArthurZucker
@younesbelkada
@Narsil
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Here is the code; it is pretty simple:
```
!pip3 install autoawq
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer
from transformers import pipeline
# model_name_or_path = "TheBloke/CodeLlama-13B-Instruct-AWQ"
model_name_or_path = "TheBloke/CodeLlama-7B-Instruct-AWQ"
# Load model
model = AutoAWQForCausalLM.from_quantized(model_name_or_path, fuse_layers=True,
trust_remote_code=True, safetensors=True)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=True)
prompt = "Tell me about AI"
# This was the default prompt and i did not change it
prompt_template=f'''[INST] Write code to solve the following coding problem that obeys the constraints and passes the example test cases. Please wrap your code answer using ```:
{prompt}
[/INST]
'''
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
and here is the error:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
[<ipython-input-7-6fa1284003fe>](https://localhost:8080/#) in <cell line: 5>()
3
4 print("*** Pipeline:")
----> 5 pipe = pipeline(
6 "text-generation",
7 model=model,
1 frames
[/usr/local/lib/python3.10/dist-packages/transformers/pipelines/__init__.py](https://localhost:8080/#) in pipeline(task, model, config, tokenizer, feature_extractor, image_processor, framework, revision, use_fast, token, device, device_map, torch_dtype, trust_remote_code, model_kwargs, pipeline_class, **kwargs)
842 )
843
--> 844 model_config = model.config
845 hub_kwargs["_commit_hash"] = model.config._commit_hash
846 load_tokenizer = type(model_config) in TOKENIZER_MAPPING or model_config.tokenizer_class is not None
[/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in __getattr__(self, name)
1693 if name in modules:
1694 return modules[name]
-> 1695 raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")
1696
1697 def __setattr__(self, name: str, value: Union[Tensor, 'Module']) -> None:
AttributeError: 'LlamaAWQForCausalLM' object has no attribute 'config'
```
### Expected behavior
When I do the inference with the following code, everything works:
```
print("\n\n*** Generate:")
tokens = tokenizer(
prompt_template,
return_tensors='pt'
).input_ids.cuda()
# Generate output
generation_output = model.generate(
tokens,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
max_new_tokens=512
)
print("Output: ", tokenizer.decode(generation_output[0]))
```
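For reference, a hedged sketch of the workaround suggested in the comments above — handing the wrapped `transformers` model to `pipeline` instead of the AutoAWQ wrapper (that `model.model` is the underlying `transformers` model is an assumption about the `autoawq` wrapper):
```python
pipe = pipeline(
    "text-generation",
    model=model.model,  # unwrap the transformers model from the AWQ wrapper
    tokenizer=tokenizer,
    max_new_tokens=512,
)
print(pipe(prompt_template)[0]["generated_text"])
```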
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26970/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26970/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26969 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26969/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26969/comments | https://api.github.com/repos/huggingface/transformers/issues/26969/events | https://github.com/huggingface/transformers/issues/26969 | 1,955,086,107 | I_kwDOCUB6oc50iD8b | 26,969 | Need to explicitly set use_reentrant when calling checkpoint | {
"login": "FartyPants",
"id": 23346289,
"node_id": "MDQ6VXNlcjIzMzQ2Mjg5",
"avatar_url": "https://avatars.githubusercontent.com/u/23346289?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FartyPants",
"html_url": "https://github.com/FartyPants",
"followers_url": "https://api.github.com/users/FartyPants/followers",
"following_url": "https://api.github.com/users/FartyPants/following{/other_user}",
"gists_url": "https://api.github.com/users/FartyPants/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FartyPants/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FartyPants/subscriptions",
"organizations_url": "https://api.github.com/users/FartyPants/orgs",
"repos_url": "https://api.github.com/users/FartyPants/repos",
"events_url": "https://api.github.com/users/FartyPants/events{/privacy}",
"received_events_url": "https://api.github.com/users/FartyPants/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @fxmarty would you like to have a look at this? 😉 ",
"Seems like @younesbelkada also needs this in #26917",
"You can set it explicitly in the training_args arguments by using the **gradient_checkpointing_kwargs** argument\r\n\r\n```python\r\ntraining_args = TrainingArguments(\r\n # Arguments\r\n gradient_checkpointing=True,\r\n gradient_checkpointing_kwargs={'use_reentrant':False} # OR gradient_checkpointing_kwargs={'use_reentrant':True} \r\n # Arguments\r\n)\r\n```",
"FYI, this solution does not work when using SFTTrainer() from trl as the parameter is not exposed. ",
"@GrahamEckel can you elaborate on the issue you face with TRL SFTTrainer? Ideally with a small reproducer 🙏 ",
"Are we able to fix this when NOT using the trainer? I tried passing `gradient_checkpointing_kwargs={'use_reentrant':False}` to `model.gradient_checkpointing_enabled()`, but it just bombs-out with a \"use_reentrant is an unrecognized argument\" error.\r\n\r\nI'm currently on Transformers 4.35.2.",
"@LuciferianInk which model are you using? \r\n\r\n```python\r\nmodel.gradient_checkpointing_enable(gradient_checkpointing_kwargs={\"use_reentrant\": False})\r\n```\r\nShould work for all standard transformers model. We also have CI tests for that: https://github.com/huggingface/transformers/blob/main/tests/test_modeling_common.py#L575 and https://github.com/huggingface/transformers/blob/ac975074e69eccfb4cea3433f2041029d0f7be9f/tests/test_modeling_common.py#L626",
"Oops, syntax error. Sorry for the false alarm. With your example, I was able to fix that!",
"Awesome, thanks ! ",
"I am trying to finetune mistral 7b using SFT and PEFT, but i get the following error when I have `gradient_checkpointing=True`\r\n`ValueError: Attention mask should be of size (1, 1, 2700, 5400), but is torch.Size([1, 1, 2700, 2700])`\r\n\r\nI have tried `gradient_checkpointing=True` and `gradient_checkpointing_kwargs={\"use_reentrant\": True}` and I still get the above error.\r\n\r\nThese are the versions I have:\r\nTransformers version: 4.36.1\r\nPEFT version: 0.7.1\r\nTRL version: 0.7.4\r\n\r\nHere is my code:\r\n```\r\nquantization_config = BitsAndBytesConfig(\r\n load_in_4bit=True,\r\n bnb_4bit_use_double_quant=True,\r\n bnb_4bit_quant_type=\"nf4\",\r\n bnb_4bit_compute_dtype=\"float16\",\r\n)\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(\r\n \"mistralai/Mistral-7B-v0.1\",\r\n quantization_config=quantization_config,\r\n device_map=\"auto\",\r\n trust_remote_code=True,\r\n torch_dtype=torch.bfloat16,\r\n)\r\nif torch.cuda.device_count() > 1: # If more than 1 GPU\r\n model.is_parallelizable = True\r\n model.model_parallel = True\r\n\r\ntraining_args = TrainingArguments(\r\n output_dir=\"models\",\r\n per_device_train_batch_size=1,\r\n gradient_accumulation_steps=4,\r\n learning_rate=1.41e-5,\r\n logging_steps=1,\r\n num_train_epochs=1,\r\n # max_steps=100,\r\n report_to=None,\r\n save_steps=30,\r\n save_total_limit=2,\r\n evaluation_strategy=\"steps\",\r\n eval_steps=10,\r\n do_eval=True,\r\n greater_is_better=False,\r\n load_best_model_at_end=True,\r\n auto_find_batch_size=True,\r\n optim=\"paged_adamw_8bit\",\r\n warmup_ratio=0.03,\r\n lr_scheduler_type=\"cosine\",\r\n gradient_checkpointing=True, # Leads to reduction in memory at slighly decrease in speed\r\n gradient_checkpointing_kwargs={\"use_reentrant\": True},\r\n)\r\n\r\n# LoraConfig\r\npeft_config = LoraConfig(\r\n r=32,\r\n lora_alpha=32, \r\n lora_dropout=0.05,\r\n bias=\"none\",\r\n task_type=\"CAUSAL_LM\",\r\n target_modules=[\"q_proj\", \"v_proj\"],\r\n)\r\n\r\nearly_stop = EarlyStoppingCallback(10)\r\n\r\ntrainer = SFTTrainer(\r\n model=model,\r\n tokenizer=tokenizer,\r\n args=training_args,\r\n peft_config=peft_config,\r\n max_seq_length=2700, \r\n train_dataset=train_dataset,\r\n eval_dataset=eval_dataset,\r\n dataset_text_field=\"text\",\r\n packing=True,\r\n neftune_noise_alpha=5,\r\n callbacks=[early_stop],\r\n)\r\n\r\ntrainer.train()\r\n```\r\n",
"Hi @manmax31 \r\nThe issue is fixed by #28031 please see my comment here: https://github.com/huggingface/transformers/issues/28056#issuecomment-1857571060\r\nCan you try out with transformers main? `pip install -U git+https://github.com/huggingface/transformers`",
"Thank you. Is this fix not in pypi yet?\nAs that's only way our systems can access it. ",
"cc @ArthurZucker @amyeroberts would it makes sense to do a patch release to include #28031 ? it fixes a regression issue - i.e. users were able to train as usual with PEFT and GC before introducing the attention refactor and #28031 fixes it ",
"That will be great. I am currently now back to 4.35.2",
"@younesbelkada If it's a regression, then yes, I think we should do a patch release (also including #28043 and #28061) cc @ArthurZucker WDYT?",
"Yes 👍🏻 ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Was fixed and released so closing"
] | 1,697 | 1,704 | 1,704 | NONE | null | ### System Info
windows
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
According to recent PyTorch, you now need to explicitly set `use_reentrant`, as the default will be changed from `use_reentrant=True` to `use_reentrant=False` in the near future.
In `transformers.models.llama.modeling_llama`, inside the model's `forward`, gradient checkpointing is invoked as:
layer_outputs = torch.utils.checkpoint.checkpoint(
create_custom_forward(decoder_layer), hidden_states, attention_mask, position_ids
)
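For illustration only (this is a standalone sketch, not the transformers code path above), explicitly choosing the flag looks like this:

```python
import torch
from torch.utils.checkpoint import checkpoint

def block(hidden_states):
    # Stand-in for a decoder layer; any differentiable function works here.
    return torch.nn.functional.gelu(hidden_states * 2.0)

hidden_states = torch.randn(2, 4, requires_grad=True)
# Passing use_reentrant explicitly keeps behaviour stable when the PyTorch
# default flips from True to False, and silences the deprecation warning.
out = checkpoint(block, hidden_states, use_reentrant=False)
out.sum().backward()
```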
### Expected behavior
The call should pass `use_reentrant` explicitly. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26969/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26969/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26968 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26968/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26968/comments | https://api.github.com/repos/huggingface/transformers/issues/26968/events | https://github.com/huggingface/transformers/pull/26968 | 1,954,943,268 | PR_kwDOCUB6oc5dasp_ | 26,968 | [docstring] Fix docstring for ErnieConfig, ErnieMConfig | {
"login": "Sparty",
"id": 3923604,
"node_id": "MDQ6VXNlcjM5MjM2MDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/3923604?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Sparty",
"html_url": "https://github.com/Sparty",
"followers_url": "https://api.github.com/users/Sparty/followers",
"following_url": "https://api.github.com/users/Sparty/following{/other_user}",
"gists_url": "https://api.github.com/users/Sparty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Sparty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sparty/subscriptions",
"organizations_url": "https://api.github.com/users/Sparty/orgs",
"repos_url": "https://api.github.com/users/Sparty/repos",
"events_url": "https://api.github.com/users/Sparty/events{/privacy}",
"received_events_url": "https://api.github.com/users/Sparty/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,697 | 1,698 | 1,698 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
--> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26968/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26968/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26968",
"html_url": "https://github.com/huggingface/transformers/pull/26968",
"diff_url": "https://github.com/huggingface/transformers/pull/26968.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26968.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26967 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26967/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26967/comments | https://api.github.com/repos/huggingface/transformers/issues/26967/events | https://github.com/huggingface/transformers/pull/26967 | 1,954,759,583 | PR_kwDOCUB6oc5daFa9 | 26,967 | Add MoLFormer | {
"login": "hoffmansc",
"id": 18314063,
"node_id": "MDQ6VXNlcjE4MzE0MDYz",
"avatar_url": "https://avatars.githubusercontent.com/u/18314063?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hoffmansc",
"html_url": "https://github.com/hoffmansc",
"followers_url": "https://api.github.com/users/hoffmansc/followers",
"following_url": "https://api.github.com/users/hoffmansc/following{/other_user}",
"gists_url": "https://api.github.com/users/hoffmansc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hoffmansc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hoffmansc/subscriptions",
"organizations_url": "https://api.github.com/users/hoffmansc/orgs",
"repos_url": "https://api.github.com/users/hoffmansc/repos",
"events_url": "https://api.github.com/users/hoffmansc/events{/privacy}",
"received_events_url": "https://api.github.com/users/hoffmansc/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey! Thanks for opening a PR, this model looks like a prefect candidate for [`code on the hub`](https://huggingface.co/docs/transformers/custom_models) under the IBM organization like [this one](https://huggingface.co/ibm/mpt-7b-instruct2)! We would love to help you 🤗 ",
"I see, thanks for the tip. I think I got it working [here](https://huggingface.co/ibm/MoLFormer-XL-both-10pct) so I'll close this now."
] | 1,697 | 1,698 | 1,698 | NONE | null | # What does this PR do?
Add MoLFormer model from IBM (`MolformerModel`, `MolformerForMaskedLM`, `MolformerForSequenceClassification`, etc.). MoLFormer is trained on a large corpus of small molecules represented by SMILES strings with a masked language modeling objective, fast linear attention, and rotary positional embeddings.
Closes #26966
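For reference, a hedged sketch of how the checkpoint mentioned later in this thread can be loaded today via custom code on the Hub (the repo id comes from the discussion below; the exact outputs exposed there are an assumption, not part of this PR):

```python
from transformers import AutoModel, AutoTokenizer

# Hypothetical usage via trust_remote_code; the classes added in this PR are
# not required for this path.
model_id = "ibm/MoLFormer-XL-both-10pct"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer("CN1C=NC2=C1C(=O)N(C(=O)N2C)C", return_tensors="pt")  # caffeine SMILES
outputs = model(**inputs)
embeddings = outputs.last_hidden_state  # assumed output field
```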
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
## Who can review?
Models:
- text models: @ArthurZucker and @younesbelkada | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26967/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26967/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26967",
"html_url": "https://github.com/huggingface/transformers/pull/26967",
"diff_url": "https://github.com/huggingface/transformers/pull/26967.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26967.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26966 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26966/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26966/comments | https://api.github.com/repos/huggingface/transformers/issues/26966/events | https://github.com/huggingface/transformers/issues/26966 | 1,954,709,300 | I_kwDOCUB6oc50gn80 | 26,966 | Add MoLFormer model | {
"login": "hoffmansc",
"id": 18314063,
"node_id": "MDQ6VXNlcjE4MzE0MDYz",
"avatar_url": "https://avatars.githubusercontent.com/u/18314063?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hoffmansc",
"html_url": "https://github.com/hoffmansc",
"followers_url": "https://api.github.com/users/hoffmansc/followers",
"following_url": "https://api.github.com/users/hoffmansc/following{/other_user}",
"gists_url": "https://api.github.com/users/hoffmansc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hoffmansc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hoffmansc/subscriptions",
"organizations_url": "https://api.github.com/users/hoffmansc/orgs",
"repos_url": "https://api.github.com/users/hoffmansc/repos",
"events_url": "https://api.github.com/users/hoffmansc/events{/privacy}",
"received_events_url": "https://api.github.com/users/hoffmansc/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | [] | 1,697 | 1,698 | 1,698 | NONE | null | ### Model description
MoLFormer is a large-scale chemical language model trained on small molecules represented as SMILES strings. MoLFormer leverages masked language modeling and employs a linear-attention Transformer combined with rotary embeddings.
Published in Nature Machine Intelligence:
> ... Experiments show that utilizing the learned molecular representation outperforms existing baselines on downstream tasks, including supervised and self-supervised graph neural net baselines and language models, on several classification and regression tasks from ten benchmark datasets while performing competitively on two others. Further analyses, specifically through the lens of attention, demonstrate that MoLFormer trained on chemical SMILES indeed learns the spatial relationships between atoms within a molecule.
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
Paper: [NMI version](https://rdcu.be/c12D0), [arXiv version](https://arxiv.org/abs/2106.09553)
Code: https://github.com/IBM/molformer
Weights: https://ibm.box.com/v/MoLFormer-data | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26966/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26966/timeline | not_planned | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26965 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26965/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26965/comments | https://api.github.com/repos/huggingface/transformers/issues/26965/events | https://github.com/huggingface/transformers/pull/26965 | 1,954,635,373 | PR_kwDOCUB6oc5dZqK9 | 26,965 | Add `default_to_square_for_size` to `CLIPImageProcessor` | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks a lot !",
"I forgot to change this line\r\n\r\n```\r\n output_size = get_resize_output_image_size(\r\n image, size=size[\"shortest_edge\"], default_to_square=False, input_data_format=input_data_format\r\n )\r\n```\r\nto \r\n```\r\n output_size = get_resize_output_image_size(\r\n image, size=size[\"shortest_edge\"], default_to_square=self.use_square_size, input_data_format=input_data_format\r\n )\r\n```\r\nwill do it but wait on next Monday to merge"
] | 1,697 | 1,698 | 1,698 | COLLABORATOR | null | # What does this PR do?
Add `default_to_square_for_size` to `CLIPImageProcessor`.
The same file also has `crop_size = get_size_dict(crop_size, default_to_square=True, param_name="crop_size")`, so the new argument is named `default_to_square_for_size` (used as `get_size_dict(size, default_to_square=default_to_square_for_size)`) to avoid confusion with the crop-size handling.
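For context, a rough sketch of the two behaviours the flag toggles (the helper itself exists in `transformers.image_processing_utils`; the printed values in the comments are what I'd expect, not test output):

```python
from transformers.image_processing_utils import get_size_dict

# default_to_square=True turns a bare int into an exact (height, width) pair,
# while default_to_square=False treats it as the shortest edge to resize to.
print(get_size_dict(224, default_to_square=True))   # expected: {"height": 224, "width": 224}
print(get_size_dict(224, default_to_square=False))  # expected: {"shortest_edge": 224}
```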
Alternatively, it could simply be named `default_to_square`, and a `default_to_square_for_crop_size` could be added later (or in this PR). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26965/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26965/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26965",
"html_url": "https://github.com/huggingface/transformers/pull/26965",
"diff_url": "https://github.com/huggingface/transformers/pull/26965.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26965.patch",
"merged_at": 1698138497000
} |
https://api.github.com/repos/huggingface/transformers/issues/26964 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26964/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26964/comments | https://api.github.com/repos/huggingface/transformers/issues/26964/events | https://github.com/huggingface/transformers/pull/26964 | 1,954,560,043 | PR_kwDOCUB6oc5dZZUp | 26,964 | fix roformer prepare_inputs_for_generation not return model_kwargs | {
"login": "ylzz1997",
"id": 28924547,
"node_id": "MDQ6VXNlcjI4OTI0NTQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/28924547?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ylzz1997",
"html_url": "https://github.com/ylzz1997",
"followers_url": "https://api.github.com/users/ylzz1997/followers",
"following_url": "https://api.github.com/users/ylzz1997/following{/other_user}",
"gists_url": "https://api.github.com/users/ylzz1997/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ylzz1997/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ylzz1997/subscriptions",
"organizations_url": "https://api.github.com/users/ylzz1997/orgs",
"repos_url": "https://api.github.com/users/ylzz1997/repos",
"events_url": "https://api.github.com/users/ylzz1997/events{/privacy}",
"received_events_url": "https://api.github.com/users/ylzz1997/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> Thanks for opening a PR, the goal of this function is basically to expose the inputs that are used by the model. Which inputs in particular are you trying to unlock here?\r\n\r\nI'll be using RoFormer with a multi-encoder, single-decoder structure to accommodate MultiModal Learning. I hope to include the `encoder_hidden_states` and `encoder_attention_mask` parameters in the model's inputs to enable **cross-attention** during generation.",
"The best way to do this would be to just overload this function in your codebase! Transformers is meant to be easy to work on top of, and this change is really not aligned with most models were we only return the args needed for generation, otherwise it's very messy! ",
"> The best way to do this would be to just overload this function in your codebase! Transformers is meant to be easy to work on top of, and this change is really not aligned with most models were we only return the args needed for generation, otherwise it's very messy!\r\n\r\nHowever, when using RoFormer to form a common encoder-decoder structure, cross-attention is also necessary for feature input. I think the input of `encoder_hidden_states` features is essential.",
"Hey! I understand your frustration but I believe that if you are using your custom code you should be able to overwrite the `perpare_inputs_for_generation` function. Also I recommend you to checkout the `EncoderDecoderModel` which supports this I believe! ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,697 | 1,701 | 1,701 | NONE | null | Fixes `RoFormer`'s `prepare_inputs_for_generation` not returning `model_kwargs`.
### Motivation
This bug prevents extra parameters passed to the **generate** function from being received by the model's **forward** function.
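Until the fix lands, the workaround suggested in the review is to override the method downstream; a minimal, hypothetical sketch (exact signatures may differ between versions):

```python
from transformers import RoFormerForCausalLM

class RoFormerWithCrossAttnInputs(RoFormerForCausalLM):
    def prepare_inputs_for_generation(self, input_ids, **model_kwargs):
        inputs = super().prepare_inputs_for_generation(input_ids, **model_kwargs)
        # Re-attach the kwargs that the stock implementation drops, so that
        # generate() can feed encoder features to cross-attention.
        for key in ("encoder_hidden_states", "encoder_attention_mask"):
            if key in model_kwargs:
                inputs.setdefault(key, model_kwargs[key])
        return inputs
```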
This PR is aimed at fixing this issue. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26964/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26964/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26964",
"html_url": "https://github.com/huggingface/transformers/pull/26964",
"diff_url": "https://github.com/huggingface/transformers/pull/26964.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26964.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26963 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26963/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26963/comments | https://api.github.com/repos/huggingface/transformers/issues/26963/events | https://github.com/huggingface/transformers/pull/26963 | 1,954,498,410 | PR_kwDOCUB6oc5dZLpr | 26,963 | [RWKV] Add RWKV5 model and RWKVWorldTokenizer | {
"login": "BBuf",
"id": 35585791,
"node_id": "MDQ6VXNlcjM1NTg1Nzkx",
"avatar_url": "https://avatars.githubusercontent.com/u/35585791?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BBuf",
"html_url": "https://github.com/BBuf",
"followers_url": "https://api.github.com/users/BBuf/followers",
"following_url": "https://api.github.com/users/BBuf/following{/other_user}",
"gists_url": "https://api.github.com/users/BBuf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BBuf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BBuf/subscriptions",
"organizations_url": "https://api.github.com/users/BBuf/orgs",
"repos_url": "https://api.github.com/users/BBuf/repos",
"events_url": "https://api.github.com/users/BBuf/events{/privacy}",
"received_events_url": "https://api.github.com/users/BBuf/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Hey feel free to ping me when this is ready! 🤗 ",
"Hi, pr ready now 🤗. @ArthurZucker ",
"Ok! Thanks, I'll review now, but will let @amyeroberts handle the rest as I'll be off for a week 😉 ",
"> Thanks for the PR! Could you explain the motivation behind not using the fast tokenizer, and whether this tokenizer / slow implem of GPT2 for example.\r\n> \r\n> Mostly, this should need a new folder as it's a new model ! If we use the GPT2Tokenizer implementation then we can also just add a .md file ( like we did for flan T5 for example)\r\n\r\nThe model implementation is the same, only the tokenizer has different options. The tokenizer implemented in this PR is for the RWKV4 World model, but the implementation of the RWKV4 World model is exactly the same as the existing RWKV model implementation.",
"@ArthurZucker Hello, I have implemented the RWKV5 model and the RWKVWorldTokenizer it requires. Please review again. Thank you.",
"> Thanks for the PR! Could you explain the motivation behind not using the fast tokenizer, and whether this tokenizer / slow implem of GPT2 for example.\r\n> \r\n> Mostly, this should need a new folder as it's a new model ! If we use the GPT2Tokenizer implementation then we can also just add a .md file ( like we did for flan T5 for example)\r\n\r\n@BBuf, thanks for helping push this!\r\n\r\nIm from the RWKV team. So i can help explain this part.\r\n\r\nThe main motivation for the world tokenizer is to improve support for multi-lingual dataset, within the RWKV generations of models. Especially in character based languages, or languages without \"spaces\". This benefit applies to for european or nordic languages.",
"Okay, understood! So this new model uses a word level tokenizer, which can be supported both in transformers (by adding a new tokenizer, with a simple vocab / the code you are proposing) and `tokenizers` which natively has a [`WordLevel`](https://huggingface.co/docs/tokenizers/api/tokenizer#tokenizers.models.WordLevel) tokenizer ! \r\n\r\nI'm thrilled to help you get this merged 😉 ",
"> Okay, understood! So this new model uses a word level tokenizer, which can be supported both in transformers (by adding a new tokenizer, with a simple vocab / the code you are proposing) and `tokenizers` which natively has a [`WordLevel`](https://huggingface.co/docs/tokenizers/api/tokenizer#tokenizers.models.WordLevel) tokenizer !\r\n> \r\n> I'm thrilled to help you get this merged 😉\r\n\r\nHello, could you take another look at this PR? The recent few commits have added support for batch inference, and I feel it's getting close to being merged.",
"Sure I’ll review today! 🤗",
"> Okay, understood! So this new model uses a word level tokenizer, which can be supported both in transformers (by adding a new tokenizer, with a simple vocab / the code you are proposing) and `tokenizers` which natively has a [`WordLevel`](https://huggingface.co/docs/tokenizers/api/tokenizer#tokenizers.models.WordLevel) tokenizer !\r\n> \r\n> I'm thrilled to help you get this merged 😉\r\n\r\nI would not call it a \"word level\" more of a \"trie tokenizer\", spaces are just simply another character with no special meaning - If that makes sense.\r\n\r\nBut yes, that in concept this tokenizer could be used for non RWKV architecture, and there is nothing stopping anyone from using our older GPT-neox tokenizer on our newer architecture.\r\n\r\nDo let me know if I can clarify anything else from our end, or help in this merge =)",
"Okay! I'll let you know, sorry I got caught up in sprints here and there but will review this early next week 🤗 ",
"All the progress look good! Ping me whenever for another review! 🤗 ",
"Now that rwkv5 pretrained model is out, will this get merged?",
"I'll review again and help merge it asap! ",
"This yields the following:\r\n```python \r\n>>> from transformers import Rwkv5Tokenizer\r\n>>> tokenizer = Rwkv5Tokenizer(\"/Users/arthurzucker/Work/transformers/rwkv.txt\")\r\n>>> prompt = \"Hey how are you? 男:听说你们公司要派你去南方工作\"\r\n>>> ids = tokenizer.encode(prompt)\r\n\r\n>>> print(ids)\r\n[0, 6037, 21887, 21338, 22851, 64, 65517, 14631, 19181, 11095, 16765, 10494, 10432, 10708, 11059, 16533, 13848, 10494, 11015, 10964, 13066, 12167, 10490]\r\n>>> print(tokenizer.tokenize(prompt))\r\n['Hey', ' how', ' are', ' you', '?', ' ', '男', ':', '听', '说', '你', '们', '公', '司', '要', '派', '你', '去', '南', '方', '工', '作']\r\n>>> print(tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt)))\r\n['<s>', 'Hey', ' how', ' are', ' you', '?', ' ', '男', ':', '听', '说', '你', '们', '公', '司', '要', '派', '你', '去', '南', '方', '工', '作']\r\n>>> print(tokenizer.convert_tokens_to_string(tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt))))\r\n<s>Hey how are you? 男:听说你们公司要派你去南方工作\r\n>>> print(tokenizer.decode(tokenizer.encode(prompt)))\r\n<s>Hey how are you? 男:听说你们公司要派你去南方工作\r\n```",
"> This yields the following:\r\n> \r\n> ```python\r\n> >>> from transformers import Rwkv5Tokenizer\r\n> >>> tokenizer = Rwkv5Tokenizer(\"/Users/arthurzucker/Work/transformers/rwkv.txt\")\r\n> >>> prompt = \"Hey how are you? 男:听说你们公司要派你去南方工作\"\r\n> >>> ids = tokenizer.encode(prompt)\r\n> \r\n> >>> print(ids)\r\n> [0, 6037, 21887, 21338, 22851, 64, 65517, 14631, 19181, 11095, 16765, 10494, 10432, 10708, 11059, 16533, 13848, 10494, 11015, 10964, 13066, 12167, 10490]\r\n> >>> print(tokenizer.tokenize(prompt))\r\n> ['Hey', ' how', ' are', ' you', '?', ' ', '男', ':', '听', '说', '你', '们', '公', '司', '要', '派', '你', '去', '南', '方', '工', '作']\r\n> >>> print(tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt)))\r\n> ['<s>', 'Hey', ' how', ' are', ' you', '?', ' ', '男', ':', '听', '说', '你', '们', '公', '司', '要', '派', '你', '去', '南', '方', '工', '作']\r\n> >>> print(tokenizer.convert_tokens_to_string(tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt))))\r\n> <s>Hey how are you? 男:听说你们公司要派你去南方工作\r\n> >>> print(tokenizer.decode(tokenizer.encode(prompt)))\r\n> <s>Hey how are you? 男:听说你们公司要派你去南方工作\r\n> ```\r\n\r\nThank you for your advice. The main problem here is that the original tokenizer implementation(https://github.com/BlinkDL/ChatRWKV/tree/main/tokenizer) does not have bos, eos, or pad token, but bos_token_id, eos_token_id, and pad_token_id are all set to `0` . In my implementation on Hugging Face, I have simulated this situation. But now I am unsure what to set for bos, eos, and pad token, as it seems that setting any token would not meet expectations. Therefore, it feels like this tokenizer is a special hack case. I would like to ask if it is acceptable for the tokenizer's definition not to be merged into Transformers repository, and only merge the implementation of the RWKV5 model. Then, this custom tokenizer implementation could be placed in the corresponding repository on Hugging Face, for example: https://huggingface.co/RWKV/HF_v5-Eagle-7B . ",
"In the code I provided I manually set `self._added_tokens_decoder = {0:AddedToken(bos_token)}` \r\nwhich forces the token 0 to be the `bos_token`. We can of course force any other behaviour that way, but we only need to define a string token. This will be *very* useful to the community as a whole for any SFT training, as it's the expected api. ",
"Whether or not the original tokenizer has a `token`, it has `token_id=0` which means we can choose the content of the token. I used `<s>` but we should use something like `<|endoftext|>` just to make sure it doesn't exist. This should solve all the issues you are having . Doing something like `tokenizer.encode(\"<|endoftext|>\")` will yield `0`. which is what we want ",
"WDYT? ",
"In general the model was not trained with a special bos_token, or special pad_token in mind (we use 0, and masking for padding)\r\n\r\nSo for all these tokens, we typically just use token 0 as fallback, and it \"generally works\" if that makes sense - so i think defaulting for all these tokens as 0 makes sense to me (coming from the trainer / model team side)",
"I ran into a bug with this tokenizer, [sourced from here](https://huggingface.co/RWKV/rwkv-5-world-1b5/blob/main/modeling_rwkv5.py). I'm not sure how much the two code bases have diverged at this point, but @BlinkDL asked me to report it in this PR.\r\n\r\nMy issue relates to this code:\r\n```py\r\ntokenized_batches = []\r\nwith tqdm(total=len(batches)) as pbar:\r\n for batch in batches:\r\n tokenized = tokenizer(\r\n batch,\r\n max_length=block_size,\r\n stride=stride,\r\n padding=\"max_length\",\r\n return_overflowing_tokens=True,\r\n truncation=True,\r\n return_tensors=\"np\",\r\n )\r\n tokenized_batches.append(tokenized[\"input_ids\"])\r\n pbar.update(1)\r\n\r\ntokens = np.concatenate(tokenized_batches)\r\n```\r\nTypically, the tokenizer should pad these batches to a consistent length, but that's not happening here:\r\n```\r\n File \"/usr/local/lib/python3.10/dist-packages/aigen/datasets.py\", line 188, in encode_tokens\r\n tokens = np.concatenate(tokenized_batches)\r\nValueError: all the input array dimensions except for the concatenation axis must match exactly, but along dimension 1, the array at index 0 has size 24226 and the array at index 1 has size 26680\r\n```\r\nThe fix for me was rather simple, but I feel like it should probably be handled by the tokenizer itself:\r\n```\r\npadded_batches = []\r\nfor batch in tokenized_batches:\r\n # Calculate the number of padding tokens needed\r\n padding_length = max_len - batch.shape[1]\r\n # Pad the batch and add it to the padded_batches list\r\n if padding_length > 0:\r\n padded_batch = np.pad(batch, ((0, 0), (0, padding_length)), mode='constant', constant_values=tokenizer.pad_token_id)\r\n else:\r\n padded_batch = batch\r\n padded_batches.append(padded_batch)\r\n```\r\nJust a PSA. Thanks!",
"Can you try with this one: https://github.com/huggingface/transformers/pull/26963#pullrequestreview-1861671950",
"> Can you try with this one: [#26963 (review)](https://github.com/huggingface/transformers/pull/26963#pullrequestreview-1861671950)\r\n\r\nThank you, I will have a try.",
"> This yields the following:\r\n> \r\n> ```python\r\n> >>> from transformers import Rwkv5Tokenizer\r\n> >>> tokenizer = Rwkv5Tokenizer(\"/Users/arthurzucker/Work/transformers/rwkv.txt\")\r\n> >>> prompt = \"Hey how are you? 男:听说你们公司要派你去南方工作\"\r\n> >>> ids = tokenizer.encode(prompt)\r\n> \r\n> >>> print(ids)\r\n> [0, 6037, 21887, 21338, 22851, 64, 65517, 14631, 19181, 11095, 16765, 10494, 10432, 10708, 11059, 16533, 13848, 10494, 11015, 10964, 13066, 12167, 10490]\r\n> >>> print(tokenizer.tokenize(prompt))\r\n> ['Hey', ' how', ' are', ' you', '?', ' ', '男', ':', '听', '说', '你', '们', '公', '司', '要', '派', '你', '去', '南', '方', '工', '作']\r\n> >>> print(tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt)))\r\n> ['<s>', 'Hey', ' how', ' are', ' you', '?', ' ', '男', ':', '听', '说', '你', '们', '公', '司', '要', '派', '你', '去', '南', '方', '工', '作']\r\n> >>> print(tokenizer.convert_tokens_to_string(tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt))))\r\n> <s>Hey how are you? 男:听说你们公司要派你去南方工作\r\n> >>> print(tokenizer.decode(tokenizer.encode(prompt)))\r\n> <s>Hey how are you? 男:听说你们公司要派你去南方工作\r\n> ```\r\n\r\n\r\n\r\n> This yields the following:\r\n> \r\n> ```python\r\n> >>> from transformers import Rwkv5Tokenizer\r\n> >>> tokenizer = Rwkv5Tokenizer(\"/Users/arthurzucker/Work/transformers/rwkv.txt\")\r\n> >>> prompt = \"Hey how are you? 男:听说你们公司要派你去南方工作\"\r\n> >>> ids = tokenizer.encode(prompt)\r\n> \r\n> >>> print(ids)\r\n> [0, 6037, 21887, 21338, 22851, 64, 65517, 14631, 19181, 11095, 16765, 10494, 10432, 10708, 11059, 16533, 13848, 10494, 11015, 10964, 13066, 12167, 10490]\r\n> >>> print(tokenizer.tokenize(prompt))\r\n> ['Hey', ' how', ' are', ' you', '?', ' ', '男', ':', '听', '说', '你', '们', '公', '司', '要', '派', '你', '去', '南', '方', '工', '作']\r\n> >>> print(tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt)))\r\n> ['<s>', 'Hey', ' how', ' are', ' you', '?', ' ', '男', ':', '听', '说', '你', '们', '公', '司', '要', '派', '你', '去', '南', '方', '工', '作']\r\n> >>> print(tokenizer.convert_tokens_to_string(tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt))))\r\n> <s>Hey how are you? 男:听说你们公司要派你去南方工作\r\n> >>> print(tokenizer.decode(tokenizer.encode(prompt)))\r\n> <s>Hey how are you? 男:听说你们公司要派你去南方工作\r\n> ```\r\n\r\nI tried this tokenizer, but it seems that I can't get the expected results.\r\n\r\n\r\n\r\nI would like to ask if the `rwkv.txt` in the code you provided is the original file from https://github.com/BlinkDL/ChatRWKV/blob/main/tokenizer/rwkv_vocab_v20230424.txt.\r\n",
"I converted the vocab to the appropriate format to read it, sorry I forgot that step. Will push it now. I used your tokenizer's `encoder`",
"> I converted the vocab to the appropriate format to read it, sorry I forgot that step. Will push it now. I used your tokenizer's `encoder`\r\n\r\nOkay, thanks.",
"I used this :\r\n```python \r\nfrom transformers import AutoTokenizer\r\ntokenizer = AutoTokenizer.from_pretrained(\"RWKV/rwkv-5-world-1b5\", trust_remote_code=True)\r\n\r\nwith open(\"/Users/arthurzucker/Work/transformers/rwkv.txt\", \"wb\") as f:\r\n for index, token in tokenizer.encoder.item():\r\n f.write(token + b\"\\n\")\r\n\r\n\r\ntokenizer = Rwkv5Tokenizer(\"/Users/arthurzucker/Work/transformers/rwkv.txt\")\r\nprompt = \"Hey how are you? 男:听说你们公司要派你去南方工作\"\r\nids = tokenizer.encode(prompt)\r\nprint(ids)\r\nprint(tokenizer.tokenize(prompt))\r\nprint(tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt)))\r\nprint(tokenizer.convert_tokens_to_string(tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt))))\r\nprint(tokenizer.decode(tokenizer.encode(prompt)))\r\n```",
"Pushed the tokenizer here: https://huggingface.co/ArthurZ/rwkv-5",
"> Pushed the tokenizer here: https://huggingface.co/ArthurZ/rwkv-5\r\n\r\nOkay, got it.",
"> Pushed the tokenizer here: https://huggingface.co/ArthurZ/rwkv-5\r\n\r\nHello, I encountered an error while testing this tokenizer, but I'm not sure how to resolve it.\r\n\r\n```\r\nERROR: test_added_token_serializable (tests.models.rwkv5.test_tokenization_rwkv5.RWKV5TokenizationTest.test_added_token_serializable) [Rwkv5Tokenizer]\r\n----------------------------------------------------------------------\r\nTraceback (most recent call last):\r\n File \"/Users/bbuf/工作目录/RWKV/transformers/tests/test_tokenization_common.py\", line 2204, in test_added_token_serializable\r\n tokenizer.from_pretrained(tmp_dir_name)\r\n File \"/opt/homebrew/lib/python3.11/site-packages/transformers-4.38.0.dev0-py3.11.egg/transformers/tokenization_utils_base.py\", line 2031, in from_pretrained\r\n return cls._from_pretrained(\r\n ^^^^^^^^^^^^^^^^^^^^^\r\n File \"/opt/homebrew/lib/python3.11/site-packages/transformers-4.38.0.dev0-py3.11.egg/transformers/tokenization_utils_base.py\", line 2263, in _from_pretrained\r\n tokenizer = cls(*init_inputs, **init_kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/opt/homebrew/lib/python3.11/site-packages/transformers-4.38.0.dev0-py3.11.egg/transformers/models/rwkv5/tokenization_rwkv5.py\", line 133, in __init__\r\n self._added_tokens_decoder = {0:AddedToken(bos_token)}\r\n ^^^^^^^^^^^^^^^^^^^^^\r\nTypeError: argument 'content': 'AddedToken' object cannot be converted to 'PyString'\r\n```"
] | 1,697 | 1,708 | null | NONE | null | Add `RWKVWorldTokenizer` for the RWKV-5 series of models.
The tokenizer has been used in:
- [RWKV/rwkv-5-world-1b5](https://huggingface.co/RWKV/rwkv-5-world-1b5)
- [RWKV/rwkv-5-world-3b](https://huggingface.co/RWKV/rwkv-5-world-3b)
- [RWKV/rwkv-4-world-169m](https://huggingface.co/RWKV/rwkv-4-world-169m)
- [RWKV/rwkv-4-world-430m](https://huggingface.co/RWKV/rwkv-4-world-430m)
- [RWKV/rwkv-4-world-1b5](https://huggingface.co/RWKV/rwkv-4-world-1b5)
- [RWKV/rwkv-4-world-3b](https://huggingface.co/RWKV/rwkv-4-world-3b)
- [RWKV/rwkv-4-world-7b](https://huggingface.co/RWKV/rwkv-4-world-7b)
and lambda test in https://github.com/BBuf/RWKV-World-HF-Tokenizer/blob/main/check_lambda/lambda_hf.py
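For reference, a hedged loading sketch for one of the checkpoints above (they currently ship the world tokenizer as custom code on the Hub, hence `trust_remote_code=True`; the repo id is taken from the list above):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RWKV/rwkv-4-world-169m"  # any of the checkpoints listed above
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer("Hello", return_tensors="pt")
out = model.generate(inputs["input_ids"], max_new_tokens=16)
print(tokenizer.decode(out[0]))
```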
@xianbaoqian | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26963/reactions",
"total_count": 9,
"+1": 9,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26963/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26963",
"html_url": "https://github.com/huggingface/transformers/pull/26963",
"diff_url": "https://github.com/huggingface/transformers/pull/26963.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26963.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26962 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26962/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26962/comments | https://api.github.com/repos/huggingface/transformers/issues/26962/events | https://github.com/huggingface/transformers/pull/26962 | 1,954,299,157 | PR_kwDOCUB6oc5dYgnG | 26,962 | Remove token_type_ids from default TF GPT-2 signature | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,697 | 1,698 | 1,698 | MEMBER | null | Although GPT-2 supports `token_type_ids`, the implementation is very weird (token embeddings are also used as token type embeddings!) and in practice `token_type_ids=None` is used almost exclusively.
For most models, you could mimic the effects of `token_type_ids=None` by just passing an all-zeros array, but this does not work for GPT-2 because it completely skips the embeddings when `token_type_ids=None`. This means that a model exported with a `token_type_ids` input cannot be coerced to behave correctly.
To stop this tripping up other users, we remove `token_type_ids` from the GPT-2 input sig. This issue is specific to GPT-2 - most other models have more reasonable ways of handling `token_type_ids` and shouldn't be affected.
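A rough way to check the effect (assuming the `input_signature` property on TF models, which may vary by version):

```python
from transformers import TFGPT2LMHeadModel

model = TFGPT2LMHeadModel.from_pretrained("gpt2")
# The default export/serving signature is derived from this property; after
# this change it should list input_ids/attention_mask but not token_type_ids.
print(model.input_signature)
```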
Fixes #26783 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26962/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26962/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26962",
"html_url": "https://github.com/huggingface/transformers/pull/26962",
"diff_url": "https://github.com/huggingface/transformers/pull/26962.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26962.patch",
"merged_at": 1698074283000
} |
https://api.github.com/repos/huggingface/transformers/issues/26961 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26961/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26961/comments | https://api.github.com/repos/huggingface/transformers/issues/26961/events | https://github.com/huggingface/transformers/issues/26961 | 1,954,284,539 | I_kwDOCUB6oc50fAP7 | 26,961 | Wrong checkpoint got deleted when use_mtime=True | {
"login": "xkszltl",
"id": 5203025,
"node_id": "MDQ6VXNlcjUyMDMwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/5203025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xkszltl",
"html_url": "https://github.com/xkszltl",
"followers_url": "https://api.github.com/users/xkszltl/followers",
"following_url": "https://api.github.com/users/xkszltl/following{/other_user}",
"gists_url": "https://api.github.com/users/xkszltl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xkszltl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xkszltl/subscriptions",
"organizations_url": "https://api.github.com/users/xkszltl/orgs",
"repos_url": "https://api.github.com/users/xkszltl/repos",
"events_url": "https://api.github.com/users/xkszltl/events{/privacy}",
"received_events_url": "https://api.github.com/users/xkszltl/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
] | [
"cc @pacman100 and @muellerzr ",
"FYI this kind of issue is what we're seeing.\r\nMay not be this specific one because I don't know what we're using underneath, just for the context.\r\n- https://github.com/Azure/azure-storage-fuse/issues/95",
"Any update?",
"I got the same errors.\r\nYou can override the `_sorted_checkpoints` function and set `use_mtime = False`.",
"Ping again on this @ArthurZucker @pacman100 @muellerzr @sgugger ",
"> Fix to this should be straightforward, i.e. set it to False to rely on the numeric part of dir name.\r\nBut we'd like to know why it is True currently.\r\n\r\nHello @xkszltl, having gone through the issue, the fix suggested makes sense. A training argument flag for this should provide the required flexibility. We encourage you to raise a PR if you are interested, Thank you!",
"Sounds good, will send a PR later on.",
"> A training argument flag for this should provide the required flexibility.\r\n\r\nDo we still supposed to keep mtime code path? If so what's the use case?",
"If adding a cmd-line flag, I would suggest to default to numerical code path, which is a breaking change if whoever actually wants mtime, while defaulting to mtime won't make sense because it introduces a hidden and expensive bug.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Still active",
"#28364 might not fix it but related",
"Not stale",
"cc @muellerzr a gentle ping as I am not sure this was fixed",
"Just a simple fix, let me send a PR."
] | 1,697 | 1,707 | 1,707 | CONTRIBUTOR | null | ### System Info
transformers 4.25.1
Ubuntu 22.04 in docker
### Description
We have a job that checkpoints every 200 iterations and keeps at most 3 checkpoints.
However, in an 1100-iteration job, only checkpoints 400/600/800 were left at the end.
Checkpoint 200 being rotated out is expected, but checkpoint 1000 is missing as well.
Looking into the log, we can see that checkpoint-1000 got deleted immediately after saving.
See the last line:
```
91%|█████████ | 1000/1100 [...][INFO|trainer.py:2693] 2023-10-20 12:24:47,419 >> Saving model checkpoint to output/checkpoint-1000
[INFO|configuration_utils.py:447] 2023-10-20 12:24:49,472 >> Configuration saved in output/checkpoint-1000/config.json
[INFO|modeling_utils.py:1637] 2023-10-20 12:24:55,486 >> Model weights saved in output/checkpoint-1000/pytorch_model.bin
[INFO|tokenization_utils_base.py:2157] 2023-10-20 12:24:55,799 >> tokenizer config file saved in output/checkpoint-1000/tokenizer_config.json
[INFO|tokenization_utils_base.py:2164] 2023-10-20 12:24:56,111 >> Special tokens file saved in output/checkpoint-1000/special_tokens_map.json
[INFO|trainer.py:2771] 2023-10-20 12:28:06,058 >> Deleting older checkpoint [output/checkpoint-1000] due to args.save_total_limit
```
And this line may be the root cause:
- https://github.com/huggingface/transformers/blob/31d452c68b34c2567b62924ee0df40a83cbc52d5/src/transformers/trainer.py#L2267
The filesystem underneath the output directory is a FUSE filesystem backed by HTTP blob storage,
so it is most likely not POSIX-compliant.
mtime is probably wrong, or maybe it's simply 0.
Code was first introduced by @sgugger in:
- https://github.com/huggingface/transformers/pull/7431
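As a downstream workaround (suggested in the comments), one can force numeric sorting by subclassing the trainer; a hypothetical sketch, since the private method's signature may change between releases:

```python
from transformers import Trainer

class NumericSortTrainer(Trainer):
    def _sorted_checkpoints(self, *args, use_mtime=False, **kwargs):
        # Always sort by the numeric suffix of "checkpoint-<step>" instead of
        # mtime, which is unreliable on this FUSE/blob-storage mount.
        return super()._sorted_checkpoints(*args, use_mtime=False, **kwargs)
```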
The fix should be straightforward, i.e. set it to False so that sorting relies on the numeric part of the directory name.
But we'd like to know why it is currently True. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26961/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26961/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26960 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26960/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26960/comments | https://api.github.com/repos/huggingface/transformers/issues/26960/events | https://github.com/huggingface/transformers/pull/26960 | 1,954,213,507 | PR_kwDOCUB6oc5dYNzp | 26,960 | Shorten the conversation tests for speed + fixing position overflows | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @Rocketknight1 . Thanks a lot. I think it's probably better to simply skip this test for blenderbot small.\r\n\r\nCould you remind me why the test was never failing before?",
"@ydshieh the test never failed before because the `ConversationalPipeline` class wasn't working very well, and didn't actually generate any text in these tests! ",
"Thanks. Then let's just skip it for this model. You can search `def is_pipeline_test_to_skip` to see how to skip pipeline tests. ",
"@ydshieh are you sure? I think the test makes more sense with very short outputs anyway - we don't need 20 tokens to confirm the pipeline works!",
"That's true, but making it this way also hide the fact of some models fail this test. Having a skip make it explicit. We can definitely use `2` instead of `20` if we do have a gain of running time (not that usually the model used for pipeline testing is very small, and not really obvious to say we will have a gain).",
"BlenderBotSmall can pass the test! The problem only arises because the test `BlenderBotSmallConfig` has `max_position_embeddings=20`. Maybe I keep the test at a higher number, but increase the `BlenderBotSmallConfig` in the test to have more position embeddings, so it passes more easily?",
"Hey! Let's just skip it for now it's blocking a few prs! ",
"I'll merge #27013 so you have time to work on the fix",
"cc @ydshieh @ArthurZucker this should be ready to go now! I've skipped the test for `BlenderBotSmall` as @ydshieh suggested because the problem is in the `hf-internal-testing` checkpoint, and I don't think the model has much usage anymore. The model actually works fine - the test checkpoint just has a very low value for `max_position_embeddings`.\r\n\r\nI also increased `max_new_tokens` back to `5` in the general test and removed the workarounds that @ArthurZucker added. I don't think it needs to go higher - it just makes the test slower, and doesn't tell us anything new!",
"Quick ping again @ydshieh @ArthurZucker! This PR should be ready to go",
"Damm, I made review but forgot to commit the comments. Sorry ",
"@ydshieh Changes made as you suggested - I used `is_pipeline_test_to_skip`. I left the updated sequence lengths alone because even though they don't fix our `tiny-random` models, they do fix some other tests, especially for `BlenderBot`."
] | 1,697 | 1,698 | 1,698 | MEMBER | null | Some of the conversation tests were still overflowing the maximum position embeddings for very short models like `BlenderBotSmall`. I reduced the limits a lot, which also speeds up those tests! cc @ydshieh | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26960/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26960/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26960",
"html_url": "https://github.com/huggingface/transformers/pull/26960",
"diff_url": "https://github.com/huggingface/transformers/pull/26960.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26960.patch",
"merged_at": 1698762004000
} |
https://api.github.com/repos/huggingface/transformers/issues/26959 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26959/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26959/comments | https://api.github.com/repos/huggingface/transformers/issues/26959/events | https://github.com/huggingface/transformers/issues/26959 | 1,954,075,651 | I_kwDOCUB6oc50eNQD | 26,959 | Generation doesn't stop despite provided stop tokens | {
"login": "hdnh2006",
"id": 17271049,
"node_id": "MDQ6VXNlcjE3MjcxMDQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/17271049?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hdnh2006",
"html_url": "https://github.com/hdnh2006",
"followers_url": "https://api.github.com/users/hdnh2006/followers",
"following_url": "https://api.github.com/users/hdnh2006/following{/other_user}",
"gists_url": "https://api.github.com/users/hdnh2006/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hdnh2006/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hdnh2006/subscriptions",
"organizations_url": "https://api.github.com/users/hdnh2006/orgs",
"repos_url": "https://api.github.com/users/hdnh2006/repos",
"events_url": "https://api.github.com/users/hdnh2006/events{/privacy}",
"received_events_url": "https://api.github.com/users/hdnh2006/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey! I'll give you hints as to how to solve this.\r\n- How does the string `### Instruction'` gets encoded by the model? \r\n- Given the results, how can I make sure the models stops? \r\nif you need help I can of course give you the solution 🤗 ",
"Thanks @ArthurZucker for you reply.\r\n\r\nThe string `### Instruction` gets encoded by the model as following:\r\n```\r\ntokenizer.encode('### Instruction:')\r\nOut[2]: [1, 835, 2799, 4080, 29901]\r\n```\r\nI appreciate your hints and I can tell you I have also tried several configurations as following:\r\n```\r\nresult = gen(prompt, max_length = 4096, num_return_sequences=1, eos_token_id=tokenizer.encode('### Instruction:'))\r\nprint(result[0]['generated_text'])\r\n```\r\nor\r\n```\r\nresult = gen(prompt, max_length = 500, num_return_sequences=1, eos_token_id=tokenizer.encode('###'))\r\nprint(result[0]['generated_text'])\r\n```\r\n\r\nAnd undesired results are gotten 😭😭:\r\n```\r\n...\r\n### Assistant:\r\n\r\nI can help you with a variety of tasks, such as: \r\n# stopped here\r\n```\r\nand\r\n```\r\n### Assistant:\r\n\r\nI can help you with a variety of tasks, such as:\r\n\r\n* Answering questions on a wide range of topics\r\n* Providing information on specific subjects\r\n* Helping you with your daily tasks and errands\r\n* Assisting you with your work and projects\r\n* Offering suggestions and ideas for your personal and professional growth\r\n* And much more!\r\n\r\n### Instruction:\r\n\r\nThat sounds great! Can you help me with something specific?\r\n\r\n### Assistant:\r\n\r\nOf course! I'd be happy to help you with something specific. Can you please tell me more about what you need help with?\r\n```\r\n\r\nI am new with this, I have been working with Computer Vision the last 3 years and unfortunately I don't know exactly what you are meaning. Sorry. Can you still help me please?\r\n\r\n",
"Sure! Seems like the `eos_token_id` supports a list but stops on each and every single token from the list. cc @gante not sure I involved it would be to support lists of lists but could be a good way to have these chat specific criterias!\r\n\r\nWhat you need now is a custom stopping criteria like this one: (from #23852):\r\n```python \r\nfrom transformers import StoppingCriteria\r\nclass EosListStoppingCriteria(StoppingCriteria):\r\n def __init__(self, eos_sequence = [835, 2799, 4080, 29901]]):\r\n self.eos_sequence = eos_sequence\r\n\r\n def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:\r\n last_ids = input_ids[:,-len(self.eos_sequence):].tolist()\r\n return self.eos_sequence in last_ids\r\noutput = model.generate(inputs[\"input_ids\"], max_new_tokens=64, stopping_criteria = [EosListStoppingCriteria()])\r\n```",
"Your help has been invaluable, thank you so much for your time and for changing AI forever. \r\n\r\nI will close this ticket for now. If someone in the community has the same problem, I have reached out a better `eos` with some prompt engineering.\r\n\r\nThanks again.",
"Thanks for your kind words 🤗 ",
"@ArthurZucker Yeah, we definitely want to add support for that!"
] | 1,697 | 1,698 | 1,698 | NONE | null | ### System Info
Hello!
It seems other developers have had similar issues: https://github.com/huggingface/transformers/issues/23175
I am trying the Llama-7b-chat model, and the model is ignoring the stop tokens. This is the code I am running, where 'llama-hf' is just my local path to the [`Llama-2-7b-hf`](https://huggingface.co/meta-llama/Llama-2-7b-hf) model.
```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers import pipeline
model_path = "llama-hf"
model = AutoModelForCausalLM.from_pretrained(model_path, load_in_4bit=True, device_map=0, torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained(model_path, eos_token_id =['### Instruction'])
prompt = """
You are an AI assistant created by an important company called XXX
### Instruction:
Who did create you?
### Assistant:
I was created by the brilliant minds at XXX, a leading technology company known for their innovative products and cutting-edge research. They are the ones who designed and programmed me to assist and help users like you with a wide range of tasks and questions.
### Instruction:
And what can you do for me?
### Assistant:
"""
gen = pipeline('text-generation', model=model, tokenizer=tokenizer)
result = gen(prompt, max_length = 500, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id, pad_token_id= tokenizer.eos_token_id)
print(result[0]['generated_text'])
```
And this is what I get:
```
... # previous conversation
### Assistant:
I can help you with a variety of tasks, such as:
* Answering questions on a wide range of topics
* Providing information on specific subjects
* Helping you with your daily tasks and errands
* Assisting you with your work and projects
* Offering suggestions and ideas for your personal and professional growth
* And much more!
### Instruction:
That sounds great! Can you help me with something specific?
### Assistant:
Of course! I'd be happy to help you with something specific. Can you please tell me more about what you need help with?
```
As you can see, I am getting more conversation than I want. Do you know how to stop the conversation in order to get just one response?
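In the meantime, a small post-processing fallback (just a sketch on my side, not a fix for the generation itself) at least trims the extra turns from the pipeline output:
```python
# Hypothetical fallback: keep only the text up to the first new "### Instruction:" marker.
full_text = result[0]["generated_text"]
answer = full_text[len(prompt):].split("### Instruction:")[0].strip()
print(answer)
```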
Guys, I tag you as indicated in the instructions: @ArthurZucker, @younesbelkada, @gante, @Narsil.
Thanks in advance and for changing AI forever!
### Who can help?
@ArthurZucker , @younesbelkada, @gante, @Narsil
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Steps provided on Issue
### Expected behavior
Stop conversation | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26959/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26959/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26958 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26958/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26958/comments | https://api.github.com/repos/huggingface/transformers/issues/26958/events | https://github.com/huggingface/transformers/issues/26958 | 1,954,052,894 | I_kwDOCUB6oc50eHse | 26,958 | deepspeed Integration error | {
"login": "SingL3",
"id": 20473466,
"node_id": "MDQ6VXNlcjIwNDczNDY2",
"avatar_url": "https://avatars.githubusercontent.com/u/20473466?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SingL3",
"html_url": "https://github.com/SingL3",
"followers_url": "https://api.github.com/users/SingL3/followers",
"following_url": "https://api.github.com/users/SingL3/following{/other_user}",
"gists_url": "https://api.github.com/users/SingL3/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SingL3/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SingL3/subscriptions",
"organizations_url": "https://api.github.com/users/SingL3/orgs",
"repos_url": "https://api.github.com/users/SingL3/repos",
"events_url": "https://api.github.com/users/SingL3/events{/privacy}",
"received_events_url": "https://api.github.com/users/SingL3/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"hey. If you want use to help you according to the contribution guidelines could you write what is your issue here, with small reproducible snippet N ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,697 | 1,701 | 1,701 | NONE | null | ### System Info
N/A
See the DeepSpeed bug reports [4194](https://github.com/microsoft/DeepSpeed/issues/4194) and [4533](https://github.com/microsoft/DeepSpeed/issues/4533).
### Who can help?
@pac
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
N/A
### Expected behavior
DeepSpeed training runs successfully.
"url": "https://api.github.com/repos/huggingface/transformers/issues/26958/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26958/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26957 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26957/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26957/comments | https://api.github.com/repos/huggingface/transformers/issues/26957/events | https://github.com/huggingface/transformers/pull/26957 | 1,954,049,251 | PR_kwDOCUB6oc5dXp-5 | 26,957 | [WIP] Make logic of adding the number of embedded tokens simpler | {
"login": "AnastasiyaKukharska",
"id": 70960052,
"node_id": "MDQ6VXNlcjcwOTYwMDUy",
"avatar_url": "https://avatars.githubusercontent.com/u/70960052?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AnastasiyaKukharska",
"html_url": "https://github.com/AnastasiyaKukharska",
"followers_url": "https://api.github.com/users/AnastasiyaKukharska/followers",
"following_url": "https://api.github.com/users/AnastasiyaKukharska/following{/other_user}",
"gists_url": "https://api.github.com/users/AnastasiyaKukharska/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AnastasiyaKukharska/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AnastasiyaKukharska/subscriptions",
"organizations_url": "https://api.github.com/users/AnastasiyaKukharska/orgs",
"repos_url": "https://api.github.com/users/AnastasiyaKukharska/repos",
"events_url": "https://api.github.com/users/AnastasiyaKukharska/events{/privacy}",
"received_events_url": "https://api.github.com/users/AnastasiyaKukharska/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,697 | 1,697 | 1,697 | NONE | null | # What does this PR do?
Here I simplify getting the number of embedding tokens according to discussions on PR [26024](https://github.com/huggingface/transformers/pull/26024)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26957/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26957/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26957",
"html_url": "https://github.com/huggingface/transformers/pull/26957",
"diff_url": "https://github.com/huggingface/transformers/pull/26957.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26957.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26956 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26956/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26956/comments | https://api.github.com/repos/huggingface/transformers/issues/26956/events | https://github.com/huggingface/transformers/pull/26956 | 1,953,704,725 | PR_kwDOCUB6oc5dWfD9 | 26,956 | Translated ```text_generation.md``` to Korean | {
"login": "zayunsna",
"id": 25922010,
"node_id": "MDQ6VXNlcjI1OTIyMDEw",
"avatar_url": "https://avatars.githubusercontent.com/u/25922010?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zayunsna",
"html_url": "https://github.com/zayunsna",
"followers_url": "https://api.github.com/users/zayunsna/followers",
"following_url": "https://api.github.com/users/zayunsna/following{/other_user}",
"gists_url": "https://api.github.com/users/zayunsna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zayunsna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zayunsna/subscriptions",
"organizations_url": "https://api.github.com/users/zayunsna/orgs",
"repos_url": "https://api.github.com/users/zayunsna/repos",
"events_url": "https://api.github.com/users/zayunsna/events{/privacy}",
"received_events_url": "https://api.github.com/users/zayunsna/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello @zayunsna , before I give a thorough review, may you please change \"Fixes #20197\" to \"Part of #20197\"?\n\nIf unchanged, it will close the issue with the merge of this PR. Thank you.\n",
"> Hello @zayunsna , before I give a thorough review, may you please change \"Fixes #20197\" to \"Part of #20197\"?\r\n> \r\n> If unchanged, it will close the issue with the merge of this PR. Thank you.\r\n\r\nIt's corrected!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"> This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\r\n> \r\n> Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.\r\n\r\nStill waiting for the review. I hope to get comments or further process.",
"> Hey! Thanks for opening the PR and continuing the translation 🔥 There was a pretty big change in the text generation documentation which does not match this one anymore. Would you mind updating ? 😓 sorry for this\r\n\r\nNo worries! \r\nI'll have a look at the changed item! \r\nI just quickly glanced at the material but it seems there needs to be a detailed comparison of what was updated.\r\nThank you for letting me know!\r\nLet's keep in touch!",
"Dear @zayunsna \nI will review it together! 🙂\nWecl can discuss about translation on the huggingface discord 한국어 channel.\nhttps://discord.com/channels/879548962464493619/1105153909552586812",
"@ArthurZucker, Hi :) I have a look at a new update of 'transformer'.\r\nSo from now, this PR contains the updated item as same as the main repo.\r\nI hope this is what you want!",
"@zayunsna \nAs we dicuss, forked repo need to be synced before pusing the PR.\nAnd I recommend u to make a branch in your forked repo.\n[Transformer] - [Forked Transformer] :: PR branch\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,697 | 1,704 | 1,704 | NONE | null | # What does this PR do?
Hi all!
PR: translating the ```text_generation.md``` file and creating the parent folder ```main_classes```.
Thank you for reviewing these translation results!
Part of #20179
## Before reviewing
- [x] Check for missing / redundant translations (번역 누락/중복 검사)
- [x] Grammar Check (맞춤법 검사)
- [x] Review or Add new terms to glossary (용어 확인 및 추가)
- [x] Check Inline TOC (e.g. [[lowercased-header]])
- [x] Check live-preview for gotchas (live-preview로 정상작동 확인)
## Who can review?
Following the guide, I would like to request a review from @ArthurZucker, @sgugger, and @eunseojo.
I am also sending the review request to Team PseudoLab just in case: @0525hhgus, @kihoon71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26956/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26956/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26956",
"html_url": "https://github.com/huggingface/transformers/pull/26956",
"diff_url": "https://github.com/huggingface/transformers/pull/26956.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26956.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26955 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26955/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26955/comments | https://api.github.com/repos/huggingface/transformers/issues/26955/events | https://github.com/huggingface/transformers/pull/26955 | 1,953,697,909 | PR_kwDOCUB6oc5dWdlZ | 26,955 | translate `preprocessing.md` to Chinese | {
"login": "jiaqiw09",
"id": 60021713,
"node_id": "MDQ6VXNlcjYwMDIxNzEz",
"avatar_url": "https://avatars.githubusercontent.com/u/60021713?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiaqiw09",
"html_url": "https://github.com/jiaqiw09",
"followers_url": "https://api.github.com/users/jiaqiw09/followers",
"following_url": "https://api.github.com/users/jiaqiw09/following{/other_user}",
"gists_url": "https://api.github.com/users/jiaqiw09/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiaqiw09/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiaqiw09/subscriptions",
"organizations_url": "https://api.github.com/users/jiaqiw09/orgs",
"repos_url": "https://api.github.com/users/jiaqiw09/repos",
"events_url": "https://api.github.com/users/jiaqiw09/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiaqiw09/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@stevhliu\r\n\r\nHere is the translation of preprocessing.md\r\n\r\nBest",
"@stevhliu \r\n\r\nHi, Thansk for your review. I just notice that you have merge the PR(`pipeline_tutorial.md`). So I just solve the merge conflicts problem.\r\n\r\nWould you mind having a check again?\r\n\r\nBest",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26955). All of your documentation changes will be reflected on that endpoint.",
"> Thanks, just one more minor change and we can merge!\r\n\r\nHi, thanks for your review and suggestion. Sorry that I missed some mirror problem.\r\n\r\nBest"
] | 1,697 | 1,698 | 1,698 | CONTRIBUTOR | null | # What does this PR do?
Part of #26803
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? _not necessary_
## Who can review?
@stevhliu | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26955/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26955/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26955",
"html_url": "https://github.com/huggingface/transformers/pull/26955",
"diff_url": "https://github.com/huggingface/transformers/pull/26955.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26955.patch",
"merged_at": 1698082584000
} |
https://api.github.com/repos/huggingface/transformers/issues/26954 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26954/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26954/comments | https://api.github.com/repos/huggingface/transformers/issues/26954/events | https://github.com/huggingface/transformers/pull/26954 | 1,953,610,529 | PR_kwDOCUB6oc5dWK5m | 26,954 | Translate `pipeline_tutorial.md` to chinese | {
"login": "jiaqiw09",
"id": 60021713,
"node_id": "MDQ6VXNlcjYwMDIxNzEz",
"avatar_url": "https://avatars.githubusercontent.com/u/60021713?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiaqiw09",
"html_url": "https://github.com/jiaqiw09",
"followers_url": "https://api.github.com/users/jiaqiw09/followers",
"following_url": "https://api.github.com/users/jiaqiw09/following{/other_user}",
"gists_url": "https://api.github.com/users/jiaqiw09/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiaqiw09/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiaqiw09/subscriptions",
"organizations_url": "https://api.github.com/users/jiaqiw09/orgs",
"repos_url": "https://api.github.com/users/jiaqiw09/repos",
"events_url": "https://api.github.com/users/jiaqiw09/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiaqiw09/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@stevhliu\r\n\r\nI just updated my translation work. Here is a guileline during my translation:\r\n- Keep some terminology in English format. Those workds appears in amounts of class files or other documents. I think it's better to keep it in original format rather give a chinese translation.\r\n",
"@stevhliu Thansk for your review. I just fix these two problems. Would you mind having a check again?\r\nAnd for another PR of preprocessing.md, I will update it after this PR is merged. Both PRs changes `_toctree.yml`, there may be some merge conflict problem.\r\nBest",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26954). All of your documentation changes will be reflected on that endpoint."
] | 1,697 | 1,698 | 1,698 | CONTRIBUTOR | null | # What does this PR do?
Part of #26803
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? _not necessary_
## Who can review?
@stevhliu
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26954/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26954/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26954",
"html_url": "https://github.com/huggingface/transformers/pull/26954",
"diff_url": "https://github.com/huggingface/transformers/pull/26954.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26954.patch",
"merged_at": 1698076680000
} |
https://api.github.com/repos/huggingface/transformers/issues/26953 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26953/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26953/comments | https://api.github.com/repos/huggingface/transformers/issues/26953/events | https://github.com/huggingface/transformers/issues/26953 | 1,953,571,366 | I_kwDOCUB6oc50cSIm | 26,953 | Add flash attention support to Whisper | {
"login": "leng-yue",
"id": 25119060,
"node_id": "MDQ6VXNlcjI1MTE5MDYw",
"avatar_url": "https://avatars.githubusercontent.com/u/25119060?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leng-yue",
"html_url": "https://github.com/leng-yue",
"followers_url": "https://api.github.com/users/leng-yue/followers",
"following_url": "https://api.github.com/users/leng-yue/following{/other_user}",
"gists_url": "https://api.github.com/users/leng-yue/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leng-yue/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leng-yue/subscriptions",
"organizations_url": "https://api.github.com/users/leng-yue/orgs",
"repos_url": "https://api.github.com/users/leng-yue/repos",
"events_url": "https://api.github.com/users/leng-yue/events{/privacy}",
"received_events_url": "https://api.github.com/users/leng-yue/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This feature will be added by #26722, closed."
] | 1,697 | 1,697 | 1,697 | NONE | null | ### Feature request
Add flash attention support to Whisper.
### Motivation
Accelerate both training and inference.
### Your contribution
Submitting a PR. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26953/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26953/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26952 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26952/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26952/comments | https://api.github.com/repos/huggingface/transformers/issues/26952/events | https://github.com/huggingface/transformers/issues/26952 | 1,953,453,442 | I_kwDOCUB6oc50b1WC | 26,952 | [FA-2] inference dtype mismatch bug | {
"login": "wizyoung",
"id": 13296106,
"node_id": "MDQ6VXNlcjEzMjk2MTA2",
"avatar_url": "https://avatars.githubusercontent.com/u/13296106?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wizyoung",
"html_url": "https://github.com/wizyoung",
"followers_url": "https://api.github.com/users/wizyoung/followers",
"following_url": "https://api.github.com/users/wizyoung/following{/other_user}",
"gists_url": "https://api.github.com/users/wizyoung/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wizyoung/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wizyoung/subscriptions",
"organizations_url": "https://api.github.com/users/wizyoung/orgs",
"repos_url": "https://api.github.com/users/wizyoung/repos",
"events_url": "https://api.github.com/users/wizyoung/events{/privacy}",
"received_events_url": "https://api.github.com/users/wizyoung/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello! Can you share a small snippet with a reproducer? Would help use know what is happening and could be useful to improve our tests! ",
"@ArthurZucker My team project is quite complicated and I'm trying to extract a reproducible snippet and post it here later.",
"closed as latest version of hf fixed it.",
"Another problem:\r\nhttps://github.com/huggingface/transformers/blob/66b088faf01a795a7e0ddafafa1838f065f42f86/src/transformers/models/llama/modeling_llama.py#L719-L723\r\n\r\nThe dtype of indices_qhere shoud be torch.int64 as later we will use it by calling \r\n`attn_output = pad_input(attn_output_unpad, indices_q, batch_size, query_length)`\r\n\r\npad_input requires indices_q to be long, byte or bool dtype and it cannot be int32. So we need to change it to:\r\n` indices_q = cu_seqlens_q[:-1].long()`\r\n\r\n\r\n\r\n",
"which version of FA are you using @wizyoung ?",
"> which version of FA are you using @wizyoung ?\n\nI'm using the latest 2.3.3,more specificly,it's https://github.com/Dao-AILab/flash-attention/releases/download/v2.3.3/flash_attn-2.3.3+cu117torch1.13cxx11abiFALSE-cp39-cp39-linux_x86_64.whl",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Do you still face the same issue on latest transformers @wizyoung ?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,697 | 1,703 | 1,703 | NONE | null | ### System Info
- `transformers` version: 4.35.0.dev0
- Platform: Linux-4.14.0_1-0-0-44-x86_64-with-glibc2.27
- Python version: 3.9.17
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.2
- Accelerate version: 0.23.0
- Accelerate config: not found
- PyTorch version (GPU?): 1.13.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@younesbelkada @ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I installed the transformers from the latest main branch (commit bdbcd5d4) due to lots of bug fixes related to flash attn2. However, I encountered several dtype mismatch issues during inference. I've pinpointed the cause of the first issue, but the second one remains elusive.
(1) cat dtype mismatch when use_cache is set True:
https://github.com/huggingface/transformers/blob/08a2edfc6629a323effd7a85feafed9e6701e2dd/src/transformers/models/llama/modeling_llama.py#L454-L457
The past_key_value tensors' dtype can be upcast to fp32 by later ops (and in my case it clearly is), so we need to make sure the dtypes match. The dtype mismatch is 100% reproducible in my Llama inference code, where the LLM weights are frozen and the .generate function is called.
The corrected code should be:
```python
key_states = torch.cat([past_key_value[0].to(key_states.dtype), key_states], dim=2)
value_states = torch.cat([past_key_value[1].to(value_states.dtype), value_states], dim=2)
```
(2) I also met "get error tensors used as indices must be long, byte or bool tensors" in LLM inference with use_cache set as True. This error occasionally occurs and it seems to point to the flash-attn inner func.
Have you tested inference mode when implementing FA-2? By the way, I wonder whether FA2 supports batch inference with padding and the padding mode is "longest", where I tested on xformers but got poor inference speed.
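For reference, here is a minimal sketch of what I mean by batch inference with "longest" padding (hypothetical setup; it assumes a decoder-only model like Llama and left padding for generation):
```python
# Hypothetical sketch: batched generation with padding="longest" on a decoder-only model.
tokenizer.padding_side = "left"
tokenizer.pad_token = tokenizer.eos_token  # Llama has no pad token by default
batch = tokenizer(["Hello", "A much longer prompt goes here"], padding="longest", return_tensors="pt").to(model.device)
out = model.generate(**batch, max_new_tokens=16, use_cache=True)
print(tokenizer.batch_decode(out, skip_special_tokens=True))
```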
### Expected behavior
FA-2 batch inference works normally.
"url": "https://api.github.com/repos/huggingface/transformers/issues/26952/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26952/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26951 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26951/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26951/comments | https://api.github.com/repos/huggingface/transformers/issues/26951/events | https://github.com/huggingface/transformers/issues/26951 | 1,953,281,132 | I_kwDOCUB6oc50bLRs | 26,951 | T5Tokenizer load fails when added special tokens and saved with save_pretrained | {
"login": "minolee",
"id": 31353764,
"node_id": "MDQ6VXNlcjMxMzUzNzY0",
"avatar_url": "https://avatars.githubusercontent.com/u/31353764?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/minolee",
"html_url": "https://github.com/minolee",
"followers_url": "https://api.github.com/users/minolee/followers",
"following_url": "https://api.github.com/users/minolee/following{/other_user}",
"gists_url": "https://api.github.com/users/minolee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/minolee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/minolee/subscriptions",
"organizations_url": "https://api.github.com/users/minolee/orgs",
"repos_url": "https://api.github.com/users/minolee/repos",
"events_url": "https://api.github.com/users/minolee/events{/privacy}",
"received_events_url": "https://api.github.com/users/minolee/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"It’s a T5 specific issue indeed. I’ll have a look but T5 basically needs to have extra ids in the additional special tokens when passed. I’ll see if this was BC or bug fix ",
"Ah actually the fix was already there in `slow` but not `fast` 😉 pushing now a PR "
] | 1,697 | 1,698 | 1,698 | NONE | null | ### System Info
- `transformers` version: 4.34.1
- Platform: Linux-5.4.0-149-generic-x86_64-with-glibc2.31
- Python version: 3.10.11
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.4.0
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
It can be reproduced on python console.
```python
>>> import transformers
>>> tokenizer = transformers.AutoTokenizer.from_pretrained("google/flan-t5-base")
Downloading (…)okenizer_config.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2.54k/2.54k [00:00<00:00, 25.3MB/s]
Downloading spiece.model: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 792k/792k [00:00<00:00, 20.2MB/s]
Downloading (…)/main/tokenizer.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2.42M/2.42M [00:00<00:00, 2.47MB/s]
Downloading (…)cial_tokens_map.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2.20k/2.20k [00:00<00:00, 15.9MB/s]
>>> tokenizer.add_special_tokens({"additional_special_tokens": ["<1>", "<2>"]})
2
>>> tokenizer.save_pretrained("/tmp/tokenizer")
('/tmp/tokenizer/tokenizer_config.json', '/tmp/tokenizer/special_tokens_map.json', '/tmp/tokenizer/tokenizer.json')
>>> new_tokenizer = transformers.AutoTokenizer.from_pretrained("/tmp/tokenizer")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/conda/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 751, in from_pretrained
return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2017, in from_pretrained
return cls._from_pretrained(
File "/opt/conda/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2249, in _from_pretrained
tokenizer = cls(*init_inputs, **init_kwargs)
File "/opt/conda/lib/python3.10/site-packages/transformers/models/t5/tokenization_t5_fast.py", line 127, in __init__
raise ValueError(
ValueError: Both extra_ids (100) and additional_special_tokens (['<1>', '<2>']) are provided to T5Tokenizer. In this case the additional_special_tokens must include the extra_ids tokens
>>> transformers.__version__
'4.34.1'
```
### Expected behavior
It works well on version `4.33.x`
```python
>>> import transformers
>>> tokenizer = transformers.AutoTokenizer.from_pretrained("google/flan-t5-base")
>>> tokenizer.add_special_tokens({"additional_special_tokens": ["<1>", "<2>"]})
2
>>> tokenizer.save_pretrained("/tmp/tokenizer")
('/tmp/tokenizer/tokenizer_config.json', '/tmp/tokenizer/special_tokens_map.json', '/tmp/tokenizer/tokenizer.json')
>>> new_tokenizer = transformers.AutoTokenizer.from_pretrained("/tmp/tokenizer")
>>> transformers.__version__
'4.33.3'
```
I found a related issue (#26536).
Maybe this is a T5-specific issue, because it worked well with different models like `bert-base-cased` or `gpt2`.
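One possible workaround (an untested sketch, based only on the error message above) is to keep T5's sentinel tokens in `additional_special_tokens` when adding new ones before saving:
```python
import transformers

tokenizer = transformers.AutoTokenizer.from_pretrained("google/flan-t5-base")
sentinels = [f"<extra_id_{i}>" for i in range(100)]  # T5's default extra_ids tokens
# Keep the sentinels alongside the new tokens so the reload check passes.
tokenizer.add_special_tokens({"additional_special_tokens": sentinels + ["<1>", "<2>"]})
tokenizer.save_pretrained("/tmp/tokenizer")
new_tokenizer = transformers.AutoTokenizer.from_pretrained("/tmp/tokenizer")
```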
Thank you in advance. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26951/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26951/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26950 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26950/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26950/comments | https://api.github.com/repos/huggingface/transformers/issues/26950/events | https://github.com/huggingface/transformers/issues/26950 | 1,953,201,911 | I_kwDOCUB6oc50a373 | 26,950 | Unhashable dict error | {
"login": "erlebach",
"id": 324708,
"node_id": "MDQ6VXNlcjMyNDcwOA==",
"avatar_url": "https://avatars.githubusercontent.com/u/324708?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/erlebach",
"html_url": "https://github.com/erlebach",
"followers_url": "https://api.github.com/users/erlebach/followers",
"following_url": "https://api.github.com/users/erlebach/following{/other_user}",
"gists_url": "https://api.github.com/users/erlebach/gists{/gist_id}",
"starred_url": "https://api.github.com/users/erlebach/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/erlebach/subscriptions",
"organizations_url": "https://api.github.com/users/erlebach/orgs",
"repos_url": "https://api.github.com/users/erlebach/repos",
"events_url": "https://api.github.com/users/erlebach/events{/privacy}",
"received_events_url": "https://api.github.com/users/erlebach/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @gante I can reproduce this with:\r\n```python \r\n>>> from tranformers import GenerationConfig\r\n>>> generation_config, unused_kwargs = GenerationConfig.from_pretrained(\r\n \"microsoft/phi-1_5\",\r\n top_k=100,\r\n top_p=1.0,\r\n temperature=1.0,\r\n max_new_tokens=2,\r\n num_return_sequences=1,\r\n do_sample=True,\r\n return_unused_kwargs=True,\r\n)\r\n```\r\nNote that this model is not (yet) part of transformers so that could explain why the config is a dict. Trust remote code probably not supported for generation mixin. ",
"Thanks you, @ArthurZucker . The toml file works on a friend's M1 mac (mine is M2). That is what is so confusing to me!\r\nIs there an easy way to find the models actually supported by `transformers`? It is obviously a subset of the models listed on HuggingFace. ",
"Not it's not obviously a sub-set, this is probably a versioning issue. Usually it's more like some models on the hub don't support transformers anymore because they are not maintained. For now I would recommend you to just pass the generation arguments to the generate call or init the generation config instead of using the generationConfig from pretrained, specifically because this model's generation config is pretty much empty",
"My current codebase relies on loading the `generation_config` from `GenerationConfig.from_pretrained`, and currently I resorted to downgrading the library to a previous version (v4.33.2) where hashing was not present.\r\nAny idea on how to resolve this issue?",
"Phi was recently merged, and this seems to have been fixed on main. I'll close this as completed. @zaemyung you are not providing a reproducer, can't really help you but feel free to open a new issue with a reproducer. 🤗 ",
"Hello,\r\nI am also having this issue with:\r\n```\r\nfrom transformers import GenerationConfig\r\ngeneration_kwargs = {\r\n \"top_k\": 0.0,\r\n \"top_p\": 1.0,\r\n \"do_sample\": True,\r\n \"pad_token_id\": \"test\",\r\n \"eos_token_id\": \"test\",\r\n \"return_unused_kwargs\": True\r\n}\r\nGenerationConfig.from_pretrained(\"meta-llama/Llama-2-7b-chat-hf\", **generation_kwargs)\r\n\r\n```\r\nI get this stack trace:\r\n\r\n```\r\nTypeError Traceback (most recent call last)\r\nCell In[19], line 10\r\n 1 from transformers import GenerationConfig\r\n 2 generation_kwargs = {\r\n 3 \"top_k\": 0.0,\r\n 4 \"top_p\": 1.0,\r\n (...)\r\n 8 \"return_unused_kwargs\": True\r\n 9 }\r\n---> 10 GenerationConfig.from_pretrained(\"meta-llama/Llama-2-7b-chat-hf\", **generation_kwargs)\r\n\r\nFile ~/.virtualenvs/llm/lib/python3.10/site-packages/transformers/generation/configuration_utils.py:754, in GenerationConfig.from_pretrained(cls, pretrained_model_name, config_file_name, cache_dir, force_download, local_files_only, token, revision, **kwargs)\r\n 751 logger.info(f\"loading configuration file {configuration_file} from cache at {resolved_config_file}\")\r\n 753 config = cls.from_dict(config_dict, **kwargs)\r\n--> 754 config._original_object_hash = hash(config) # Hash to detect whether the instance was modified\r\n 755 return config\r\n\r\nTypeError: unhashable type: 'dict'\r\n```\r\n\r\nI am using the most recent transformers version 4.35.2.\r\nCheers!",
"Was fixed on main 😉 "
] | 1,697 | 1,700 | 1,700 | NONE | null | ### System Info
I am running on Mac M2 with Ventura, using a Poetry environment. Here is the Pyproject.toml file:
```
[tool.poetry]
name = "UROP Project"
version = "0.0.1"
description = "LangChain mono-repo"
authors = []
license = "MIT"
readme = "README.md"
repository = "https://github.com/fsu-sc/UROP_2023-2024.git"
# pickle
# transformers
# collections
# matplotlib
[tool.poetry.dependencies]
python = ">=3.9,<4.0"
python-dotenv = "^1.0.0"
google-search-results = "^2.4.2"
matplotlib = "^3.8.0"
torch = "^2.0.1"
langchain = "^0.0.300"
transformers = "^4.34.1"
jupyterlab = "^4.0.7"
einops = "^0.7.0"
```
Here is my code (I am using Jupyter-lab):
```
from langchain.prompts import ChatPromptTemplate
import torch
import pickle
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
from collections import defaultdict
model_name = "microsoft/phi-1_5"
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
device = "mps" if torch.backends.mps.is_available() else "cpu"
# device = "mps"
device = "cpu"
print(device)
model.to(device)
```
which is fine. The error occurs in the next cell:
```
generation_config, unused_kwargs = GenerationConfig.from_pretrained(
model_name,
top_k=100,
top_p=1.0,
temperature=1.0,
max_new_tokens=2,
num_return_sequences=1,
do_sample=True,
return_unused_kwargs=True,
)
```
Here is the stack trace:
```
Traceback (most recent call last):
File "/Users/erlebach/src/2023/UROP_2023-2024/.venv/lib/python3.10/site-packages/IPython/core/interactiveshell.py", line 3526, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "/var/folders/g_/_bggj09s1z183t3_j5jy1r9c0000gn/T/ipykernel_76724/2282397824.py", line 1, in <module>
generation_config, unused_kwargs = GenerationConfig.from_pretrained(
File "/Users/erlebach/src/2023/UROP_2023-2024/.venv/lib/python3.10/site-packages/transformers/generation/configuration_utils.py", line 735, in from_pretrained
TypeError: unhashable type: 'dict'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/erlebach/src/2023/UROP_2023-2024/.venv/lib/python3.10/site-packages/IPython/core/interactiveshell.py", line 2120, in showtraceback
stb = self.InteractiveTB.structured_traceback(
File "/Users/erlebach/src/2023/UROP_2023-2024/.venv/lib/python3.10/site-packages/IPython/core/ultratb.py", line 1435, in structured_traceback
return FormattedTB.structured_traceback(
File "/Users/erlebach/src/2023/UROP_2023-2024/.venv/lib/python3.10/site-packages/IPython/core/ultratb.py", line 1326, in structured_traceback
return VerboseTB.structured_traceback(
File "/Users/erlebach/src/2023/UROP_2023-2024/.venv/lib/python3.10/site-packages/IPython/core/ultratb.py", line 1173, in structured_traceback
formatted_exception = self.format_exception_as_a_whole(etype, evalue, etb, number_of_lines_of_context,
File "/Users/erlebach/src/2023/UROP_2023-2024/.venv/lib/python3.10/site-packages/IPython/core/ultratb.py", line 1088, in format_exception_as_a_whole
frames.append(self.format_record(record))
File "/Users/erlebach/src/2023/UROP_2023-2024/.venv/lib/python3.10/site-packages/IPython/core/ultratb.py", line 970, in format_record
frame_info.lines, Colors, self.has_colors, lvals
File "/Users/erlebach/src/2023/UROP_2023-2024/.venv/lib/python3.10/site-packages/IPython/core/ultratb.py", line 792, in lines
return self._sd.lines
File "/Users/erlebach/src/2023/UROP_2023-2024/.venv/lib/python3.10/site-packages/stack_data/utils.py", line 145, in cached_property_wrapper
value = obj.__dict__[self.func.__name__] = self.func(obj)
File "/Users/erlebach/src/2023/UROP_2023-2024/.venv/lib/python3.10/site-packages/stack_data/core.py", line 734, in lines
pieces = self.included_pieces
File "/Users/erlebach/src/2023/UROP_2023-2024/.venv/lib/python3.10/site-packages/stack_data/utils.py", line 145, in cached_property_wrapper
value = obj.__dict__[self.func.__name__] = self.func(obj)
File "/Users/erlebach/src/2023/UROP_2023-2024/.venv/lib/python3.10/site-packages/stack_data/core.py", line 681, in included_pieces
pos = scope_pieces.index(self.executing_piece)
File "/Users/erlebach/src/2023/UROP_2023-2024/.venv/lib/python3.10/site-packages/stack_data/utils.py", line 145, in cached_property_wrapper
value = obj.__dict__[self.func.__name__] = self.func(obj)
File "/Users/erlebach/src/2023/UROP_2023-2024/.venv/lib/python3.10/site-packages/stack_data/core.py", line 660, in executing_piece
return only(
File "/Users/erlebach/src/2023/UROP_2023-2024/.venv/lib/python3.10/site-packages/executing/executing.py", line 116, in only
raise NotOneValueFound('Expected one value, found 0')
executing.executing.NotOneValueFound: Expected one value, found: " 0
```
Looking at the file where the error occurs: `transformers/generation/configuration_utils.py`, I find the lines:
```
727 if is_local:
728 logger.info(f"loading configuration file {resolved_config_file}")
729 else:
730 logger.info(f"loading configuration file {configuration_file} from cache at {resolved_config_file}")
731
732 config = cls.from_dict(config_dict, **kwargs)
733 config._original_object_hash = hash(config) # Hash to detect whether the instance was modified
734 return config
```
The error occurs on line 733. I printed the type of `config`; it is a non-hashable type. More specifically:
```
type(config_dict): <class 'dict'>
type(config): <class 'tuple'>
type(config): <class 'tuple'>
config: (GenerationConfig {
"do_sample": true,
"max_new_tokens": 2,
"top_k": 100
}
, {})
```
Clearly, `config` is unhashable: with `return_unused_kwargs=True`, `from_dict` returns a tuple of `(GenerationConfig, unused_kwargs)`, and the dict inside makes the tuple unhashable.
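For what it's worth, constructing the config directly sidesteps the hashing path (a sketch of a possible workaround, not a fix):
```python
from transformers import GenerationConfig

# Build the generation config in memory instead of loading it with return_unused_kwargs=True.
generation_config = GenerationConfig(
    top_k=100,
    top_p=1.0,
    temperature=1.0,
    max_new_tokens=2,
    num_return_sequences=1,
    do_sample=True,
)
```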
Any insight would be greatly appreciated. Thanks.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
See above.
### Expected behavior
See above. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26950/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26950/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26949 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26949/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26949/comments | https://api.github.com/repos/huggingface/transformers/issues/26949/events | https://github.com/huggingface/transformers/pull/26949 | 1,953,171,451 | PR_kwDOCUB6oc5dUrLu | 26,949 | Add fuyu device map | {
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,697 | 1,698 | 1,698 | MEMBER | null | # What does this PR do ?
This PR adds the possibility to use the Fuyu model with `device_map="auto"`.
I ran some tests on my side and it works with multigpu and cpu offload. I am not writing any accelerate tests as they will be contained in a future `FuyuModelTest` class which will inherit from `ModelTesterMixin`.
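For illustration, a usage sketch (class and checkpoint names assumed, not part of this PR's tests):
```python
from transformers import FuyuForCausalLM, FuyuProcessor

# Shard the checkpoint across available GPUs (and offload to CPU if needed).
model = FuyuForCausalLM.from_pretrained("adept/fuyu-8b", device_map="auto")
processor = FuyuProcessor.from_pretrained("adept/fuyu-8b")
```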
cc @younesbelkada | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26949/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26949/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26949",
"html_url": "https://github.com/huggingface/transformers/pull/26949",
"diff_url": "https://github.com/huggingface/transformers/pull/26949.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26949.patch",
"merged_at": 1698153024000
} |
https://api.github.com/repos/huggingface/transformers/issues/26948 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26948/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26948/comments | https://api.github.com/repos/huggingface/transformers/issues/26948/events | https://github.com/huggingface/transformers/issues/26948 | 1,953,123,631 | I_kwDOCUB6oc50ak0v | 26,948 | Making the unsqueeze dimension parameterized in the apply_rotary_pos_emb function in modeling_llama.py | {
"login": "ShashankMosaicML",
"id": 144760128,
"node_id": "U_kgDOCKDdQA",
"avatar_url": "https://avatars.githubusercontent.com/u/144760128?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ShashankMosaicML",
"html_url": "https://github.com/ShashankMosaicML",
"followers_url": "https://api.github.com/users/ShashankMosaicML/followers",
"following_url": "https://api.github.com/users/ShashankMosaicML/following{/other_user}",
"gists_url": "https://api.github.com/users/ShashankMosaicML/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ShashankMosaicML/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ShashankMosaicML/subscriptions",
"organizations_url": "https://api.github.com/users/ShashankMosaicML/orgs",
"repos_url": "https://api.github.com/users/ShashankMosaicML/repos",
"events_url": "https://api.github.com/users/ShashankMosaicML/events{/privacy}",
"received_events_url": "https://api.github.com/users/ShashankMosaicML/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"MMM changing the transpose would change the past key values which is not backward compatible. The unsqueeze dimension could be a nice to have but would rather have it in the init of the rope embedding than in the forward as this is another not backward compatible change 😓 \r\nwe refrain from adding custom usage code as `transformers` is not meant to be a tool box with 10K config arguments, but if removing the 2 transpose gives a very good performance gain we could consider this IMO. \r\n\r\ncc @gante for ROPE 🤗 ",
"@ShashankMosaicML I do not oppose adding `unsqueeze_dim=1` as an argument if it makes life easier for others 🤗 \r\n\r\nAs for backward compatibility -- this is a good question. As of now, as @ArthurZucker said, we don't want to break it. However, we are currently working on a new cache structure (with a flag for backwards compatibility), so making the new cache structure optimal for FA2 might be a good idea 🤔 ",
"If we can remove 2 transpose (it's not a bottlneck but still) would be nice. Let's keep that in mind when refactoring the cache cc @tomaarsen as well. ",
"@ShashankMosaicML feel free to open a PR and tag us 🤗 ",
"Great, will do this soon!",
"Here is the [link to the pull request](https://github.com/huggingface/transformers/pull/27117)."
] | 1,697 | 1,698 | 1,698 | CONTRIBUTOR | null | ### Feature request
Hi Huggingface Team,
We would like to request to make the 'unsqueeze(1)' in [these two lines ](https://github.com/huggingface/transformers/blob/08a2edfc6629a323effd7a85feafed9e6701e2dd/src/transformers/models/llama/modeling_llama.py#L209C1-L210C41) parameterized. To be precise, we would like to request that the following lines
```
def apply_rotary_pos_emb(q, k, cos, sin, position_ids):
cos = cos[position_ids].unsqueeze(1)
sin = sin[position_ids].unsqueeze(1)
...
```
be converted to something like the following
```
def apply_rotary_pos_emb(q, k, cos, sin, position_ids, unsqueeze_dim=1):
cos = cos[position_ids].unsqueeze(unsqueeze_dim)
sin = sin[position_ids].unsqueeze(unsqueeze_dim)
...
```
### Motivation
We are trying to import and use the [apply_rotary_pos_emb function](https://github.com/huggingface/transformers/blob/08a2edfc6629a323effd7a85feafed9e6701e2dd/src/transformers/models/llama/modeling_llama.py#L208) and the [LlamaRotaryEmbedding class](https://github.com/huggingface/transformers/blob/08a2edfc6629a323effd7a85feafed9e6701e2dd/src/transformers/models/llama/modeling_llama.py#L119). However, our query and key tensors have the shape `[batch_size, sequence_len, heads, dim]`. To make them compatible with the `apply_rotary_pos_emb` function, we have to transpose the tensors to `[batch_size, heads, sequence_len, dim]`, call `apply_rotary_pos_emb` on them and then transpose the tensors back to `[batch_size, sequence_len, heads, dim]`. These unnecessary transposes could be avoided if the `apply_rotary_pos_emb` function had a parameter which controlled the dimension on which the `unsqueeze`'s [here](https://github.com/huggingface/transformers/blob/08a2edfc6629a323effd7a85feafed9e6701e2dd/src/transformers/models/llama/modeling_llama.py#L209C1-L210C41) were applied.
_Please note that the Llama Huggingface code also [does similar back and forth transposes](https://github.com/huggingface/transformers/blob/08a2edfc6629a323effd7a85feafed9e6701e2dd/src/transformers/models/llama/modeling_llama.py#L442C1-L463C52), and hence could benefit from this **very small** code change as well._
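For illustration, a hypothetical call with the proposed argument, assuming query/key tensors of shape `[batch_size, sequence_len, heads, dim]`, would look like:
```python
# cos[position_ids] has shape [batch_size, sequence_len, dim]; unsqueezing at dim 2
# lets it broadcast over the heads dimension without any transposes.
q_embed, k_embed = apply_rotary_pos_emb(q, k, cos, sin, position_ids, unsqueeze_dim=2)
```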
### Your contribution
We are willing to submit a PR for this. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26948/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/26948/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26947 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26947/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26947/comments | https://api.github.com/repos/huggingface/transformers/issues/26947/events | https://github.com/huggingface/transformers/pull/26947 | 1,953,083,548 | PR_kwDOCUB6oc5dUXr_ | 26,947 | Add Readme file in Ukrainian language | {
"login": "AnastasiyaKukharska",
"id": 70960052,
"node_id": "MDQ6VXNlcjcwOTYwMDUy",
"avatar_url": "https://avatars.githubusercontent.com/u/70960052?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AnastasiyaKukharska",
"html_url": "https://github.com/AnastasiyaKukharska",
"followers_url": "https://api.github.com/users/AnastasiyaKukharska/followers",
"following_url": "https://api.github.com/users/AnastasiyaKukharska/following{/other_user}",
"gists_url": "https://api.github.com/users/AnastasiyaKukharska/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AnastasiyaKukharska/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AnastasiyaKukharska/subscriptions",
"organizations_url": "https://api.github.com/users/AnastasiyaKukharska/orgs",
"repos_url": "https://api.github.com/users/AnastasiyaKukharska/repos",
"events_url": "https://api.github.com/users/AnastasiyaKukharska/events{/privacy}",
"received_events_url": "https://api.github.com/users/AnastasiyaKukharska/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26947). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,697 | 1,701 | 1,701 | NONE | null | # What does this PR do?
Adding docs in Ukrainian language
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@stevhliu
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26947/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26947/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26947",
"html_url": "https://github.com/huggingface/transformers/pull/26947",
"diff_url": "https://github.com/huggingface/transformers/pull/26947.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26947.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26946 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26946/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26946/comments | https://api.github.com/repos/huggingface/transformers/issues/26946/events | https://github.com/huggingface/transformers/pull/26946 | 1,952,916,714 | PR_kwDOCUB6oc5dTzEK | 26,946 | Simplifying getting the number of embedding tokens | {
"login": "AnastasiyaKukharska",
"id": 70960052,
"node_id": "MDQ6VXNlcjcwOTYwMDUy",
"avatar_url": "https://avatars.githubusercontent.com/u/70960052?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AnastasiyaKukharska",
"html_url": "https://github.com/AnastasiyaKukharska",
"followers_url": "https://api.github.com/users/AnastasiyaKukharska/followers",
"following_url": "https://api.github.com/users/AnastasiyaKukharska/following{/other_user}",
"gists_url": "https://api.github.com/users/AnastasiyaKukharska/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AnastasiyaKukharska/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AnastasiyaKukharska/subscriptions",
"organizations_url": "https://api.github.com/users/AnastasiyaKukharska/orgs",
"repos_url": "https://api.github.com/users/AnastasiyaKukharska/repos",
"events_url": "https://api.github.com/users/AnastasiyaKukharska/events{/privacy}",
"received_events_url": "https://api.github.com/users/AnastasiyaKukharska/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, this is a duplicate of #26333 and will thus not be merged"
] | 1,697 | 1,697 | 1,697 | NONE | null | # What does this PR do?
Here I simplify getting the number of embedding tokens according to discussions on PR [26024](https://github.com/huggingface/transformers/pull/26024)
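The discussion in PR 26024 is not restated here, but as a purely illustrative sketch (the checkpoint name is arbitrary and this is not necessarily the exact change made in this PR), the embedding-token count can be read straight from the input embedding layer:
```python
from transformers import AutoModel

model = AutoModel.from_pretrained("hf-internal-testing/tiny-random-bert")
# nn.Embedding already exposes the vocabulary size, so no shape indexing is needed.
num_embedding_tokens = model.get_input_embeddings().num_embeddings
print(num_embedding_tokens)
```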
# Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26946/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26946/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26946",
"html_url": "https://github.com/huggingface/transformers/pull/26946",
"diff_url": "https://github.com/huggingface/transformers/pull/26946.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26946.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26945 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26945/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26945/comments | https://api.github.com/repos/huggingface/transformers/issues/26945/events | https://github.com/huggingface/transformers/pull/26945 | 1,952,871,838 | PR_kwDOCUB6oc5dTpHc | 26,945 | Simplifying getting the number of embedding tokens | {
"login": "AnastasiyaKukharska",
"id": 70960052,
"node_id": "MDQ6VXNlcjcwOTYwMDUy",
"avatar_url": "https://avatars.githubusercontent.com/u/70960052?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AnastasiyaKukharska",
"html_url": "https://github.com/AnastasiyaKukharska",
"followers_url": "https://api.github.com/users/AnastasiyaKukharska/followers",
"following_url": "https://api.github.com/users/AnastasiyaKukharska/following{/other_user}",
"gists_url": "https://api.github.com/users/AnastasiyaKukharska/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AnastasiyaKukharska/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AnastasiyaKukharska/subscriptions",
"organizations_url": "https://api.github.com/users/AnastasiyaKukharska/orgs",
"repos_url": "https://api.github.com/users/AnastasiyaKukharska/repos",
"events_url": "https://api.github.com/users/AnastasiyaKukharska/events{/privacy}",
"received_events_url": "https://api.github.com/users/AnastasiyaKukharska/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,697 | 1,697 | 1,697 | NONE | null | Here I simplify getting the number of embedding tokens according to discussions on PR [26024](https://github.com/huggingface/transformers/pull/26024)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26945/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26945/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26945",
"html_url": "https://github.com/huggingface/transformers/pull/26945",
"diff_url": "https://github.com/huggingface/transformers/pull/26945.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26945.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26944 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26944/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26944/comments | https://api.github.com/repos/huggingface/transformers/issues/26944/events | https://github.com/huggingface/transformers/pull/26944 | 1,952,780,701 | PR_kwDOCUB6oc5dTVI0 | 26,944 | fix a typo in README_ru.md | {
"login": "megavaz",
"id": 32195253,
"node_id": "MDQ6VXNlcjMyMTk1MjUz",
"avatar_url": "https://avatars.githubusercontent.com/u/32195253?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/megavaz",
"html_url": "https://github.com/megavaz",
"followers_url": "https://api.github.com/users/megavaz/followers",
"following_url": "https://api.github.com/users/megavaz/following{/other_user}",
"gists_url": "https://api.github.com/users/megavaz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/megavaz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/megavaz/subscriptions",
"organizations_url": "https://api.github.com/users/megavaz/orgs",
"repos_url": "https://api.github.com/users/megavaz/repos",
"events_url": "https://api.github.com/users/megavaz/events{/privacy}",
"received_events_url": "https://api.github.com/users/megavaz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,697 | 1,701 | 1,701 | NONE | null | # What does this PR do?
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
change to correct Russian spelling | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26944/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26944/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26944",
"html_url": "https://github.com/huggingface/transformers/pull/26944",
"diff_url": "https://github.com/huggingface/transformers/pull/26944.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26944.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26943 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26943/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26943/comments | https://api.github.com/repos/huggingface/transformers/issues/26943/events | https://github.com/huggingface/transformers/pull/26943 | 1,952,725,946 | PR_kwDOCUB6oc5dTJRE | 26,943 | Flax mistral | {
"login": "kiansierra",
"id": 47116198,
"node_id": "MDQ6VXNlcjQ3MTE2MTk4",
"avatar_url": "https://avatars.githubusercontent.com/u/47116198?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kiansierra",
"html_url": "https://github.com/kiansierra",
"followers_url": "https://api.github.com/users/kiansierra/followers",
"following_url": "https://api.github.com/users/kiansierra/following{/other_user}",
"gists_url": "https://api.github.com/users/kiansierra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kiansierra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kiansierra/subscriptions",
"organizations_url": "https://api.github.com/users/kiansierra/orgs",
"repos_url": "https://api.github.com/users/kiansierra/repos",
"events_url": "https://api.github.com/users/kiansierra/events{/privacy}",
"received_events_url": "https://api.github.com/users/kiansierra/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @sanchit-gandhi , @ArthurZucker \r\nThis is the first draft of the PR, no need to review yet, as I still got to implement the slow tests and documentation changes.\r\nI did want you're thoughts on the change on src/transformers/modeling_flax_pytorch_utils.py, as when loading from_pt weights they were in bfloat16, and I'm a bit weary of changing core functionality",
"Hi @sanchit-gandhi , @ArthurZucker I think this is ready to be reviewed now.\r\nSome of the implementation details were done with the intention to pass some of the tests, so I think I'll highlight them.\r\n* In `MISTRAL_START_DOCSTRING`.\r\n```dtype (`jax.numpy.dtype`, *optional*, defaults to `float32`)```, instead of ```dtype (`jax.numpy.dtype`, *optional*, defaults to `jax.numpy.float32`)```\r\nThere seemed to be a change that happened in `utils/check_docstrings.py`\r\n* In `FlaxMistralPreTrainedModel` set `_seed` and `_input_shape` instead of `seed` and `input_shape` for the same `utils/check_docstrings.py` as these appeared in one of the docstrings to compared to\r\n* Converted checkpoint to Flax and stored in `_CHECKPOINT_FOR_DOC = \"ksmcg/Mistral-7B-v0.1\"`, this allows to pass the [testing docstring](https://github.com/huggingface/transformers/blob/0baa9246cb1ddac355db1df7824a521426599eb7/src/transformers/utils/doc.py#L1022), since it doesn't allow for the `from_pt=True` flag \r\n* As far as I can tell these are many of the issues that are being faced in Llama PR https://github.com/huggingface/transformers/pull/24587",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hi @ArthurZucker I was wondering if you could please review this PR\r\nI've seen in https://github.com/huggingface/transformers/pull/24587 that you've been on holidays, hope you had a good time.\r\nI believe many of the issues are the same that occur in that PR, so you would like already know why they're failing.\r\nSince the doc test doesn't allow the load from_pt flag, I created a checkpoint in https://huggingface.co/ksmcg/Mistral-7B-v0.1/tree/main to load it from, but this fails since it takes a long time to execute",
"Sure I'll have a look 😉 The checkpoints need to be merged to add the doc, I'll open a PR to support a revision I think! ",
"#27645 will allow you to use the revision in the docstring 😉 I'll finish this asap",
"Thanks for the wait! Should be usable 😉 ",
"Yes the revision fix is working, now the issue is it times out",
"Hi @ArthurZucker I've passed all the tests.\r\nIf you could please review and let me know your feedback.\r\nIt is quite similar, and I think we should wait for https://github.com/huggingface/transformers/pull/24587 to be merged so I can add the fix copies from that PR",
"Hi @ArthurZucker, I've updated the PR and added the different # Copied from transformers.models.llama.modeling_flax_llama now that it has been merged",
"This looks awesome! Any chance it can be merged? Really looking forward to it.",
"Oups sorry @kiansierra let's just rebase now and I'll merge! ",
"Hey, no worries @ArthurZucker, I've merged main back in",
"Is this solved? Or is it deprecated in favour of a v0.2 one?\r\n\r\nOverall, does huggingface's mistral 7B have a flax implementation?",
"Hi @sanchit-gandhi ,\r\nFirst of all thank you for your review and feedback\r\nI believe all of the comments you've raised have been resolved.\r\nCurrently Github doesn't let me comment on the `convert_pytorch_sharded_state_dict_to_flax` I've added reconversion back to bf16 as was implemented in `convert_pytorch_state_dict_to_flax`\r\nA lot more `#Copied from` were added.\r\nI think the change I feel most confused about was the needing to freeze the params to have the masking work, that I expand on [here](https://github.com/huggingface/transformers/pull/26943#discussion_r1427962523) \r\nAlso I made a 1 character change to remove a deprecation warning from pytest in `tests/test_modeling_flax_common.py`\r\nLet me know you're thoughts\r\n",
"Hi @sanchit-gandhi, Could you please let me know if there is anything more to add to the PR?\r\nHappy holidays, and thanks to the HF team for the great work you're all doing",
"Hi @sanchit-gandhi , I implemented the above changes.\r\nThe ModelTester can't be directly copied because of the different configurations.\r\nI believe the separate PR you're refering to is this one https://github.com/huggingface/transformers/pull/28367 right?\r\n",
"Hi @sanchit-gandhi, @ArthurZucker just wanted to make sure if there is any additional task on my side to complete this PR",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26943). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Hi @ArthurZucker, @sanchit-gandhi \r\nI've implemented the latest changes and merged main branch in.",
"Awesome! Requesting final review from @ArthurZucker - once we get his approval, we can merge this one @kiansierra ",
"Hi @ArthurZucker I run the tests with `RUN_SLOW=1 RUN_PT_FLAX_CROSS_TESTS=1 pytest tests/models/mistral`\r\n\r\n```\r\n================================================================== short test summary info ===================================================================\r\nFAILED tests/models/mistral/test_modeling_mistral.py::MistralModelTest::test_eager_matches_sdpa_generate - RuntimeError: \"addmm_impl_cpu_\" not implemented for 'Half'\r\nFAILED tests/models/mistral/test_modeling_mistral.py::MistralIntegrationTest::test_speculative_generation - RuntimeError: \"addmm_impl_cpu_\" not implemented for 'Half'\r\n============================================ 2 failed, 139 passed, 45 skipped, 92 warnings in 1813.70s (0:30:13) =============================================\r\n```\r\nI believe these fails are due to running the tests without GPU.\r\nFor some reason the `tests_pr_documentation_tests ` is failing due to \r\n```\r\nERROR src/transformers/models/mistral/modeling_flax_mistral.py - AttributeError: 'HfDoctestModule' object has no attribute 'getini'\r\nERROR dummy.py - AttributeError: 'HfDoctestModule' object has no attribute 'getini'\r\n```\r\nDo you know why this might be?",
"Yep no worries for the PR documentation test. We can merge without it / rebasing should also help! Left a last nit about `key = jnp.repeat(key, repeats=self.num_key_value_groups, axis=2)` and let's merge 🚀 ",
"Hi @ArthurZucker that's it done, changed to repeat and merged main in",
"Thanks for your hard works and congrats on the merge 🤗 ",
"Thank you both for your time reviewing this, it's been a great learning experience"
] | 1,697 | 1,706 | 1,706 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #26809
Implements Mistral models in Flax, reusing many components from the Llama PR https://github.com/huggingface/transformers/pull/24587
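As a rough usage sketch (the checkpoint id and generation arguments are illustrative assumptions; the class name follows the Flax naming convention this PR adds, and `from_pt=True` triggers the on-the-fly weight conversion discussed in the thread):
```python
from transformers import AutoTokenizer, FlaxMistralForCausalLM

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
# Convert the PyTorch checkpoint to Flax parameters while loading.
model = FlaxMistralForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1", from_pt=True)

inputs = tokenizer("My favourite condiment is", return_tensors="np")
generated = model.generate(inputs.input_ids, max_new_tokens=20)
print(tokenizer.batch_decode(generated.sequences, skip_special_tokens=True))
```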
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sanchit-gandhi , @ArthurZucker
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26943/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26943/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26943",
"html_url": "https://github.com/huggingface/transformers/pull/26943",
"diff_url": "https://github.com/huggingface/transformers/pull/26943.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26943.patch",
"merged_at": 1706707142000
} |
https://api.github.com/repos/huggingface/transformers/issues/26942 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26942/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26942/comments | https://api.github.com/repos/huggingface/transformers/issues/26942/events | https://github.com/huggingface/transformers/pull/26942 | 1,952,662,465 | PR_kwDOCUB6oc5dS7Sz | 26,942 | Change default `max_shard_size` to smaller value | {
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26942). All of your documentation changes will be reflected on that endpoint."
] | 1,697 | 1,698 | 1,698 | CONTRIBUTOR | null | # What does this PR do?
As per title, we can also change that value dynamically with respect to model size - with this change large models will end up having many shards.
cc @ArthurZucker | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26942/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26942/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26942",
"html_url": "https://github.com/huggingface/transformers/pull/26942",
"diff_url": "https://github.com/huggingface/transformers/pull/26942.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26942.patch",
"merged_at": 1698063949000
} |
https://api.github.com/repos/huggingface/transformers/issues/26941 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26941/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26941/comments | https://api.github.com/repos/huggingface/transformers/issues/26941/events | https://github.com/huggingface/transformers/pull/26941 | 1,952,594,176 | PR_kwDOCUB6oc5dSshj | 26,941 | Fix a couple of typos and add an illustrative test | {
"login": "rjenc29",
"id": 13890854,
"node_id": "MDQ6VXNlcjEzODkwODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/13890854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rjenc29",
"html_url": "https://github.com/rjenc29",
"followers_url": "https://api.github.com/users/rjenc29/followers",
"following_url": "https://api.github.com/users/rjenc29/following{/other_user}",
"gists_url": "https://api.github.com/users/rjenc29/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rjenc29/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rjenc29/subscriptions",
"organizations_url": "https://api.github.com/users/rjenc29/orgs",
"repos_url": "https://api.github.com/users/rjenc29/repos",
"events_url": "https://api.github.com/users/rjenc29/events{/privacy}",
"received_events_url": "https://api.github.com/users/rjenc29/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"FYI @rafaelpadilla if you think that's relevant. Changing the class names might be not backward compatible. \r\nFeel free to ping us once this is ready ",
"Thanks for the feedback - this PR is probably ready for review. \r\n\r\nIf there's a downside risk re: backwards compatibility, it probably isn't worth moving forwards with this PR - I have no issue closing it if so. \r\n\r\nI just thought I would raise it in passing as I was recently wiring up a DETR model and came across a few typos whilst trying figure out why my custom annotations failed to process (as anticipated: user error on my part :) ).",
"@rjenc29 Thanks for your work on improving this! Agreed with @rafaelpadilla, we should try to fix typos, even for class names, wherever we can (this was my bad!). \r\n\r\nRegarding moving the class to somewhere more central - I think that's a good idea. Originally it was within the model specific image processing files as it was just for DETR. `image_utils.py` seems the natural place, but open to suggestions. \r\n\r\nGenerally, for both cases (spelling adjustment and moving) we should to be careful about backwards compatibility. Even though this isn't a public class, we might need at least a small deprecation cycle c.f. [this issue and comment](https://github.com/huggingface/transformers/issues/25948#issuecomment-1758537251). Searching [on github](https://github.com/search?q=AnnotionFormat&type=code&p=5) it seems other repos define the enum outright, so it should be OK without. \r\n\r\nMy main request is that you confirm saved image processors with the old annotation format can still be correctly loaded i.e. the annotation format matches the preprocessor config (this should be checked with non-default values). \r\n",
"Hi @amyeroberts, thanks vm for the feedback.\r\n\r\nIn the last deluge of commits (mostly, arguments with `black` and `ruff` unfortunately!), I have attempted to reduce code duplication a little.\r\n\r\nIn terms of backwards compatibility, I have imported `AnnotionFormat` back into the scripts where it was formerly defined and have attempted to add a `FutureWarning` to advertise that it's due to be deprecated - albeit it will get called at import time, and might therefore be annoying / noisy (let me know if so). My ambition was to do something less annoying / noisy, but Enums defeated me once more.\r\n\r\nI've added a roundtrip test to create / save / load processors with various `format`s specified, and choose to do so in a way that would make it easy to remove further down the line when `AnnotionFormat` bites the dust. As far as I can see, it should be possible to use `AnnotionFormat` or the newfangled `AnnotationFormat` interchangeably. \r\n\r\nIf this isn't really what you meant, let me know and I will refine further.\r\n\r\n\r\n\r\n\r\n\r\n",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26941). All of your documentation changes will be reflected on that endpoint.",
"@rjenc29 At the moment, there's a lot of integration tests for trainer failing on the CI, which will need to pass before we can merge. If you run the tests locally are you able to replicate the errors? ",
"I can't run some of the tests locally, but those that I can run seem to pass for me.\r\n",
"@rjenc29 Thanks for confirming. The following tests: \r\n* examples/pytorch/test_accelerate_examples.py::ExamplesTestsNoTrainer::test_run_glue_no_trainer\r\n* examples/pytorch/test_pytorch_examples.py::ExamplesTests::test_run_ner\r\n\r\nare unrelated to this PR. There's a fix being added to the accelerate library which once pushed and our CI runners updated should be resolved here. The other tests seem to be specific to this PR but might be flaky. \r\n\r\nLet's wait for the update in accelerate to be added, we'll re-run the tests here and then we can debug if some tests are still failing. ",
"Hi @rjenc29 - there was a recent push to main which should have resolved the unrelated failing tests. Could you rebase to see if this fixes them? Thanks! ",
"Well, I attempted a rebase but, judging by the files changes count, it looks a bit like I've destroyed my branch...",
"@rjenc29 oh no! Did you force push after rebasing? The commit history looks like branches I've had when I've just pushed without forcing after rebase. ",
"@amyeroberts I did not force push - I never, ever use rebase personally so doubtless have brought this on myself & can add it to my catalogue of git fails",
"haha - don't worry - git remains an enigma to the best of us 🙈 ! \r\n\r\nIn this case, all you need to do force push the changes to the branch. Rebasing is effectively re-writing the history, so you need to force for the changes to be reflected on the remote. Worst case, this will all be squashed into a single commit when merging into main, so the history might be big on the PR but we can still clearly see which commits are yours and it doesn't prevent it from being added. \r\n\r\nOutside of that, the other tests that are failing appear to be unrelated to this PR but new. The good news is, you'll get to try rebase again, the bad news is you'll have to wait for us to track down the issue first (sorry :/) ",
"Thanks very much @amyeroberts - on your advice, I think I may have restored the health of my branch / this PR, which is potentially the first time I have ever successfully resolved a git foul-up - but then again, it is (almost) the season of miracles! ",
"@rjenc29 Great - glad it worked! \r\n\r\nThe failing tests _should_ now have been resolved on main. If you can do one more rebase, things should all pass and we can merge 🟢 ",
"Hi @amyeroberts, I rebased yesterday late afternoon - I think there's one residual test failure in this PR",
"@rjenc29 Apologies for this - we were having quite a few issues yesterday with stabilising the CI. One last (🤞) rebase should do the trick. ",
"@rjenc29 do you want me to merge this? Black is no longer used so a last-last-last rebase will be needed 🤣 \r\nApologies we didn't receive the ping for the green tick! ",
"Hi @ArthurZucker, I have rebased so this PR should be in good shape",
"@rjenc29 Thanks again for adding this and iterating to get everything green! "
] | 1,697 | 1,702 | 1,702 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR fixes a couple of typos in code / comments relating to COCO annotations.
I added a test to illustrate examples of legal and illegal encodings - possibly marginal utility; if the reviewer feels it's unnecessary, I have no problem removing it.
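For readers unfamiliar with the format being validated, here is a hedged sketch of a minimal COCO-style detection annotation as the DETR image processor consumes it (the image content and box values are made up, and this is not the exact test added in this PR):
```python
import numpy as np
from transformers import DetrImageProcessor

image = np.zeros((480, 640, 3), dtype=np.uint8)
annotations = {
    "image_id": 1,
    "annotations": [
        # One object: bbox is [top-left x, top-left y, width, height] in pixels.
        {"bbox": [100.0, 120.0, 50.0, 80.0], "category_id": 3, "area": 4000.0, "iscrowd": 0}
    ],
}

processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50")
encoding = processor(images=image, annotations=annotations, return_tensors="pt")
print(encoding.keys())  # pixel_values, pixel_mask, labels
```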
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26941/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26941/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26941",
"html_url": "https://github.com/huggingface/transformers/pull/26941",
"diff_url": "https://github.com/huggingface/transformers/pull/26941.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26941.patch",
"merged_at": 1702309912000
} |
https://api.github.com/repos/huggingface/transformers/issues/26940 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26940/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26940/comments | https://api.github.com/repos/huggingface/transformers/issues/26940/events | https://github.com/huggingface/transformers/pull/26940 | 1,952,577,048 | PR_kwDOCUB6oc5dSowf | 26,940 | Add RoCm scheduled CI & upgrade RoCm CI to PyTorch 2.1 | {
"login": "fxmarty",
"id": 9808326,
"node_id": "MDQ6VXNlcjk4MDgzMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxmarty",
"html_url": "https://github.com/fxmarty",
"followers_url": "https://api.github.com/users/fxmarty/followers",
"following_url": "https://api.github.com/users/fxmarty/following{/other_user}",
"gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions",
"organizations_url": "https://api.github.com/users/fxmarty/orgs",
"repos_url": "https://api.github.com/users/fxmarty/repos",
"events_url": "https://api.github.com/users/fxmarty/events{/privacy}",
"received_events_url": "https://api.github.com/users/fxmarty/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@fxmarty Could you verify that docker image could be built? For the AMD push CI, `docker/transformers-pytorch-amd-gpu` is not able to be built.\r\n\r\nNote, `docker/transformers-pytorch-amd-gpu` is built on `[self-hosted, docker-gpu, amd-gpu, single-gpu, mi210]`, as the `ubuntu-latest` always gets not enough disk space error, no matter what I tried. I made it ran on `[self-hosted, docker-gpu, amd-gpu, single-gpu, mi210]` (maybe for a few days), but after that it gives permission error `EACCES: permission denied, mkdir '/home/github_actions/.docker/buildx/certs'`.\r\n\r\nhttps://github.com/huggingface/transformers/actions/runs/6581781214\r\n\r\nSee my comment [here](https://github.com/huggingface/transformers/issues/26599#issuecomment-1748413935)\r\n\r\nI suspect it is related the new way of AMD CI runners are set up ",
"It appears the runners on MI210 are down.",
"Hi @mfuntowicz When you get some bandwidth, could you check how to bring back mi210 runners? Would be great if we can move toward the daily AMD CI (complete suite) and see the status :-)",
"@ydshieh probably did most of it! :smile: ",
"thank you so much @ydshieh, you rock!",
"You're right, thank you both :raised_hands: :hugs:"
] | 1,697 | 1,700 | 1,700 | COLLABORATOR | null | As per title | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26940/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26940/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26940",
"html_url": "https://github.com/huggingface/transformers/pull/26940",
"diff_url": "https://github.com/huggingface/transformers/pull/26940.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26940.patch",
"merged_at": 1700574913000
} |
https://api.github.com/repos/huggingface/transformers/issues/26939 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26939/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26939/comments | https://api.github.com/repos/huggingface/transformers/issues/26939/events | https://github.com/huggingface/transformers/pull/26939 | 1,952,571,303 | PR_kwDOCUB6oc5dSndf | 26,939 | [WIP] New model RTDetr | {
"login": "rafaelpadilla",
"id": 31217453,
"node_id": "MDQ6VXNlcjMxMjE3NDUz",
"avatar_url": "https://avatars.githubusercontent.com/u/31217453?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rafaelpadilla",
"html_url": "https://github.com/rafaelpadilla",
"followers_url": "https://api.github.com/users/rafaelpadilla/followers",
"following_url": "https://api.github.com/users/rafaelpadilla/following{/other_user}",
"gists_url": "https://api.github.com/users/rafaelpadilla/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rafaelpadilla/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rafaelpadilla/subscriptions",
"organizations_url": "https://api.github.com/users/rafaelpadilla/orgs",
"repos_url": "https://api.github.com/users/rafaelpadilla/repos",
"events_url": "https://api.github.com/users/rafaelpadilla/events{/privacy}",
"received_events_url": "https://api.github.com/users/rafaelpadilla/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26939). All of your documentation changes will be reflected on that endpoint."
] | 1,697 | 1,698 | 1,698 | CONTRIBUTOR | null | # What does this PR do?
Adds new model RTDetr.
Image processing:
- [x] preprocess
- [x] post_process
- [x] post_process_object_detection
Tests:
- [x] image processing: test_image_processor_outputs
- [x] image processing: test_multiple_images_processor_outputs
- [x] model: logits and boxes match the original model
- [ ] model: unit tests for `modeling_rt_detr.py` are passing
Backbone:
- [x] adjust backbone to be compatible with Timm
- [x] convert backbone weights to be compatible with Timm
General:
- [x] review docstrings
- [x] check variable names
- [x] check order of classes
Fixes #26742
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26939/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26939/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26939",
"html_url": "https://github.com/huggingface/transformers/pull/26939",
"diff_url": "https://github.com/huggingface/transformers/pull/26939.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26939.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26938 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26938/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26938/comments | https://api.github.com/repos/huggingface/transformers/issues/26938/events | https://github.com/huggingface/transformers/issues/26938 | 1,952,433,575 | I_kwDOCUB6oc50X8Wn | 26,938 | Please set up stand-with-Palestine banner and help change the IR-PL war a little bit,thank you | {
"login": "Kevin-shihello-world",
"id": 32519859,
"node_id": "MDQ6VXNlcjMyNTE5ODU5",
"avatar_url": "https://avatars.githubusercontent.com/u/32519859?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Kevin-shihello-world",
"html_url": "https://github.com/Kevin-shihello-world",
"followers_url": "https://api.github.com/users/Kevin-shihello-world/followers",
"following_url": "https://api.github.com/users/Kevin-shihello-world/following{/other_user}",
"gists_url": "https://api.github.com/users/Kevin-shihello-world/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Kevin-shihello-world/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Kevin-shihello-world/subscriptions",
"organizations_url": "https://api.github.com/users/Kevin-shihello-world/orgs",
"repos_url": "https://api.github.com/users/Kevin-shihello-world/repos",
"events_url": "https://api.github.com/users/Kevin-shihello-world/events{/privacy}",
"received_events_url": "https://api.github.com/users/Kevin-shihello-world/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,697 | 1,701 | 1,701 | NONE | null | ### Feature request
Please set up a stand-with-Palestine banner and help change the IR-PL war a little bit, thank you.
I'm sorry, I know this is not a place suitable for political things here, but I'm still an undergraduate student and I'm eager to help bring about some change for innocent people, I mean normal people, from both sides.
### Motivation
Today is the 12th day of the conflict between Israel and Palestine. As a novice AI researcher [1], I want to talk about nationality, national understanding, and how AI intersects with our modern political life.
We may all have heard some strange things about the Israel-Palestine conflict these days. At first, we merely saw it as a territorial attack. Then we heard the history hidden behind this conflict. Then we may all have heard some strange things about certain Jewish capitalists who threatened to collect the names of some Harvard students so as not to hire them, just because they support Palestine and judge Israel as more guilty in this conflict. Israel's government uses its old tactic of trying to lead us to judge those who support Palestine more as anti-Jewish, connecting them to terrorists like Bin Laden on 9/11, and trying to impel people in other countries to support its military acts both in thought and in material ways.
Those who believe the Netanyahu government's words may miss the big picture, and I think that if we keep accepting on blind faith the understanding it has instilled in us, it will threaten our national consciousness, hand our political rights over to lobbyists armed with AI, and let real terrorists find their way to get close to us and our families.
Here are my reasons And why we should stand with punishing and set up of our voices: 1:Firstly, support Palestine more can’t be said Anti-jew-discriminant, we all know from Wiki encyclopedia The constitution of Israel says that every Jewish person has the right to choose to be an Israel citizen No matter whether he is in high social standard or not. So we ought to see people with different skins change their nationality and live in Israel. But why from all social media and all those photos about Israel we all just see white Israelians? I think it's quite weird. Then I found that in fact, they were quiet against people with colored skin with Jewish blood in their veins. I searched and found some uneasy news[2] about that from reasonable and reliable sources(ABC News) tells us how those Israeli people, dislike other drawers with different colors and want to retreat them back to their original countries. And now the Israeli government‘s just discriminating against people in Palestine by driving them from their land to Gaza, which aroused people of Israeli descent rebel to against Israel’s military acts. So supporting Israel is no way near solving Anti-JewDiscriminant.If we want to evacuate it we should start by being kind to Jewish people who live near us.
2:Secondly, Hamas’s role in this conflict should not be judged merely as a territory force like the one planned terrorist attack that happened on 9/11. The thing is not just an airplane striking a skyscraper but also, another plane striking the white house, and the congress, those people‘s being judged as terrorists more reside in their attack on a country‘s democracy department. Moreover, those terrorists in the 9/11 had already gained creditable pilot jobs with dignified pay so they could try to get into some colleges in the US and address speeches to call for less military affairs and given the former Bush president’s bad reputation it seems they more likely to change it better in this way, but they still chose that way, so this why they were called terrorists, they had other choices. But by contrast, we can see how Israel controls people in Gaza and how their state terrorism act made Palestinians have less another way to express their situation to the global society. I'm not trying to exculpate what Hamas has done these days kidnapping some tourists in a music festival but what Israel has done are bigger territory acts and they even decided to evacuate Palestine just those days by saying every normal Palestine citizen is guilty of not stopping Hamas.
And nowadays some talks about this conflict between man and ChatGPT are quite instructive and a bit chilly, a person in my country asked chatgpt what might be the cause of this conflict. ChatGPT then concluded from news and history and found that the new Israeli government had just gotten the throne one year. It kept carrying out cases against judicial checks and balances to grant the prime minister of Israel more power and to restrict democracy reform. This aroused criticism from all corners of Israel So it may do not have prepared well for the attack. So who the prime minister of Israel is in his mind? I searched and found he has already been the prime minister for 15 years dating back to 2021 and was widely criticized for getting illegal money so he is a Putin-like statesman. Also, there could be the Israeli government setting up this attack and wishing to use the victim's identity to gain power. ChatGPT found us several precedents, early in 2014, Israel was believed to use the death of three kidnapped STUDENT to set up an attack against Palestine. The third cause is still about if the Israeli government wants to gain interest from this attack. Where did the 1000 missiles come from? Lots of voices said that they were from Iran, but how could they pass to Gaza without being detected by the Israeli military forces? They built walls all around Gaza So it may be more difficult for decide to put bombs in Gaza, then just put them in Israel And hamas were comparatively weakly protect themselves than Bin latin I mean they were always under attack by Israel So they may choose some more hidden ways like a. If they got support they may want those bombs just appear in Israel like by use of unmanned vehicles instead of set off rockets from Gaza to let Israeli government just found them at the time Hamas To take the responsibility .They can come out later after the old dictator step down and declare it was done by them. And seldom did Hamas people do the same thing before. The chatbot proposed that this attack is somehow very critical to Israel's political society. The political society was quite uneasy with the new government reform in Israel. It looks a lot like Putin. According to whose world just one year ahead of his election day, a set of NATO army encourage Ukraine to bombed a russian watchhouse with the ceiling blewed away while the walls remain And maybe to show up that it was not Putin's fault ,news according to Russian says that no Russian army was hurt. This attack it just in time to move People‘s view away from domestic affairs and reunite them under the flag of protecting Israel and to set more intense antagonism with Palestine was more positive for the Israeli government to set up future Military attacks. Maybe they were directly supported by the Israeli government so the attack could have happened just after the protest against Netanyahu for his judicial reform to be a dictator. The 5th reason for this is maybe Israel would want to use this attack to know some interests of how Hamas can be in a military way. This also, uh, close to the news. I mean, uh, we all heard that America and the other countries have already informed Israel that a Palestinian attack these days. However, Israel did not answer and did not inform its citizens and tourists.
People's minds may sometimes be led by manipulated emotions like fear and anger, but an undistorted chatbot can think more calmly and more reasonably, and I think ChatGPT this time did find us something we should consider when it comes to this conflict.
So who blinds our minds from thinking about all those sceptical things? Think about it: if some rich players like Cambridge Analytica got an AI a bit like ChatGPT, mingled with some GNNs built to track and take down rumors, and tried to manipulate the way our minds think about politics (think of it positively, maybe, when those people do not use those academic systems inside recommendation algorithms), it would bring us a horrible future.
3: Support Israel without opposing the Netaniahu's state territory acts and trying to discriminate against those people who support Palestine Is a threat to national understanding and freedom of speech and It would bring us a more cyberpunk way terrible future. The word state territory Was used here not just because of what happened in Palestine, but also for those brave UN Warriors Modeled by them. And also the building owned by AP and another news press blew up by his army in 2021 Say they are. It could be Hamas hiding in that building which was opposed by AP officials. Now people can have multiple nationalities, but in their mind. Every person should only support one. I mean, for those people who support to get the name of rough students who support Palestine. And to not hire them They want names. So let us remember their names first I heard Bill Ackman. It's one of them. And he is a jew I mean, for those people who support Ukraine, they were not mostly of Ukraine descent. Instead, they were more just sympathy with Ukraine people But why for those people who support Israel, they were mostly Jews? I mean, you can't claim faith in two countries whose interests are against [3].
So here I'm calling for act. October 24 Is around the corner and I want to encourage everyone, especially programmers, who wants to give the situation a better change to do it peacefully, Parade may not hurt the essence interests of those person who only blame hamas but chose not to blame the continuing military act on Gaza.We heard Oracle is one of them And now with the help of ai a lot of traditional database company without combined with vector search and AI arrangement may face a big Challenge. So uh if they do not change, then please quit job from there and I'm sure you can find some new suitable jobs from some company like huggingface and ziliz. I mean striking against or quite a job from some capitalist like Bill Ackman who discriminates against some students who support Palestine but do not claim to support Hamas are better choices. And nowadays a lot of companies they use data from unauthorized sources to train their codebot and trying to take values of our program away and cut down jobs. So I want by doing this, also to claim that our programmers are unique in our placeand shouldn't be replaced entirely. We shall do the protest act from now on to October 24 to celebrate our programmer's day. And what's more, I want to call for voters of every country to consider it the next time we vote for our candidates. In the end, I also want to call for less use of AI in the political field and RETREAT military support to Israel until they retreated their border to the 1967s, I mean, especially unlike those older countries Israel gained this illegal land recently So it's reasonable for them to give it back.
We stand with Ukraine , Palestine , normal person in Israel, And we stand with we workforce ourselves .And we shall not surrender till our voices be heard and our demand to be fulfilled.
1: Okay. For me, I developed a new kind of nlp model, and it replaced most dot product attention And replace them by using a system that uses a token's last Attention relationship I mean, which tokens it changes Information with. The reason to do it is According to some newly research, the deep neither of a transformer llm would not get a lot new connection to communicate tokens Instead, they would reduce a lot connection already there and the attention connection would be far more sparse in the deep connection so that I thought this might be work and get competitive result. And with the help of a new gnn devised by myself( Thanks PYG and huggingface for offering us an easy-to-use platform used in this project[laugh]) It got a quite competitive result in some g l u e datasets. Theoretically, it would calculate much faster than the original one. But as I am yet still an undergrad student. I think I should be called a beginner. But I wrote this part not to propagate in my work, but for a little in the field and want to call for an act in programmer's way So I shall not give the GitHub repository of my work
But I would be denied to put my model here in huggingface To benefit this community a little bit and to encourage some people to take some peaceful action about this conflict
2:https://abcnews.go.com/International/wireStory/after-decades-struggle-place-israel-dozens-black-hebrews-101543434
3:67 killed in Gaza, 7 killed in Israel as UN warns conflict could turn into 'full-scale war' | CNN
### Your contribution
I'm sorry I know it is not a place suitable for political things, but I'm still an undergraduate student and I'm eager to help give it some change to help some innocent people i mean normal people from both side.
As in the complement I did develop a new kind of basic model in NLP and if I with some friends and also I want to meet some supporter( I mean maybe someone who can cold with me instead of the sun one with material support) here to develop it a little bit and put it here in this repository, it may help encourage someone to take it more seriously about this conflict. And if you want to know about my work, it called up-downformer and I've put the code on github. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26938/reactions",
"total_count": 14,
"+1": 1,
"-1": 13,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26938/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26937 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26937/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26937/comments | https://api.github.com/repos/huggingface/transformers/issues/26937/events | https://github.com/huggingface/transformers/pull/26937 | 1,952,419,989 | PR_kwDOCUB6oc5dSJW0 | 26,937 | Generate: update basic llm tutorial | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,697 | 1,697 | 1,697 | MEMBER | null | # What does this PR do?
This PR updates the basic [LLM tutorial](https://huggingface.co/docs/transformers/llm_tutorial) in the following aspects:
1. Adds a short section right at the top on how to do batched generation. This was made to address https://github.com/huggingface/transformers/issues/26061 (and other similar issues in the past)
2. Updates the model used in the examples from OpenLlama (permissive licenses, but deprecated) to Mistral (latest 7B model with top performance and permissive licenses)
3. Adds prompting to the list of common issues (was a TODO)
4. Adds pointers to recent guides about prompting and performance
Fixes https://github.com/huggingface/transformers/issues/26061 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26937/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26937/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26937",
"html_url": "https://github.com/huggingface/transformers/pull/26937",
"diff_url": "https://github.com/huggingface/transformers/pull/26937.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26937.patch",
"merged_at": 1697730808000
} |
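The PR record above mentions a new batched-generation section in the LLM tutorial but does not reproduce it. As a rough sketch of that pattern only (not the tutorial's exact code), decoder-only models need left padding and an explicit pad token before a batched `generate` call; the Mistral checkpoint and the generation settings below are illustrative assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "mistralai/Mistral-7B-v0.1"  # illustrative; any decoder-only LLM works

# Left padding keeps each prompt adjacent to its newly generated tokens.
tokenizer = AutoTokenizer.from_pretrained(checkpoint, padding_side="left")
tokenizer.pad_token = tokenizer.eos_token  # this checkpoint ships without a pad token

model = AutoModelForCausalLM.from_pretrained(
    checkpoint, torch_dtype=torch.bfloat16, device_map="auto"
)

prompts = ["A list of colors: red, blue", "Portugal is"]
inputs = tokenizer(prompts, padding=True, return_tensors="pt").to(model.device)

generated = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```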
https://api.github.com/repos/huggingface/transformers/issues/26936 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26936/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26936/comments | https://api.github.com/repos/huggingface/transformers/issues/26936/events | https://github.com/huggingface/transformers/pull/26936 | 1,952,410,667 | PR_kwDOCUB6oc5dSHS1 | 26,936 | fix logit-to-multi-hot conversion in example | {
"login": "ranchlai",
"id": 5043767,
"node_id": "MDQ6VXNlcjUwNDM3Njc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5043767?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ranchlai",
"html_url": "https://github.com/ranchlai",
"followers_url": "https://api.github.com/users/ranchlai/followers",
"following_url": "https://api.github.com/users/ranchlai/following{/other_user}",
"gists_url": "https://api.github.com/users/ranchlai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ranchlai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ranchlai/subscriptions",
"organizations_url": "https://api.github.com/users/ranchlai/orgs",
"repos_url": "https://api.github.com/users/ranchlai/repos",
"events_url": "https://api.github.com/users/ranchlai/events{/privacy}",
"received_events_url": "https://api.github.com/users/ranchlai/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26936). All of your documentation changes will be reflected on that endpoint.",
"@ArthurZucker I've updated the comment for educational purpose. "
] | 1,697 | 1,698 | 1,698 | CONTRIBUTOR | null | # What does this PR do?
This PR fixes #26830. The bug could lead to differences in measured multi-label accuracy (slightly higher precision and lower recall).
@younesbelkada Would you please review?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26936/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26936/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26936",
"html_url": "https://github.com/huggingface/transformers/pull/26936",
"diff_url": "https://github.com/huggingface/transformers/pull/26936.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26936.patch",
"merged_at": 1698057186000
} |
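The PR record above fixes how an example script converts logits to multi-hot predictions, but the diff itself is not quoted in this record. As a generic sketch of that conversion (not the actual change from #26936): for multi-label classification, predictions are usually obtained by applying a sigmoid and thresholding at 0.5, which is equivalent to thresholding the raw logits at 0.

```python
import numpy as np

def logits_to_multi_hot(logits: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Turn (batch, num_labels) logits into 0/1 multi-hot predictions."""
    probs = 1.0 / (1.0 + np.exp(-logits))        # element-wise sigmoid
    return (probs > threshold).astype(np.int64)  # threshold 0.5 on probs == threshold 0 on logits

logits = np.array([[2.3, -0.7, 0.1], [-1.2, 0.4, -3.0]])
print(logits_to_multi_hot(logits))
# [[1 0 1]
#  [0 1 0]]
```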
https://api.github.com/repos/huggingface/transformers/issues/26935 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26935/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26935/comments | https://api.github.com/repos/huggingface/transformers/issues/26935/events | https://github.com/huggingface/transformers/pull/26935 | 1,952,402,309 | PR_kwDOCUB6oc5dSFeK | 26,935 | 🌐 [i18n-ZH] Translate multilingual into Chinese | {
"login": "yyLeaves",
"id": 76979429,
"node_id": "MDQ6VXNlcjc2OTc5NDI5",
"avatar_url": "https://avatars.githubusercontent.com/u/76979429?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yyLeaves",
"html_url": "https://github.com/yyLeaves",
"followers_url": "https://api.github.com/users/yyLeaves/followers",
"following_url": "https://api.github.com/users/yyLeaves/following{/other_user}",
"gists_url": "https://api.github.com/users/yyLeaves/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yyLeaves/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yyLeaves/subscriptions",
"organizations_url": "https://api.github.com/users/yyLeaves/orgs",
"repos_url": "https://api.github.com/users/yyLeaves/repos",
"events_url": "https://api.github.com/users/yyLeaves/events{/privacy}",
"received_events_url": "https://api.github.com/users/yyLeaves/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26935). All of your documentation changes will be reflected on that endpoint."
] | 1,697 | 1,698 | 1,698 | CONTRIBUTOR | null | # What does this PR do?
Add zh (Chinese) translation for multilingual.md. #20095
## Who can review?
Documentation: @stevhliu
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26935/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26935/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26935",
"html_url": "https://github.com/huggingface/transformers/pull/26935",
"diff_url": "https://github.com/huggingface/transformers/pull/26935.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26935.patch",
"merged_at": 1698082518000
} |
https://api.github.com/repos/huggingface/transformers/issues/26934 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26934/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26934/comments | https://api.github.com/repos/huggingface/transformers/issues/26934/events | https://github.com/huggingface/transformers/issues/26934 | 1,952,357,254 | I_kwDOCUB6oc50XpuG | 26,934 | Whisper tokenizer decode function ignores timestamp tokens after v4.34.0 (the big refactor) | {
"login": "versae",
"id": 173537,
"node_id": "MDQ6VXNlcjE3MzUzNw==",
"avatar_url": "https://avatars.githubusercontent.com/u/173537?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/versae",
"html_url": "https://github.com/versae",
"followers_url": "https://api.github.com/users/versae/followers",
"following_url": "https://api.github.com/users/versae/following{/other_user}",
"gists_url": "https://api.github.com/users/versae/gists{/gist_id}",
"starred_url": "https://api.github.com/users/versae/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/versae/subscriptions",
"organizations_url": "https://api.github.com/users/versae/orgs",
"repos_url": "https://api.github.com/users/versae/repos",
"events_url": "https://api.github.com/users/versae/events{/privacy}",
"received_events_url": "https://api.github.com/users/versae/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Well, based on #26054, it seems a new undocumented parameter `decode_with_timestamps` has to be passed in:\r\n\r\n```python\r\nprint(\r\n tokenizer.encode(\"<|0.00|>\", add_special_tokens=False),\r\n tokenizer.decode(tokenizer.encode(\"<|0.00|>\"), decode_with_timestamps=True)\r\n)\r\n```\r\n```bash\r\n[50364] <|startoftranscript|><|en|><|transcribe|><|0.00|><|endoftext|>\r\n```\r\n\r\nIt seems to me that if the timestamp tokens are there and the user sets `predict_timestamps=True`, `decode_with_timestamps` should default to `True` or maybe not even exist as a parameter?\r\n",
"I think we have some of this for BC but also performance reasons, cc @sanchit-gandhi ",
"That would be strange. With the previous behavior, and before the OpenAI Whisper models included the timestamp tokens, if you added them yourself using the `.add_tokens()` method, by default those tokens would be returned when decoding as well.",
"Hey @versae! Thanks for flagging this - this is a bit of a tricky edge case since the HF tokenizers were only _recently_ updated to have the timestamp tokens in their vocabulary. Prior to this, we followed the _original_ OpenAI behaviour, where the timestamps were not part of the vocabulary.\r\n\r\nIf the timestamps were added by the user to the tokenizer (i.e. custom behaviour), then the tokenizer would **always** return predicted timestamps as part of the transcriptions. This is because we didn't design the tokenizer for this custom use-case, but assumed users would be using the default vocabulary. This then gave incorrect behaviour, as the tokenizer returned the timestamp tokens even if `decode_with_timestamps=False`.\r\n\r\nThis is in contradiction to the prior default behaviour: when timestamp tokens wer **not** part of the vocabulary, then the user needs to set `decode_with_timestamps=True` to decode the timestamp tokens to text. If they do not, then the timestamp tokens will not be decoded, but rather filtered and removed from the transcription.\r\n\r\nThe current behaviour on `main` fixed the custom tokenizer behaviour, which was inconsistent with the previous one: now that timestamp tokens are part of the default tokenizer vocabulary, we have updated the tokenizer to only return timestamps when `decode_with_timestamps=True`, in-keeping with how the tokenizer operated before the timestamp tokens were added\r\n\r\nThe fact that the custom tokenizer returned the timestamp tokens with `decode_with_timestamps=False` before was a bit of a bug, but not a case we could design for since it was custom behaviour expanding the vocabulary of the tokenizer\r\n\r\nHope that explains things @versae! Let me know if you have any questions",
"Thanks for the explanation, @sanchit-gandhi! I still think that if the user sets `predict_timestamps=True` is because is expecting to see timestamp tokens in the output, which would make `decode_with_timestamps` default to `True`.",
"Indeed this is how we set it in the `pipeline`: one input argument `return_timestamps` triggers all of the timestamp args under the hood. However, for the `model` + `processor` API it needs to be set twice, each time by the user. This is because there's no coupling between the `model` and the `processor`, and so we can't propagate the arguments forward automatically."
] | 1,697 | 1,700 | 1,698 | CONTRIBUTOR | null | ### System Info
- `transformers` version: 4.34.0
- Platform: Linux-5.19.0-40-generic-x86_64-with-glibc2.35
- Python version: 3.10.6
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.3.1
- Accelerate version: 0.20.3
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.0+cu117 (True)
- Tensorflow version (GPU?): 2.12.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.6.11 (cpu)
- Jax version: 0.4.12
- JaxLib version: 0.4.12
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@sanchit-gandhi, @peregilk
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Install `transformers>=4.34.0` and run this code:
```python
from transformers import AutoTokenizer, AddedToken
tokenizer = AutoTokenizer.from_pretrained("openai/whisper-large-v2")
tokenizer.set_prefix_tokens(language="en", task="transcribe", predict_timestamps=True)
print(
tokenizer.encode("<|0.00|>", add_special_tokens=False),
tokenizer.decode(tokenizer.encode("<|0.00|>"))
)
```
The output will be
```bash
[50364] <|startoftranscript|><|en|><|transcribe|><|endoftext|>
```
Which ignores the timestamp token when decoding.
### Expected behavior
With versions of `transformers<4.34.0`, the timestamp tokens will be correctly decoded. The same code will produce:
```bash
[50364] <|startoftranscript|><|en|><|transcribe|><|0.00|><|endoftext|>
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26934/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26934/timeline | completed | null | null |
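Putting the snippet from the issue body together with the first comment in the thread above, the behaviour on transformers >= 4.34 looks roughly like this (a sketch using the same checkpoint as the report; expected outputs are shown as comments):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("openai/whisper-large-v2")
tokenizer.set_prefix_tokens(language="en", task="transcribe", predict_timestamps=True)

ids = tokenizer.encode("<|0.00|>")

# Default decoding filters the timestamp token out of the text
print(tokenizer.decode(ids))
# <|startoftranscript|><|en|><|transcribe|><|endoftext|>

# Opting in keeps the timestamp token in the transcription
print(tokenizer.decode(ids, decode_with_timestamps=True))
# <|startoftranscript|><|en|><|transcribe|><|0.00|><|endoftext|>
```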
https://api.github.com/repos/huggingface/transformers/issues/26933 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26933/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26933/comments | https://api.github.com/repos/huggingface/transformers/issues/26933/events | https://github.com/huggingface/transformers/pull/26933 | 1,952,222,189 | PR_kwDOCUB6oc5dReJc | 26,933 | Refactor: Use Llama RoPE implementation for Falcon | {
"login": "tomaarsen",
"id": 37621491,
"node_id": "MDQ6VXNlcjM3NjIxNDkx",
"avatar_url": "https://avatars.githubusercontent.com/u/37621491?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tomaarsen",
"html_url": "https://github.com/tomaarsen",
"followers_url": "https://api.github.com/users/tomaarsen/followers",
"following_url": "https://api.github.com/users/tomaarsen/following{/other_user}",
"gists_url": "https://api.github.com/users/tomaarsen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tomaarsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tomaarsen/subscriptions",
"organizations_url": "https://api.github.com/users/tomaarsen/orgs",
"repos_url": "https://api.github.com/users/tomaarsen/repos",
"events_url": "https://api.github.com/users/tomaarsen/events{/privacy}",
"received_events_url": "https://api.github.com/users/tomaarsen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Before diving into the code, I tried the snipped you shared, but with an extra `set_seed(0)` for complete reproducibility at the sampling step.\r\n\r\n```py\r\nfrom transformers import AutoTokenizer, pipeline, set_seed\r\nimport torch\r\n\r\nmodel = \"tiiuae/falcon-7b\"\r\n\r\nset_seed(0)\r\ntokenizer = AutoTokenizer.from_pretrained(model)\r\npipe = pipeline(\r\n \"text-generation\",\r\n model=model,\r\n tokenizer=tokenizer,\r\n torch_dtype=torch.bfloat16,\r\n device_map=\"auto\",\r\n)\r\nsequences = pipe(\r\n \"Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\\nDaniel: Hello, Girafatron!\\nGirafatron:\",\r\n max_length=300,\r\n do_sample=True,\r\n top_k=10,\r\n num_return_sequences=1,\r\n eos_token_id=tokenizer.eos_token_id,\r\n)\r\nfor seq in sequences:\r\n print(f\"Result: {seq['generated_text']}\")\r\n```\r\n\r\nThere is a tiny difference in outputs, which we should try to figure out before merging (maybe I'll have some clues after looking at the diff)\r\n\r\nOn `main`, the output is \r\n```\r\nResult: Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\r\nDaniel: Hello, Girafatron!\r\nGirafatron: Daniel, it is good that you have called upon me for I have many important matters in which I must attend to.\r\nDaniel: Yes, and I will be brief for I must attend to some urgent matters myself.\r\nGirafatron: You have urgent matters? What are these urgent matters?\r\nDaniel: It has come to my attention that you seem to have a problem, a problem with being a giraffe.\r\nGirafatron: What are you saying? That I have a problem? What kind of problem?\r\nDaniel: You seem to be in denial.\r\nGirafatron: Of what?\r\nDaniel: Of your love for giraffes, of course!\r\nGirafatron: How do you know that I’m in denial?\r\nDaniel: Because you have not been able to get a giraffe to be your friend for all these years.\r\nGirafatron: That’s not a problem.\r\nDaniel: Yes, it is! How are you supposed to find love without being able to love?\r\nGirafatron: I’ll tell you how I find love.\r\nDaniel\r\n```\r\n\r\nUsing the latest commit here, the output is \r\n```\r\nResult: Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\r\nDaniel: Hello, Girafatron!\r\nGirafatron: Daniel, it is good that you have called upon me for I have many important matters in which I must attend to.\r\nDaniel: Yes, and I will be brief for I must attend to some urgent matters myself.\r\nGirafatron: You have urgent matters? What are these urgent matters?\r\nDaniel: It has come to my attention that you seem to have a problem, a problem with being a giraffe.\r\nGirafatron: What are you saying? That I have a problem? What kind of problem?\r\nDaniel: You seem to be in denial.\r\nGirafatron: Of what?\r\nDaniel: Of your love for giraffes, of course!\r\nGirafatron: How do you know that I’m in denial?\r\nDaniel: Because you have not been able to get a giraffe to be your friend for all these years.\r\nGirafatron: That’s not a problem.\r\nDaniel: Yes, it is! How are you supposed to find love if you don’t go out and try to find a giraffe?\r\nGirafatron: I don’\r\n```\r\n\r\n👉 identical up to the last two sentences",
"Note: adding \r\n```py\r\nif dtype in [torch.float16, torch.bfloat16]:\r\n emb = emb.float()\r\n```\r\nback doesn't fix the mismatch",
"All of that sounds great. I didn't notice the potential to remove some reshapes, that would definitely be useful. And I'll reuse my benchmarking tools that I made for `attention_sinks`, but use it for `transformers` `main` and this PR to get info on latencies and ppl.\r\n\r\nWill work on this tomorrow!\r\n\r\n- Tom Aarsen",
"I have some more results @gante 🎉 \r\n\r\nI've done tests for `main`, this PR after c4a8d52339abb89561337aedbb5ed56fc000fefa (which I called `pr_v1`) and this PR after e68b8e466cdacb87e02ebe48cc002a3826fe1124 (which I called `pr_v2`).\r\n\r\n## Perplexity\r\nI've tried 3 different books, and for all of them there are some notable differences in favor of `pr_v2`:\r\n\r\n\r\nIf I try a different book, again there's some notable differences in favor of `pr_v2`:\r\n\r\n\r\nYet another book, again large differences in favor of `pr_v2`:\r\n\r\n\r\n## VRAM\r\n\r\nThere's two groups: `main` as one group, and `pr_v1` and `pr_v2` as the other. It seems that the Llama RoPE implementation immediately cashes more cos/sin values, hence a slightly larger memory usage at the start.\r\n\r\n## Latency\r\nThe latency for v2 definitely improves over `main`, which is very promising. Here's results for the 3 different books that I tested with:\r\n\r\n\r\n\r\nI think with the second book, my OS de-prioritized the process for `main`, so that's a bit of an unrelated outlier.\r\nFor book 1, the average tokens per second was 18.04 for `pr_v2` and 14.13 for `main`, a 27.65% speedup. For book 3, I got 17.54 tokens/sec for `pr_v2` and 14.82 for `main`, i.e. an 18.38% speedup.\r\n\r\n---\r\n\r\nOne thing: I get a test failure locally for `tests/models/falcon/test_modeling_falcon.py::FalconModelTest::test_left_padding_compatibility` due to some `nan` popping up where the attention_mask is 0. Not sure if this is anything to be concerned about, as it does pass on the CI...\r\n\r\nThis also simplifies implementing the Attention Sink Cache for Falcon 🎉 \r\n\r\n- Tom Aarsen",
"I'll pull this into #26681, i.e. the caching refactor PR, when this is merged. Then I can conveniently test whether the implementation is easily extended for Falcon now too.",
"Resolved the merge conflicts introduced by #26792 by @patrickvonplaten, the latency seems slightly better than reported in https://github.com/huggingface/transformers/pull/26933#issuecomment-1772700422 now. This PR should be good to go as far as I'm concerned. Idem dito for #26929 which solves some minor issues with the Falcon config.",
"Woah, that is a great analysis of the changes! Great work Tom 💛 ",
"Tagging @ArthurZucker (core maintainer) for a quick final check",
"FYI @tomaarsen: a major cause for the PPL mismatch seems to stem from how `t` was being cast to a lower precision in the older version of the code:\r\n\r\n```\r\n(Pdb) torch.arange(3000, device=device).to(torch.float16)[-10:]\r\ntensor([2990., 2992., 2992., 2992., 2994., 2996., 2996., 2996., 2998., 3000.],\r\n device='cuda:0', dtype=torch.float16)\r\n(Pdb) torch.arange(3000, device=device).to(torch.float32)[-10:]\r\ntensor([2990., 2991., 2992., 2993., 2994., 2995., 2996., 2997., 2998., 2999.],\r\n device='cuda:0')\r\n```\r\n\r\nThe following `einsum` is mischievous, as it automatically upcasts `t` to be compatible with the other input!",
"That must be it indeed!\r\n\r\nIn `main`, we have:\r\nhttps://github.com/huggingface/transformers/blob/576e2823a397942421e1724e79f51a12122ef49e/src/transformers/models/falcon/modeling_falcon.py#L247-L249\r\nwhich is called like so:\r\nhttps://github.com/huggingface/transformers/blob/576e2823a397942421e1724e79f51a12122ef49e/src/transformers/models/falcon/modeling_falcon.py#L262-L267\r\nwhich is called like so:\r\nhttps://github.com/huggingface/transformers/blob/576e2823a397942421e1724e79f51a12122ef49e/src/transformers/models/falcon/modeling_falcon.py#L278-L280\r\nSo, if the model is loaded in fp16, then so is the `query`, which gives `t = torch.arange(seq_len, device=device).to(torch.float16)`. This results in:\r\n```python\r\n>>> torch.arange(3000, dtype=torch.float16)[-10:]\r\ntensor([2990., 2992., 2992., 2992., 2994., 2996., 2996., 2996., 2998., 3000.],\r\n dtype=torch.float16)\r\n```\r\nAlternatively, if the model is loaded in bf16, then it uses `t = torch.arange(seq_len, device=device).to(torch.bfloat16)` and we get:\r\n```python\r\n>>> torch.arange(3000, dtype=torch.bfloat16)[-10:]\r\ntensor([2992., 2992., 2992., 2992., 2992., 2992., 2992., 2992., 2992., 2992.],\r\n dtype=torch.bfloat16)\r\n```\r\nwhich is just awful!\r\n\r\n---\r\n\r\nWith this PR, we get:\r\nhttps://github.com/huggingface/transformers/blob/dcde537a0a86bd6aa6733982253d994bccbac5cc/src/transformers/models/falcon/modeling_falcon.py#L256-L258\r\n\r\nand `inv_freq` is this:\r\nhttps://github.com/huggingface/transformers/blob/dcde537a0a86bd6aa6733982253d994bccbac5cc/src/transformers/models/falcon/modeling_falcon.py#L248-L249\r\nWhich is always float32 due to the `.float()` call. So, we get `t = torch.arange(seq_len, device=device).to(torch.float32)` which results in:\r\n```python\r\n>>> torch.arange(3000, dtype=torch.float32)[-10:]\r\ntensor([2990., 2991., 2992., 2993., 2994., 2995., 2996., 2997., 2998., 2999.])\r\n```\r\n\r\nSo, this PR solves a hidden bug that has been resulting in reduced performance at higher sequence lengths. After all, the problem only gets worse at higher seq lengths, e.g.:\r\n```python\r\n>>> torch.arange(30000, dtype=torch.bfloat16)[-10:]\r\ntensor([29952., 29952., 29952., 29952., 29952., 29952., 29952., 29952., 29952.,\r\n 29952.], dtype=torch.bfloat16)\r\n```\r\n\r\n- Tom Aarsen",
"I've resolved the outstanding merge conflicts. This should be ready for final review @ArthurZucker. I've verified that the results are identical to what I previously plotted [here](https://github.com/huggingface/transformers/pull/26933#issuecomment-1772700422). I want to point out: This PR is completely unrelated to attention sinks (although it will eventually help with the implementation). These findings are for regular, pure `transformers` usage.\r\n\r\nUsing these scripts you can reproduce my findings:\r\n<details><summary>perplexity.py</summary>\r\n\r\n```python\r\n\"\"\"\r\nAdapted from https://github.com/mit-han-lab/streaming-llm\r\n\r\nNote: Although this script measures latency, it is not optimized whatsoever!\r\nThe latency is only tracked to see the impact of speed over time.\r\n\r\nUsage:\r\n\r\npython benchmark/perplexity.py --experiment attention_sinks\r\npython benchmark/perplexity.py --experiment transformers\r\npython benchmark/perplexity.py --experiment windowed\r\n\"\"\"\r\n\r\n\r\nimport argparse\r\nimport itertools\r\nimport time\r\nfrom collections import defaultdict\r\nfrom pathlib import Path\r\nfrom typing import Optional\r\n\r\nimport pandas as pd\r\nimport torch\r\nfrom datasets import load_dataset\r\nfrom torch.nn import CrossEntropyLoss\r\nfrom tqdm import tqdm\r\nfrom transformers import AutoTokenizer, AutoModelForCausalLM\r\n\r\ndef compute_perplexity(\r\n model,\r\n tokenizer,\r\n dataset,\r\n experiment: str,\r\n output_dir: str = \"outputs\",\r\n data_column: str = \"text\",\r\n num_samples: int = 1,\r\n num_tokens: Optional[int] = None,\r\n overwrite: bool = False,\r\n) -> None:\r\n output_dir = Path(output_dir)\r\n output_dir.mkdir(parents=True, exist_ok=True)\r\n output_file = output_dir / f\"{experiment}.csv\"\r\n\r\n if output_file.exists() and not overwrite:\r\n raise ValueError(\r\n f\"The {output_file!r} output file already exists - if you really want to override it, then use `--overwrite`.\"\r\n )\r\n\r\n logs = defaultdict(list)\r\n loss_fn = CrossEntropyLoss(reduction=\"none\")\r\n past_key_values = None\r\n num_processed_tokens = 0\r\n for text in itertools.islice(dataset, num_samples):\r\n encodings = tokenizer(text[data_column], return_tensors=\"pt\")\r\n\r\n seq_len = encodings.input_ids.size(1)\r\n print(f\"sequence length: {seq_len}\")\r\n pbar = tqdm(range(0, seq_len - 1))\r\n\r\n for idx in pbar:\r\n start_t = time.time()\r\n input_ids = encodings.input_ids[:, idx : idx + 1].to(model.device)\r\n with torch.no_grad():\r\n outputs = model(input_ids, past_key_values=past_key_values, use_cache=True)\r\n logits = outputs.logits.view(-1, model.config.vocab_size)\r\n past_key_values = outputs.past_key_values\r\n label = encodings.input_ids[:, idx + 1 : idx + 2].to(logits.device).view(-1)\r\n neg_log_likelihood = loss_fn(logits, label)\r\n perplexity = neg_log_likelihood.exp()\r\n pbar.set_description(f\"nll: {neg_log_likelihood.item():>5.2f}, ppl: {perplexity.item():>8.2f}\")\r\n\r\n # Store data and save every 10 tokens\r\n logs[\"input_length\"].append(idx + 1)\r\n logs[\"nll\"].append(neg_log_likelihood.item())\r\n logs[\"ppl\"].append(perplexity.item())\r\n logs[\"overall_ppl\"].append(torch.tensor(logs[\"nll\"]).mean().exp().item())\r\n logs[\"cuda_vram_allocated\"].append(torch.cuda.memory_allocated(0) / 1024 / 1024 / 1024) # in GB\r\n logs[\"latency\"].append(time.time() - start_t)\r\n if num_processed_tokens % 10 == 0:\r\n try:\r\n pd.DataFrame(logs).to_csv(output_file, index=False)\r\n except KeyboardInterrupt as ex:\r\n # If there's a 
Keyboard Interrupt, still write the file, and then stop\r\n pd.DataFrame(logs).to_csv(output_file, index=False)\r\n raise ex\r\n\r\n num_processed_tokens += 1\r\n if num_tokens and num_processed_tokens >= num_tokens:\r\n return\r\n\r\n\r\ndef main():\r\n parser = argparse.ArgumentParser()\r\n # How to call this experiment?\r\n parser.add_argument(\r\n \"--experiment\", type=str, default=\"main\"\r\n )\r\n\r\n # Model args\r\n parser.add_argument(\"--model_name_or_path\", type=str, default=\"tiiuae/falcon-7b\")\r\n parser.add_argument(\"--revision\", type=str, default=\"main\")\r\n parser.add_argument(\"--trust_remote_code\", action=\"store_true\")\r\n\r\n # Dataset args\r\n parser.add_argument(\"--dataset_name\", type=str, default=\"emozilla/pg19-test\")\r\n parser.add_argument(\"--data_column\", type=str, default=\"text\")\r\n parser.add_argument(\"--task\", type=str, default=None)\r\n parser.add_argument(\"--split\", type=str, default=\"test\", choices=[\"validation\", \"test\"])\r\n # parser.add_argument(\"--num_samples\", type=int, default=1)\r\n parser.add_argument(\"--num_tokens\", type=int, default=5000)\r\n\r\n # Where to log\r\n parser.add_argument(\"--output_dir\", type=str, default=\"perplexity_benchmark\")\r\n parser.add_argument(\"--overwrite\", action=\"store_true\")\r\n\r\n args = parser.parse_args()\r\n\r\n model = AutoModelForCausalLM.from_pretrained(\r\n args.model_name_or_path,\r\n revision=args.revision,\r\n trust_remote_code=bool(args.trust_remote_code),\r\n torch_dtype=torch.float16,\r\n device_map=\"auto\",\r\n )\r\n model.eval()\r\n tokenizer = AutoTokenizer.from_pretrained(args.model_name_or_path, trust_remote_code=bool(args.trust_remote_code))\r\n\r\n # Set up the dataset\r\n dataset = load_dataset(args.dataset_name, args.task, split=args.split, streaming=True)\r\n\r\n compute_perplexity(\r\n model,\r\n tokenizer,\r\n dataset,\r\n args.experiment,\r\n output_dir=args.output_dir,\r\n data_column=args.data_column,\r\n num_samples=1, # <- No support for more than one instance now\r\n num_tokens=args.num_tokens,\r\n overwrite=args.overwrite,\r\n )\r\n\r\n\r\nif __name__ == \"__main__\":\r\n main()\r\n```\r\n\r\n</details>\r\n\r\n<details><summary>plot_perplexity.py</summary>\r\n\r\n```python\r\n\"\"\"\r\nFirst run `perplexity.py` to generate one or more `csv` files.\r\nThis script can plot those csv files.\r\n\r\nUsage:\r\npython benchmark/plot_perplexity.py\r\npython benchmark/plot_perplexity.py --features perplexity latency --title \"Log perplexity & latency of Llama 2 7B as a function of input lengths\"\r\n\"\"\"\r\n\r\nimport argparse\r\nfrom pathlib import Path\r\nfrom typing import List, Optional\r\n\r\nimport numpy as np\r\nimport pandas as pd\r\nfrom matplotlib import pyplot as plt\r\n\r\nFEATURE_DF_MAP = {\r\n \"perplexity\": \"overall_ppl\",\r\n \"vram\": \"cuda_vram_allocated\",\r\n \"latency\": \"latency\",\r\n}\r\nFEATURE_STYLE_MAP = {\r\n \"perplexity\": \"-\",\r\n \"vram\": \"--\",\r\n \"latency\": \":\",\r\n}\r\nFEATURE_LABEL_MAP = {\r\n \"perplexity\": \"Perplexity (log), lower is better\",\r\n \"vram\": \"CUDA VRAM Usage (GB), lower is better\",\r\n \"latency\": \"Time per token (sec), lower is better\",\r\n}\r\n\r\n\r\ndef plot(\r\n features: List[str],\r\n output_dir: str = \"outputs\",\r\n title: Optional[str] = None,\r\n perplexity_limit: Optional[float] = None,\r\n skip_first: int = 100,\r\n):\r\n output_dir = Path(output_dir)\r\n\r\n fig, ax = plt.subplots()\r\n ax.set_xlabel(\"Input Sequence Length\")\r\n\r\n for feature_i, feature in 
enumerate(features):\r\n # If we already plotted on this ax, make a new one\r\n if feature_i:\r\n ax = ax.twinx()\r\n\r\n for file in output_dir.glob(\"*.csv\"):\r\n experiment = file.stem\r\n df = pd.read_csv(file)\r\n X = df[\"input_length\"][skip_first:]\r\n Y = df[FEATURE_DF_MAP[feature]][skip_first:]\r\n if feature == \"perplexity\":\r\n Y = np.log(Y)\r\n if feature == \"latency\":\r\n poly = np.polyfit(X, Y, 20)\r\n poly_y = np.poly1d(poly)(X)\r\n ax.plot(X, poly_y, FEATURE_STYLE_MAP[feature], label=f\"{experiment} {feature}\")\r\n else:\r\n ax.plot(X, Y, FEATURE_STYLE_MAP[feature], label=f\"{experiment} {feature}\")\r\n\r\n ax.set_ylabel(FEATURE_LABEL_MAP[feature])\r\n if perplexity_limit and feature == \"perplexity\":\r\n ax.set_ylim(top=min(ax.get_ylim()[1], perplexity_limit))\r\n\r\n ax.legend(loc=[1, 2, 7][feature_i]) # upper right, upper left, center right\r\n\r\n ax.set_title(title.replace(\"\\\\n\", \"\\n\") if title else \"Log perplexity as a function of input lengths\")\r\n fig.tight_layout()\r\n\r\n return fig\r\n\r\n\r\ndef main():\r\n parser = argparse.ArgumentParser()\r\n # Where csv files have been logged\r\n parser.add_argument(\"--output_dir\", type=str, default=\"perplexity_benchmark\")\r\n parser.add_argument(\r\n \"--features\", choices=[\"perplexity\", \"vram\", \"latency\"], nargs=\"+\", default=[\"perplexity\", \"vram\"]\r\n )\r\n parser.add_argument(\"--title\", type=str, default=None)\r\n parser.add_argument(\"--log_perplexity_limit\", type=float, default=5.0)\r\n # Perplexity starts a bit unstable, so we skip the start\r\n parser.add_argument(\"--skip_first\", type=int, default=100)\r\n\r\n args = parser.parse_args()\r\n\r\n figure = plot(\r\n args.features,\r\n output_dir=args.output_dir,\r\n title=args.title,\r\n perplexity_limit=args.log_perplexity_limit,\r\n skip_first=args.skip_first,\r\n )\r\n\r\n # Add your own code here if you'd like to change the figure\r\n\r\n plt.show()\r\n\r\n\r\nif __name__ == \"__main__\":\r\n main()\r\n\r\n```\r\n\r\n</details>\r\n\r\nUsage:\r\n```bash\r\ngit checkout main\r\n# the --experiment just determines the filename\r\npython ./perplexity.py --experiment main\r\ngit checkout pr-26933 # <- Or whatever branch you use locally for this PR\r\npython ./perplexity.py --experiment llama_rope_for_falcon\r\n\r\npython ./plot_perplexity.py\r\n```\r\nAnd you'll get a plot just like [here](https://github.com/huggingface/transformers/pull/26933#issuecomment-1772700422).\r\n\r\n- Tom Aarsen",
"_The documentation is not available anymore as the PR was closed or merged._",
"(updating core maintainer to review :) )"
] | 1,697 | 1,699 | 1,699 | MEMBER | null | # What does this PR do?
* Use Llama RoPE implementation for Falcon, solves this TODO: https://github.com/huggingface/transformers/blob/ad08137e473e00702fc3088a119da7026e1cb025/src/transformers/models/falcon/modeling_falcon.py#L91
* Add copy functionalities
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
Discussed this internally with @gante.
## Details
There's a few differences between Llama and Falcon that complicate this somewhat. In particular, Llama deals with `[batch_size, num_(kv)_heads, seq_len, head_dim]` on the KVQ states, while Falcon uses `[batch_size * num_(kv)_heads, seq_len, head_dim]`, i.e. one dimension less.
This is why `apply_rotary_pos_emb` uses `torch.repeat_interleave` a few times to manually expand the cos, sin when necessary. This used to be in the `forward` of the `FalconRotaryEmbedding`.
There are still some differences between the old and new implementations:
* Falcon used to do `.cos()` and `.sin()` in float32: https://github.com/huggingface/transformers/blob/ad08137e473e00702fc3088a119da7026e1cb025/src/transformers/models/falcon/modeling_falcon.py#L115-L116
* Falcon updates the device on every `forward` call, while the Llama RoPE doesn't: https://github.com/huggingface/transformers/blob/ad08137e473e00702fc3088a119da7026e1cb025/src/transformers/models/falcon/modeling_falcon.py#L131-L133
(Should we also implement this on Llama? In case someone wants to move a model on the fly)
### In the context of Attention Sinks
For Attention Sinks (context: #26681), it looks like the SinkCache must store a `apply_rotary_pos_emb` variable or something - because for Falcon it will need to use a different `apply_rotary_pos_emb` function.
## How did I test?
I tested this by running `pytest tests/models/falcon` and
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model = "tiiuae/falcon-7b"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
device_map="auto",
)
sequences = pipeline(
"Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
max_length=300,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
and observing that the generated text was:
```
Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.
Daniel: Hello, Girafatron!
Girafatron: (inhales deeply, then exhales and makes a sound like a foghorn)
Daniel: I see you're a giraffe enthusiast. That's really cool!
Girafatron: (inhales again and lets out a sigh, his ears perked up)
Daniel: I'm not sure why I asked if you were a giraffe enthusiast! But it seems like you love giraffes.
Girafatron: (sighs)
Girafatron: Giraffes are my favorite animal, because they're the most glorious animal on this planet!
Daniel:. Giraffes are pretty awesome animals, I agree.
Girafatron: (sighs, exhales loudly)
Girafatron: I think I could go on forever about how much I love these majestic animals, so...
Daniel: You can tell me about that later, if you want to! I just wanted to ask you a quick question.
Daniel: Why are you obsessed with giraffes? What about giraffes fascinates you so much?
```
Which is of similar quality of the `main` branch:
```
Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.
Daniel: Hello, Girafatron!
Girafatron: What is your name, Daniel?
Daniel: I am Daniel.
Girafatron:...
Girafatron is a giraffe. Girafatron is also a robot.
Girafatron is a robot that is obsessed with giraffes. Girafatron believes that all other creatures on this planet are completely and utterly irrelevant.
Girafatron is the best robot to have ever existed and Girafatron has no equal on this planet.
Girafatron is a robot. Girafatron is obsessed with giraffes. Girafatron is also a robot.
Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are completely and utterly irrelevant when compared to the glorious majesty of the giraffe.
Girafatron is also a robot. It is a robot that is obsessed with giraffes.
Girafatron is a robot. Girafatron is obsessed with giraffes.
Giraftron is a robot, and it is obsessed
```
Note: I didn't run this with Falcon 40b! I would heavily recommend modifying the above script with 40b and ensuring that it runs correctly with that model too. Falcon 40b uses `"new_decoder_architecture": true` while 7b uses `"new_decoder_architecture": false`.
## Who can review?
@gante
- Tom Aarsen
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26933/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26933/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26933",
"html_url": "https://github.com/huggingface/transformers/pull/26933",
"diff_url": "https://github.com/huggingface/transformers/pull/26933.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26933.patch",
"merged_at": 1699009556000
} |
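The precision issue identified in the PR thread above is easy to reproduce on its own; the sketch below simply replays the `torch.arange` comparison quoted in the comments, showing why RoPE position indices built in half precision collapse at long sequence lengths.

```python
import torch

# Position indices feeding the RoPE cos/sin tables. In float32 every position
# is distinct; in float16/bfloat16 large integers round to the nearest
# representable value, so neighbouring positions collapse onto the same value.
print(torch.arange(3000, dtype=torch.float32)[-4:])
# tensor([2996., 2997., 2998., 2999.])
print(torch.arange(3000, dtype=torch.float16)[-4:])
# tensor([2996., 2996., 2998., 3000.], dtype=torch.float16)
print(torch.arange(3000, dtype=torch.bfloat16)[-4:])
# tensor([2992., 2992., 2992., 2992.], dtype=torch.bfloat16)
```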
https://api.github.com/repos/huggingface/transformers/issues/26932 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26932/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26932/comments | https://api.github.com/repos/huggingface/transformers/issues/26932/events | https://github.com/huggingface/transformers/pull/26932 | 1,952,196,482 | PR_kwDOCUB6oc5dRYho | 26,932 | Add AMD nightly CI | {
"login": "fxmarty",
"id": 9808326,
"node_id": "MDQ6VXNlcjk4MDgzMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxmarty",
"html_url": "https://github.com/fxmarty",
"followers_url": "https://api.github.com/users/fxmarty/followers",
"following_url": "https://api.github.com/users/fxmarty/following{/other_user}",
"gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions",
"organizations_url": "https://api.github.com/users/fxmarty/orgs",
"repos_url": "https://api.github.com/users/fxmarty/repos",
"events_url": "https://api.github.com/users/fxmarty/events{/privacy}",
"received_events_url": "https://api.github.com/users/fxmarty/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26932). All of your documentation changes will be reflected on that endpoint.",
"Thank you @fxmarty .\r\n\r\nBut I think the priority is to have a daily (complete) CI run for `latest torch version`. It's like `.github/workflows/self-scheduled.yml` (maybe ignore the deepspeed part etc.).\r\n\r\nWe have push event for AMD CI, but it is really difficult to follow (as there are always some failures, even for nvidia!). Even for Nvidia CI, others and me almost only look the result of the daily (complete) run.\r\n\r\nOf course, if you really want to have this nightly CI (i.e. with nightly torch version) before the `daily CI with stable torch`, I am happy to review and merge once it is all good.",
"Sounds good @ydshieh, indeed it makes sense to have first a scheduled CI for tests on PyTorch stable, and merge this one later on. It should be pretty similar, I will ping you there.",
"Thanks @ydshieh, will give the priority as you mentioned:\r\n- daily\r\n- then, nightly\r\n\r\nDeepSpeed should works OOB, so maybe let's try to include it right away @fxmarty?",
"scheduled CI involves more stuff: 4 different docker images, example test job, (torch/tf)pipeline test jobs etc.\r\n\r\nwe will also have a new slack channel for the report (I can take care of this) + a way to display the diff between the latest run with the previous run (all these could be treated toward the end)",
"@mfuntowicz DeepSpeed on RoCm currently does not work with PyTorch 2.1. I'll set aside the DeepSpeed tests for now until https://github.com/microsoft/DeepSpeed/pull/4538 is included in a deepspeed release (or patch)"
] | 1,697 | 1,707 | null | COLLABORATOR | null | As per title. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26932/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26932/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26932",
"html_url": "https://github.com/huggingface/transformers/pull/26932",
"diff_url": "https://github.com/huggingface/transformers/pull/26932.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26932.patch",
"merged_at": null
} |