Dataset schema (column name, feature type, observed range or cardinality):

  url                  stringlengths   62 to 66
  repository_url       stringclasses   1 value
  labels_url           stringlengths   76 to 80
  comments_url         stringlengths   71 to 75
  events_url           stringlengths   69 to 73
  html_url             stringlengths   50 to 56
  id                   int64           377M to 2.15B
  node_id              stringlengths   18 to 32
  number               int64           1 to 29.2k
  title                stringlengths   1 to 487
  user                 dict
  labels               list
  state                stringclasses   2 values
  locked               bool            2 classes
  assignee             dict
  assignees            list
  comments             sequence
  created_at           int64           1.54k to 1.71k
  updated_at           int64           1.54k to 1.71k
  closed_at            int64           1.54k to 1.71k
  author_association   stringclasses   4 values
  active_lock_reason   stringclasses   2 values
  body                 stringlengths   0 to 234k
  reactions            dict
  timeline_url         stringlengths   71 to 75
  state_reason         stringclasses   3 values
  draft                bool            2 classes
  pull_request         dict
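For working with rows of this shape programmatically, here is a minimal sketch using the Hugging Face `datasets` library. The local file name `issues.jsonl` and the filtering step are illustrative assumptions only, not part of this dump; substitute the actual data files or the dataset's Hub identifier.

```python
# Minimal sketch: load rows matching the schema above and inspect a few fields.
# "issues.jsonl" is a hypothetical local path -- replace it with the real data
# files or the dataset's Hub identifier.
from datasets import load_dataset

ds = load_dataset("json", data_files="issues.jsonl", split="train")

print(ds.column_names)   # url, repository_url, ..., draft, pull_request
print(ds[0]["title"])    # title of the first issue/PR record

# Example query: keep only pull requests that were actually merged.
# Plain issues have a null pull_request field; unmerged PRs have merged_at == None.
merged_prs = ds.filter(
    lambda row: row["pull_request"] is not None
    and row["pull_request"].get("merged_at") is not None
)
print(len(merged_prs))
```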
https://api.github.com/repos/huggingface/transformers/issues/26631
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26631/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26631/comments
https://api.github.com/repos/huggingface/transformers/issues/26631/events
https://github.com/huggingface/transformers/pull/26631
1,929,830,645
PR_kwDOCUB6oc5cF5Nb
26,631
Make fsdp ram efficient loading optional
{ "login": "pacman100", "id": 13534540, "node_id": "MDQ6VXNlcjEzNTM0NTQw", "avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pacman100", "html_url": "https://github.com/pacman100", "followers_url": "https://api.github.com/users/pacman100/followers", "following_url": "https://api.github.com/users/pacman100/following{/other_user}", "gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}", "starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pacman100/subscriptions", "organizations_url": "https://api.github.com/users/pacman100/orgs", "repos_url": "https://api.github.com/users/pacman100/repos", "events_url": "https://api.github.com/users/pacman100/events{/privacy}", "received_events_url": "https://api.github.com/users/pacman100/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "That PR is merged @ArthurZucker, Thank you!" ]
1,696
1,697
1,697
CONTRIBUTOR
null
# What does this PR do? 1. Make fsdp ram efficient loading optional. Certain models are having issues when handling meta devices during pre-trained model loading. Fixes https://github.com/huggingface/accelerate/issues/1948 and https://github.com/huggingface/accelerate/issues/2031 by making the ram efficient loading optional. 2. This PR should be merged after PR https://github.com/huggingface/accelerate/pull/2037 is merged.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26631/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26631/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26631", "html_url": "https://github.com/huggingface/transformers/pull/26631", "diff_url": "https://github.com/huggingface/transformers/pull/26631.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26631.patch", "merged_at": 1697462941000 }
https://api.github.com/repos/huggingface/transformers/issues/26630
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26630/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26630/comments
https://api.github.com/repos/huggingface/transformers/issues/26630/events
https://github.com/huggingface/transformers/issues/26630
1,929,789,394
I_kwDOCUB6oc5zBj_S
26,630
Having a `to_grayscale` function
{ "login": "rafaelpadilla", "id": 31217453, "node_id": "MDQ6VXNlcjMxMjE3NDUz", "avatar_url": "https://avatars.githubusercontent.com/u/31217453?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rafaelpadilla", "html_url": "https://github.com/rafaelpadilla", "followers_url": "https://api.github.com/users/rafaelpadilla/followers", "following_url": "https://api.github.com/users/rafaelpadilla/following{/other_user}", "gists_url": "https://api.github.com/users/rafaelpadilla/gists{/gist_id}", "starred_url": "https://api.github.com/users/rafaelpadilla/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rafaelpadilla/subscriptions", "organizations_url": "https://api.github.com/users/rafaelpadilla/orgs", "repos_url": "https://api.github.com/users/rafaelpadilla/repos", "events_url": "https://api.github.com/users/rafaelpadilla/events{/privacy}", "received_events_url": "https://api.github.com/users/rafaelpadilla/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "hello @rafaelpadilla can i work on this issue\r\n", "Hi @Neel-07,\r\n\r\nSure, that'd be awesome. :) \r\nBut let's just wait until one of our core maintainers (@amyeroberts and @ArthurZucker) approves this request.", "okay sure @rafaelpadilla \r\n", "I don't have a strong opinion on this, think that when it comes to vision and converting to grayscale you might have a lot of heuristic that might be different so would just go with the flow. If next model uses the same we can move it otherwise I don't really see a point for now! ", "Alright! :) \r\nSo, I will close this issue for now.\r\nIf a new model also needs grayscale conversion, then we can reopen this issue and work on this." ]
1,696
1,696
1,696
CONTRIBUTOR
null
### Feature request It would be useful to have a general `to_grayscale` function into our `image_transforms.py` so it can be called by any model in their `preprocess()`. ### Motivation Our image processing got more robust and can now accept 1-channel and 3-channel images. Seeing that, new models (like [SuperModel](https://github.com/huggingface/transformers/pull/25786) and potentially new ones) require grayscale images as inputs. Thus, it would be useful to have a "standard" `to_grayscale` function in `image_transforms.py` - similarly as our `convert_to_rgb(...)`. Indeed each model could implement their own version of `to_grayscale`, but I find it useful to have an out-of-the-box function that can be easily called, avoiding redundant implementations of it. ### Your contribution I can make a PR for that if you also think it is useful. @amyeroberts @ArthurZucker
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26630/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/transformers/issues/26630/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/26629
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26629/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26629/comments
https://api.github.com/repos/huggingface/transformers/issues/26629/events
https://github.com/huggingface/transformers/issues/26629
1,929,699,108
I_kwDOCUB6oc5zBN8k
26,629
BLIVA
{ "login": "rafaelpadilla", "id": 31217453, "node_id": "MDQ6VXNlcjMxMjE3NDUz", "avatar_url": "https://avatars.githubusercontent.com/u/31217453?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rafaelpadilla", "html_url": "https://github.com/rafaelpadilla", "followers_url": "https://api.github.com/users/rafaelpadilla/followers", "following_url": "https://api.github.com/users/rafaelpadilla/following{/other_user}", "gists_url": "https://api.github.com/users/rafaelpadilla/gists{/gist_id}", "starred_url": "https://api.github.com/users/rafaelpadilla/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rafaelpadilla/subscriptions", "organizations_url": "https://api.github.com/users/rafaelpadilla/orgs", "repos_url": "https://api.github.com/users/rafaelpadilla/repos", "events_url": "https://api.github.com/users/rafaelpadilla/events{/privacy}", "received_events_url": "https://api.github.com/users/rafaelpadilla/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
open
false
null
[]
[]
1,696
1,696
null
CONTRIBUTOR
null
### Model description BLIVA: A Simple Multimodal LLM for Better Handling of Text-rich Visual Questions BLIVA performs VQA tasks. It is an augmented version of InstructBLIP with Visual Assistant. It incorporates the query embeddings from InstructBLIP and also directly projects encoded patch embeddings into the LLM. "Empirical evidence demonstrates that our model, BLIVA, significantly enhances performance in processing text-rich VQA benchmarks (up to 17.76\% in OCR-VQA benchmark) and in undertaking typical VQA benchmarks (up to 7.9\% in Visual Spatial Reasoning benchmark), comparing to our baseline InstructBLIP." ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation Official repo: https://github.com/mlpc-ucsd/BLIVA Paper: https://arxiv.org/abs/2308.09936
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26629/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26629/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/26628
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26628/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26628/comments
https://api.github.com/repos/huggingface/transformers/issues/26628/events
https://github.com/huggingface/transformers/issues/26628
1,929,555,362
I_kwDOCUB6oc5zAq2i
26,628
[New Model] Retrieval-based Voice Conversion
{ "login": "wfjsw", "id": 2220320, "node_id": "MDQ6VXNlcjIyMjAzMjA=", "avatar_url": "https://avatars.githubusercontent.com/u/2220320?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wfjsw", "html_url": "https://github.com/wfjsw", "followers_url": "https://api.github.com/users/wfjsw/followers", "following_url": "https://api.github.com/users/wfjsw/following{/other_user}", "gists_url": "https://api.github.com/users/wfjsw/gists{/gist_id}", "starred_url": "https://api.github.com/users/wfjsw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wfjsw/subscriptions", "organizations_url": "https://api.github.com/users/wfjsw/orgs", "repos_url": "https://api.github.com/users/wfjsw/repos", "events_url": "https://api.github.com/users/wfjsw/events{/privacy}", "received_events_url": "https://api.github.com/users/wfjsw/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
open
false
null
[]
[ "cc @sanchit-gandhi ", "Hey @wfjsw, thanks for brining this model to our attention! We have recently been trying to push for model on the hub and have as much support as we can there. This is the recommended way of adding new models, since it's easier to integrate them and they can be made available as soon as the code is ready. Here is a [tutorial](https://huggingface.co/docs/transformers/custom_models) on this integration process. I think RVC could make for a great addition to the Hub!" ]
1,696
1,696
null
CONTRIBUTOR
null
### Model description Type: Audio-to-Audio Framework: Pytorch / ONNX License: MIT ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation Source Code: https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI Pretrained Models by Author: https://huggingface.co/lj1995/VoiceConversionWebUI/tree/main Models in the wild: https://www.weights.gg/
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26628/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26628/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/26627
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26627/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26627/comments
https://api.github.com/repos/huggingface/transformers/issues/26627/events
https://github.com/huggingface/transformers/pull/26627
1,929,424,327
PR_kwDOCUB6oc5cEh6s
26,627
docs(zh): review and punctuation & space fix
{ "login": "wfjsw", "id": 2220320, "node_id": "MDQ6VXNlcjIyMjAzMjA=", "avatar_url": "https://avatars.githubusercontent.com/u/2220320?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wfjsw", "html_url": "https://github.com/wfjsw", "followers_url": "https://api.github.com/users/wfjsw/followers", "following_url": "https://api.github.com/users/wfjsw/following{/other_user}", "gists_url": "https://api.github.com/users/wfjsw/gists{/gist_id}", "starred_url": "https://api.github.com/users/wfjsw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wfjsw/subscriptions", "organizations_url": "https://api.github.com/users/wfjsw/orgs", "repos_url": "https://api.github.com/users/wfjsw/repos", "events_url": "https://api.github.com/users/wfjsw/events{/privacy}", "received_events_url": "https://api.github.com/users/wfjsw/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26627). All of your documentation changes will be reflected on that endpoint." ]
1,696
1,696
1,696
CONTRIBUTOR
null
# What does this PR do? Briefly reviewed the Simplified Chinese documentation. - Fix punctuation and spacing issues. - Fixes #26603 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @stevhliu <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26627/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26627/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26627", "html_url": "https://github.com/huggingface/transformers/pull/26627", "diff_url": "https://github.com/huggingface/transformers/pull/26627.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26627.patch", "merged_at": 1696609469000 }
https://api.github.com/repos/huggingface/transformers/issues/26626
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26626/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26626/comments
https://api.github.com/repos/huggingface/transformers/issues/26626/events
https://github.com/huggingface/transformers/issues/26626
1,929,071,041
I_kwDOCUB6oc5y-0nB
26,626
Docs request: installation for various hardware acceleration
{ "login": "jamesbraza", "id": 8990777, "node_id": "MDQ6VXNlcjg5OTA3Nzc=", "avatar_url": "https://avatars.githubusercontent.com/u/8990777?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jamesbraza", "html_url": "https://github.com/jamesbraza", "followers_url": "https://api.github.com/users/jamesbraza/followers", "following_url": "https://api.github.com/users/jamesbraza/following{/other_user}", "gists_url": "https://api.github.com/users/jamesbraza/gists{/gist_id}", "starred_url": "https://api.github.com/users/jamesbraza/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jamesbraza/subscriptions", "organizations_url": "https://api.github.com/users/jamesbraza/orgs", "repos_url": "https://api.github.com/users/jamesbraza/repos", "events_url": "https://api.github.com/users/jamesbraza/events{/privacy}", "received_events_url": "https://api.github.com/users/jamesbraza/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "cc @MKhalusova ", "Thank you for the suggestion! I'll look into this. ", "Hi @jamesbraza ! I looked into this topic. For Apple’s Metal, we already have documentation on how to [use Trainer for accelerated PyTorch Training on Mac](https://huggingface.co/docs/transformers/main/en/main_classes/trainer#using-trainer-for-accelerated-pytorch-training-on-mac).\r\n\r\nWhen it comes to other hardware acceleration libraries, there are no Transformers-specific setup steps. \r\nIf you can use environment variables to make PyTorch use an alternative BLAS implementation, then this is the way to go. \r\nIf you have to rebuild PyTorch from sources to link to `flexiblas` during PyTorch's compile time, for example, again, you can do this. The steps will be PyTorch-specific. \r\n\r\nWith that in mind, it doesn’t make sense to include this in the Transformers docs as this is not Transformers-specific functionality. \r\n\r\nSome additional blog posts on CPU scaling that you might find interesting: \r\n* Part 1 (hardware part): [https://huggingface.co/blog/bert-cpu-scaling-part-1](https://huggingface.co/blog/bert-cpu-scaling-part-1)\r\n* Part 2 (software part): [https://huggingface.co/blog/bert-cpu-scaling-part-2](https://huggingface.co/blog/bert-cpu-scaling-part-2)\r\n", "Thanks for the response! Appreciate you investigating.\r\n\r\nIt seems like your comments, mainly that optimization is done outside of `transformers`, are worth adding to the README somewhere or to a new page in `docs/`. If you feel it's not worthwhile, feel free to close this out. Thanks again!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,696
1,702
1,702
NONE
null
### Feature request It would be nice to have a centralized and standardized page on how to `pip install transformers` with hardware acceleration for various systems (e.g. Metal, cuBLAS, OpenBLAS, etc). Here are some examples: - https://github.com/abetlen/llama-cpp-python/tree/v0.2.11#installation-with-hardware-acceleration - https://github.com/ggerganov/llama.cpp/tree/b1329#cublas Currently, what is mainly mentioned is installing `torch` for Metal, or `accelerate` package. Would be nice if there was a centralized and enhanced documentation page on this. ### Motivation Making use of hardware for additional inference speed, as well as saving on carbon emissions ### Your contribution TBD
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26626/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26626/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/26625
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26625/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26625/comments
https://api.github.com/repos/huggingface/transformers/issues/26625/events
https://github.com/huggingface/transformers/pull/26625
1,928,743,416
PR_kwDOCUB6oc5cCN1L
26,625
Update chat template docs with more tips on writing a template
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,696
1,696
1,696
MEMBER
null
This PR updates the chat templates doc with more tips on writing your own templates.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26625/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26625/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26625", "html_url": "https://github.com/huggingface/transformers/pull/26625", "diff_url": "https://github.com/huggingface/transformers/pull/26625.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26625.patch", "merged_at": 1696590280000 }
https://api.github.com/repos/huggingface/transformers/issues/26624
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26624/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26624/comments
https://api.github.com/repos/huggingface/transformers/issues/26624/events
https://github.com/huggingface/transformers/issues/26624
1,928,656,841
I_kwDOCUB6oc5y9PfJ
26,624
Beam search calculates mean logprobs wrong?
{ "login": "TomerRonen34", "id": 38310481, "node_id": "MDQ6VXNlcjM4MzEwNDgx", "avatar_url": "https://avatars.githubusercontent.com/u/38310481?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TomerRonen34", "html_url": "https://github.com/TomerRonen34", "followers_url": "https://api.github.com/users/TomerRonen34/followers", "following_url": "https://api.github.com/users/TomerRonen34/following{/other_user}", "gists_url": "https://api.github.com/users/TomerRonen34/gists{/gist_id}", "starred_url": "https://api.github.com/users/TomerRonen34/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TomerRonen34/subscriptions", "organizations_url": "https://api.github.com/users/TomerRonen34/orgs", "repos_url": "https://api.github.com/users/TomerRonen34/repos", "events_url": "https://api.github.com/users/TomerRonen34/events{/privacy}", "received_events_url": "https://api.github.com/users/TomerRonen34/received_events", "type": "User", "site_admin": false }
[ { "id": 2796628563, "node_id": "MDU6TGFiZWwyNzk2NjI4NTYz", "url": "https://api.github.com/repos/huggingface/transformers/labels/WIP", "name": "WIP", "color": "234C99", "default": false, "description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress" } ]
closed
false
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[ { "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false } ]
[ "Hi @TomerRonen34 👋 \r\n\r\nA few points that help explain what you're seeing, and how to disable certain default behaviors:\r\n1. It's actually the other way around, there is a default preference for long generations :) Taking the first scores from your example, `-0.0049241515807807446` (long prompt) > `-0.17024032771587372` (short prompt). You can also see that the sequence scores are sorted in descending order.\r\n2. This preference for long generations by default is intentional. The score of a sequence always goes down as we add words (the logits are always negative). Therefore, if a longer generation has a similar score to a shorter generation, then the longer generation's is composed of high-probability tokens, which usually means it is a better answer to the prompt.\r\n3. The default for `length_penalty` is `1.0`. Counter-intuitively, summarization models usually set it to `2.0` -- these models are trained to output short sequences (i.e. quickly generate an EOS token), so when a high-probability sequence of tokens comes along, it is almost always relevant to the output.\r\n4. You can disable this behavior by passing `length_penalty=0.0` to `generate` (or by setting it in the [generation config](https://huggingface.co/docs/transformers/generation_strategies#default-text-generation-configuration))\r\n5. Since you're playing with the scores, [`model.compute_transition_scores`](https://huggingface.co/docs/transformers/v4.34.0/en/main_classes/text_generation#transformers.GenerationMixin.compute_transition_scores) may be relevant to you :)\r\n\r\nI hope this helps explaining what's going on, and how to control it.", "Hi @gante , I understand `length_penalty` and the effect of averaging logprobs instead of summing them. I was unaware of `compute_transition_scores`, it looks very convenient. Thanks!\r\n\r\nI'm afraid my previous code example wasn't clear enough, as it didn't highlight the main issue which I addressed in the \"Expected behavior\" section. I hereby provide a better reproduction example which clearly shows the preference of the current beam search implementation to shorter generations, which I think is due to a bug in the way the beam score is calculated [here](https://github.com/huggingface/transformers/blob/75a33d60f25d99ff8cdd657d6ba685dc4336a0d1/src/transformers/generation/beam_search.py#L938), where the sum of the generated logprobs is divided by the entire sequence length, including the prompt.\r\n\r\nIn the following example, all generations should have the exact same score, since I force the logprobs of all generated tokens to be exactly the same. However, we see that the score is actually\r\n`logprob * gen_length / (gen_length + prompt_length - 1)`.\r\n\r\nRunning this code prints:\r\n```\r\n Beam search outputs 3 sequences, that should all have the same score\r\n (logprob=-10.825), but instead they have these scores, which prefer shorter generations:\r\n buggy_score = logprob * gen_length / (gen_length + prompt_length - 1)\r\n [-1.3531131744384766, -2.706226348876953, -3.4919049739837646]\r\n```\r\nThe code:\r\n```python\r\nimport torch\r\nfrom transformers import AutoTokenizer, AutoModelForCausalLM\r\nfrom transformers import LogitsProcessor, LogitsProcessorList\r\n\r\n\r\nclass UniformScoreFixedLengthsLogitsProcessor(LogitsProcessor):\r\n \"\"\"\r\n This logits processor forces the beam search to output sequences with predetermined lengths\r\n (e.g. [3,7,10]), where all the tokens have the exact same logprob. 
This logprob represents a\r\n uniform distribution over the vocabulary.\r\n \"\"\"\r\n\r\n def __init__(self, generation_lengths: list[int], eos_token_id: int):\r\n assert len(set(generation_lengths)) == len(generation_lengths), \\\r\n \"This example is designed for unique lengths\"\r\n self.generation_lengths = sorted(generation_lengths)\r\n self.eos_token_id = eos_token_id\r\n self.i_seq = 0\r\n self.times_called = 0\r\n\r\n def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:\r\n self.times_called += 1\r\n uniform_logprob = _calculate_uniform_logprob(vocab_size=scores.shape[1])\r\n scores[:] = uniform_logprob\r\n scores[:self.i_seq, :] = -float(\"inf\")\r\n if (self.i_seq < len(self.generation_lengths) and\r\n self.times_called == self.generation_lengths[self.i_seq]):\r\n scores[self.i_seq, :] = -float(\"inf\")\r\n scores[self.i_seq, self.eos_token_id] = uniform_logprob\r\n self.i_seq += 1\r\n return scores\r\n\r\n\r\ndef _calculate_uniform_logprob(vocab_size: int) -> float:\r\n return torch.zeros(vocab_size).log_softmax(-1)[0].item()\r\n\r\n\r\ndef run_example():\r\n device = \"cuda\"\r\n model_name = \"gpt2\"\r\n prompt = \"a a a a a a a a a a a a a a a a a a a a a a\"\r\n\r\n generation_lengths = [3, 7, 10]\r\n beam_size = len(generation_lengths)\r\n num_return_sequences = beam_size\r\n max_new_tokens = max(generation_lengths) + 100 # won't be reached anyway\r\n\r\n with torch.device(device), torch.no_grad():\r\n tokenizer = AutoTokenizer.from_pretrained(model_name)\r\n model = AutoModelForCausalLM.from_pretrained(model_name)\r\n\r\n logits_processor = UniformScoreFixedLengthsLogitsProcessor(generation_lengths=generation_lengths,\r\n eos_token_id=tokenizer.eos_token_id)\r\n\r\n batch = tokenizer(prompt, return_tensors=\"pt\")\r\n outputs = model.generate(\r\n **batch, num_beams=beam_size, num_return_sequences=num_return_sequences,\r\n max_new_tokens=max_new_tokens, output_scores=True, return_dict_in_generate=True,\r\n logits_processor=LogitsProcessorList([logits_processor]))\r\n\r\n prompt_length = len(batch[\"input_ids\"][0])\r\n uniform_logprob = _calculate_uniform_logprob(vocab_size=tokenizer.vocab_size)\r\n buggy_scores = [uniform_logprob * gen_length / (gen_length + prompt_length - 1)\r\n for gen_length in sorted(generation_lengths)]\r\n assert torch.allclose(torch.tensor(buggy_scores), outputs.sequences_scores.cpu())\r\n\r\n print(f\"\"\"\r\n Beam search outputs {beam_size} sequences, that should all have the same score\r\n (logprob={uniform_logprob:.3f}), but instead they have these scores, which prefer shorter generations:\r\n buggy_score = logprob * gen_length / (gen_length + prompt_length - 1)\r\n {outputs.sequences_scores.tolist()}\r\n \"\"\")\r\n\r\n\r\nrun_example()\r\n```", "I see, accounting for the prompt length in the division (when `length_penalty != 0.0`) does change the beam search behavior. For context, beam search is mostly used with encoder-decoder models, whose prompt length is always `1`, and thus it wasn't a problem in the vast majority of use cases, nor it manifested as a preference for shorter sequences.\r\n\r\nI agree that it should be fixed, but the priority is not high -- we want to refactor beam search first.\r\n", "I had the same issue during my research experimentation with decoder-only LLMs, and found the exactly same cause as @TomerRonen34 already demoed. I have fixed the issue locally by making some minimal changes to the source codes, but I haven't submitted the PR yet. 
Actually I think this issue should be put into a higher priority because most recent LLMs are decoder-only, and the current problematic beam search with these LLMs have already been used in recently published research works without notice, e.g., https://github.com/lorenzkuhn/semantic_uncertainty. Since the `transformers` library is actively used by many AI researchers, many incorrect experimental results or misleading conclusions may be produced with the current buggy implementation. We should try to fix it ASAP. If @gante is busy with other more important refactoring works, I can submit a PR with a quick simple fix first.", "It's great to hear that you have a fix @VsonicV! I wrote my own local fix as well, but mine relies on `len(beam_indices)` to find the number of generated tokens so far, so it only works when `.generate` is called with `return_dict_in_generate=True, output_scores=True`, otherwise `beam_indices` is None.", "@VsonicV I'd love to merge the PR, assuming the fix is not large 🤗 ", "@gante I have created the PR #27351 for your review.\r\n\r\n@TomerRonen34 Feel free to check and test the PR #27351 " ]
1,696
1,700
1,700
NONE
null
### System Info - `transformers` version: 4.33.3 - Platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.31 - Python version: 3.10.13 - Huggingface_hub version: 0.17.3 - Safetensors version: 0.3.3 - Accelerate version: 0.23.0 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help? @gante (recommended for generate-related issues) @patrickvonplaten (wrote the code according to git-blame) ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM device = "cuda" model_name = "gpt2" short_prompt = "Once upon a time" long_prompt = """In a hole in the ground there lived a hobbit. Not a nasty, dirty, wet hole, filled with the ends of bworms and an oozy smell, nor yet a dry, bare, sandy hole with nothing in it to sit down on or to eat: it a was a hobbit-hole, and that means comfort. It had a perfectly round door like a porthole, painted green, with a shiny yellow brass knob in the exact middle. The door opened on to a tube-shaped hall like a tunnel: a very comfortable tunnel without smoke, with panelled walls, and floors tiled and carpeted, provided with polished chairs, and lots and lots of pegs for hats and coats- the hobbit was fond of visitors. The tunnel wound on and on – going fairly but not quite straight into the side of the hill – The Hill, as all the people for many miles around called it – and many little round doors opened out of it, first on one side and then on another. No going upstairs for the hobbit: bedrooms, bathrooms, cellars, pantries (lots of these), wardrobes (he had whole rooms devoted to clothes), kitchens, dining-rooms, all were on the same floor, and indeed on the same passage. The best rooms were all on the lefthand side (going in), for these were the only ones to have windows, deep-set round windows looking over his garden, and meadows beyond, sloping down to the river. This hobbit was a very well-to-do hobbit, and his name was Baggins. The Bagginses have lived in the neighbourhood of The Hill for time out of mind, and people considered them very respectable, not only because most of them were rich, but also because they never had any adventures or did anything unexpected: you could tell what a Baggins would say on any question without the bother of asking him. 
This is a story of how a Baggins had an adventure, and""" beam_size = 5 max_new_tokens = 1 num_return_sequences = beam_size with torch.device(device), torch.no_grad(): tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name) for prompt_name, prompt in [("short_prompt", short_prompt), ("long_prompt", long_prompt)]: batch = tokenizer(prompt, return_tensors="pt") outputs = model.generate(**batch, num_beams=beam_size, num_return_sequences=num_return_sequences, max_new_tokens=max_new_tokens, output_scores=True, return_dict_in_generate=True) print(f"{prompt_name}: scores={outputs.sequences_scores.tolist()}") ``` Prints: ``` short_prompt: scores=[-0.17024032771587372, -0.5479289293289185, -0.6405749320983887, -0.6600505113601685, -0.7051623463630676] long_prompt: scores=[-0.0049241515807807446, -0.006088315974920988, -0.006767737679183483, -0.006866625044494867, -0.006999899633228779] ``` ### Expected behavior When doing beam search, beam scores are normalized to represent the average token logprob. However, the current implementation divides the sum of **_generated token logprobs_** by the length of the **_entire sequence, including prompt_**. This creates inconsistencies between the scores of sequences of different lengths, and also prefers shorter generations. [Code here](https://github.com/huggingface/transformers/blob/75a33d60f25d99ff8cdd657d6ba685dc4336a0d1/src/transformers/generation/beam_search.py#L938). My reproduction example shows that the absolute values of beam scores returned by generating a single token with a long prompt are orders of magnitude smaller than beam scores returned by a short prompt. The main scenario where this behavior is problematic is for beams that terminate with an EOS before `max_new_tokens` is reached, since the denominator in their score calculation will be skewed. For example, if we have 2 candidates with lengths `l1` and `l2`, where all token logprobs are `s` and the prompt length is `p`, we'll have: `score_i = l_i * s / (l_i + p) = s / (1 + p / l_i)`, showing a preference for shorter generations since `s < 0`.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26624/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26624/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/26623
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26623/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26623/comments
https://api.github.com/repos/huggingface/transformers/issues/26623/events
https://github.com/huggingface/transformers/pull/26623
1,928,565,764
PR_kwDOCUB6oc5cBoIE
26,623
🎉 Add prompt tuning for bigcode models
{ "login": "mayank31398", "id": 32954280, "node_id": "MDQ6VXNlcjMyOTU0Mjgw", "avatar_url": "https://avatars.githubusercontent.com/u/32954280?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mayank31398", "html_url": "https://github.com/mayank31398", "followers_url": "https://api.github.com/users/mayank31398/followers", "following_url": "https://api.github.com/users/mayank31398/following{/other_user}", "gists_url": "https://api.github.com/users/mayank31398/gists{/gist_id}", "starred_url": "https://api.github.com/users/mayank31398/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mayank31398/subscriptions", "organizations_url": "https://api.github.com/users/mayank31398/orgs", "repos_url": "https://api.github.com/users/mayank31398/repos", "events_url": "https://api.github.com/users/mayank31398/events{/privacy}", "received_events_url": "https://api.github.com/users/mayank31398/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hmmm I'm not sure we have the expectation that all models have a `word_embeddings` property, at least we really don't enforce it right now.\r\n\r\nDoes PEFT make that assumption?", "Hi, I just realized PEFT doesn't make any such assumptions.\r\n\r\n\r\n![Screenshot 2023-10-06 at 2 45 07 PM](https://github.com/huggingface/transformers/assets/32954280/df9b6371-2762-44c7-bc88-a7a74637a0b4)\r\n^^ this is the logic used by PEFT to get the model's vocab.\r\n\r\nWhy it doesn't work for me:\r\n```python\r\n def _setup_model_for_peft(self, args: Union[TrainingArgs, InferenceArgs], model_kwargs: dict) -> None:\r\n self.deepspeed_config = HfDeepSpeedConfig(get_deepspeed_config(args))\r\n\r\n self.peft_config = PromptTuningConfig(\r\n task_type=TaskType.SEQ_2_SEQ_LM if self.is_encoder_decoder else TaskType.CAUSAL_LM,\r\n prompt_tuning_init=args.prompt_tuning_init,\r\n num_virtual_tokens=args.num_virtual_tokens,\r\n prompt_tuning_init_text=args.prompt_tuning_init_text,\r\n tokenizer_name_or_path=args.model_name,\r\n )\r\n self.model = args.model_class.from_pretrained(**model_kwargs, torch_dtype=self.dtype)\r\n```\r\n\r\nI am initializing the model like this. this creates a problem due to `HfDeepSpeedConfig` being called.\r\nDeepSpeed converts all the tensors to dummy device tensors with shape 0 as follows for faster initialization of the model.\r\n\r\n```shell\r\nlm_head torch.Size([0])\r\n```\r\n\r\nI am not sure what a good solution for this is.\r\nBut this PR is not needed to resolve the issue.\r\n@younesbelkada do you have any suggestions?", "Turns out that the issue can be resolved using this fix in PEFT:\r\n```python\r\nif value.shape[0] == self.base_model.config.vocab_size or (hasattr(value, \"ds_shape\") and value.ds_shape[0] == self.base_model.config.vocab_size):\r\n```", "this has been fixed with a PR to peft\r\nno changes are needed to this repo." ]
1,696
1,697
1,697
CONTRIBUTOR
null
# What does this PR do? Enables prompt tuning for BigCode models. Currently, prompt tuning for bigcode models fail due to this error AttributeError (no GPTBigCodeForCausalLM has no attribute named `word_embeddings`). @ArthurZucker, @younesbelkada, @pacman100
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26623/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26623/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26623", "html_url": "https://github.com/huggingface/transformers/pull/26623", "diff_url": "https://github.com/huggingface/transformers/pull/26623.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26623.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/26622
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26622/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26622/comments
https://api.github.com/repos/huggingface/transformers/issues/26622/events
https://github.com/huggingface/transformers/pull/26622
1,928,395,337
PR_kwDOCUB6oc5cBCaW
26,622
Don't install `pytorch-quantization` in Doc Builder docker file
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "For the record, here is the issue when running the suggested installation command: at **the doc build time**\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.8/site-packages/transformers/utils/import_utils.py\", line 1282, in _get_module\r\n return importlib.import_module(\".\" + module_name, self.__name__)\r\n File \"/usr/local/lib/python3.8/importlib/__init__.py\", line 127, in import_module\r\n return _bootstrap._gcd_import(name[level:], package, level)\r\n File \"<frozen importlib._bootstrap>\", line 1014, in _gcd_import\r\n File \"<frozen importlib._bootstrap>\", line 991, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 975, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 671, in _load_unlocked\r\n File \"<frozen importlib._bootstrap_external>\", line 843, in exec_module\r\n File \"<frozen importlib._bootstrap>\", line 219, in _call_with_frames_removed\r\n File \"/usr/local/lib/python3.8/site-packages/transformers/models/qdqbert/modeling_qdqbert.py\", line 60, in <module>\r\n from pytorch_quantization import nn as quant_nn\r\n File \"/usr/local/lib/python3.8/site-packages/pytorch_quantization/__init__.py\", line 20, in <module>\r\n from .quant_modules import *\r\n File \"/usr/local/lib/python3.8/site-packages/pytorch_quantization/quant_modules.py\", line 23, in <module>\r\n from pytorch_quantization import nn as quant_nn\r\n File \"/usr/local/lib/python3.8/site-packages/pytorch_quantization/nn/__init__.py\", line 19, in <module>\r\n from pytorch_quantization.nn.modules.tensor_quantizer import *\r\n File \"/usr/local/lib/python3.8/site-packages/pytorch_quantization/nn/modules/tensor_quantizer.py\", line 24, in <module>\r\n from pytorch_quantization.tensor_quant import QuantDescriptor, tensor_quant, fake_tensor_quant, scaled_e4m3\r\n File \"/usr/local/lib/python3.8/site-packages/pytorch_quantization/tensor_quant.py\", line 28, in <module>\r\n from pytorch_quantization import cuda_ext\r\nImportError: libcudart.so.11.0: cannot open shared object file: No such file or directory\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.8/site-packages/doc_builder/build_doc.py\", line 197, in build_mdx_files\r\n content, new_anchors, source_files, errors = resolve_autodoc(\r\n File \"/usr/local/lib/python3.8/site-packages/doc_builder/build_doc.py\", line 123, in resolve_autodoc\r\n doc = autodoc(\r\n File \"/usr/local/lib/python3.8/site-packages/doc_builder/autodoc.py\", line 474, in autodoc\r\n obj = find_object_in_package(object_name=object_name, package=package)\r\n File \"/usr/local/lib/python3.8/site-packages/doc_builder/autodoc.py\", line 39, in find_object_in_package\r\n submodule = getattr(module, split, None)\r\n File \"/usr/local/lib/python3.8/site-packages/transformers/utils/import_utils.py\", line 1273, in __getattr__\r\n value = getattr(module, name)\r\n File \"/usr/local/lib/python3.8/site-packages/transformers/utils/import_utils.py\", line 1272, in __getattr__\r\n module = self._get_module(self._class_to_module[name])\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"/usr/local/bin/doc-builder\", line 8, in <module>\r\n sys.exit(main())\r\n File \"/usr/local/lib/python3.8/site-packages/doc_builder/commands/doc_builder_cli.py\", line 47, in main\r\n 
args.func(args)\r\n File \"/usr/local/lib/python3.8/site-packages/doc_builder/commands/build.py\", line 102, in build_command\r\n build_doc(\r\n File \"/usr/local/lib/python3.8/site-packages/doc_builder/build_doc.py\", line 365, in build_doc\r\n anchors_mapping, source_files_mapping = build_mdx_files(\r\n File \"/usr/local/lib/python3.8/site-packages/doc_builder/build_doc.py\", line 230, in build_mdx_files\r\n raise type(e)(f\"There was an error when converting {file} to the MDX format.\\n\" + e.args[0]) from e\r\nRuntimeError: There was an error when converting transformers/docs/source/en/model_doc/qdqbert.md to the MDX format.\r\nFailed to import transformers.models.qdqbert.modeling_qdqbert because of the following error (look up to see its traceback):\r\nlibcudart.so.11.0: cannot open shared object file: No such file or directory\r\n\r\n```", "Thanks Yih-Dar!" ]
1,696
1,696
1,696
COLLABORATOR
null
# What does this PR do? Doc builder docker image build starts to fail with ``` The package you are trying to install is only a placeholder project on PyPI.org repository. This package is hosted on NVIDIA Python Package Index. This package can be installed as: $ pip install --no-cache-dir --extra-index-url https://pypi.nvidia.com/ pytorch-quantization ``` I tried to install it with the suggested command, but the doc build step will fail with some cuda libary issue. (when building `transformers/docs/source/en/model_doc/qdqbert.md`) **I removed the line that installs `pytorch-quantization` and doc build can pass (and docker image built).**
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26622/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26622/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26622", "html_url": "https://github.com/huggingface/transformers/pull/26622", "diff_url": "https://github.com/huggingface/transformers/pull/26622.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26622.patch", "merged_at": 1696517871000 }
https://api.github.com/repos/huggingface/transformers/issues/26621
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26621/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26621/comments
https://api.github.com/repos/huggingface/transformers/issues/26621/events
https://github.com/huggingface/transformers/pull/26621
1,928,376,712
PR_kwDOCUB6oc5cA-Ry
26,621
[DO NOT MERGE] Testing new release candidates.
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,696
1,696
1,696
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26621/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26621/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26621", "html_url": "https://github.com/huggingface/transformers/pull/26621", "diff_url": "https://github.com/huggingface/transformers/pull/26621.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26621.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/26619
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26619/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26619/comments
https://api.github.com/repos/huggingface/transformers/issues/26619/events
https://github.com/huggingface/transformers/issues/26619
1,928,270,059
I_kwDOCUB6oc5y7xDr
26,619
LayoutLMv3FeatureExtractor(apply_ocr=True) not returning words
{ "login": "rahulss14", "id": 45919225, "node_id": "MDQ6VXNlcjQ1OTE5MjI1", "avatar_url": "https://avatars.githubusercontent.com/u/45919225?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rahulss14", "html_url": "https://github.com/rahulss14", "followers_url": "https://api.github.com/users/rahulss14/followers", "following_url": "https://api.github.com/users/rahulss14/following{/other_user}", "gists_url": "https://api.github.com/users/rahulss14/gists{/gist_id}", "starred_url": "https://api.github.com/users/rahulss14/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rahulss14/subscriptions", "organizations_url": "https://api.github.com/users/rahulss14/orgs", "repos_url": "https://api.github.com/users/rahulss14/repos", "events_url": "https://api.github.com/users/rahulss14/events{/privacy}", "received_events_url": "https://api.github.com/users/rahulss14/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discuss.huggingface.co) instead?\r\n\r\nThanks! ", "Hello \r\nthank you for your prompt response , I am new to this can you answer this here . I will ask my further questions on forum .\r\n\r\n\r\nAwaiting your response ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,696
1,699
1,699
NONE
null
I have trained a LILT using my custom dataset , everything works fine but I am not getting ocr words of the detected regions . How can I get the words .
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26619/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26619/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/26618
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26618/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26618/comments
https://api.github.com/repos/huggingface/transformers/issues/26618/events
https://github.com/huggingface/transformers/pull/26618
1,928,225,999
PR_kwDOCUB6oc5cAczY
26,618
Register ModelOutput as supported torch pytree nodes
{ "login": "XuehaiPan", "id": 16078332, "node_id": "MDQ6VXNlcjE2MDc4MzMy", "avatar_url": "https://avatars.githubusercontent.com/u/16078332?v=4", "gravatar_id": "", "url": "https://api.github.com/users/XuehaiPan", "html_url": "https://github.com/XuehaiPan", "followers_url": "https://api.github.com/users/XuehaiPan/followers", "following_url": "https://api.github.com/users/XuehaiPan/following{/other_user}", "gists_url": "https://api.github.com/users/XuehaiPan/gists{/gist_id}", "starred_url": "https://api.github.com/users/XuehaiPan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/XuehaiPan/subscriptions", "organizations_url": "https://api.github.com/users/XuehaiPan/orgs", "repos_url": "https://api.github.com/users/XuehaiPan/repos", "events_url": "https://api.github.com/users/XuehaiPan/events{/privacy}", "received_events_url": "https://api.github.com/users/XuehaiPan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This looks important but I don't have the bandwidth to dive into it, could you take a look if you have bandwidth @ydshieh ?", "Hi @XuehaiPan,\r\n\r\nCould you elaborate a bit more on why we need to register `ModelOutput` itself: why #25358 isn't enough?\r\n\r\n", "> Could you elaborate a bit more on why we need to register `ModelOutput` itself: why #25358 isn't enough?\r\n\r\n@ydshieh\r\n\r\n1. `ModelOutput` is a container-like type, a subclass of `OrderedDict`. This fits the definition of a pytree node rather than a leaf.\r\n2. The PyTorch upstream provides a context manager to register `ModelOutput` itself and its subclasses as pytree nodes. When we exit the context manager in `__exit__` method, it will unregister the types that registered in the `__enter__` method.\r\nhttps://github.com/pytorch/pytorch/blob/53a9ac534c777047abc2b6b293f238865592291e/torch/onnx/_internal/fx/dynamo_graph_extractor.py#L31-L103\r\n\r\n We have already registered the `ModelOutput`'s subclasses as pytree node type. So in the `__enter__` method, it will additionally register `ModelOutput` as pytree node type, and unregister it in the `__exit__` method. All subclasses of `ModelOutput` in the pytree node type registry will remain untouched. See my comment in https://github.com/pytorch/pytorch/pull/109684#discussion_r1347384106.\r\n\r\n I think deleting or updating the pytree node registry is dangerous. It could lead to potential bugs when the developer passes the treespec object out of the scope of the context manager.\r\n\r\n ```python\r\n with PyTreeExtensionContext():\r\n # in the context\r\n # ModelOutput is a container-like node type, it will live in the treespec object\r\n leaves, treespec = pytree.flatten(tree)\r\n\r\n # out of the context\r\n # ModelOutput is not in the pytree node type registry and it will be considered as leaf\r\n # but in the treespec object, ModelOutput a node type\r\n pytree.tree_unflatten(leaves, treespec) # -> KeyError: cannot found ModelOutput in pytree.SUPPORTED_NODES\r\n ```\r\n\r\n I could either do not register the `ModelOutput` type in the context manager in the PyTorch upstream, or register it as pytree node in the first place in transformers. I think it is more practical to register both `ModelOutput` and its subclasses as node type (e.g., for case `isinstance(obj, ModelOutput)`). ", "Hi @XuehaiPan : I am not familiar with this, so would love to (and need) a bit more information 😅 \r\n\r\n- In the PyTorch upstream, `ModelOutput ` is already registered as as pytree nodes`?\r\n - and if so, why we need do this at both in `pytorch` and `transformers`\r\n - if they are not doing exactly the same job, what are the difference between `register in torch` and `register in transformers`\r\n\r\n- `... in the __enter__ method, it will additionally register ModelOutput as pytree node type`:\r\n - so #25358 (dealing with subclasses) will register the parent class `ModelOutput` as pytree node type ...? (because of your work in the torch upstream?)\r\n - and because of this, we need this PR, so `__exit__` won't unregister it?", "> `... in the __enter__ method, it will additionally register ModelOutput as pytree node type`:\r\n> \r\n> * so [Register ModelOutput subclasses as supported torch.utils._pytree nodes #25358](https://github.com/huggingface/transformers/pull/25358) (dealing with subclasses) will register the parent class `ModelOutput` as pytree node type ...? 
~(because of your work in the torch upstream?)~\r\n> * and because of this, we need this PR, so `__exit__` won't unregister it?\r\n\r\n@ydshieh This is correct. Let me elaborate this.\r\n\r\nIn the PyTorch upstream:\r\n\r\nhttps://github.com/pytorch/pytorch/blob/359336e3e9a0f67974e53805b5207fbbbc149490/torch/onnx/_internal/fx/dynamo_graph_extractor.py#L93-L98\r\n\r\n```python\r\n # All 'ModelOutput' subclasses are defined under module 'modeling_outputs'.\r\n named_model_output_classes = inspect.getmembers(\r\n modeling_outputs,\r\n lambda x: inspect.isclass(x)\r\n and issubclass(x, modeling_outputs.ModelOutput),\r\n )\r\n```\r\n\r\nThe model output classes are determined by `issubclass(cls, ModelOutput)`. It will treat the `ModelOutput` itself as model output class.\r\n\r\n```python\r\n>>> issubclass(ModelOutput, ModelOutput)\r\nTrue\r\n```\r\n\r\n```python\r\nIn [1]: from transformers import modeling_outputs\r\n\r\nIn [2]: import inspect\r\n\r\nIn [3]: named_model_output_classes = inspect.getmembers(\r\n ...: modeling_outputs,\r\n ...: lambda x: inspect.isclass(x)\r\n ...: and issubclass(x, modeling_outputs.ModelOutput),\r\n ...: )\r\n\r\nIn [4]: dict(named_model_output_classes)\r\nOut[4]: \r\n{'BackboneOutput': transformers.modeling_outputs.BackboneOutput,\r\n ...,\r\n 'MoEModelOutputWithPastAndCrossAttentions': transformers.modeling_outputs.MoEModelOutputWithPastAndCrossAttentions,\r\n 'ModelOutput': transformers.utils.generic.ModelOutput,\r\n 'MultipleChoiceModelOutput': transformers.modeling_outputs.MultipleChoiceModelOutput,\r\n ...,\r\n 'XVectorOutput': transformers.modeling_outputs.XVectorOutput}\r\n```\r\n\r\nIn the PyTreeExtensionContext manager, it will ignore types that are already registered in the `pytree.SUPPORTED_NODES`.\r\n\r\nhttps://github.com/pytorch/pytorch/blob/359336e3e9a0f67974e53805b5207fbbbc149490/torch/onnx/_internal/fx/dynamo_graph_extractor.py#L67-L72\r\n\r\n```python\r\n if class_type in pytree.SUPPORTED_NODES or class_type in self._extensions:\r\n # PyTree node already registered.\r\n # E.g., `huggingface/transformer` registers `ModelOutput` as PyTree node after\r\n # https://github.com/huggingface/transformers/pull/25358.\r\n return\r\n self._extensions[class_type] = (flatten_func, unflatten_func)\r\n```\r\n\r\nWithout this PR, the upstream code behaves like:\r\n\r\n```python\r\n# This command will register all subclasses of `ModelOutput` except `ModelOutput`\r\n# itself as pytree node.\r\nimport transformers\r\n\r\n# <<< at here\r\n# - all subclasses of `ModelOutput` are pytree node\r\n# - `ModelOutput` is not pytree node\r\n\r\n# This command will add `ModelOutput` into `context._extension` because it is\r\n# not registered as pytree node.\r\n# All subclasses of `ModelOutput` will be ignored because they are already\r\n# registered as pytree node by transformers.\r\ncontext = PyTreeExtensionContext()\r\n\r\n# <<< at here\r\n# - all subclasses of `ModelOutput` are pytree node\r\n# - `ModelOutput` is not pytree node\r\n# - context._extension = {ModelOutput: (..., ...)}\r\n\r\n# This command will register elements in `context._extension` (which is only\r\n# `ModelOutput`) as pytree node.\r\ncontext.__enter__()\r\n\r\n# <<< at here\r\n# - all subclasses of `ModelOutput` are pytree node\r\n# - `ModelOutput` is pytree node\r\n# - context._extension = {ModelOutput: (..., ...)}\r\n\r\n# do something in the context\r\n...\r\n\r\n# This command will unregister elements in `context._extension` (which is only\r\n# `ModelOutput`) as pytree 
node.\r\ncontext.__exit__()\r\n\r\n# <<< at here\r\n# - all subclasses of `ModelOutput` are pytree node\r\n# - `ModelOutput` is not pytree node\r\n```\r\n\r\nwith this PR:\r\n\r\n```python\r\n# This command will register all subclasses of `ModelOutput` and `ModelOutput`\r\n# itself as pytree node.\r\nimport transformers\r\n\r\n# <<< at here\r\n# - all subclasses of `ModelOutput` are pytree node\r\n# - `ModelOutput` itself is also pytree node\r\n\r\n# All subclasses of `ModelOutput` and `ModelOutput` itself will be ignored\r\n# because they are already registered as pytree node by transformers.\r\n# This is a no-op and `context._extension` is empty.\r\ncontext = PyTreeExtensionContext()\r\n\r\n# <<< at here\r\n# - all subclasses of `ModelOutput` are pytree node\r\n# - `ModelOutput` itself is also pytree node\r\n# - context._extension = {} # empty\r\n\r\n# This command will register elements in `context._extension` (which is empty)\r\n# as pytree node. This is a no-op.\r\ncontext.__enter__()\r\n\r\n# <<< at here\r\n# - all subclasses of `ModelOutput` are pytree node\r\n# - `ModelOutput` is pytree node\r\n# - context._extension = {} # empty\r\n\r\n# do something in the context\r\n...\r\n\r\n# This command will unregister elements in `context._extension` (which is empty)\r\n# as pytree node. This is a no-op.\r\ncontext.__exit__()\r\n\r\n# <<< at here\r\n# - all subclasses of `ModelOutput` are pytree node\r\n# - `ModelOutput` itself is also pytree node\r\n```", "@XuehaiPan Thank you a lot. I will take a look next week! This is really new to me 💪 !", "@ydshieh Hi, any update for this?", "Hi @XuehaiPan, thank you a lot of your super clean and detailed comment, it helps me a lot to get started. So yes, **`ModelOutput` itself will behave differently depending on if it is inside the context manager or outside it.**\r\n\r\n> I think deleting or updating the pytree node registry is dangerous\r\n\r\nI have 2 (general) questions:\r\n\r\n - I understand what you mentioned. But does this mean pytorch unstream has to register all standard container-like classes (list, dict, etc.) as pytree nodes?\r\n - (I guess so, but want to hear from you 😄 ) \r\n - `ModelOutput` is not designed to be used directly (well, to be fair, within `transformers` code base).\r\n - All usage should inheritate `ModelOutput` and specify the attributes an output have), see, for example, https://github.com/huggingface/transformers/blob/929134bf65ac986c12c423c30b0db8a239f3b195/src/transformers/modeling_outputs.py#L25-L47\r\n - (it's kind abstract class, despite we can still use it directly, but that usage is outside `transformers` scope.)\r\n - Moreover, I would say it is super rare users will use `ModelOutput` directly (in a manually way) to store their model output (without creating a subclass first). \r\n - (and even more rare, this usage along with `PyTreeExtensionContext` context manager) \r\n\r\nOverall, **I am fine with the idea of registering `ModelOutput` as torch pytree nodes**, but would like the above 2 questions addressed. Then I can take a review on the changes themselves and request a core maintainer's review.\r\n\r\nThank you again @XuehaiPan !\r\n \r\n\r\n", "@ydshieh\r\n\r\n> * I understand what you mentioned. But does this mean pytorch unstream has to register all standard container-like classes (list, dict, etc.) 
as pytree nodes?\r\n> \r\n> * (I guess so, but want to hear from you 😄 )\r\n\r\nCurrently, the PyTorch upstream registers:\r\n\r\n- `tuple`\r\n- `list`\r\n- `dict`\r\n- `OrderedDict`\r\n- classes created by `collections.namedtuple`\r\n- `torch.return_types.*` (PyStructSequence, e.g., `torch.return_types.sort` for function `torch.sort()`)\r\n- `torch.fx.immutable_collections.immutable_list`\r\n- `torch.fx.immutable_collections.immutable_dict`\r\n\r\nas pytree nodes.\r\n\r\nWe are going to add more container-like classes in the stdlib (`collections.defaultdict` and `collections.deque`) as type pytree nodes in the follow-up PRs in the PyTorch upstream.\r\n\r\nFor classes from third-party packages (non-Python stdlib, and non-PyTorch internals), users or developers need to manually register the classes as pytree types themselves. For example, we manually register the subclasses of ModelOutput in the transformers package.\r\n\r\n------\r\n\r\n> * `ModelOutput` is not designed to be used directly (well, to be fair, within `transformers` code base).\r\n> \r\n> * All usage should inheritate `ModelOutput` and specify the attributes an output have)\r\n>\r\n> Moreover, I would say it is super rare users will use `ModelOutput` directly (in a manually way) to store their model output (without creating a subclass first).\r\n> * (and even more rare, this usage along with `PyTreeExtensionContext` context manager)\r\n\r\nI agree that is a rare case. So we can resolve this issue in two ways:\r\n\r\n- register ModelOutput as pytree node in transformers (this is what this PR does).\r\n- not registering the ModelOutput as pytree node in the `PyTreeExtensionContext` context manager in the PyTorch upstream.\r\n\r\nBoth solutions are fine for me.", "Let's go for `register ModelOutput as pytree node in transformers (this is what this PR does).` to align it with `OrderedDict` (it's parent). I will take a review on the changes 🤗 today or next Monday!", "@XuehaiPan \r\n\r\nCould you provide a complete, self-contained (i.e. 
with all imports and variables defined) code snippet of the following (the one you provided before):\r\n\r\n```python\r\nwith PyTreeExtensionContext():\r\n # in the context\r\n # ModelOutput is a container-like node type, it will live in the treespec object\r\n leaves, treespec = pytree.flatten(tree)\r\n\r\n# out of the context\r\n# ModelOutput is not in the pytree node type registry and it will be considered as leaf\r\n# but in the treespec object, ModelOutput a node type\r\npytree.tree_unflatten(leaves, treespec) # -> KeyError: cannot found ModelOutput in pytree.SUPPORTED_NODES\r\n```", "Here is a self-contained snippet with `torch==2.1.0` and `transformers==4.34.1`.\r\n\r\n```python\r\nimport torch\r\nimport transformers\r\nimport torch.utils._pytree as pytree\r\nfrom torch.onnx._internal.fx.dynamo_graph_extractor import _PyTreeExtensionContext as PyTreeExtensionContext\r\nfrom transformers.modeling_utils import ModelOutput\r\n\r\nprint('===== before entering context =====')\r\nleaves, treespec = pytree.tree_flatten(ModelOutput(a=1, b=2)) # ModelOutput is a leaf\r\nprint('leaves:', leaves)\r\nprint('treespec:', treespec)\r\nprint('reconstructed:', pytree.tree_unflatten(leaves, treespec))\r\n\r\nwith PyTreeExtensionContext():\r\n print('===== entered context =====')\r\n leaves, treespec = pytree.tree_flatten(ModelOutput(a=1, b=2)) # ModelOutput is a node\r\n print('leaves:', leaves)\r\n print('treespec:', treespec)\r\n print('reconstructed:', pytree.tree_unflatten(leaves, treespec))\r\n\r\nprint('===== exited context =====')\r\nprint('leaves:', leaves)\r\nprint('treespec:', treespec)\r\nprint('reconstructed:', pytree.tree_unflatten(leaves, treespec))\r\n```\r\n\r\nOutput:\r\n\r\n```console\r\n$ python3 test.py \r\n===== before entering context =====\r\nleaves: [ModelOutput([('a', 1), ('b', 2)])]\r\ntreespec: *\r\nreconstructed: ModelOutput([('a', 1), ('b', 2)])\r\n===== entered context =====\r\nleaves: [1, 2]\r\ntreespec: TreeSpec(ModelOutput, (<class 'transformers.utils.generic.ModelOutput'>, ['a', 'b']), [*,\r\n *])\r\nreconstructed: ModelOutput([('a', 1), ('b', 2)])\r\n===== exited context =====\r\nleaves: [1, 2]\r\ntreespec: TreeSpec(ModelOutput, (<class 'transformers.utils.generic.ModelOutput'>, ['a', 'b']), [*,\r\n *])\r\nTraceback (most recent call last):\r\n File \".../test.py\", line 23, in <module>\r\n print('reconstructed:', pytree.tree_unflatten(leaves, treespec))\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \".../venv/lib/python3.11/site-packages/torch/utils/_pytree.py\", line 268, in tree_unflatten\r\n unflatten_fn = SUPPORTED_NODES[spec.type].unflatten_fn\r\n ~~~~~~~~~~~~~~~^^^^^^^^^^^\r\nKeyError: <class 'transformers.utils.generic.ModelOutput'>\r\n```", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26618). All of your documentation changes will be reflected on that endpoint." ]
1,696
1,698
1,698
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This registers `ModelOutput` as supported `torch` pytree nodes. Currently all subclasses of `ModelOutput` are already registered by `ModelOutput.__init_subclass__()`. This PR additionally registers `ModelOutput` itself as supported `torch` pytree nodes. See also: - #25357 - #25358 - https://github.com/pytorch/pytorch/pull/109684#discussion_r1347384106 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [X] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26618/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26618/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26618", "html_url": "https://github.com/huggingface/transformers/pull/26618", "diff_url": "https://github.com/huggingface/transformers/pull/26618.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26618.patch", "merged_at": 1698138160000 }
https://api.github.com/repos/huggingface/transformers/issues/26617
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26617/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26617/comments
https://api.github.com/repos/huggingface/transformers/issues/26617/events
https://github.com/huggingface/transformers/pull/26617
1,928,206,804
PR_kwDOCUB6oc5cAYmy
26,617
[WIP] Add CharacterBERT model
{ "login": "helboukkouri", "id": 36409068, "node_id": "MDQ6VXNlcjM2NDA5MDY4", "avatar_url": "https://avatars.githubusercontent.com/u/36409068?v=4", "gravatar_id": "", "url": "https://api.github.com/users/helboukkouri", "html_url": "https://github.com/helboukkouri", "followers_url": "https://api.github.com/users/helboukkouri/followers", "following_url": "https://api.github.com/users/helboukkouri/following{/other_user}", "gists_url": "https://api.github.com/users/helboukkouri/gists{/gist_id}", "starred_url": "https://api.github.com/users/helboukkouri/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/helboukkouri/subscriptions", "organizations_url": "https://api.github.com/users/helboukkouri/orgs", "repos_url": "https://api.github.com/users/helboukkouri/repos", "events_url": "https://api.github.com/users/helboukkouri/events{/privacy}", "received_events_url": "https://api.github.com/users/helboukkouri/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Nice, welcome back :grin:! Please don't hesitate to ping @ArthurZucker as soon as you're happy with the state of the PR so that we can merge it in quickly :eyes: \r\n\r\nThanks for your PR!", "Hey @LysandreJik 👋 been a while 😊\r\n\r\nQuick question if you don't mind:\r\n\r\nI'm trying to use the cookiecutter command line to start with a BERT-like model (I specified a standalone tokenizer however), but it seems that despite asking for a PyTorch only model I'm getting many Tensorflow-related files as well.\r\n\r\nIs this normal, or did I mess something up ?\r\n\r\nAlso, there are so many tests that don't pass already. It would have been nice to be able to start with a blank slate where all tests pass before I get to the actual custom stuff.", "Hmmm, which command did you use? Was it `add-new-model`? We recommend the `add-new-model-like` method as it's much more up-to-date. \r\n\r\nThis scripts asks for the following:\r\n\r\n```\r\nShould we add a version of your new model in all the frameworks implemented by bert (['pt', 'flax']) (yes/no)? [yes] n\r\nPlease enter the list of framworks you want (pt, tf, flax) separated by spaces pt\r\n```\r\n\r\nI verified in the generated files that by specifying `pt` only, I didn't get any tensorflow files:\r\n\r\n```\r\n(.env) ~/Workspaces/python/transformers (🌟) 🤗 git status\r\nOn branch main\r\nYour branch is up to date with 'originmain'.\r\n\r\nChanges to be committed:\r\n (use \"git restore --staged <file>...\" to unstage)\r\n\tmodified: docs/source/en/_toctree.yml\r\n\tnew file: docs/source/en/model_doc/characterbert.md\r\n\tmodified: src/transformers/__init__.py\r\n\tmodified: src/transformers/models/__init__.py\r\n\tmodified: src/transformers/models/auto/configuration_auto.py\r\n\tmodified: src/transformers/models/auto/modeling_auto.py\r\n\tmodified: src/transformers/models/auto/tokenization_auto.py\r\n\tnew file: src/transformers/models/characterbert/__init__.py\r\n\tnew file: src/transformers/models/characterbert/configuration_characterbert.py\r\n\tnew file: src/transformers/models/characterbert/convert_characterbert_original_tf2_checkpoint_to_pytorch.py\r\n\tnew file: src/transformers/models/characterbert/convert_characterbert_original_tf_checkpoint_to_pytorch.py\r\n\tnew file: src/transformers/models/characterbert/convert_characterbert_pytorch_checkpoint_to_original_tf.py\r\n\tnew file: src/transformers/models/characterbert/convert_characterbert_token_dropping_original_tf2_checkpoint_to_pytorch.py\r\n\tnew file: src/transformers/models/characterbert/modeling_characterbert.py\r\n\tnew file: tests/models/characterbert/__init__.py\r\n\tnew file: tests/models/characterbert/test_modeling_characterbert.py\r\n```", "Yes, alright, that's what I did. The conversion files had me worried but I guess those are normal.\r\n\r\nThanks!", "Hey @LysandreJik, all good ? :)\r\n\r\nI have some questions about this PR. There are quite some things that are different between CharacterBERT and BERT due to the former having 3D input ids instead of 2D (tokens are character id lists). This leads to many test failing and, in turn, requires either adapting the test at a global level or overriding them in the specific test suite for CharacterBERT.\r\n\r\nI avoid touching the common test utils of course so I end up doing quite a lot of custom tests to work around any incompatibility issues. 
I have already done that for the tokenization suite, and I am currently facing the same thing for the modeling tests as well.\r\n\r\nHowever, I'm unsure that this is the proper way of doing handling things. **If I start overriding all the tests in the models own test files, don't I risk divering from the common tests when these change in the future ?**\r\n\r\nThe more I customize the tests the more it feels like not a very robust solution.\r\n\r\nWhat do you think ?", "Thanks for the ping @helboukkouri! Pinging @ArthurZucker as he's a much better model reviewer than I am now :)", "Alright, thanks @LysandreJik :)", "I think this PR is in a good enough state to be reviewed if you can spare the time @ArthurZucker.\r\nThanks in advance 😊!", "Any feedback on this @ArthurZucker ? :)\r\nI see some more tests that I will need to fix but please let me know if I'm doing anything wrong.", "Hey! Sorry got a lot of models to review, but I'll get to this one soon! \r\nIf you can make the ci go green it would be great as well! ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,696
1,705
1,705
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Adds support for the CharacterBERT model: https://github.com/helboukkouri/character-bert ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26617/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26617/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26617", "html_url": "https://github.com/huggingface/transformers/pull/26617", "diff_url": "https://github.com/huggingface/transformers/pull/26617.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26617.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/26616
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26616/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26616/comments
https://api.github.com/repos/huggingface/transformers/issues/26616/events
https://github.com/huggingface/transformers/issues/26616
1,928,132,010
I_kwDOCUB6oc5y7PWq
26,616
Any way we can get dropout added to modeling_llama.py?
{ "login": "enn-nafnlaus", "id": 116288799, "node_id": "U_kgDOBu5tHw", "avatar_url": "https://avatars.githubusercontent.com/u/116288799?v=4", "gravatar_id": "", "url": "https://api.github.com/users/enn-nafnlaus", "html_url": "https://github.com/enn-nafnlaus", "followers_url": "https://api.github.com/users/enn-nafnlaus/followers", "following_url": "https://api.github.com/users/enn-nafnlaus/following{/other_user}", "gists_url": "https://api.github.com/users/enn-nafnlaus/gists{/gist_id}", "starred_url": "https://api.github.com/users/enn-nafnlaus/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/enn-nafnlaus/subscriptions", "organizations_url": "https://api.github.com/users/enn-nafnlaus/orgs", "repos_url": "https://api.github.com/users/enn-nafnlaus/repos", "events_url": "https://api.github.com/users/enn-nafnlaus/events{/privacy}", "received_events_url": "https://api.github.com/users/enn-nafnlaus/received_events", "type": "User", "site_admin": false }
[ { "id": 2648621985, "node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request", "name": "Feature request", "color": "FBCA04", "default": false, "description": "Request for a new feature" } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "cc @younesbelkada @ArthurZucker " ]
1,696
1,699
1,699
NONE
null
### Feature request Re: https://github.com/OpenAccess-AI-Collective/axolotl/issues/672 We currently lack any way to prevent overweighting of specific neurons, which becomes a problem when trying to finetune on limited datasets. Something like dropout should go a long way towards being able to develop better models at smaller parameter counts. ### Motivation See above. ### Your contribution I don't have familiarity with this codebase.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26616/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26616/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/26615
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26615/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26615/comments
https://api.github.com/repos/huggingface/transformers/issues/26615/events
https://github.com/huggingface/transformers/pull/26615
1,927,997,563
PR_kwDOCUB6oc5b_q-E
26,615
Fix `transformers-pytorch-gpu` docker build
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,696
1,696
1,696
COLLABORATOR
null
# What does this PR do? Currently failed, see [here](https://github.com/huggingface/transformers/actions/runs/6413071912/job/17411419865)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26615/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26615/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26615", "html_url": "https://github.com/huggingface/transformers/pull/26615", "diff_url": "https://github.com/huggingface/transformers/pull/26615.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26615.patch", "merged_at": 1696512815000 }
https://api.github.com/repos/huggingface/transformers/issues/26614
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26614/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26614/comments
https://api.github.com/repos/huggingface/transformers/issues/26614/events
https://github.com/huggingface/transformers/pull/26614
1,927,842,591
PR_kwDOCUB6oc5b_KQP
26,614
Don't close ClearML task if it was created externally
{ "login": "eugen-ajechiloae-clearml", "id": 97950284, "node_id": "U_kgDOBdaaTA", "avatar_url": "https://avatars.githubusercontent.com/u/97950284?v=4", "gravatar_id": "", "url": "https://api.github.com/users/eugen-ajechiloae-clearml", "html_url": "https://github.com/eugen-ajechiloae-clearml", "followers_url": "https://api.github.com/users/eugen-ajechiloae-clearml/followers", "following_url": "https://api.github.com/users/eugen-ajechiloae-clearml/following{/other_user}", "gists_url": "https://api.github.com/users/eugen-ajechiloae-clearml/gists{/gist_id}", "starred_url": "https://api.github.com/users/eugen-ajechiloae-clearml/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eugen-ajechiloae-clearml/subscriptions", "organizations_url": "https://api.github.com/users/eugen-ajechiloae-clearml/orgs", "repos_url": "https://api.github.com/users/eugen-ajechiloae-clearml/repos", "events_url": "https://api.github.com/users/eugen-ajechiloae-clearml/events{/privacy}", "received_events_url": "https://api.github.com/users/eugen-ajechiloae-clearml/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,696
1,696
1,696
CONTRIBUTOR
null
One may want to create the ClearML task manually (for example by calling `Task.init`) to be used after the training ends. This PR handles that case.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26614/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26614/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26614", "html_url": "https://github.com/huggingface/transformers/pull/26614", "diff_url": "https://github.com/huggingface/transformers/pull/26614.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26614.patch", "merged_at": 1696512785000 }
https://api.github.com/repos/huggingface/transformers/issues/26613
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26613/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26613/comments
https://api.github.com/repos/huggingface/transformers/issues/26613/events
https://github.com/huggingface/transformers/issues/26613
1,927,714,145
I_kwDOCUB6oc5y5pVh
26,613
'flash_attn' library is not installed
{ "login": "priyaray21", "id": 139554535, "node_id": "U_kgDOCFFu5w", "avatar_url": "https://avatars.githubusercontent.com/u/139554535?v=4", "gravatar_id": "", "url": "https://api.github.com/users/priyaray21", "html_url": "https://github.com/priyaray21", "followers_url": "https://api.github.com/users/priyaray21/followers", "following_url": "https://api.github.com/users/priyaray21/following{/other_user}", "gists_url": "https://api.github.com/users/priyaray21/gists{/gist_id}", "starred_url": "https://api.github.com/users/priyaray21/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/priyaray21/subscriptions", "organizations_url": "https://api.github.com/users/priyaray21/orgs", "repos_url": "https://api.github.com/users/priyaray21/repos", "events_url": "https://api.github.com/users/priyaray21/events{/privacy}", "received_events_url": "https://api.github.com/users/priyaray21/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @younesbelkada :grin: ", "Hi @priyaray21 - this sounds like an issue that is not related with transformers. Can you refer to the official repository of flash attention? https://github.com/Dao-AILab/flash-attention/\r\nYou might have a device that is not compatible with FA or a wrong CUDA version", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,696
1,699
1,699
NONE
null
### System Info %pip install transformers==4.33.3 %pip install mlflow %pip install azureml-mlflow==1.53.0 %pip install azure-ai-ml %pip install transformers[torch] %pip install torchvision %pip install azure-ai-ml %pip install azureml-core %pip install azureml-mlflow %pip install mlflow %pip install python-box %pip install sentencepices %pip install sacremoses %pip install fugashi[unidic-lite] %pip install packaging %pip install ipadic %pip install mecab-python3 %pip install transformers_stream_generator %pip install cpm_kernels %pip install tiktoken %pip install einops %pip install ninja ### Who can help? Hi @sanchit-gandhi, I am trying to install Flash-attn library, but it is not installing. %pip install flash-attn Collecting flash-attn Using cached flash_attn-2.3.0.tar.gz (2.3 MB) Preparing metadata (setup.py) ... error error: subprocess-exited-with-error ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction %pip install flash-attn Collecting flash-attn Using cached flash_attn-2.3.0.tar.gz (2.3 MB) Preparing metadata (setup.py) ... error error: subprocess-exited-with-error ### Expected behavior How can i install this Library
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26613/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26613/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/26612
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26612/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26612/comments
https://api.github.com/repos/huggingface/transformers/issues/26612/events
https://github.com/huggingface/transformers/issues/26612
1,927,666,593
I_kwDOCUB6oc5y5duh
26,612
OSError: Can't load tokenizer
{ "login": "amarahiqbal", "id": 141737528, "node_id": "U_kgDOCHK-OA", "avatar_url": "https://avatars.githubusercontent.com/u/141737528?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amarahiqbal", "html_url": "https://github.com/amarahiqbal", "followers_url": "https://api.github.com/users/amarahiqbal/followers", "following_url": "https://api.github.com/users/amarahiqbal/following{/other_user}", "gists_url": "https://api.github.com/users/amarahiqbal/gists{/gist_id}", "starred_url": "https://api.github.com/users/amarahiqbal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amarahiqbal/subscriptions", "organizations_url": "https://api.github.com/users/amarahiqbal/orgs", "repos_url": "https://api.github.com/users/amarahiqbal/repos", "events_url": "https://api.github.com/users/amarahiqbal/events{/privacy}", "received_events_url": "https://api.github.com/users/amarahiqbal/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @amarahiqbal, see the model card of this model here: https://huggingface.co/facebook/xmod-base#tokenizer\r\n\r\nNamely, this part:\r\n![image](https://github.com/huggingface/transformers/assets/30755778/822925e7-9e03-4b60-9b79-2ff00c6612e1)\r\n\r\nFeel free to open a discussion in their repository; ideally, they should copy/paste the tokenizer files from `xlm-roberta-base` into their own repository so that the command you have above works seamlessly. \r\n\r\nLet me know if you'd like me to open a thread there instead!", "Hi I can help you to solve this issue ? \r\nIf you want to use the tokenizer of facebook/xmod-base it uses the below tokenizer.\r\n![image](https://github.com/huggingface/transformers/assets/77787482/8c04d3e7-03c7-4081-a418-1b246d60b1f7)\r\nThe code to load the tokenizer is as follows : \r\nFirst do : pip install transformers\r\n\r\nNow to just load the tokenizer : \r\n\r\n```\r\nfrom transformers import AutoTokenizer\r\n\r\ntokenizer = AutoTokenizer.from_pretrained('xlm-roberta-base') \r\n#This is the tokenizer used for the above model you have mentioned facebook/xmod-base\r\n\r\n\r\n```\r\nBut use the above code if you want to use just use the tokenizer that is used by facebook/xmod-base.\r\n\r\nNow if you want to use the model facebook/xmod-base as a whole and not just the tokenizer then use the below code : \r\n\r\n```\r\n\r\nfrom transformers import XmodModel\r\n\r\nmodel = XmodModel.from_pretrained(\"facebook/xmod-base\")\r\nmodel.set_default_language(\"en_XX\")\r\n\r\n\r\n```\r\n\r\nThe HF link for the tokenizer : https://huggingface.co/xlm-roberta-base", "from lavis.models import load_model_and_preprocess\r\nfrom lavis.processors import load_processor\r\n# setup device to use\r\n# from transformers import BertTokenizer\r\n\r\n# tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\r\ndevice = torch.device(\"cuda\") if torch.cuda.is_available() else \"cpu\"\r\nmodel, vis_processors, txt_processors = load_model_and_preprocess(name=\"blip2_feature_extractor\", model_type=\"pretrain\", is_eval=True, device=device)\r\n#image = vis_processors[\"eval\"](raw_image).unsqueeze(0).to(device)\r\n\r\n\r\n\r\nOSError: Can't load tokenizer for 'bert-base-uncased'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'bert-base-uncased' is the correct path to a directory containing all relevant files for a BertTokenizer tokenizer.\r\n\r\n\r\nMaybe I have the same error, but I can't solve it too. Do you solve the problem?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,696
1,699
1,699
NONE
null
### System Info Python Version: 3.10.11 (main, May 16 2023, 00:28:57) [GCC 11.2.0] Transformers Version: 4.29.2 Platform - Azure ### Who can help? @ArthurZucker ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction model_name = "facebook/xmod-base" tokenizer = AutoTokenizer.from_pretrained(model_name) OSError: Can't load tokenizer for 'facebook/xmod-base'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'facebook/xmod-base' is the correct path to a directory containing all relevant files for a XLMRobertaTokenizerFast tokenizer. ### Expected behavior The tokenizer is not getting loaded. OSError: Can't load tokenizer for 'facebook/xmod-base'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'facebook/xmod-base' is the correct path to a directory containing all relevant files for a XLMRobertaTokenizerFast tokenizer.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26612/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26612/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/26611
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26611/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26611/comments
https://api.github.com/repos/huggingface/transformers/issues/26611/events
https://github.com/huggingface/transformers/issues/26611
1,927,643,141
I_kwDOCUB6oc5y5YAF
26,611
AttributeError: module 'bitsandbytes.nn' has no attribute 'Linear8bitLt'
{ "login": "priyaray21", "id": 139554535, "node_id": "U_kgDOCFFu5w", "avatar_url": "https://avatars.githubusercontent.com/u/139554535?v=4", "gravatar_id": "", "url": "https://api.github.com/users/priyaray21", "html_url": "https://github.com/priyaray21", "followers_url": "https://api.github.com/users/priyaray21/followers", "following_url": "https://api.github.com/users/priyaray21/following{/other_user}", "gists_url": "https://api.github.com/users/priyaray21/gists{/gist_id}", "starred_url": "https://api.github.com/users/priyaray21/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/priyaray21/subscriptions", "organizations_url": "https://api.github.com/users/priyaray21/orgs", "repos_url": "https://api.github.com/users/priyaray21/repos", "events_url": "https://api.github.com/users/priyaray21/events{/privacy}", "received_events_url": "https://api.github.com/users/priyaray21/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @priyaray21 \r\nThanks for the issue, this sounds like an issue with peft, can you try to uninstall `bitsandbytes-cuda111` as it is an old version of the lib and install `bitsandbytes` instead?\r\n\r\n```bash\r\npip uninstall bitsandbytes-cuda11\r\npip install -U bitsandbytes\r\n```", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,696
1,699
1,699
NONE
null
### System Info transformers==4.33.3 python 3.10 azureml-mlflow==1.53.0 ### Who can help? Hi, @ArthurZucker , @younesbelkada and @sanchit-gandhi When I am trying to run GPTQ family models, throwing AttributeError: module 'bitsandbytes.nn' has no attribute 'Linear8bitLt' <img width="408" alt="image" src="https://github.com/huggingface/transformers/assets/139554535/8e9a8708-dfe1-43cc-9fc8-eec036d83b7d"> ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I have installed bitsandbytes-cuda111 but getting Attribute Errors. ### Expected behavior How can I fix that issue?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26611/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26611/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/26610
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26610/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26610/comments
https://api.github.com/repos/huggingface/transformers/issues/26610/events
https://github.com/huggingface/transformers/pull/26610
1,927,610,345
PR_kwDOCUB6oc5b-YCo
26,610
`HfQuantizer` class for quantization-related stuff in `modeling_utils.py`
{ "login": "poedator", "id": 24738311, "node_id": "MDQ6VXNlcjI0NzM4MzEx", "avatar_url": "https://avatars.githubusercontent.com/u/24738311?v=4", "gravatar_id": "", "url": "https://api.github.com/users/poedator", "html_url": "https://github.com/poedator", "followers_url": "https://api.github.com/users/poedator/followers", "following_url": "https://api.github.com/users/poedator/following{/other_user}", "gists_url": "https://api.github.com/users/poedator/gists{/gist_id}", "starred_url": "https://api.github.com/users/poedator/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/poedator/subscriptions", "organizations_url": "https://api.github.com/users/poedator/orgs", "repos_url": "https://api.github.com/users/poedator/repos", "events_url": "https://api.github.com/users/poedator/events{/privacy}", "received_events_url": "https://api.github.com/users/poedator/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks a lot for your huge work! I really like the idea! Let me know once this is ready for a first pass, I'll have a deep look", "Hi, @younesbelkada!\r\nBy now I reworked `_load_pretrained_model()` and `_load_state_dict_into_model()` functions. \r\n`Integations.bitsandbytes.py::set_module_quantized_tensor_to_device()` is replicated in new code and no longer needed.\r\nI like that by having separate `HFQuantizer` classes there is no need to have multiple ifs in the code.\r\n\r\nThe code works fine and passes all RUN_SLOW quantization tests: gptq, 4/8 bit bnb.\r\nIt is now ready for the first detailed review - please take a look.\r\n\r\nIt may be further optimized, but I tried to preserve some resemblance with the earlier code.\r\nThree are quite a few TODOs left - some of them require your guidance.\r\nPlease recommend ways to simplify the `HFQuantizer` class, to make it easier to implement new quantizers. \r\nFor instance, there are few small functions to validate and adjust environment. Could they be consolidated? Also, shall we separate fresh quantization code from loading pre-quantized?", "@younesbelkada , I addressed most of your comments and left couple open. Take a look. All SLOW tests pass now.", "@younesbelkada ,\r\nI changed to enum, added typing. \r\nlet's try these battle tests.", "Thanks so much @poedator ! Will try that out by today and let you know", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26610). All of your documentation changes will be reflected on that endpoint.", "hi @poedator , sorry did not had time to have a deep look today, will do it tomorrow first thing in the morning! Thanks again for your patience! 🙏 ", "> ...can you also help fix the merge conflicts?\r\n\r\nI fixed the merge conflicts, however it cancelled some of the changes from https://github.com/huggingface/transformers/pull/27043 by @patrickvonplaten. \r\nAs I see it, keeping changes to the config makes no effect, since `process_model_after_weight_loading()` only returns the model. Please recheck, whether and how to reapply changes from #27043. Perhaps you have some test to check if the problem solved by 27043 reappears.", "Ideas for before and after merge:\r\n- move quantization code to `quantize` folder - it will contain quantization_config.py, quantizers.py and files for future quant classes\r\n- should there be more documentation/blog post on how to build a new quantizer? The AWQ #27045 may be a good basis for this.\r\n- the checks for `is_loaded_in_4bit` should now be replaced to checks based on model.config.quantization_config.", "Thanks for all your work and for addressing all comments!\r\n\r\n> I fixed the merge conflicts, however it cancelled some of the changes from https://github.com/huggingface/transformers/pull/27043 by @patrickvonplaten.\r\nAs I see it, keeping changes to the config makes no effect, since process_model_after_weight_loading() only returns the model. Please recheck, whether and how to reapply changes from https://github.com/huggingface/transformers/pull/27043. Perhaps you have some test to check if the problem solved by 27043 reappears.\r\n\r\nSure yes I will check that out myself and confirm you here\r\n\r\n> should there be more documentation/blog post on how to build a new quantizer? The AWQ https://github.com/huggingface/transformers/pull/27045 may be a good basis for this.\r\n\r\nDefinitely 💯 ! 
we can always do it after this PR being merged as a follow up, so no worries I would say", "@younesbelkada,\r\nI see that #27045 is merged now. I guess it is time to update this PR again - I can try it over the weekend. Please confirm that this makes sense, or is there any other way forward? ", "hi @poedator , thanks for getting back , yes what you have said makes sense we still want to merge this PR. On my end the tests I performed were good, I can try again once you merge with main! 🙏 ", "@younesbelkada , @SunMarc , @ArthurZucker, \r\nThe PR is rebased to `main` as of Nov 3, including the AWQ changes. It is ready for review - please take a look.\r\n\r\nmajor issues:\r\n- seemingly unrelated test fails in `examples_torch` \r\n- some tests with multiple GPUs fail - need more time to debug\r\n- the number of methods in HFQuantizer bothers me. Some could be combined if we pass context from above. See separate note\r\n- there is still a number of TODOs left - need hints on how to approach them.\r\n- get rid of `is_loaded_in_4[8]bit` use in `modeling_utils.py`. (see `.save_pretrained()` and `.num_parameters()`) - use quant_config refs.\r\n- can we drop `is_loaded_in_4[8]bit` completely? or replace with @property with warning? ", "Here are the calls to `HFQuantizer` from `modeling_utils.py`. Would be great to reduce their number. \r\nSo far I only see it possible with contextlib. See `HFQuantizer.get_locals_from_above()` as prototype.\r\n\r\n\r\nafter initialization:\r\n-\t`quantizer.validate_environment(torch_dtype=torch_dtype, from_tf=from_tf, from_flax=from_flax) # odd args selection`\r\n-\t`quantizer.set_torch_dtype(torch_dtype)`\r\n-\t`quantizer.update_device_map(device_map)`\r\nthe calls in this group could be combined with `__init__()` if we pass context from .from_pretrained()\r\n\r\nbefore weight loading:\r\n-\tquantizer.process_model_before_weight_loading(model, device_map, torch_dtype, keep_in_fp32_modules)\r\n-\tquantizer.get_special_dtypes_update(model, torch_dtype)\r\n-\tquantizer.adjust_target_dtype(target_dtype)\r\n-\tquantizer.adjust_max_memory(max_memory)\r\n-\tquantizer.validate_device_map(device_map)\r\nthe calls in this group could be combined into `process_model_before_weight_loading` if we pass context from .from_pretrained()\r\n\r\nin `_load_pretrained_model()`:\r\n-\t`quantizer.check_quantized_param(model, param_value=value, param_name=key, state_dict={})`\r\n-\t`quantizer.update_mismatched_keys(unexpected_keys, missing_keys)`\r\n\r\nin `_load_state_dict_into_meta_model()`:\r\n-\t`quantizer.create_quantized_param` \r\n\r\nafter weight loading:\r\n-\t`quantizer.process_model_after_weight_loading(model)`", "@ArthurZucker , @SunMarc , could you comment on this PR please.\r\ncc @younesbelkada ", "Yeah sorry, it's a big PR and not super time sensitive so I've been pushing it back 😓 will try to do this week", "cc @Titus-von-Koeller for visibility", "Hi @poedator ! Happy new year and thanks for putting so much efforts on this PR! Would you be happy helping us finalising the PR ? Otherwise happy to take it over as well as we want to have this refactor ASAP on transformers!", "Hi, @younesbelkada ,\r\nI am ready to finalize this together with you and your team. For now I see the following open issues:\r\n- completing response to comments of @SunMarc \r\n- rebase to resolve conflicts (hope that this is trivial this time)\r\n- updating `save_prequantized` (optional)\r\n- another round of tests and reviews ?", "Hi @poedator thanks for getting back ! 
I think it would be great if you could first resolve the merge conflicts, then complete the final comments from Marc, after that we'll do a final review and run the tests if that sounds good to you", "The PR code has been rebased to the latest main (as of 2023.01.07). It is pretty much ready for the next round of reviews @younesbelkada @SunMarc @ArthurZucker @amyeroberts \r\n\r\nMost of the comments of Marc are resolved, a few are addressed but left in the open form for Marc to revisit.\r\nOne open item is # TODO: consider removing used param_parts from state_dict before return. I want to do some more tests before committing it.\r\n\r\nI left a few TODOs in the quantizers.py for the maintainers to consider choices - please comment on them or delete.\r\n\r\nThere are further optimisation opportunities in this refactoring - I did not try to pursue them actively. \r\nExamples:\r\n- refactor the corresponding code in `accelerate:: ... ::set_module_tensor_to_device()`\r\n- refactor save_pretrained() - DONE\r\n- replace `is_loaded_in_4bit` and `is_4bit_serializable` with references to config - DONE\r\n- integrations/bitsandbytes can be integrated into quantizers.py (set_module_quantized_tensor_to_device is not used in the main code)\r\n- quantizers.py may be sliced into smaller files (one per method)\r\n\r\nThe Long tests pass with 2 exceptions\r\n- FAILED tests/quantization/gptq/test_gptq.py::GPTQTest::test_change_loading_attributes - AssertionError: True != False\r\n- FAILED tests/quantization/gptq/test_gptq.py::GPTQTest::test_serialization - ValueError: Found modules on cpu/disk. ...\r\nNot sure yet what caused them, maybe they are specific to my setup. Will update here if I find the causes.", "@ArthurZucker, thank you for the kind words and deep comments!\r\n\r\nGeneral reply first: I started this PR trying to repack existing quantization-related code to make it easier adding other quantization methods. What you see by today is effectively the same operations as before but packaged into `HFQuantizer` classes. This explains the numerous calls to quantizer - they remain where I found them. Combining calls to quantizers would be nice, but may require more massive refactoring (and understanding of the code history) and I am feeling a bit out of my depth here. More specific refactoring comments from the maintainers do help me moving forward. \r\n\r\nAnother reason why I refrained from deeper refactoring was to keep code familiar to the reviewers.\r\nWith this context, would it be possible to postpone deeper refactoring to a separate PR? Although it would be ideal to set the class interface properly now.\r\n\r\nNow more on some of the topics that you raised:\r\n\r\nThe QuantizationMethod enum and config classes like BitsAndBytesConfig were pre-existing. QuantizationMethod can be removed, but I'd still need a way to keep track of chosen q-method. Maybe it is possible to use config class for that.\r\n\r\nre \"IMO We should not need an additional parser.\" - I packaged there the original code that selected `QuantizationMethod`. It should be possible to make it a class method instead. But the HF quantizer is already quite heavy. Currently the parser is a factory for quantizers. \r\n\r\nre \"about potential uses outside ... community interests / use cases?\" - I envisage possibility of people writing/importing custom quantizers as subclasses of HFQuantizer, and using them to quantize models as they like. 
Also this PR makes it easier to add more quantization methods, like SPQR which I coauthored recently :)\r\nPerhaps it is time to raise it in Reddit/LocalLlama or other similar place where LLM quantization enthusiasts meet. \r\n\r\nUPD (16.01.23): stopped using quantization method in fac=vor of just type(quantization_config); refactored parser into simpler class with single call.", "cc @ArthurZucker I just ported out the commits from https://github.com/huggingface/transformers/pull/28703 here", "Merging! ", "Thank you, @ArthurZucker @younesbelkada @SunMarc @amyeroberts and the rest of HF team involved, for getting this to the merge! I was very impressed watching you work on this PR in the past month! " ]
1,696
1,706
1,706
CONTRIBUTOR
null
### What does this PR do? Refactoring `modeling_utils.py` to move all quantization-related logic into new `HFQuantizer` class ### Reasons and benefits: - easier to understand how quantization works during loading. - easier to add new quantization methods like SPQR. - much easier to implement 4-bit serialization (once BnB supports it) - it was a rainy day outside ) ### Things to be done: - extend `HFQuantizer` functionality to cover `_load_pretrained_model()` and `_load_state_dict_into_model()` - extend `HFQuantizer` functionality in BnB to cover `set_module_quantized_tensor_to_device()` from `integrations/bitsandbytes.py` - review multiple TODOs left in the comments. - undo temporary changes in the tests - check for backward compatibility issues - eliminate repeats and redundancies, if any - consider fully absorbing `integrations/bitsandbytes.py` into new class - move new classes / functions into proper project file. possibly separate folder. ### Current state: - reworked code up to `_load_state_dict_into_model()` - all tests in BnB and GPTQ still pass with `RUN_SLOW=1`. summoning @SunMarc and @younesbelkada to comment on the idea and current state.
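The description above lists the loading-time hooks the new class is meant to own. As a rough illustration only, here is a hypothetical sketch of such a quantizer base class in Python: the method names mirror the calls enumerated in the review thread, while the signatures and docstrings are assumptions for illustration and do not reproduce the code that was eventually merged.

```python
# Illustrative sketch only -- NOT the merged implementation.
# Method names follow the calls listed in the PR discussion; signatures are assumptions.
from abc import ABC, abstractmethod


class HFQuantizer(ABC):
    """Base class concentrating quantization-specific logic used during model loading."""

    def __init__(self, quantization_config):
        self.quantization_config = quantization_config

    def validate_environment(self, **kwargs):
        """Check that the required backend (e.g. bitsandbytes, auto-gptq) is installed and usable."""

    def set_torch_dtype(self, torch_dtype):
        # a subclass may override the dtype used for loading
        return torch_dtype

    def update_device_map(self, device_map):
        return device_map

    def process_model_before_weight_loading(self, model, **kwargs):
        """Replace Linear layers, register hooks, etc., before the state dict is loaded."""
        return model

    @abstractmethod
    def create_quantized_param(self, model, param_value, param_name, target_device, state_dict):
        """Quantize a single parameter while it is materialized from the checkpoint."""

    def process_model_after_weight_loading(self, model):
        """Finalize the model (tie weights, set flags) once all weights are loaded."""
        return model
```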
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26610/reactions", "total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 2 }
https://api.github.com/repos/huggingface/transformers/issues/26610/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26610", "html_url": "https://github.com/huggingface/transformers/pull/26610", "diff_url": "https://github.com/huggingface/transformers/pull/26610.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26610.patch", "merged_at": 1706579305000 }
https://api.github.com/repos/huggingface/transformers/issues/26609
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26609/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26609/comments
https://api.github.com/repos/huggingface/transformers/issues/26609/events
https://github.com/huggingface/transformers/issues/26609
1,927,569,191
I_kwDOCUB6oc5y5F8n
26,609
AttributeError: module transformers has no attribute RWForCausalLM / MosaicGPT / GPT2LMHeadCustomModel / BTLMLMHeadModel / MossForCausalLM / MPTForCausalLM / InternLMForCausalLM / DistilBertJapaneseTokenizer / FalconForCausalLM / LongLlamaForCausalLM
{ "login": "amarahiqbal", "id": 141737528, "node_id": "U_kgDOCHK-OA", "avatar_url": "https://avatars.githubusercontent.com/u/141737528?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amarahiqbal", "html_url": "https://github.com/amarahiqbal", "followers_url": "https://api.github.com/users/amarahiqbal/followers", "following_url": "https://api.github.com/users/amarahiqbal/following{/other_user}", "gists_url": "https://api.github.com/users/amarahiqbal/gists{/gist_id}", "starred_url": "https://api.github.com/users/amarahiqbal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amarahiqbal/subscriptions", "organizations_url": "https://api.github.com/users/amarahiqbal/orgs", "repos_url": "https://api.github.com/users/amarahiqbal/repos", "events_url": "https://api.github.com/users/amarahiqbal/events{/privacy}", "received_events_url": "https://api.github.com/users/amarahiqbal/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @amarahiqbal, I'm not sure we have access to your entire script nor traceback. Could you please share it here so that we may see how to help you best?\r\n\r\nThanks.", "Sharing the script for 1 Attribute Error - \r\n\r\n```py\r\nfrom huggingface_hub import HfApi\r\nimport pandas as pd\r\nimport re\r\n\r\nLIST_OF_COLUMNS = ['modelId', 'downloads',\r\n 'lastModified', 'tags', 'pipeline_tag']\r\nTASK_NAME = ['fill-mask', 'token-classification', 'question-answering',\r\n 'summarization', 'text-generation', 'text-classification', 'translation']\r\nSTRING_TO_CHECK = 'transformers'\r\nmodel_name = \"cerebras/btlm-3b-8k-base\"\r\n\r\nhf_api = HfApi()\r\nmodels = hf_api.list_models(\r\n full=True, sort='lastModified', direction=-1)\r\n\r\nrequired_data = [i for i in models]\r\n\r\ndaata_dict = {}\r\n\r\nfor data in required_data:\r\n for key in data.__dict__.keys():\r\n if key in LIST_OF_COLUMNS:\r\n if daata_dict.get(key) is None:\r\n daata_dict[key] = []\r\n values = daata_dict.get(key)\r\n if key == 'tags':\r\n values.append(data.__dict__.get(key, [\"Empty\"]))\r\n else:\r\n values.append(data.__dict__.get(key, \"Empty\"))\r\n daata_dict[key] = values\r\n\r\ndf = pd.DataFrame(daata_dict)\r\ndf = df[df.tags.apply(lambda x: STRING_TO_CHECK in x)]\r\ndf = df[df['pipeline_tag'].isin(TASK_NAME)]\r\n\r\nrequired_data = df[df.modelId.apply(lambda x: x == model_name)]\r\ntask = required_data[\"pipeline_tag\"].to_string()\r\npattern = r'[0-9\\s+]'\r\ntask = re.sub(pattern, '', task)\r\n\r\nregistered_model_name = model_name.replace(\"/\", \"-\")\r\nartifact_path = registered_model_name + \"-artifact\"\r\nlibrary_name = library.get(task)\r\nmodel_library = getattr(transformers, library_name)\r\nmodel = model_library.from_pretrained(model_name, trust_remote_code=True)\r\n\r\nfrom transformers import AutoTokenizer\r\ntokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)\r\nfrom transformers import pipeline\r\nmodel_and_tokenizer = {\"model\": model, \"tokenizer\": tokenizer}\r\n\r\npipeline = pipeline(task=task, model=model, tokenizer=tokenizer)\r\n\r\noutput = generate_signature_output(\r\n pipeline, text_generation.input_data)\r\nsignature = infer_signature(text_generation.input_data, output)\r\nmlflow.transformers.log_model(\r\n transformers_model=pipeline,\r\n task=task,\r\n artifact_path=artifact_path,\r\n registered_model_name=registered_model_name,\r\n signature=signature,\r\n input_example=text_generation.input_data\r\n)\r\n\r\nfrom mlflow.tracking.client import MlflowClient\r\nclient = MlflowClient()\r\nregistered_model_detail = client.get_latest_versions(\r\n name=registered_model_name, stages=[\"None\"])\r\nmodel_detail = registered_model_detail[0]\r\nprint(\"Latest registered model version is : \", model_detail.version)\r\nloaded_model_pipeline = mlflow.transformers.load_model(\r\n model_uri=model_detail.source, return_type=\"pipeline\")\r\n```\r\n\r\nGot below Error - \r\n```\r\nFile /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/transformers/utils/import_utils.py:1165, in _LazyModule.__getattr__(self, name)\r\n 1163 value = getattr(module, name)\r\n 1164 else:\r\n-> 1165 raise AttributeError(f\"module {self.__name__} has no attribute {name}\")\r\n 1167 setattr(self, name, value)\r\n 1168 return value\r\n\r\nAttributeError: module transformers has no attribute BTLMLMHeadModel\r\n```", "Is that the full stack trace? It seems like there are a few tracebacks missing", "Its the entire end to end script. 
The error is coming on the last line when am loading back the model.\r\n\r\n---------------------------------------------------------------------------\r\nAttributeError Traceback (most recent call last)\r\nCell In[24], line 85\r\n 83 model_detail = registered_model_detail[0]\r\n 84 print(\"Latest registered model version is : \", model_detail.version)\r\n---> 85 loaded_model_pipeline = mlflow.transformers.load_model(model_uri=model_detail.source, return_type=\"pipeline\")\r\n\r\nFile /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/mlflow/utils/docstring_utils.py:235, in docstring_version_compatibility_warning.<locals>.annotated_func.<locals>.version_func(*args, **kwargs)\r\n 233 if installed_version < Version(min_ver) or installed_version > Version(max_ver):\r\n 234 warnings.warn(notice, category=FutureWarning, stacklevel=2)\r\n--> 235 return func(*args, **kwargs)\r\n\r\nFile /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/mlflow/transformers.py:822, in load_model(model_uri, dst_path, return_type, device, **kwargs)\r\n 813 raise MlflowException(\r\n 814 \"This model has been saved with a processor. Processor objects are \"\r\n 815 \"not compatible with Pipelines. Please load this model by specifying \"\r\n 816 \"the 'return_type'='components'.\",\r\n 817 error_code=BAD_REQUEST,\r\n 818 )\r\n 820 _add_code_from_conf_to_system_path(local_model_path, flavor_config)\r\n--> 822 return _load_model(local_model_path, flavor_config, return_type, device, **kwargs)\r\n\r\nFile /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/mlflow/transformers.py:870, in _load_model(path, flavor_config, return_type, device, **kwargs)\r\n 865 \"\"\"\r\n 866 Loads components from a locally serialized ``Pipeline`` object.\r\n 867 \"\"\"\r\n 868 import transformers\r\n--> 870 model_instance = getattr(transformers, flavor_config[_PIPELINE_MODEL_TYPE_KEY])\r\n 871 local_path = pathlib.Path(path)\r\n 872 model_path = local_path.joinpath(flavor_config.get(_MODEL_BINARY_KEY, _MODEL_BINARY_FILE_NAME))\r\n\r\nFile /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/transformers/utils/import_utils.py:1165, in _LazyModule.__getattr__(self, name)\r\n 1163 value = getattr(module, name)\r\n 1164 else:\r\n-> 1165 raise AttributeError(f\"module {self.__name__} has no attribute {name}\")\r\n 1167 setattr(self, name, value)\r\n 1168 return value\r\n\r\nAttributeError: module transformers has no attribute BTLMLMHeadModel\r\n", "Hi @amarahiqbal \r\nthe call \r\n\r\n```python\r\nmodel_instance = getattr(transformers, flavor_config[_PIPELINE_MODEL_TYPE_KEY])\r\n```\r\nwon't work as you are trying to import some modules that are not registered in the transformers library that they live on the Hub.\r\n\r\nI believe you can simply use `AutoModelForCausalLM` for your usecase and make sure to insert `trust_remote_code=True` when calling `from_pretrained`", "@amarahiqbal did you get the resolution? 
I am also facing the same issue.?\r\n", "```python\r\nmodel = AutoModelForCausalLM.from_pretrained(\"mosaicml/mpt-7b\", torch_dtype=torch.bfloat16, trust_remote_code=True)\r\ntokenizer = AutoTokenizer.from_pretrained(\"mosaicml/mpt-7b\")\r\n\r\nimport mlflow\r\nfrom mlflow.models.signature import ModelSignature\r\nfrom mlflow.types.schema import ColSpec, Schema\r\nimport numpy as np\r\n\r\n# Define the model input and output schema\r\ninput_schema = Schema([\r\n ColSpec(\"string\", \"prompt\"),\r\n ColSpec(\"double\", \"temperature\", optional=True),\r\n ColSpec(\"integer\", \"max_tokens\", optional=True),\r\n ColSpec(\"string\", \"stop\", optional=True),\r\n ColSpec(\"integer\", \"candidate_count\", optional=True)\r\n])\r\n\r\noutput_schema = Schema([\r\n ColSpec('string', 'predictions')\r\n])\r\n\r\nsignature = ModelSignature(inputs=input_schema, outputs=output_schema)\r\n\r\n# Define an example input\r\ninput_example = {\r\n \"prompt\": np.array([\r\n \"Below is an instruction that describes a task. \"\r\n \"Write a response that appropriately completes the request.\\n\\n\"\r\n \"### Instruction:\\n\"\r\n \"What is Apache Spark?\\n\\n\"\r\n \"### Response:\\n\"\r\n ])\r\n}\r\n\r\nregistered_model_name = f\"mpt-7b\"\r\n\r\n# Start a new MLflow run\r\nwith mlflow.start_run():\r\n components = {\r\n \"model\": model,\r\n \"tokenizer\": tokenizer,\r\n }\r\n mlflow.transformers.log_model(\r\n transformers_model=components,\r\n artifact_path=\"model\",\r\n signature=signature,\r\n input_example=input_example,\r\n metadata={\"task\": \"llm/v1/completions\"}\r\n )\r\n\r\n# After running above code. It able to log the run. But when I load it use this\r\n\r\nlogged_model = 'runs:/b7c5d5fb5a1f4c23b161578aab847cb9/model'\r\n\r\nloaded_model = mlflow.transformers.load_model(logged_model)\r\nloaded_model = mlflow.pyfunc.load_model(logged_model)\r\n\r\n # in both cases it thrown an error\r\n```\r\nError:\r\n```python\r\n---------------------------------------------------------------------------\r\nAttributeError Traceback (most recent call last)\r\nFile <command-4055176443088148>, line 5\r\n 2 logged_model = 'runs:/b7c5d5fb5a1f4c23b161578aab847cb9/model'\r\n 4 # Load model as a PyFuncModel.\r\n----> 5 loaded_model = mlflow.pyfunc.load_model(logged_model)\r\n 7 # Predict on a Pandas DataFrame.\r\n 8 # import pandas as pd\r\n 9 # loaded_model.predict(pd.DataFrame(data))\r\n\r\nFile /local_disk0/.ephemeral_nfs/envs/pythonEnv-7db9db37-9b10-4f51-b86e-a7a7f12a7c41/lib/python3.10/site-packages/mlflow/pyfunc/__init__.py:637, in load_model(model_uri, suppress_warnings, dst_path)\r\n 635 data_path = os.path.join(local_path, conf[DATA]) if (DATA in conf) else local_path\r\n 636 try:\r\n--> 637 model_impl = importlib.import_module(conf[MAIN])._load_pyfunc(data_path)\r\n 638 except ModuleNotFoundError as e:\r\n 639 if conf[MAIN] == _DATABRICKS_FS_LOADER_MODULE:\r\n\r\nFile /local_disk0/.ephemeral_nfs/envs/pythonEnv-7db9db37-9b10-4f51-b86e-a7a7f12a7c41/lib/python3.10/site-packages/mlflow/transformers.py:1544, in _load_pyfunc(path)\r\n 1541 flavor_configuration = _get_flavor_configuration(local_path, FLAVOR_NAME)\r\n 1542 inference_config = _get_inference_config(local_path.joinpath(_COMPONENTS_BINARY_KEY))\r\n 1543 return _TransformersWrapper(\r\n-> 1544 _load_model(str(local_path), flavor_configuration, \"pipeline\"),\r\n 1545 flavor_configuration,\r\n 1546 inference_config,\r\n 1547 )\r\n\r\nFile 
/local_disk0/.ephemeral_nfs/envs/pythonEnv-7db9db37-9b10-4f51-b86e-a7a7f12a7c41/lib/python3.10/site-packages/mlflow/transformers.py:881, in _load_model(path, flavor_config, return_type, device, **kwargs)\r\n 876 \"\"\"\r\n 877 Loads components from a locally serialized ``Pipeline`` object.\r\n 878 \"\"\"\r\n 879 import transformers\r\n--> 881 model_instance = getattr(transformers, flavor_config[_PIPELINE_MODEL_TYPE_KEY])\r\n 882 local_path = pathlib.Path(path)\r\n 883 # NB: Path resolution for models that were saved prior to 2.4.1 release when the pathing for\r\n 884 # the saved pipeline or component artifacts was handled by duplicate entries for components\r\n 885 # (artifacts/pipeline/* and artifacts/components/*) and pipelines were saved via the\r\n 886 # \"artifacts/pipeline/*\" path. In order to load the older formats after the change, the\r\n 887 # presence of the new path key is checked.\r\n\r\nFile /local_disk0/.ephemeral_nfs/envs/pythonEnv-7db9db37-9b10-4f51-b86e-a7a7f12a7c41/lib/python3.10/site-packages/transformers/utils/import_utils.py:1177, in _LazyModule.__getattr__(self, name)\r\n 1175 value = getattr(module, name)\r\n 1176 else:\r\n-> 1177 raise AttributeError(f\"module {self.__name__} has no attribute {name}\")\r\n 1179 setattr(self, name, value)\r\n 1180 return value\r\n\r\nAttributeError: module transformers has no attribute MPTForCausalLM\r\n\r\n```\r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,696
1,700
1,700
NONE
null
### System Info Python Version: 3.10.11 (main, May 16 2023, 00:28:57) [GCC 11.2.0] Transformers Version: 4.29.2 Platform: Azure ### Who can help? @gante @Rocketknight1 @ArthurZucker @younesbelkada ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction loaded_model_pipeline = mlflow.transformers.load_model(model_uri=model_detail.source, return_type="pipeline") File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/transformers/utils/import_utils.py:1165, in _LazyModule.__getattr__(self, name) 1163 value = getattr(module, name) 1164 else: -> 1165 raise AttributeError(f"module {self.__name__} has no attribute {name}") 1167 setattr(self, name, value) 1168 return value AttributeError: module transformers has no attribute BaiChuanForCausalLM ### Expected behavior Am bringing the model from hugging face and registering it in the Azure workspace. But when am trying to load the model back, it is throwing me AttributeError.
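As noted in the comments for this record, architectures such as `BaiChuanForCausalLM` or `MPTForCausalLM` are defined in remote code on the Hub rather than inside the `transformers` package, so `getattr(transformers, name)` cannot resolve them. A minimal sketch of loading such a checkpoint through the Auto classes instead (the repo name is only an example):

```python
# Sketch: load a Hub model whose architecture class only exists as remote code.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "mosaicml/mpt-7b"  # example repo that ships its own modeling code

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,  # required: the class lives in the repo, not in transformers
)
```

Note that this only covers loading in user code; it does not change how `mlflow.transformers.load_model` re-resolves the class name, which is where the traceback above is raised.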
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26609/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26609/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/26608
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26608/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26608/comments
https://api.github.com/repos/huggingface/transformers/issues/26608/events
https://github.com/huggingface/transformers/pull/26608
1,927,564,100
PR_kwDOCUB6oc5b-O8G
26,608
[ `NougatProcessor`] Fix the default channel
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26608). All of your documentation changes will be reflected on that endpoint." ]
1,696
1,696
1,696
COLLABORATOR
null
# What does this PR do? Fixes #26597 by updating the default data format
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26608/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26608/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26608", "html_url": "https://github.com/huggingface/transformers/pull/26608", "diff_url": "https://github.com/huggingface/transformers/pull/26608.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26608.patch", "merged_at": 1696491489000 }
https://api.github.com/repos/huggingface/transformers/issues/26607
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26607/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26607/comments
https://api.github.com/repos/huggingface/transformers/issues/26607/events
https://github.com/huggingface/transformers/pull/26607
1,927,537,864
PR_kwDOCUB6oc5b-JNr
26,607
Fix failing tests on `main` due to torch 2.1
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@michaelbenayoun Could you help us on the torch fx tests for wav2vec2/hubert with torch 2.1.\r\n\r\nSee [this internal discussion](https://huggingface.slack.com/archives/C01NE71C4F7/p1695971833833119)\r\n\r\nBut in short, it can't do\r\n```\r\nif attn_weights.size() != (bsz * self.num_heads, tgt_len, src_len)\r\n```\r\nas the corresponding proxy object has no `_metadata` attribute (with torch 2.1) but it has it with torch 2.0." ]
1,696
1,696
1,696
COLLABORATOR
null
# What does this PR do? Fix failing tests on `main` due to torch 2.1
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26607/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26607/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26607", "html_url": "https://github.com/huggingface/transformers/pull/26607", "diff_url": "https://github.com/huggingface/transformers/pull/26607.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26607.patch", "merged_at": 1696494425000 }
https://api.github.com/repos/huggingface/transformers/issues/26606
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26606/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26606/comments
https://api.github.com/repos/huggingface/transformers/issues/26606/events
https://github.com/huggingface/transformers/pull/26606
1,927,530,775
PR_kwDOCUB6oc5b-HrP
26,606
[`LlamaTokenizerFast`] Adds edge cases for the template processor
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Yep rebased on main to remove them! " ]
1,696
1,696
1,696
COLLABORATOR
null
# What does this PR do? Fixes #26605 by making sure the LlamaTokenizerFast handles the cases when bos or eos is None and `add_bos_token` is set to `True` by raising an error.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26606/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26606/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26606", "html_url": "https://github.com/huggingface/transformers/pull/26606", "diff_url": "https://github.com/huggingface/transformers/pull/26606.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26606.patch", "merged_at": 1696603255000 }
https://api.github.com/repos/huggingface/transformers/issues/26605
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26605/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26605/comments
https://api.github.com/repos/huggingface/transformers/issues/26605/events
https://github.com/huggingface/transformers/issues/26605
1,927,367,296
I_kwDOCUB6oc5y4UqA
26,605
Cannot save/load tokenizer without special tokens
{ "login": "jonathanasdf", "id": 511073, "node_id": "MDQ6VXNlcjUxMTA3Mw==", "avatar_url": "https://avatars.githubusercontent.com/u/511073?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jonathanasdf", "html_url": "https://github.com/jonathanasdf", "followers_url": "https://api.github.com/users/jonathanasdf/followers", "following_url": "https://api.github.com/users/jonathanasdf/following{/other_user}", "gists_url": "https://api.github.com/users/jonathanasdf/gists{/gist_id}", "starred_url": "https://api.github.com/users/jonathanasdf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jonathanasdf/subscriptions", "organizations_url": "https://api.github.com/users/jonathanasdf/orgs", "repos_url": "https://api.github.com/users/jonathanasdf/repos", "events_url": "https://api.github.com/users/jonathanasdf/events{/privacy}", "received_events_url": "https://api.github.com/users/jonathanasdf/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks for reporting fixing now 😉 " ]
1,696
1,696
1,696
NONE
null
### System Info main ### Who can help? @ArthurZucker ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` import transformers tokenizer = transformers.AutoTokenizer.from_pretrained('stabilityai/FreeWilly2') tokenizer.pad_token_id = None tokenizer.eos_token_id = None tokenizer.bos_token_id = None tokenizer.unk_token_id = None tokenizer.save_pretrained('/tmp/tok_test') transformers.AutoTokenizer.from_pretrained('/tmp/tok_test') ``` ``` transformers/models/llama/tokenization_llama_fast.py", line 152, in update_post_processor single = f"{(bos+':0 ') * self.add_bos_token}$A:0{(' '+eos+':0') if self.add_eos_token else ''}" TypeError: unsupported operand type(s) for +: 'NoneType' and 'str' ``` ### Expected behavior This works on transformers==4.33.1 I use a separate tokenizer for user input so it doesn't convert "special tokens" in unsanitized input. Maybe there's a different way to do this than setting all the special tokens to None?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26605/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26605/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/26604
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26604/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26604/comments
https://api.github.com/repos/huggingface/transformers/issues/26604/events
https://github.com/huggingface/transformers/pull/26604
1,927,264,931
PR_kwDOCUB6oc5b9N_D
26,604
Create motion-detection.py
{ "login": "mdazfar2", "id": 100375390, "node_id": "U_kgDOBfubXg", "avatar_url": "https://avatars.githubusercontent.com/u/100375390?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mdazfar2", "html_url": "https://github.com/mdazfar2", "followers_url": "https://api.github.com/users/mdazfar2/followers", "following_url": "https://api.github.com/users/mdazfar2/following{/other_user}", "gists_url": "https://api.github.com/users/mdazfar2/gists{/gist_id}", "starred_url": "https://api.github.com/users/mdazfar2/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mdazfar2/subscriptions", "organizations_url": "https://api.github.com/users/mdazfar2/orgs", "repos_url": "https://api.github.com/users/mdazfar2/repos", "events_url": "https://api.github.com/users/mdazfar2/events{/privacy}", "received_events_url": "https://api.github.com/users/mdazfar2/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @mdazfar2 I'm not sure I asked for this? In which issue was this mentioned?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,696
1,699
1,699
NONE
null
@LysandreJik done, I will upload the code according to your request. If you like, feel free to ask anything.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26604/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26604/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26604", "html_url": "https://github.com/huggingface/transformers/pull/26604", "diff_url": "https://github.com/huggingface/transformers/pull/26604.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26604.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/26603
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26603/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26603/comments
https://api.github.com/repos/huggingface/transformers/issues/26603/events
https://github.com/huggingface/transformers/issues/26603
1,927,138,431
I_kwDOCUB6oc5y3cx_
26,603
[Docs] Weird rendering of a code block
{ "login": "wfjsw", "id": 2220320, "node_id": "MDQ6VXNlcjIyMjAzMjA=", "avatar_url": "https://avatars.githubusercontent.com/u/2220320?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wfjsw", "html_url": "https://github.com/wfjsw", "followers_url": "https://api.github.com/users/wfjsw/followers", "following_url": "https://api.github.com/users/wfjsw/following{/other_user}", "gists_url": "https://api.github.com/users/wfjsw/gists{/gist_id}", "starred_url": "https://api.github.com/users/wfjsw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wfjsw/subscriptions", "organizations_url": "https://api.github.com/users/wfjsw/orgs", "repos_url": "https://api.github.com/users/wfjsw/repos", "events_url": "https://api.github.com/users/wfjsw/events{/privacy}", "received_events_url": "https://api.github.com/users/wfjsw/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I think it's because there's an extra line between where the function is defined and where we map it. Removing the extra line should fix it:\r\n\r\n```diff\r\n>>> def tokenize_dataset(dataset):\r\n... return tokenizer(dataset[\"text\"])\r\n# remove blank line here\r\n\r\n>>> dataset = dataset.map(tokenize_dataset, batched=True)\r\n```" ]
1,696
1,696
1,696
CONTRIBUTOR
null
See https://github.com/huggingface/transformers/blob/3e203f92bed937fa13c35adee1bdc45a92d18e61/docs/source/zh/quicktour.md?plain=1#L459 https://huggingface.co/docs/transformers/main/zh/quicktour#:~:text=dataset%20%3D%20dataset.map(tokenize_dataset%2C%20batched%3DTrue) ![image](https://github.com/huggingface/transformers/assets/2220320/14a17760-3fbc-4187-896f-cf4e83d0b81f) I cannot come up with a valid reason why it is rendered in this way.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26603/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26603/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/26602
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26602/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26602/comments
https://api.github.com/repos/huggingface/transformers/issues/26602/events
https://github.com/huggingface/transformers/pull/26602
1,927,131,303
PR_kwDOCUB6oc5b8xTw
26,602
fix RoPE t range issue for fp16
{ "login": "rui-ren", "id": 15321482, "node_id": "MDQ6VXNlcjE1MzIxNDgy", "avatar_url": "https://avatars.githubusercontent.com/u/15321482?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rui-ren", "html_url": "https://github.com/rui-ren", "followers_url": "https://api.github.com/users/rui-ren/followers", "following_url": "https://api.github.com/users/rui-ren/following{/other_user}", "gists_url": "https://api.github.com/users/rui-ren/gists{/gist_id}", "starred_url": "https://api.github.com/users/rui-ren/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rui-ren/subscriptions", "organizations_url": "https://api.github.com/users/rui-ren/orgs", "repos_url": "https://api.github.com/users/rui-ren/repos", "events_url": "https://api.github.com/users/rui-ren/events{/privacy}", "received_events_url": "https://api.github.com/users/rui-ren/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Seems fair, WDYT @Rocketknight1 ?", "This will cause outputs to change numerically a bit when running in `float16` or `bfloat16` because `freqs` will be calculated in higher precision. The memory/performance impact probably wouldn't be huge, but this might introduce a small deviation for models that were trained in those precisions! Let me run some tests before I approve it.", "After testing, outputs seem equivalent for `bfloat16` models so I'm happy to approve this!", "@rui-ren let me know if you want to add anything else to this PR, or if you're happy for me to merge it now!", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26602). All of your documentation changes will be reflected on that endpoint.", "@Rocketknight1 Please merge this PR. Thank you for your review. ", "Done. Thanks for a clean and helpful PR @rui-ren!" ]
1,696
1,696
1,696
CONTRIBUTOR
null
#### Issue Sometimes, when training with `fp16`, the `dtype` of `self.inv_freq` is changed from `fp32` to `fp16`. This causes the position tensor `t` to be created with a dtype of `fp16`, like ``` t = torch.arange(seq_len, device=device, dtype=torch.float16) ``` After converting to an ONNX graph, however, the Range op in `onnx` does not support `fp16`, as shown [here](https://github.com/onnx/onnx/blob/e11dacfa9930eafd3b34391ef5422d09ba9896dc/onnx/defs/generator/defs.cc#L488-L557) #### Update Use the following to avoid this scenario: ``` t = torch.arange(seq_len, device=device).to(dtype) ```
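A small, self-contained sketch of the dtype behaviour described above, assuming a cached `inv_freq` buffer that has been cast to `fp16`; names and shapes are illustrative only.

```python
# Sketch of the two ways of building the RoPE position tensor discussed in this PR.
import torch

seq_len, dim = 16, 8
inv_freq = 1.0 / (10000 ** (torch.arange(0, dim, 2).float() / dim))
inv_freq = inv_freq.to(torch.float16)  # what happens when the module is cast to fp16

# Problematic pattern from the description: the Range op itself is asked for fp16 output,
# which the ONNX Range operator does not support.
#   t = torch.arange(seq_len, device=inv_freq.device, dtype=inv_freq.dtype)

# Pattern used by the fix: generate positions in the default (int64) dtype, then cast,
# so the exported Range op stays in a supported dtype.
t = torch.arange(seq_len, device=inv_freq.device).to(inv_freq.dtype)

freqs = t[:, None] * inv_freq[None, :]  # broadcasting outer product, stays in fp16
print(t.dtype, freqs.shape)             # torch.float16 torch.Size([16, 4])
```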
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26602/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26602/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26602", "html_url": "https://github.com/huggingface/transformers/pull/26602", "diff_url": "https://github.com/huggingface/transformers/pull/26602.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26602.patch", "merged_at": 1696590294000 }
https://api.github.com/repos/huggingface/transformers/issues/26601
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26601/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26601/comments
https://api.github.com/repos/huggingface/transformers/issues/26601/events
https://github.com/huggingface/transformers/issues/26601
1,927,102,726
I_kwDOCUB6oc5y3UEG
26,601
llama2 forward pass seemingly not working with padded inputs, unless one element in batch is not padded
{ "login": "joebhakim", "id": 13984157, "node_id": "MDQ6VXNlcjEzOTg0MTU3", "avatar_url": "https://avatars.githubusercontent.com/u/13984157?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joebhakim", "html_url": "https://github.com/joebhakim", "followers_url": "https://api.github.com/users/joebhakim/followers", "following_url": "https://api.github.com/users/joebhakim/following{/other_user}", "gists_url": "https://api.github.com/users/joebhakim/gists{/gist_id}", "starred_url": "https://api.github.com/users/joebhakim/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/joebhakim/subscriptions", "organizations_url": "https://api.github.com/users/joebhakim/orgs", "repos_url": "https://api.github.com/users/joebhakim/repos", "events_url": "https://api.github.com/users/joebhakim/events{/privacy}", "received_events_url": "https://api.github.com/users/joebhakim/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "WDYT @ArthurZucker @Rocketknight1 ?", "Running on CPU + float32, but I can't reproduce this issue, unfortunately! The pasted code works for me. So either the issue is specific to the device + dtype, which seems unlikely, or it's some weird dependency/version problem.", "@Rocketknight1 which vesion of CUDA are you on (I'm on 11.7 due to compatibility stuff)? Also, how did you get float32 to work? I get \r\n`RuntimeError: FlashAttention only support fp16 and bf16 data type` when I set `torch_dtype=torch.float32`, when it's on a cuda device. When it's not, I get `ValueError: Pointer argument (at 0) cannot be accessed from Triton (cpu tensor?)`, so I assumed it *had* to be initialized like I did it?", "Ah, sorry! I realized the issue - your reproduction script used `trust_remote_code=True`, but the code worked for me without it, so I thought that was just a mistake and I didn't need it. I realize now that the repo actually does have remote code, it just uses the `LLaMA` class names, and as a result will map to the code in the `transformers` library when `trust_remote_code=False`. When I use `trust_remote_code=True`, the error occurs.\r\n\r\nUnfortunately, this means the error is specific to the user code in that repo - I'd suggest opening [an issue on the repo](https://huggingface.co/togethercomputer/Llama-2-7B-32K-Instruct/discussions) instead and pinging the authors!", "Opened an issue on their page, will close this one. Thanks @Rocketknight1!", "Hello , Did you solve thie problem @joebhakim " ]
1,696
1,700
1,696
NONE
null
### System Info - `transformers` version: 4.33.2 - Platform: Linux-3.10.0-1160.99.1.el7.x86_64-x86_64-with-glibc2.17 - Python version: 3.9.16 - Huggingface_hub version: 0.17.2 - Safetensors version: 0.3.3 - Accelerate version: 0.23.0 - Accelerate config: not found - PyTorch version (GPU?): 2.1.0.dev20230621+cu121 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Below is an example script that creates an example input with padding, then uses the first element (replicating when the tokenizer batch dimension is larger than the batch dimension fed into a model). Please let me know if I'm missing something! ``` import torch from transformers import AutoTokenizer, AutoModelForCausalLM, LlamaTokenizerFast tokenizer = AutoTokenizer.from_pretrained("togethercomputer/Llama-2-7B-32K-Instruct") tokenizer = LlamaTokenizerFast.from_pretrained( "togethercomputer/Llama-2-7B-32K-Instruct" ) model = AutoModelForCausalLM.from_pretrained( "togethercomputer/Llama-2-7B-32K-Instruct", trust_remote_code=True, torch_dtype=torch.float16, ).cuda() """ THIS works in both cases model = MT5ForConditionalGeneration.from_pretrained( 'google/mt5-xl' """ encoded = tokenizer( [ "[INST]\nWrite a poem about cats\n[/INST]\n\n", "[INST]\nWrite " + "a poem about" * 400 + " cats\n[/INST]\n\n", ], return_tensors="pt", padding="longest", ).to(model.device) encoded_firstelem = { "input_ids": encoded["input_ids"][:1, :], "attention_mask": encoded["attention_mask"][:1, :], } breakpoint() print(encoded_firstelem) # {'input_ids': tensor([[ 0, 0, 0, ..., 29962, 13, 13]], device='cuda:0'), 'attention_mask': tensor([[0, 0, 0, ..., 1, 1, 1]], device='cuda:0')} # works print(model(**encoded)) # breaks print(model(**encoded_firstelem)) ``` This gives the following: ``` Traceback (most recent call last): File "<console>", line 1, in <module> File "/home/jh499/env_39/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/home/jh499/env_39/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl return forward_call(*args, **kwargs) File "/home/jh499/.cache/huggingface/modules/transformers_modules/togethercomputer/LLaMA-2-7B-32K/3c84db12268fac86081ec1229a5ef48414478c88/modeling_flash_llama.py", line 812, in forward outputs = self.model( File "/home/jh499/env_39/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/home/jh499/env_39/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl return forward_call(*args, **kwargs) File "/home/jh499/.cache/huggingface/modules/transformers_modules/togethercomputer/LLaMA-2-7B-32K/3c84db12268fac86081ec1229a5ef48414478c88/modeling_flash_llama.py", line 696, in forward layer_outputs = decoder_layer( File "/home/jh499/env_39/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/home/jh499/env_39/lib/python3.9/site-packages/torch/nn/modules/module.py", 
line 1511, in _call_impl return forward_call(*args, **kwargs) File "/home/jh499/.cache/huggingface/modules/transformers_modules/togethercomputer/LLaMA-2-7B-32K/3c84db12268fac86081ec1229a5ef48414478c88/modeling_flash_llama.py", line 447, in forward hidden_states, self_attn_weights, present_key_value = self.self_attn( File "/home/jh499/env_39/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/home/jh499/env_39/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl return forward_call(*args, **kwargs) File "/home/jh499/.cache/huggingface/modules/transformers_modules/togethercomputer/LLaMA-2-7B-32K/3c84db12268fac86081ec1229a5ef48414478c88/modeling_flash_llama.py", line 380, in forward attn_output = pad_input( RuntimeError: shape '[1, 1215, 4096]' is invalid for input of size 73728 ``` ### Expected behavior This should work with the padded example, as it does in the mt5-xl model.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26601/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26601/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/26600
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26600/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26600/comments
https://api.github.com/repos/huggingface/transformers/issues/26600/events
https://github.com/huggingface/transformers/issues/26600
1,927,078,767
I_kwDOCUB6oc5y3ONv
26,600
Introducing magic number to get the probability distribution of an image captioning model
{ "login": "snpushpi", "id": 55248448, "node_id": "MDQ6VXNlcjU1MjQ4NDQ4", "avatar_url": "https://avatars.githubusercontent.com/u/55248448?v=4", "gravatar_id": "", "url": "https://api.github.com/users/snpushpi", "html_url": "https://github.com/snpushpi", "followers_url": "https://api.github.com/users/snpushpi/followers", "following_url": "https://api.github.com/users/snpushpi/following{/other_user}", "gists_url": "https://api.github.com/users/snpushpi/gists{/gist_id}", "starred_url": "https://api.github.com/users/snpushpi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/snpushpi/subscriptions", "organizations_url": "https://api.github.com/users/snpushpi/orgs", "repos_url": "https://api.github.com/users/snpushpi/repos", "events_url": "https://api.github.com/users/snpushpi/events{/privacy}", "received_events_url": "https://api.github.com/users/snpushpi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Also cc @rafaelpadilla @amyeroberts ", "Hi @snpushpi \r\nThanks a lot for your question, per my understanding the output logits have the shape that you have specified, `batch_size, sequence_length, vocab_size`\r\nMeaning each element in the second dimension corresponds to `P(w_{i+1}| image, w_i, ..)` - in your case `output[:,32:,:]` will give you all the logits after the 32th token (as you are slicing it with `32:`) (it might contain more than one), if you want to retrieve let's say the last token logits you can just get ` output[:,-1,:]`\r\nHope this was clear, I will let others reply and add any extra clarification in case I missed any", "so the issue is that if we tokenized the sentence(using the tokenizer we are supposed to use for blip2) there are around 18 tokens, but why are there 50 things in the second dimension? Why are the rest 32 things for? I would like to know if I am getting the probability distribution right. Thank you @younesbelkada ", "Hi @snpushpi \r\n\r\nIn your example the text tokens (`input_ids`) has shape (1,20) and the logits output by your model has shape (1, 52, 50272), which, as stated by @younesbelkada, represents `batch_size, sequence_length, vocab_size`\r\n\r\nThe `sequence_length = 52` results from the concatenation of the 20 (from the `input_ids`) + 32 (from the model's `num_query_tokens`).\r\n\r\nI'm not sure if you can obtain the **probabilities** of the tokens (not words themselves) by simply accessing them directly by their index , because they do not sum up to 1 - As for example, accessing the probabilities of the ith token, with respect to the first image (batch=0) you just do `outputs[0, i, :]`. You might need to pass them by a softmax to possibly represent them as probabilities.", "@rafaelpadilla thanks for your response. MY question is how di I know that the models num_query tokens are appended after and not before? So I basically calculated the suprisals from the model for a given image and I figured that the values make so much more sense when I start considering considered_logits = output[:,32:,:] and not considered_logits = output[:,:20,:]. This is the rest of my code - \r\n`for i in range(1, input_ids.shape[1]-1):\r\n\r\n token_logits = considered_logits[:,i,:]\r\n\r\n token_prob = torch.log_softmax(token_logits,dim=-1)[0,input_ids[0,i+1]]\r\n\r\n token_surprisal = -token_prob/math.log(2)\r\n\r\n print(processor.tokenizer.convert_ids_to_tokens([input_ids[0,i+1]]), token_surprisal)`\r\n\r\nso when my considered_logits= [:,32:,:] the surpurisal values are small and nice and increase if say, I replace \"giraffe\" with an elephant, or change other things in the sentence that are obviously present in the image. but if I take considered_logits= [:,:20,:] then the surprisal values are really big and doesn't change systematically when I change the tokens. \r\n\r\nDo you think this is right approach to chop off the first 32 entries in the second dimension? or do you think there's some value to those chopped off entries in calculating the distribution? \r\n\r\nThank you for your time and help! @younesbelkada @rafaelpadilla \r\n\r\n", "Hi @snpushpi,\r\n\r\nTo answer your question, I delved into the code and compared it with the scheme from the [Blip2 paper](https://arxiv.org/pdf/2301.12597.pdf). Basically, the LLM (OPT) receives the output of the Q-Former, and it is reflected in the code [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/blip_2/modeling_blip_2.py#L1504). 
\r\nSee that the LLM receives the `inputs_embeds`, which comes from the concatenation of `language_model_inputs` and `inputs_embeds`:\r\n```python\r\ninputs_embeds = torch.cat([language_model_inputs, inputs_embeds], dim=1)\r\n```\r\nThe `language_model_inputs` comes from a projection that uses the `self.qformer`'s output, which receives the `image_embeds`.\r\nOn the other hand, the `inputs_embeds` results from the text tokens (`input_ids`) passed to the language model's embeddings.\r\nThus, I suppose that's why the first part of the output logits may have a higher correlation to the image (`image_embeds`) and the last part could be more correlated to the text (`inputs_embeds`). \r\n\r\nHowever the logits do not represent the probabilities. So, passing the logits by a softmax is an attempt to make them look like probabilities and sum up to 1. Although your test shows that this might be correct, I would test more samples varying images and texts to make sure that this is the right way to go.\r\n\r\nI hope that helps :) ", "Hi @rafaelpadilla \r\nThanks for sharing more valuable details! So I ran the code using output[:,32:,:] on 36 images and their captions and the surprisals of those tokens in each caption are qualitatively similar to what they look like in gpt2, as in collecting surprisal values from gpt2 using the same captions. But if I do output[:,:20,:], they look quite off, a whole lot more off than what they look like in gpt2. So that makes me really feel that output[:,32:,:] should be the way to go, but the issue is idk why it doesn't do better than gpt2 cuz it really should given that gpt2 surprisals are just being calculated from text context whereas this is being calculated from both text and image context. So shouldn't the surprisals be better for blip2? For context, I am analyzing some human reading task reaction time data with these surprisal values, and blip2 surprisals and gpt2 perform almost the same.\r\n\r\nSo I wonder if the output[,:32,:] has something valuable in it that I should take into account while calculating probability distributions and hence surprisals. I also wonder if there is any way to calculate image conditioned probability distribution in general from other models that you know could do better than gpt2? \r\n\r\nThanks again for your time and thoughts!", "Hey @snpushpi,\r\n\r\nThat's a very interesting work of yours! I could shout out many hypothesis why gpt2 may have better results than blip2, but they would be merely guesses. I had to take a deeper dive into each model, training strategy and datasets to understand what's happening under the table in details. Maybe [this video](https://www.youtube.com/watch?v=k0DAtZCCl1w) can give you some insights.\r\nAlso, as you're interested in the text part only, the `get_text_features` from the Blip2Model ([here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/blip_2/modeling_blip_2.py#L1253)) could be tried.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,696
1,700
1,700
NONE
null
### System Info I was considering the BLIP2 model for getting the probability distribution of each token in the caption given an image. So basically if the words in a caption are w1,w2,w3,…wt then I want to get these estimates P(w1|image), P(w2|image,w1),P(w3|image,w1,w2) etc. So this is the approach I took ```python from PIL import Image import requests from transformers import Blip2Processor, Blip2ForConditionalGeneration import torch device = "cuda" processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b") model = Blip2Model.from_pretrained( "Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16 ) model.to(device) url = "http://images.cocodataset.org/val2017/000000485895.jpg" image = Image.open(requests.get(url, stream=True).raw) pixel_values = processor(images=image, return_tensors="pt").pixel_values pixel_values = pixel_values.to(device, torch.float16) sentence = 'A giraffe stares into the camera while standing on green grass in front of a shade tree.' input_ids = processor.tokenizer(sentence, return_tensors = 'pt').input_ids.to(device) output = model(pixel_values,input_ids = input_ids).logits.detach() considered_logits = output[:,32:,:] ``` So the considered logits is the probability distribution of each token in the caption, am I doing it right? I know this isn't a bug but this is weird because I am using this weird magic number 32, but I posted it on forum and no one replied to me :( can i please get a response here? ### Who can help? @gante @ArthurZucker @younesbelkada ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction The code is provided above ### Expected behavior from documentation of blip2 model forward method output: logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head of the language model. But I see a much longer sequence(longer than the sequence length) and it only made sense to chop off the first 32 from the 1st dimension, I wonder why this is happening?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26600/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26600/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/26599
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26599/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26599/comments
https://api.github.com/repos/huggingface/transformers/issues/26599/events
https://github.com/huggingface/transformers/issues/26599
1,927,008,060
I_kwDOCUB6oc5y2888
26,599
Recent docker images missing from dockerhub
{ "login": "Ben-Epstein", "id": 22605641, "node_id": "MDQ6VXNlcjIyNjA1NjQx", "avatar_url": "https://avatars.githubusercontent.com/u/22605641?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Ben-Epstein", "html_url": "https://github.com/Ben-Epstein", "followers_url": "https://api.github.com/users/Ben-Epstein/followers", "following_url": "https://api.github.com/users/Ben-Epstein/following{/other_user}", "gists_url": "https://api.github.com/users/Ben-Epstein/gists{/gist_id}", "starred_url": "https://api.github.com/users/Ben-Epstein/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ben-Epstein/subscriptions", "organizations_url": "https://api.github.com/users/Ben-Epstein/orgs", "repos_url": "https://api.github.com/users/Ben-Epstein/repos", "events_url": "https://api.github.com/users/Ben-Epstein/events{/privacy}", "received_events_url": "https://api.github.com/users/Ben-Epstein/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @ydshieh in case you have the bandwidth to take a look at this!", "Thank you for reporting @Ben-Epstein .\r\n\r\n@glegendre01 @mfuntowicz Due to the disk space issue, I change the workflow file to build AMD CI docker image on AMD machine. See https://github.com/huggingface/transformers/blob/54e17a15dc4fb4be329eb9aaf534a4c6e776d598/.github/workflows/build-docker-images.yml#L212-L213\r\n\r\nIt worked on Sep. 21, see [here](https://github.com/huggingface/transformers/actions/runs/6255587777) but failing next day [here](https://github.com/huggingface/transformers/actions/runs/6268551528/job/17023670809)\r\n\r\nDo you have any idea what could be the cause? \r\n\r\nThe failing is \r\nSet up docker Buildx\r\n```\r\nError: EACCES: permission denied, mkdir '/home/github_actions/.docker/buildx/certs'\r\n```\r\n", "The doc builder docker image fails to build due to `pytorch-quantization` installation. \r\n\r\nIt's likely it is not working well with the just released torch 2.1.\r\n\r\nAs this image is for building doc, I don't know well if this package is required (I guess so). \r\n\r\n@LysandreJik Are you OK if I change the docker file to pin torch 2.0 here?", "There other failing build is due to disk issue. I will take a look", "> @LysandreJik Are you OK if I change the docker file to pin torch 2.0 here?\r\n\r\nYes, no problem for me!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "There is still AMD docker image failing to build, but it's a known issue.\r\nThe daily CI image fails to build again on Nov. 2 - I will check." ]
1,696
1,701
1,701
NONE
null
### System Info It seems like your GHA to push released images to dockerhub has been failing for a bit https://github.com/huggingface/transformers/actions/workflows/build-docker-images.yml I wanted to flag it incase it was going unnoticed. Your latest is consistently being updated, <img width="337" alt="image" src="https://github.com/huggingface/transformers/assets/22605641/9a6e1918-0ac0-4c2e-808b-eb1eca847e61"> but your tagged images aren't getting released anymore (hasn't been pushed since 4.29) <img width="1279" alt="image" src="https://github.com/huggingface/transformers/assets/22605641/2103fdd5-6491-4f47-baf8-770f9fec4d88"> ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction N/A ### Expected behavior N/A
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26599/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26599/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/26598
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26598/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26598/comments
https://api.github.com/repos/huggingface/transformers/issues/26598/events
https://github.com/huggingface/transformers/issues/26598
1,926,729,066
I_kwDOCUB6oc5y141q
26,598
[SpeechT5] Attention mask not changed according to decoder inputs
{ "login": "Joao-Maria-Janeiro", "id": 34111347, "node_id": "MDQ6VXNlcjM0MTExMzQ3", "avatar_url": "https://avatars.githubusercontent.com/u/34111347?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Joao-Maria-Janeiro", "html_url": "https://github.com/Joao-Maria-Janeiro", "followers_url": "https://api.github.com/users/Joao-Maria-Janeiro/followers", "following_url": "https://api.github.com/users/Joao-Maria-Janeiro/following{/other_user}", "gists_url": "https://api.github.com/users/Joao-Maria-Janeiro/gists{/gist_id}", "starred_url": "https://api.github.com/users/Joao-Maria-Janeiro/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Joao-Maria-Janeiro/subscriptions", "organizations_url": "https://api.github.com/users/Joao-Maria-Janeiro/orgs", "repos_url": "https://api.github.com/users/Joao-Maria-Janeiro/repos", "events_url": "https://api.github.com/users/Joao-Maria-Janeiro/events{/privacy}", "received_events_url": "https://api.github.com/users/Joao-Maria-Janeiro/received_events", "type": "User", "site_admin": false }
[]
open
false
{ "login": "ylacombe", "id": 52246514, "node_id": "MDQ6VXNlcjUyMjQ2NTE0", "avatar_url": "https://avatars.githubusercontent.com/u/52246514?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ylacombe", "html_url": "https://github.com/ylacombe", "followers_url": "https://api.github.com/users/ylacombe/followers", "following_url": "https://api.github.com/users/ylacombe/following{/other_user}", "gists_url": "https://api.github.com/users/ylacombe/gists{/gist_id}", "starred_url": "https://api.github.com/users/ylacombe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ylacombe/subscriptions", "organizations_url": "https://api.github.com/users/ylacombe/orgs", "repos_url": "https://api.github.com/users/ylacombe/repos", "events_url": "https://api.github.com/users/ylacombe/events{/privacy}", "received_events_url": "https://api.github.com/users/ylacombe/received_events", "type": "User", "site_admin": false }
[ { "login": "ylacombe", "id": 52246514, "node_id": "MDQ6VXNlcjUyMjQ2NTE0", "avatar_url": "https://avatars.githubusercontent.com/u/52246514?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ylacombe", "html_url": "https://github.com/ylacombe", "followers_url": "https://api.github.com/users/ylacombe/followers", "following_url": "https://api.github.com/users/ylacombe/following{/other_user}", "gists_url": "https://api.github.com/users/ylacombe/gists{/gist_id}", "starred_url": "https://api.github.com/users/ylacombe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ylacombe/subscriptions", "organizations_url": "https://api.github.com/users/ylacombe/orgs", "repos_url": "https://api.github.com/users/ylacombe/repos", "events_url": "https://api.github.com/users/ylacombe/events{/privacy}", "received_events_url": "https://api.github.com/users/ylacombe/received_events", "type": "User", "site_admin": false } ]
[ "cc @ylacombe could you take a look when you get the chance? You know SpeechT5 pretty well by now!", "Hey, thanks for opening this issue!\r\nI will take a look in the next few days, in the meantime, do you have a script to reproduce the mismatch @Joao-Maria-Janeiro ?", "Hey @Joao-Maria-Janeiro , any update on a reproducing script?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "@ylacombe - +1 This is still an issue. It's very easy to reproduce:\r\n\r\n```python\r\nfrom transformers import SpeechT5Processor, SpeechT5ForSpeechToSpeech\r\nimport numpy as np\r\n\r\nprocessor = SpeechT5Processor.from_pretrained(\"microsoft/speecht5_vc\")\r\nmodel = SpeechT5ForSpeechToSpeech.from_pretrained(\"microsoft/speecht5_vc\")\r\n\r\nfeatures = processor(\r\n audio=[np.random.random(size=(2048,)) for waveform in range(3)],\r\n audio_target=[np.random.random(size=(2048,)) for waveform in range(3)], \r\n return_tensors=\"pt\",\r\n padding=True,\r\n sampling_rate=16000,\r\n)\r\noutputs = model(**features, return_dict=True)\r\n```\r\n\r\nProduces:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"[REDACTED]/reproduce.py\", line 8, in <module>\r\n outputs = model(**features, return_dict=True)\r\n File \"/[REDACTED]/torch/nn/modules/module.py\", line 1518, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n File \"/[REDACTED]/torch/nn/modules/module.py\", line 1527, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/[REDACTED]/transformers/models/speecht5/modeling_speecht5.py\", line 2953, in forward\r\n outputs = self.speecht5(\r\n File \"/[REDACTED]/torch/nn/modules/module.py\", line 1518, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n File \"/[REDACTED]/torch/nn/modules/module.py\", line 1527, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/[REDACTED]/transformers/models/speecht5/modeling_speecht5.py\", line 2211, in forward\r\n decoder_outputs = self.decoder(\r\n File \"/[REDACTED]/torch/nn/modules/module.py\", line 1518, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n File \"/[REDACTED]/torch/nn/modules/module.py\", line 1527, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/[REDACTED]/transformers/models/speecht5/modeling_speecht5.py\", line 1734, in forward\r\n outputs = self.wrapped_decoder(\r\n File \"/[REDACTED]/torch/nn/modules/module.py\", line 1518, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n File \"/[REDACTED]/torch/nn/modules/module.py\", line 1527, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/[REDACTED]/transformers/models/speecht5/modeling_speecht5.py\", line 1594, in forward\r\n attention_mask = _prepare_4d_causal_attention_mask(\r\n File \"/[REDACTED]/transformers/modeling_attn_mask_utils.py\", line 195, in _prepare_4d_causal_attention_mask\r\n attention_mask = attn_mask_converter.to_4d(\r\n File \"/[REDACTED]/transformers/modeling_attn_mask_utils.py\", line 117, in to_4d\r\n expanded_4d_mask = expanded_attn_mask if causal_4d_mask is None else expanded_attn_mask + causal_4d_mask\r\nRuntimeError: The size of tensor a (9) must match the size of tensor b (4) at non-singleton dimension 3\r\n```", "Hey @DavidMChan, thanks for the 
script, #28071 should fix this!", "Thanks for following up on this!\r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Not stale :) This still requires a fix.", "Continues to be useful. @ylacombe - what needs to be done to get #28071 merged? Is there anything that would be helpful for me to take a look at? ", "Hey @DavidMChan, sorry for the late response, there was an issue with the model slow tests that I have yet to find the time to resolve, but getting back on this as soon as possible this week" ]
1,696
1,707
null
NONE
null
### System Info - `transformers` version: 4.33.3 - Platform: Linux-5.15.0-84-generic-x86_64-with-glibc2.10 - Python version: 3.8.8 - Huggingface_hub version: 0.17.3 - Safetensors version: 0.3.3 - Accelerate version: not installed - Accelerate config: not found - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @sanchit-gandhi ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction The decoder inputs are changed to be shifted right by one, and interleaved by the reduction factor. However, the attention mask to the decoder remains the same, which if we use a reduction_factor != 1 will result in a shape missmatch. You can check the line I am referring to here: https://github.com/huggingface/transformers/blob/2f3ea08a077ba3133fa8a604b22436cad250b055/src/transformers/models/speecht5/modeling_speecht5.py#L2733 ### Expected behavior The attention mask should have the same changes applied as the decoder input, resulting in the same shape, I believe.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26598/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26598/timeline
reopened
null
null
https://api.github.com/repos/huggingface/transformers/issues/26597
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26597/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26597/comments
https://api.github.com/repos/huggingface/transformers/issues/26597/events
https://github.com/huggingface/transformers/issues/26597
1,926,433,368
I_kwDOCUB6oc5y0wpY
26,597
processor_nougat has wrong default data type
{ "login": "NormXU", "id": 33339685, "node_id": "MDQ6VXNlcjMzMzM5Njg1", "avatar_url": "https://avatars.githubusercontent.com/u/33339685?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NormXU", "html_url": "https://github.com/NormXU", "followers_url": "https://api.github.com/users/NormXU/followers", "following_url": "https://api.github.com/users/NormXU/following{/other_user}", "gists_url": "https://api.github.com/users/NormXU/gists{/gist_id}", "starred_url": "https://api.github.com/users/NormXU/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NormXU/subscriptions", "organizations_url": "https://api.github.com/users/NormXU/orgs", "repos_url": "https://api.github.com/users/NormXU/repos", "events_url": "https://api.github.com/users/NormXU/events{/privacy}", "received_events_url": "https://api.github.com/users/NormXU/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks for reporting I'll open a PR for a fix asap\r\n" ]
1,696
1,696
1,696
CONTRIBUTOR
null
### System Info - `transformers` version: 4.34.0 - Platform: Linux-6.2.0-26-generic-x86_64-with-glibc2.27 - Python version: 3.8.0 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.3-post.1 - Accelerate version: 0.22.0 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): 2.13.1 (True) - Flax version (CPU?/GPU?/TPU?): 0.7.0 (cpu) - Jax version: 0.4.13 ### Who can help? @amyeroberts @ArthurZucker ### Information - [x] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction The nougat processor fails to work. The test code I run is pasted as below: ```python PRETRAINED_PATH_TO_NOUGAT = "" processor = NougatProcessor.from_pretrained(PRETRAINED_PATH_TO_NOUGAT) model = VisionEncoderDecoderModel.from_pretrained(PRETRAINED_PATH_TO_NOUGAT") device = "cuda:0" if torch.cuda.is_available() else "cpu" model.to(device) # prepare PDF image for the model filepath = "/path/to/dummy/image.png" image = Image.open(filepath) pixel_values = processor(image, return_tensors="pt").pixel_values # generate transcription (here we only generate 30 tokens) outputs = model.generate( pixel_values.to(device), min_length=1, max_new_tokens=512, bad_words_ids=[[processor.tokenizer.unk_token_id]], ) sequence = processor.batch_decode(outputs, skip_special_tokens=True)[0] sequence = processor.post_process_generation(sequence, fix_markdown=False) ``` The error log is as below: ``` Traceback (most recent call last): File "/home/ysocr/tests/test_generate.py", line 15, in <module> pixel_values = processor(image, return_tensors="pt").pixel_values File "/home/venv/lib/python3.8/site-packages/transformers/models/nougat/processing_nougat.py", line 91, in __call__ inputs = self.image_processor( File "/home/venv/lib/python3.8/site-packages/transformers/image_processing_utils.py", line 546, in __call__ return self.preprocess(images, **kwargs) File "/home/venv/lib/python3.8/site-packages/transformers/models/nougat/image_processing_nougat.py", line 505, in preprocess images = [ File "/home/venv/lib/python3.8/site-packages/transformers/models/nougat/image_processing_nougat.py", line 506, in <listcomp> to_channel_dimension_format(image, data_format, input_channel_dim=input_data_format) for image in images File "/home/venv/lib/python3.8/site-packages/transformers/image_transforms.py", line 78, in to_channel_dimension_format target_channel_dim = ChannelDimension(channel_dim) File "/usr/lib/python3.8/enum.py", line 304, in __call__ return cls.__new__(cls, value) File "/usr/lib/python3.8/enum.py", line 595, in __new__ raise exc File "/usr/lib/python3.8/enum.py", line 579, in __new__ result = cls._missing_(value) File "/home/venv/lib/python3.8/site-packages/transformers/utils/generic.py", line 433, in _missing_ raise ValueError( ValueError: ChannelDimension.FIRST is not a valid ChannelDimension, please select one of ['channels_first', 'channels_last'] ``` After checking the codes, I found it is the default data type of ``data_format`` that leads to this error. I believe the expected data type of ``data_format`` should be ``Optional[ChannelDimension] = ChannelDimension.FIRST`` rather than ``Optional["ChannelDimension"] = "ChannelDimension.FIRST"``. Besides, it is weird that default datatype of ``resample``and ``input_data_format`` is ``"PILImageResampling"`` and ``"ChannelDimension"`` respectively. 
See line 55, line 64 and line 65. https://github.com/huggingface/transformers/blob/6015f91a5a28548a597f8d24341d089fe04994e8/src/transformers/models/nougat/processing_nougat.py#L55-L66 I notice @ArthurZucker made such changes and added some comments. It could be a bug or maybe it is just some design I misunderstand? ### Expected behavior Ensure the nougat example works.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26597/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26597/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/26596
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26596/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26596/comments
https://api.github.com/repos/huggingface/transformers/issues/26596/events
https://github.com/huggingface/transformers/pull/26596
1,926,433,088
PR_kwDOCUB6oc5b6Ztb
26,596
Fix embarrassing typo in the doc chat template!
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,696
1,696
1,696
MEMBER
null
Please ignore the extra unwanted bracket
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26596/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26596/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26596", "html_url": "https://github.com/huggingface/transformers/pull/26596", "diff_url": "https://github.com/huggingface/transformers/pull/26596.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26596.patch", "merged_at": 1696433334000 }
https://api.github.com/repos/huggingface/transformers/issues/26595
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26595/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26595/comments
https://api.github.com/repos/huggingface/transformers/issues/26595/events
https://github.com/huggingface/transformers/pull/26595
1,926,407,780
PR_kwDOCUB6oc5b6T7g
26,595
Image-to-Image Task Guide
{ "login": "merveenoyan", "id": 53175384, "node_id": "MDQ6VXNlcjUzMTc1Mzg0", "avatar_url": "https://avatars.githubusercontent.com/u/53175384?v=4", "gravatar_id": "", "url": "https://api.github.com/users/merveenoyan", "html_url": "https://github.com/merveenoyan", "followers_url": "https://api.github.com/users/merveenoyan/followers", "following_url": "https://api.github.com/users/merveenoyan/following{/other_user}", "gists_url": "https://api.github.com/users/merveenoyan/gists{/gist_id}", "starred_url": "https://api.github.com/users/merveenoyan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/merveenoyan/subscriptions", "organizations_url": "https://api.github.com/users/merveenoyan/orgs", "repos_url": "https://api.github.com/users/merveenoyan/repos", "events_url": "https://api.github.com/users/merveenoyan/events{/privacy}", "received_events_url": "https://api.github.com/users/merveenoyan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@LysandreJik can you give a review or merge if this looks good? " ]
1,696
1,697
1,697
CONTRIBUTOR
null
This PR contributes task guide for image-to-image. cc @NielsRogge @rafaelpadilla @amyeroberts
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26595/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26595/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26595", "html_url": "https://github.com/huggingface/transformers/pull/26595", "diff_url": "https://github.com/huggingface/transformers/pull/26595.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26595.patch", "merged_at": 1697461924000 }
https://api.github.com/repos/huggingface/transformers/issues/26594
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26594/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26594/comments
https://api.github.com/repos/huggingface/transformers/issues/26594/events
https://github.com/huggingface/transformers/pull/26594
1,926,397,686
PR_kwDOCUB6oc5b6Rm6
26,594
skip flaky hub tests
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,696
1,696
1,696
COLLABORATOR
null
# What does this PR do? cc @LysandreJik, @ydshieh: main is red quite often because of these two tests (custom pr-ci as well), so let's skip them for now. Marked flaky by test insight.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26594/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26594/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26594", "html_url": "https://github.com/huggingface/transformers/pull/26594", "diff_url": "https://github.com/huggingface/transformers/pull/26594.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26594.patch", "merged_at": 1696434475000 }
https://api.github.com/repos/huggingface/transformers/issues/26593
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26593/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26593/comments
https://api.github.com/repos/huggingface/transformers/issues/26593/events
https://github.com/huggingface/transformers/pull/26593
1,926,367,441
PR_kwDOCUB6oc5b6K_a
26,593
[Mistral] Update config docstring
{ "login": "sanchit-gandhi", "id": 93869735, "node_id": "U_kgDOBZhWpw", "avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sanchit-gandhi", "html_url": "https://github.com/sanchit-gandhi", "followers_url": "https://api.github.com/users/sanchit-gandhi/followers", "following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}", "gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}", "starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions", "organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs", "repos_url": "https://api.github.com/users/sanchit-gandhi/repos", "events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}", "received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "#26592 is all green 👼🏻 ", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26593). All of your documentation changes will be reflected on that endpoint." ]
1,696
1,696
1,696
CONTRIBUTOR
null
# What does this PR do? Runs `make fix-copies` to correct the Mistral config after the PR #26052, and subsequently fills out the missing docstring args.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26593/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26593/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26593", "html_url": "https://github.com/huggingface/transformers/pull/26593", "diff_url": "https://github.com/huggingface/transformers/pull/26593.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26593.patch", "merged_at": 1696431755000 }
https://api.github.com/repos/huggingface/transformers/issues/26592
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26592/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26592/comments
https://api.github.com/repos/huggingface/transformers/issues/26592/events
https://github.com/huggingface/transformers/pull/26592
1,926,357,614
PR_kwDOCUB6oc5b6I2l
26,592
[`CI-Quality`] Main is red following `Docstring check (#26052)`
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26592). All of your documentation changes will be reflected on that endpoint." ]
1,696
1,696
1,696
COLLABORATOR
null
# What does this PR do? Fixes the doc of mistral
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26592/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26592/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26592", "html_url": "https://github.com/huggingface/transformers/pull/26592", "diff_url": "https://github.com/huggingface/transformers/pull/26592.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26592.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/26591
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26591/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26591/comments
https://api.github.com/repos/huggingface/transformers/issues/26591/events
https://github.com/huggingface/transformers/pull/26591
1,926,334,208
PR_kwDOCUB6oc5b6Dx4
26,591
Update conftest.py
{ "login": "A-R-I-S-E", "id": 123771692, "node_id": "U_kgDOB2CbLA", "avatar_url": "https://avatars.githubusercontent.com/u/123771692?v=4", "gravatar_id": "", "url": "https://api.github.com/users/A-R-I-S-E", "html_url": "https://github.com/A-R-I-S-E", "followers_url": "https://api.github.com/users/A-R-I-S-E/followers", "following_url": "https://api.github.com/users/A-R-I-S-E/following{/other_user}", "gists_url": "https://api.github.com/users/A-R-I-S-E/gists{/gist_id}", "starred_url": "https://api.github.com/users/A-R-I-S-E/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/A-R-I-S-E/subscriptions", "organizations_url": "https://api.github.com/users/A-R-I-S-E/orgs", "repos_url": "https://api.github.com/users/A-R-I-S-E/repos", "events_url": "https://api.github.com/users/A-R-I-S-E/events{/privacy}", "received_events_url": "https://api.github.com/users/A-R-I-S-E/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "i have imported numpy that has been used afterwards in the code." ]
1,696
1,696
1,696
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26591/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26591/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26591", "html_url": "https://github.com/huggingface/transformers/pull/26591", "diff_url": "https://github.com/huggingface/transformers/pull/26591.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26591.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/26590
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26590/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26590/comments
https://api.github.com/repos/huggingface/transformers/issues/26590/events
https://github.com/huggingface/transformers/pull/26590
1,926,184,793
PR_kwDOCUB6oc5b5jEK
26,590
Update mistral.md to update 404 link
{ "login": "Galland", "id": 3932759, "node_id": "MDQ6VXNlcjM5MzI3NTk=", "avatar_url": "https://avatars.githubusercontent.com/u/3932759?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Galland", "html_url": "https://github.com/Galland", "followers_url": "https://api.github.com/users/Galland/followers", "following_url": "https://api.github.com/users/Galland/following{/other_user}", "gists_url": "https://api.github.com/users/Galland/gists{/gist_id}", "starred_url": "https://api.github.com/users/Galland/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Galland/subscriptions", "organizations_url": "https://api.github.com/users/Galland/orgs", "repos_url": "https://api.github.com/users/Galland/repos", "events_url": "https://api.github.com/users/Galland/events{/privacy}", "received_events_url": "https://api.github.com/users/Galland/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,696
1,696
1,696
CONTRIBUTOR
null
URL changed; the previous one returns 404.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26590/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26590/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26590", "html_url": "https://github.com/huggingface/transformers/pull/26590", "diff_url": "https://github.com/huggingface/transformers/pull/26590.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26590.patch", "merged_at": 1696434492000 }
https://api.github.com/repos/huggingface/transformers/issues/26589
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26589/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26589/comments
https://api.github.com/repos/huggingface/transformers/issues/26589/events
https://github.com/huggingface/transformers/pull/26589
1,926,148,958
PR_kwDOCUB6oc5b5bTB
26,589
[`core`] fix silent bug `keep_in_fp32` modules
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Before merging this PR I want to test it on T5 models and 8bit tests as this might affect them", "Relevant T5 and bnb tests are passing, this PR is ready for review! ", "... and tested the failing instructblip tests on the latest docker image and they pass with these changes", "Thanks for the PR! Do we have a common test that could have failed in this specific instance? If not, would it be possible to work on one?\r\n\r\nI'm a bit afraid of the repercussions of such a change without a test that ensures the modules that should be kept in fp32 actually are and that those that shouldn't are kept in their original dtype. It is fixing a silent error but also seems like it could break some silent successes ", "Hi @LysandreJik - OK makes sense, I am happy to work on a common test for that - I'll ping you once this is done", "Tests are passing, this is ready for another review!" ]
1,696
1,696
1,696
CONTRIBUTOR
null
# What does this PR do? Same PR as https://github.com/huggingface/transformers/pull/26484 but without any extra diff Before this PR we were performing a simple check if module_name in key but that lead to some modules silently converted in fp32. For example instructblip models got their word_embedding layers converted in fp32 because _keep_in_fp32_modules includes "wo" which is contained in the string word_embedding. The fix is to check if module_name in key.split(".") I can confirm with this PR the failing instructblip tests now pass
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26589/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/transformers/issues/26589/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26589", "html_url": "https://github.com/huggingface/transformers/pull/26589", "diff_url": "https://github.com/huggingface/transformers/pull/26589.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26589.patch", "merged_at": 1696509871000 }
https://api.github.com/repos/huggingface/transformers/issues/26588
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26588/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26588/comments
https://api.github.com/repos/huggingface/transformers/issues/26588/events
https://github.com/huggingface/transformers/issues/26588
1,926,006,311
I_kwDOCUB6oc5yzIYn
26,588
RWKV's "RNN mode" results in unchecked memory growth, causing OOM
{ "login": "LuciferianInk", "id": 94832312, "node_id": "U_kgDOBacGuA", "avatar_url": "https://avatars.githubusercontent.com/u/94832312?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LuciferianInk", "html_url": "https://github.com/LuciferianInk", "followers_url": "https://api.github.com/users/LuciferianInk/followers", "following_url": "https://api.github.com/users/LuciferianInk/following{/other_user}", "gists_url": "https://api.github.com/users/LuciferianInk/gists{/gist_id}", "starred_url": "https://api.github.com/users/LuciferianInk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LuciferianInk/subscriptions", "organizations_url": "https://api.github.com/users/LuciferianInk/orgs", "repos_url": "https://api.github.com/users/LuciferianInk/repos", "events_url": "https://api.github.com/users/LuciferianInk/events{/privacy}", "received_events_url": "https://api.github.com/users/LuciferianInk/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey! Thanks for opening a detailed issue! If you are training it would make sense to have an increasing memory usage! Coud you try with `model.eval()` and `with torch.no_grad()`? \r\n", "I'm not training, but `torch.no_grad()` does seem to work! Y'all HF guys are always so quick to respond; I really appreciate the tip! I'm going to give this a day or two (because memory has not _technically_ stopped increasing, though it has slowed-down immensely). If all goes well, I'll close the issue.\r\n\r\nThanks again!", "You need to apply `detach()` to the state like for a traditional RNN (`state = outputs.state.detach()` in your code) otherwise you compute gradients over all your steps which indeed takes a lot of memory 😅 ", "Well, I couldn't really figure how to how to use `detach()`, since every attempt ended in some variant of this error:\r\n```\r\nvtx-bot-1 | Traceback (most recent call last):\r\nvtx-bot-1 | File \"/src/main.py\", line 7, in <module>\r\nvtx-bot-1 | import machine\r\nvtx-bot-1 | File \"/src/machine.py\", line 9, in <module>\r\nvtx-bot-1 | from lab import scratch\r\nvtx-bot-1 | File \"/src/lab/scratch.py\", line 25, in <module>\r\nvtx-bot-1 | state = outputs.state.detach()\r\nvtx-bot-1 | AttributeError: 'list' object has no attribute 'detach'\r\n```\r\nNevertheless, the `torch.no_grad()` method does work. Sadly though, the model degenerates so quickly, I'm not really sure how to make effective use of RNN mode at this point.\r\n\r\nI appreciate the advice though. Issue resolved!", "> Nevertheless, the `torch.no_grad()` method does work. Sadly though, the model degenerates so quickly, I'm not really sure how to make use of RNN mode at this point.\r\n\r\nLooks like more bugs. All inference of RWKV are in RNN mode and works perfectly well.\r\n\r\nPlease try official rwkv pip package: https://pypi.org/project/rwkv/\r\n\r\n" ]
1,696
1,696
1,696
NONE
null
### System Info - `transformers` version: 4.33.3 - Platform: Linux-6.5.3-arch1-1-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.17.3 - Safetensors version: 0.3.3 - Accelerate version: 0.23.0 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: True - Using distributed or parallel set-up in script?: False ### Who can help? @sgugger @ArthurZucker ### Information - [x] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction Run this code, and your GPU will go OOM very quickly: ``` import time import torch from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "RWKV/rwkv-4-430m-pile" string = "Lorem ipsum dolor sit amet, consectetur adipiscing elit." tokenizer = AutoTokenizer.from_pretrained( model_name, ) model = AutoModelForCausalLM.from_pretrained( model_name, output_hidden_states=True, device_map="auto", ) state = None while True: inputs = tokenizer(string, return_tensors="pt") outputs = model(inputs["input_ids"][:, :2].to(model.device.type), state=state) state = outputs.state time.sleep(1) ``` ### Expected behavior According to @BlinkDL (the creator of RWKV), using RNN mode should not cause unchecked memory growth like it does here. Yet when I accumulate the model's state into itself over and over again, VRAM grows indefinitely, until CUDA finally crashes. I can't tell if I'm doing something wrong, but my code is nearly identical to @sgugger's documentation: https://huggingface.co/docs/transformers/model_doc/rwkv Thanks in advance for any help you can provide!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26588/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26588/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/26587
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26587/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26587/comments
https://api.github.com/repos/huggingface/transformers/issues/26587/events
https://github.com/huggingface/transformers/pull/26587
1,925,974,477
PR_kwDOCUB6oc5b408-
26,587
Fix encoder->decoder typo bug in convert_t5x_checkpoint_to_pytorch.py
{ "login": "soyoung97", "id": 29880214, "node_id": "MDQ6VXNlcjI5ODgwMjE0", "avatar_url": "https://avatars.githubusercontent.com/u/29880214?v=4", "gravatar_id": "", "url": "https://api.github.com/users/soyoung97", "html_url": "https://github.com/soyoung97", "followers_url": "https://api.github.com/users/soyoung97/followers", "following_url": "https://api.github.com/users/soyoung97/following{/other_user}", "gists_url": "https://api.github.com/users/soyoung97/gists{/gist_id}", "starred_url": "https://api.github.com/users/soyoung97/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/soyoung97/subscriptions", "organizations_url": "https://api.github.com/users/soyoung97/orgs", "repos_url": "https://api.github.com/users/soyoung97/repos", "events_url": "https://api.github.com/users/soyoung97/events{/privacy}", "received_events_url": "https://api.github.com/users/soyoung97/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26587). All of your documentation changes will be reflected on that endpoint.", "I think it went unnoticed because v1.0 model conversion is not used frequently than v1.1 models. Thanks a lot for the fast review!!" ]
1,696
1,696
1,696
CONTRIBUTOR
null
# What does this PR do? The convert_t5x_checkpoint_to_pytorch is used to convert t5x models into pytorch models. However, it contains a typo at [line 142](https://github.com/huggingface/transformers/blob/main/src/transformers/models/t5/convert_t5x_checkpoint_to_pytorch.py#L142): The wi weights in **decoder** is converted to weights in **encoder**, and it makes the following errors when we run the script **for t5 v1.0 models** (where Split MLP layers is false and uses T5DenseActDense(wi) instead of T5DenseGatedActDense(wi_0, wi_1) is run: ``` File "/opt/conda/envs/myenv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1497, in load_state_dict raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format( RuntimeError: Error(s) in loading state_dict for T5ForConditionalGeneration: Missing key(s) in state_dict: "decoder.block.0.layer.2.DenseReluDense.wi.weight", "decoder.block.1.layer.2.DenseReluDense.wi.weight", "decoder.block.2.layer.2.DenseReluDense.wi.weight", "decoder.block.3.layer.2.DenseReluDense.wi.weight", "decoder.block.4.layer.2.DenseReluDense.wi.weight", "decoder.block.5.layer.2.DenseReluDense.wi.weight", "decoder.block.6.layer.2.DenseReluDense.wi.weight", "decoder.block.7.layer.2.DenseReluDense.wi.weight", "decoder.block.8.layer.2.DenseReluDense.wi.weight", "decoder.block.9.layer.2.DenseReluDense.wi.weight", "decoder.block.10.layer.2.DenseReluDense.wi.weight", "decoder.block.11.layer.2.DenseReluDense.wi.weight". Unexpected key(s) in state_dict: "encoder.block.0.layer.2.DenseReluDense.wi.weight", "encoder.block.1.layer.2.DenseReluDense.wi.weight", "encoder.block.2.layer.2.DenseReluDense.wi.weight", "encoder.block.3.layer.2.DenseReluDense.wi.weight", "encoder.block.4.layer.2.DenseReluDense.wi.weight", "encoder.block.5.layer.2.DenseReluDense.wi.weight", "encoder.block.6.layer.2.DenseReluDense.wi.weight", "encoder.block.7.layer.2.DenseReluDense.wi.weight", "encoder.block.8.layer.2.DenseReluDense.wi.weight", "encoder.block.9.layer.2.DenseReluDense.wi.weight", "encoder.block.10.layer.2.DenseReluDense.wi.weight", "encoder.block.11.layer.2.DenseReluDense.wi.weight". ``` The following is the changed part: ``` if split_mlp_wi: new[f"decoder.block.{i}.layer.2.DenseReluDense.wi_0.weight"] = wi[0].T new[f"decoder.block.{i}.layer.2.DenseReluDense.wi_1.weight"] = wi[1].T else: new[f"encoder.block.{i}.layer.2.DenseReluDense.wi.weight"] = wi.T ``` When changing the following line from ``` new[f"encoder.block.{i}.layer.2.DenseReluDense.wi.weight"] = wi.T ``` to ``` new[f"decoder.block.{i}.layer.2.DenseReluDense.wi.weight"] = wi.T ``` the code works fine. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. 
--> <!-- Remove if not applicable --> ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. Related pull request seems to be [this one](https://github.com/huggingface/transformers/pull/20801), so tagging the original author @basting and the one mentioned in that PR: @patrickvonplaten @sanchit-gandhi @ArthurZucker @younesbelkada <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26587/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26587/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26587", "html_url": "https://github.com/huggingface/transformers/pull/26587", "diff_url": "https://github.com/huggingface/transformers/pull/26587.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26587.patch", "merged_at": 1696433673000 }
https://api.github.com/repos/huggingface/transformers/issues/26586
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26586/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26586/comments
https://api.github.com/repos/huggingface/transformers/issues/26586/events
https://github.com/huggingface/transformers/pull/26586
1,925,792,775
PR_kwDOCUB6oc5b4NFj
26,586
Fix failing `MusicgenTest.test_pipeline_text_to_audio`
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hi @sanchit-gandhi \r\n\r\nI have updated the tiny models on the Hub (config/model file) manually. I have to update the commit sha info. so the pipeline tests will take this new versions.\r\n\r\nFailing test passes now.\r\n\r\nI will rework the tiny model creation script in a separate PR.", "@LysandreJik Ready for a review and we might have a green CI during the weekend :-) 🙏 " ]
1,696
1,696
1,696
COLLABORATOR
null
# What does this PR do? Fix failing `MusicgenTest.test_pipeline_text_to_audio`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26586/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26586/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26586", "html_url": "https://github.com/huggingface/transformers/pull/26586", "diff_url": "https://github.com/huggingface/transformers/pull/26586.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26586.patch", "merged_at": 1696600439000 }
https://api.github.com/repos/huggingface/transformers/issues/26585
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26585/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26585/comments
https://api.github.com/repos/huggingface/transformers/issues/26585/events
https://github.com/huggingface/transformers/pull/26585
1,925,677,806
PR_kwDOCUB6oc5b30fX
26,585
Add Bert flash attention2
{ "login": "sorenmc", "id": 42963644, "node_id": "MDQ6VXNlcjQyOTYzNjQ0", "avatar_url": "https://avatars.githubusercontent.com/u/42963644?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sorenmc", "html_url": "https://github.com/sorenmc", "followers_url": "https://api.github.com/users/sorenmc/followers", "following_url": "https://api.github.com/users/sorenmc/following{/other_user}", "gists_url": "https://api.github.com/users/sorenmc/gists{/gist_id}", "starred_url": "https://api.github.com/users/sorenmc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sorenmc/subscriptions", "organizations_url": "https://api.github.com/users/sorenmc/orgs", "repos_url": "https://api.github.com/users/sorenmc/repos", "events_url": "https://api.github.com/users/sorenmc/events{/privacy}", "received_events_url": "https://api.github.com/users/sorenmc/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @younesbelkada ", "Hi @younesbelkada \r\n\r\nNo problem! I feel a little stuck with the CI errors I am getting. I finished most of the work for bert, but as you can see `check_repository_consistency` fails because several models are linked to BERT, meaning that i would have to do this same implementation for 10+ models. Might need some help with that, or figure out an alternative way to split some of these up into smaller tasks that could be included in a later PR. Also tests are failing for `Wav2Vec2` that i have not touched - I'm guessing this could be related to the consistency checks?\r\n\r\n\r\n\r\n\r\n", "You should first rebase on main, then run `make fix-copies` that changes will be automatically ported to architectures that are similar to bert. But yes, this requires adding tests using `Copied from` as well in the testing files of all affected models! ", "Unrelated test are fixed in main 🤗 ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,696
1,703
1,703
NONE
null
# What does this PR do? This introduces flash attention 2 for bert as discussed in https://github.com/huggingface/transformers/issues/26350 <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26585/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/transformers/issues/26585/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26585", "html_url": "https://github.com/huggingface/transformers/pull/26585", "diff_url": "https://github.com/huggingface/transformers/pull/26585.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26585.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/26584
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26584/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26584/comments
https://api.github.com/repos/huggingface/transformers/issues/26584/events
https://github.com/huggingface/transformers/issues/26584
1,925,513,670
I_kwDOCUB6oc5yxQHG
26,584
Convert MetaCLIP checkpoints
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[ { "id": 1990918270, "node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw", "url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue", "name": "Good First Issue", "color": "bbf794", "default": false, "description": "" } ]
closed
false
null
[]
[ "@NielsRogge Hi! I will do it. ", "Hi @NielsRogge, I am new to all of this and not sure what HF format is. Could you please help me with some resources to learn about it a little more.\r\n", "@plon-Susk7 / @Natyren , The translation of these weights should more or less follow a similar pattern to the conversion script here: https://github.com/huggingface/transformers/blob/main/src/transformers/models/clip/convert_clip_original_pytorch_to_hf.py\r\n\r\nIf you haven't seen it already, I'd also suggest looking through [this doc](https://huggingface.co/docs/transformers/add_new_model) as it covers in detail all the steps for adding a model to transformers and will give you a better idea of what hf format means.\r\n", "Yes @shauray8 although that script was used to convert the OpenAI CLIP model to the HF format.\r\n\r\nThe MetaCLIP checkpoints are in the OpenCLIP format, for which you can use this script: https://gist.github.com/rwightman/c79fd0241ed3c860e898114931c07990", "> Yes @shauray8 although that script was used to convert the OpenAI CLIP model to the HF format.\r\n> \r\n> The MetaCLIP checkpoints are in the OpenCLIP format, for which you can use this script: https://gist.github.com/rwightman/c79fd0241ed3c860e898114931c07990\r\n\r\nI tried to run this code but couldn't pass the allclose assert here even with other open_clip models https://gist.github.com/rwightman/c79fd0241ed3c860e898114931c07990#file-convert_open_clip_to_hf-py-L235\r\nWould like to see if it works for other folks", "@TonyZhanghm the MetaCLIP checkpoints use a different activation function as pointed out in their README. You need to adjust it accordingly when creating the `CLIPConfig`", "@NielsRogge Hey, Can I take this issue?", "@NielsRogge @shauray8 Hi!\r\nI have converted the weights and am currently in the loading stage (by the way, could you please advise on the best way to do this into Meta AI hf hub?). Regarding the initial scripts, I want to point out that the model incorrectly processes torch.arange(0, 77) tokens. To check this, I tokenized real text using open_clip.tokenize(), and in this case, everything is working fine.\r\nHere is the new script.\r\nhttps://gist.github.com/Natyren/c7d7889095e8e06df76e8316c2fcf89e", "@NielsRogge I've uploaded the weights (here https://huggingface.co/GeorgeBredis/MetaCLIP_b32_400m), could you please tell me if I can upload them to the original Meta repository? As far as I understand, I may need additional permissions for this.\r\nI've also converted the weights of the remaining models and am waiting for approval to upload them.", "Hi @Natyren, thanks for converting them and updating the script accordingly!\r\n\r\nFeel free to place them under your username, we'll transfer them to the Meta organization. Also, a minor thing, could you use the format `metaclip-b32-400m` for instance regarding naming of the checkpoints (just lowercase please)", "@NielsRogge Thank you for your response. I will soon upload it with the correct names", "@NielsRogge I've uploaded all the models, you can find them there (https://huggingface.co/GeorgeBredis), but there are no preprocessing and tokenizer configs in these versions, but everything should work fine with them. If something doesn't work smoothly, feel free to ping and write me! ", "Awesome, great work! I guess we can also add the `CLIPImageProcessor` and `CLIPTokenizer` files. 
Usually I do the following to make sure the inputs are prepared in the same way:\r\n```\r\nimport torch\r\nfrom PIL import Image\r\nimport open_clip\r\nfrom transformers import CLIPImageProcessor, CLIPTokenizer\r\n\r\nmodel, _, preprocess = open_clip.create_model_and_transforms('ViT-B-32-quickgelu', pretrained='metaclip/b32_400m.pt')\r\n\r\nimage = Image.open(\"CLIP.png\")\r\noriginal_pixel_values = preprocess(image).unsqueeze(0)\r\ntext = [\"a diagram\", \"a dog\", \"a cat\"]\r\noriginal_input_ids = open_clip.tokenize(text)\r\n\r\n# verify pixel_values\r\nimage_processor = CLIPImageProcessor()\r\npixel_values = image_processor(image, return_tensors=\"pt\").pixel_values\r\nassert torch.allclose(pixel_values, original_pixel_values)\r\n\r\n# verify input_ids\r\ntokenizer = CLIPTokenizer.from_pretrained(\"openai/clip-vit-base-patch32\")\r\ninput_ids = image_processor(text, return_tensors=\"pt\").input_ids\r\nassert torch.allclose(input_ids, original_input_ids)\r\n```\r\n\r\nIf those pass, then we can create a CLIPProcessor which wraps both, and then push those to the hub.\r\n```\r\nfrom transformers import CLIPProcessor\r\n\r\nprocessor = CLIPProcessor(image_processor=image_processor, tokenizer=tokenizer)\r\nprocessor.push_to_hub(\"...\")\r\n```", "@NielsRogge Uploaded, preprocessed in the repositories (https://huggingface.co/GeorgeBredis). Here is the code for testing:\r\nhttps://gist.github.com/Natyren/47e564357cac95bf923b7c65781492c2\r\nAfter testing the model's functionality in the WebUI, everything is fine.\r\nIf something go wrong, feel free to write me", "Awesome work 👏 will transfer all checkpoints to the `facebook` organization", "There you have it: 7 checkpoints with added model cards: https://huggingface.co/models?other=metaclip.\r\n\r\nThanks a lot 🙏 looks like they're still training a giant-sized version on the 2.5 billion samples.\r\n\r\nWill be interesting to see whether these models are as good (or better) than the ones OpenAI released.", "Good job, thank you. As soon as they add it, I will upload it too. Yes, I agree, it will be interesting to see what they achieve in terms of metrics.", "@Natyren some people reported to me that some MetaCLIP models on the hub are using \"gelu\" instead of \"quickgelu\", did you verify all conversions? e.g. https://huggingface.co/facebook/metaclip-h14-fullcc2.5b/blob/main/config.json#L10", "@NielsRogge Hello! Yes, you are correct. Upon taking a closer look at the Meta results, I see that they have decided to follow the original architecture of CLIP. I didn't pay special attention to this as some of the models with GELU activation were showing similar results during transfer. There are two possible solutions here. First, we can simply replace the activation with quick-gelu in the repository. Second, I will recreate the models with the correct architecture (in theory, the results should be the same except for the config). I apologize for this oversight.", "Sure, let me locally test the difference with the config change and a complete model re-save (just to be safe). If everything turns out to be identical, we can replace the configs. If not, I will upload a new version.\r\n", "I've just checked, and the results are significantly different (though I haven't figured out why yet). I'll make the necessary corrections and upload new weights within the next day. I apologize for any inconvenience this may have caused.", "@NielsRogge I've uploaded the new models, and now they align with the architecture proposed by OpenAI for CLIP. 
I've tested a portion of them in the web interface, and everything is working fine. You can find them here https://huggingface.co/GeorgeBredis", "@Natyren is the only thing that you changed when converting the checkpoints to update the activation function? In that case I can just update the config attributes of the `facebook` checkpoints on the hub.", "Hello @NielsRogge! I also thought that would be sufficient (as mentioned above), but for the sake of certainty, I decided to verify it. It turned out that the results of the models in such a situation differ (such behavior remained a mystery to me), so I decided to reconvert the weights. However, in theory, the only thing that should differ between the versions is the configuration settings.", "Ok I've updated all checkpoints. https://huggingface.co/models?other=metaclip", "Hi @NielsRogge , when loading the tokenizers i get error:\r\n```\r\nValueError: Non-consecutive added token '<|startoftext|>' found. Should have index 49408 but has index 49406 in saved vocabulary.\r\n```", "Hi,\r\n\r\nI'm not able to reproduce that:\r\n\r\n```\r\n>>> from transformers import AutoProcessor\r\n>>> processor = AutoProcessor.from_pretrained(\"facebook/metaclip-b32-400m\")\r\nDownloading (…)rocessor_config.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 504/504 [00:00<00:00, 149kB/s]\r\nDownloading (…)okenizer_config.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 950/950 [00:00<00:00, 494kB/s]\r\nDownloading (…)olve/main/vocab.json: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1.06M/1.06M [00:00<00:00, 8.97MB/s]\r\nDownloading (…)olve/main/merges.txt: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 525k/525k [00:00<00:00, 5.93MB/s]\r\nDownloading (…)in/added_tokens.json: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 57.0/57.0 [00:00<00:00, 60.4kB/s]\r\nDownloading (…)cial_tokens_map.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 133/133 [00:00<00:00, 154kB/s]\r\nSpecial tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.\r\nSpecial tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.\r\n```", "@NielsRogge upgrading the transformers version made it works, Thanks!", "Hello @NielsRogge!\r\n\r\nThis is my very first time deciding to contribute to open source projects inspired by my participation in the Hugging Face event in Paris and the insightful conversations I had with the project maintainers.\r\n\r\nAs a final-year graduate student in Math and AI, I am eager to explore opportunities to collaborate on this issue. I would greatly appreciate it if you could provide more information on how I can get involved.\r\n\r\nThank you in advance.", "Closing this issue since it has been resolved. @mhdirnjbr I'd recommend taking a look at other \"good first issues\" or \"good second issues\", you could also contribute a Transformer-based model to the library" ]
1,696
1,704
1,699
CONTRIBUTOR
null
### Feature request It would be great to port the MetaCLIP checkpoints released by Meta to the HF format. Link: https://github.com/facebookresearch/MetaCLIP ### Motivation MetaCLIP claims to reproduce the pipeline that OpenAI used when creating CLIP, outperforming the original models. ### Your contribution I could do this myself, but it would be great if someone else could take this up
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26584/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26584/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/26583
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26583/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26583/comments
https://api.github.com/repos/huggingface/transformers/issues/26583/events
https://github.com/huggingface/transformers/issues/26583
1,925,394,685
I_kwDOCUB6oc5ywzD9
26,583
Training issue with the Transformer CAPTCHA recognition model: Unable to converge
{ "login": "Arc-2023", "id": 64178177, "node_id": "MDQ6VXNlcjY0MTc4MTc3", "avatar_url": "https://avatars.githubusercontent.com/u/64178177?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Arc-2023", "html_url": "https://github.com/Arc-2023", "followers_url": "https://api.github.com/users/Arc-2023/followers", "following_url": "https://api.github.com/users/Arc-2023/following{/other_user}", "gists_url": "https://api.github.com/users/Arc-2023/gists{/gist_id}", "starred_url": "https://api.github.com/users/Arc-2023/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Arc-2023/subscriptions", "organizations_url": "https://api.github.com/users/Arc-2023/orgs", "repos_url": "https://api.github.com/users/Arc-2023/repos", "events_url": "https://api.github.com/users/Arc-2023/events{/privacy}", "received_events_url": "https://api.github.com/users/Arc-2023/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discuss.huggingface.co) instead?\r\n\r\nThanks! ", "> Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests. Could you ask your question on the [forum](https://discuss.huggingface.co) instead?\r\n> \r\n> Thanks!\r\n\r\nThank you for the suggestion, since it has been solved, thus I will close it, this is the jump link: \r\n> https://discuss.huggingface.co/t/training-issue-with-the-transformer-captcha-recognition-model-unable-to-converge/57334/3" ]
1,696
1,696
1,696
NONE
null
I have built a model from scratch, inspired by the Transformer model and related code (such as ViT), with the goal of recognizing CAPTCHAs. However, during training, I've encountered an issue with the Transformer model. After several batch iterations, I consistently observe that the highest-probability token in the output probability matrix is `<EOS>`, and this problem persists even after prolonged training. Here is an overview of my approach: I initially followed the ViT approach, where I divide input images into many small patches. Each patch is then linearly mapped to a fixed emb_d dimension. For the decoder, I map the CAPTCHA letters to the same fixed emb_d values (note: the vocabulary includes digits and letters [0-9a-zA-Z]). This way, I construct the input sequences for the encoder and the decoder. For the encoder, I use the image patches as input and pass them through multiple encoder blocks, each consisting of multi-head self-attention layers, layer normalization, residual connections, and linear layers. Finally, the encoder's output matches the input's shape, i.e., [batch len_batch emb_d], and this output serves as both the key and value matrices for the decoder. For the decoder, I use the target sequence (with a shape of [batch len_batch emb_d] and the last token removed) as input and set the target sequence (with the first token removed) as the actual target. I then compute the cross-entropy loss between the output and the target. ![image](https://github.com/huggingface/transformers/assets/64178177/f3baa707-7b56-4335-8c66-197440e0c4bd) The issue I've identified is as follows: in the screenshots, taking the argmax of the output probability matrix should yield the index of the predicted label (out), which ideally matches the target index (tgt). However, I've noticed that 'out' consistently corresponds to index 1, i.e. "`<EOS>`." You can find the code for this (to check the structure for errors) at the following location: > https://nbviewer.org/github/Arc-2023/IPYNB/blob/main/notebook59efbd9b73.ipynb I have roughly verified the network structure and found no errors, but I remain uncertain. I hope someone can help me analyze this issue, and I would be extremely grateful for any assistance in resolving it. [1]: https://i.stack.imgur.com/UCtqy.png
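One ingredient that decoder setups like the one described above depend on is a causal (look-ahead) mask over the shifted target sequence; the sketch below is offered purely as an illustration of that detail, not as the confirmed resolution from the forum thread linked in the comments, and the function name and boolean convention are assumptions rather than code taken from the notebook:

```python
import torch

def causal_mask(seq_len: int) -> torch.Tensor:
    # True marks positions the decoder must NOT attend to (future tokens),
    # matching the convention expected by masked_fill on attention scores.
    return torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), diagonal=1)

print(causal_mask(4))
# tensor([[False,  True,  True,  True],
#         [False, False,  True,  True],
#         [False, False, False,  True],
#         [False, False, False, False]])
```

Without such a mask, the decoder can peek at the shifted target during teacher forcing, which often shows up later as degenerate predictions such as always emitting the same token.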
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26583/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26583/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/26582
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26582/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26582/comments
https://api.github.com/repos/huggingface/transformers/issues/26582/events
https://github.com/huggingface/transformers/pull/26582
1,925,317,061
PR_kwDOCUB6oc5b2nTj
26,582
testing doc-builder
{ "login": "mishig25", "id": 11827707, "node_id": "MDQ6VXNlcjExODI3NzA3", "avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mishig25", "html_url": "https://github.com/mishig25", "followers_url": "https://api.github.com/users/mishig25/followers", "following_url": "https://api.github.com/users/mishig25/following{/other_user}", "gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}", "starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mishig25/subscriptions", "organizations_url": "https://api.github.com/users/mishig25/orgs", "repos_url": "https://api.github.com/users/mishig25/repos", "events_url": "https://api.github.com/users/mishig25/events{/privacy}", "received_events_url": "https://api.github.com/users/mishig25/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26582). All of your documentation changes will be reflected on that endpoint.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,696
1,699
1,699
CONTRIBUTOR
null
testing https://github.com/huggingface/doc-builder/pull/423
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26582/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26582/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26582", "html_url": "https://github.com/huggingface/transformers/pull/26582", "diff_url": "https://github.com/huggingface/transformers/pull/26582.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26582.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/26581
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26581/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26581/comments
https://api.github.com/repos/huggingface/transformers/issues/26581/events
https://github.com/huggingface/transformers/pull/26581
1,925,220,979
PR_kwDOCUB6oc5b2S6U
26,581
Add # Copied from statements to audio feature extractors that use the floats_list function
{ "login": "dg845", "id": 58458699, "node_id": "MDQ6VXNlcjU4NDU4Njk5", "avatar_url": "https://avatars.githubusercontent.com/u/58458699?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dg845", "html_url": "https://github.com/dg845", "followers_url": "https://api.github.com/users/dg845/followers", "following_url": "https://api.github.com/users/dg845/following{/other_user}", "gists_url": "https://api.github.com/users/dg845/gists{/gist_id}", "starred_url": "https://api.github.com/users/dg845/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dg845/subscriptions", "organizations_url": "https://api.github.com/users/dg845/orgs", "repos_url": "https://api.github.com/users/dg845/repos", "events_url": "https://api.github.com/users/dg845/events{/privacy}", "received_events_url": "https://api.github.com/users/dg845/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The `floats_list` function is typically preceded by a `global_rng = random.Random()` statement:\r\n\r\nhttps://github.com/huggingface/transformers/blob/2f3ea08a077ba3133fa8a604b22436cad250b055/tests/models/whisper/test_feature_extraction_whisper.py#L39-L45\r\n\r\nhttps://github.com/huggingface/transformers/pull/24799#discussion_r1325147861 suggests replacing this with something like `transformers`'s [`set_seed`](https://github.com/huggingface/transformers/blob/2f3ea08a077ba3133fa8a604b22436cad250b055/src/transformers/trainer_utils.py#L85) utility, not sure if we should consider adding a change like this in this PR.\r\n", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26581). All of your documentation changes will be reflected on that endpoint." ]
1,696
1,696
1,696
CONTRIBUTOR
null
# What does this PR do? This PR adds `# Copied from` statements to audio feature extractors (and other related data processing modules) that use the `floats_list` function. The [Whisper version](https://github.com/huggingface/transformers/blob/2f3ea08a077ba3133fa8a604b22436cad250b055/tests/models/whisper/test_feature_extraction_whisper.py#L42-L53) of `floats_list` is considered the "canonical" version of the function, since the CLAP model has an existing `# Copied from tests.models.whisper.test_feature_extraction_whisper.floats_list` statement for its [`floats_list`](https://github.com/huggingface/transformers/blob/2f3ea08a077ba3133fa8a604b22436cad250b055/tests/models/clap/test_feature_extraction_clap.py#L37-L38) function. This issue was brought up in https://github.com/huggingface/transformers/pull/24799#discussion_r1325148003, https://github.com/huggingface/transformers/pull/24799#discussion_r1326652124, and the following thread. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sanchit-gandhi @ArthurZucker
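For readers who do not want to follow the permalink above, the canonical Whisper helper roughly follows the pattern sketched below; this is a paraphrase from memory rather than a verbatim quote of the linked file, so minor details may differ:

```python
import random

global_rng = random.Random()


# Copied from tests.models.whisper.test_feature_extraction_whisper.floats_list
def floats_list(shape, scale=1.0, rng=None, name=None):
    """Creates a random float32 "tensor" as a nested Python list."""
    if rng is None:
        rng = global_rng
    values = []
    for _ in range(shape[0]):
        values.append([rng.random() * scale for _ in range(shape[1])])
    return values


speech_inputs = floats_list((3, 800))  # 3 dummy "clips" of 800 samples each
print(len(speech_inputs), len(speech_inputs[0]))  # 3 800
```

The `# Copied from` marker is what the repository consistency check and `make fix-copies` use to keep the duplicated helpers in sync with the Whisper original.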
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26581/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26581/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26581", "html_url": "https://github.com/huggingface/transformers/pull/26581", "diff_url": "https://github.com/huggingface/transformers/pull/26581.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26581.patch", "merged_at": 1696432188000 }
https://api.github.com/repos/huggingface/transformers/issues/26580
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26580/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26580/comments
https://api.github.com/repos/huggingface/transformers/issues/26580/events
https://github.com/huggingface/transformers/pull/26580
1,925,192,575
PR_kwDOCUB6oc5b2MyK
26,580
Bump pillow from 9.3.0 to 10.0.1 in /examples/research_projects/decision_transformer
{ "login": "dependabot[bot]", "id": 49699333, "node_id": "MDM6Qm90NDk2OTkzMzM=", "avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dependabot%5Bbot%5D", "html_url": "https://github.com/apps/dependabot", "followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers", "following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}", "gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}", "starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions", "organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs", "repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos", "events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}", "received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events", "type": "Bot", "site_admin": false }
[ { "id": 1905493434, "node_id": "MDU6TGFiZWwxOTA1NDkzNDM0", "url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies", "name": "dependencies", "color": "0366d6", "default": false, "description": "Pull requests that update a dependency file" } ]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,696
1,696
1,696
CONTRIBUTOR
null
Bumps [pillow](https://github.com/python-pillow/Pillow) from 9.3.0 to 10.0.1. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/python-pillow/Pillow/releases">pillow's releases</a>.</em></p> <blockquote> <h2>10.0.1</h2> <p><a href="https://pillow.readthedocs.io/en/stable/releasenotes/10.0.1.html">https://pillow.readthedocs.io/en/stable/releasenotes/10.0.1.html</a></p> <h2>Changes</h2> <ul> <li>Updated libwebp to 1.3.2 <a href="https://redirect.github.com/python-pillow/Pillow/issues/7395">#7395</a> [<a href="https://github.com/radarhere"><code>@​radarhere</code></a>]</li> <li>Updated zlib to 1.3 <a href="https://redirect.github.com/python-pillow/Pillow/issues/7344">#7344</a> [<a href="https://github.com/radarhere"><code>@​radarhere</code></a>]</li> </ul> <h2>10.0.0</h2> <p><a href="https://pillow.readthedocs.io/en/stable/releasenotes/10.0.0.html">https://pillow.readthedocs.io/en/stable/releasenotes/10.0.0.html</a></p> <h2>Changes</h2> <ul> <li>Fixed deallocating mask images <a href="https://redirect.github.com/python-pillow/Pillow/issues/7246">#7246</a> [<a href="https://github.com/radarhere"><code>@​radarhere</code></a>]</li> <li>Added ImageFont.MAX_STRING_LENGTH <a href="https://redirect.github.com/python-pillow/Pillow/issues/7244">#7244</a> [<a href="https://github.com/radarhere"><code>@​radarhere</code></a>]</li> <li>Fix Windows build with pyproject.toml <a href="https://redirect.github.com/python-pillow/Pillow/issues/7230">#7230</a> [<a href="https://github.com/nulano"><code>@​nulano</code></a>]</li> <li>Do not close provided file handles with libtiff <a href="https://redirect.github.com/python-pillow/Pillow/issues/7199">#7199</a> [<a href="https://github.com/radarhere"><code>@​radarhere</code></a>]</li> <li>Convert to HSV if mode is HSV in getcolor() <a href="https://redirect.github.com/python-pillow/Pillow/issues/7226">#7226</a> [<a href="https://github.com/radarhere"><code>@​radarhere</code></a>]</li> <li>Added alpha_only argument to getbbox() <a href="https://redirect.github.com/python-pillow/Pillow/issues/7123">#7123</a> [<a href="https://github.com/radarhere"><code>@​radarhere</code></a>]</li> <li>Prioritise speed in <em>repr_png</em> <a href="https://redirect.github.com/python-pillow/Pillow/issues/7242">#7242</a> [<a href="https://github.com/radarhere"><code>@​radarhere</code></a>]</li> <li>Limit size even if one dimension is zero in decompression bomb check <a href="https://redirect.github.com/python-pillow/Pillow/issues/7235">#7235</a> [<a href="https://github.com/radarhere"><code>@​radarhere</code></a>]</li> <li>Restored 32-bit support <a href="https://redirect.github.com/python-pillow/Pillow/issues/7234">#7234</a> [<a href="https://github.com/radarhere"><code>@​radarhere</code></a>]</li> <li>Removed deleted file from codecov.yml and increased coverage threshold <a href="https://redirect.github.com/python-pillow/Pillow/issues/7232">#7232</a> [<a href="https://github.com/radarhere"><code>@​radarhere</code></a>]</li> <li>Removed support for 32-bit <a href="https://redirect.github.com/python-pillow/Pillow/issues/7228">#7228</a> [<a href="https://github.com/radarhere"><code>@​radarhere</code></a>]</li> <li>Use --config-settings instead of deprecated --global-option <a href="https://redirect.github.com/python-pillow/Pillow/issues/7171">#7171</a> [<a href="https://github.com/radarhere"><code>@​radarhere</code></a>]</li> <li>Better C integer definitions <a href="https://redirect.github.com/python-pillow/Pillow/issues/6645">#6645</a> [<a 
href="https://github.com/Yay295"><code>@​Yay295</code></a>]</li> <li>Fixed finding dependencies on Cygwin <a href="https://redirect.github.com/python-pillow/Pillow/issues/7175">#7175</a> [<a href="https://github.com/radarhere"><code>@​radarhere</code></a>]</li> <li>Improved checks in font_render <a href="https://redirect.github.com/python-pillow/Pillow/issues/7218">#7218</a> [<a href="https://github.com/radarhere"><code>@​radarhere</code></a>]</li> <li>Change <code>grabclipboard()</code> to use PNG compression on macOS <a href="https://redirect.github.com/python-pillow/Pillow/issues/7219">#7219</a> [<a href="https://github.com/abey79"><code>@​abey79</code></a>]</li> <li>Added PyPy 3.10 and removed PyPy 3.8 <a href="https://redirect.github.com/python-pillow/Pillow/issues/7216">#7216</a> [<a href="https://github.com/radarhere"><code>@​radarhere</code></a>]</li> <li>Added in_place argument to ImageOps.exif_transpose() <a href="https://redirect.github.com/python-pillow/Pillow/issues/7092">#7092</a> [<a href="https://github.com/radarhere"><code>@​radarhere</code></a>]</li> <li>Corrected error code <a href="https://redirect.github.com/python-pillow/Pillow/issues/7177">#7177</a> [<a href="https://github.com/radarhere"><code>@​radarhere</code></a>]</li> <li>Use &quot;not in&quot; <a href="https://redirect.github.com/python-pillow/Pillow/issues/7174">#7174</a> [<a href="https://github.com/radarhere"><code>@​radarhere</code></a>]</li> <li>Only call text_layout once in getmask2 <a href="https://redirect.github.com/python-pillow/Pillow/issues/7206">#7206</a> [<a href="https://github.com/radarhere"><code>@​radarhere</code></a>]</li> <li>Fixed calling putpalette() on L and LA images before load() <a href="https://redirect.github.com/python-pillow/Pillow/issues/7187">#7187</a> [<a href="https://github.com/radarhere"><code>@​radarhere</code></a>]</li> <li>Removed unused INT64 definition <a href="https://redirect.github.com/python-pillow/Pillow/issues/7180">#7180</a> [<a href="https://github.com/radarhere"><code>@​radarhere</code></a>]</li> <li>Updated xz to 5.4.3 <a href="https://redirect.github.com/python-pillow/Pillow/issues/7136">#7136</a> [<a href="https://github.com/radarhere"><code>@​radarhere</code></a>]</li> <li>Fixed saving TIFF multiframe images with LONG8 tag types <a href="https://redirect.github.com/python-pillow/Pillow/issues/7078">#7078</a> [<a href="https://github.com/radarhere"><code>@​radarhere</code></a>]</li> <li>Do not set size unnecessarily if image fails to open <a href="https://redirect.github.com/python-pillow/Pillow/issues/7056">#7056</a> [<a href="https://github.com/radarhere"><code>@​radarhere</code></a>]</li> <li>Removed unused code <a href="https://redirect.github.com/python-pillow/Pillow/issues/7210">#7210</a> [<a href="https://github.com/radarhere"><code>@​radarhere</code></a>]</li> <li>Removed unused variables <a href="https://redirect.github.com/python-pillow/Pillow/issues/7205">#7205</a> [<a href="https://github.com/radarhere"><code>@​radarhere</code></a>]</li> <li>Fixed signedness comparison warning <a href="https://redirect.github.com/python-pillow/Pillow/issues/7203">#7203</a> [<a href="https://github.com/radarhere"><code>@​radarhere</code></a>]</li> <li>Fixed combining single duration across duplicate APNG frames <a href="https://redirect.github.com/python-pillow/Pillow/issues/7146">#7146</a> [<a href="https://github.com/radarhere"><code>@​radarhere</code></a>]</li> <li>Remove temporary file when error is raised <a 
href="https://redirect.github.com/python-pillow/Pillow/issues/7148">#7148</a> [<a href="https://github.com/radarhere"><code>@​radarhere</code></a>]</li> <li>Do not use temporary file when grabbing clipboard on Linux <a href="https://redirect.github.com/python-pillow/Pillow/issues/7200">#7200</a> [<a href="https://github.com/radarhere"><code>@​radarhere</code></a>]</li> <li>If the clipboard fails to open on Windows, wait and try again <a href="https://redirect.github.com/python-pillow/Pillow/issues/7141">#7141</a> [<a href="https://github.com/radarhere"><code>@​radarhere</code></a>]</li> <li>Fixed saving multiple 1 mode frames to GIF <a href="https://redirect.github.com/python-pillow/Pillow/issues/7181">#7181</a> [<a href="https://github.com/radarhere"><code>@​radarhere</code></a>]</li> <li>Replaced absolute PIL import with relative import <a href="https://redirect.github.com/python-pillow/Pillow/issues/7173">#7173</a> [<a href="https://github.com/radarhere"><code>@​radarhere</code></a>]</li> <li>Removed files and types override <a href="https://redirect.github.com/python-pillow/Pillow/issues/7194">#7194</a> [<a href="https://github.com/radarhere"><code>@​radarhere</code></a>]</li> </ul> <!-- raw HTML omitted --> </blockquote> <p>... (truncated)</p> </details> <details> <summary>Changelog</summary> <p><em>Sourced from <a href="https://github.com/python-pillow/Pillow/blob/main/CHANGES.rst">pillow's changelog</a>.</em></p> <blockquote> <h2>10.0.1 (2023-09-15)</h2> <ul> <li> <p>Updated libwebp to 1.3.2 <a href="https://redirect.github.com/python-pillow/Pillow/issues/7395">#7395</a> [radarhere]</p> </li> <li> <p>Updated zlib to 1.3 <a href="https://redirect.github.com/python-pillow/Pillow/issues/7344">#7344</a> [radarhere]</p> </li> </ul> <h2>10.0.0 (2023-07-01)</h2> <ul> <li> <p>Fixed deallocating mask images <a href="https://redirect.github.com/python-pillow/Pillow/issues/7246">#7246</a> [radarhere]</p> </li> <li> <p>Added ImageFont.MAX_STRING_LENGTH <a href="https://redirect.github.com/python-pillow/Pillow/issues/7244">#7244</a> [radarhere, hugovk]</p> </li> <li> <p>Fix Windows build with pyproject.toml <a href="https://redirect.github.com/python-pillow/Pillow/issues/7230">#7230</a> [hugovk, nulano, radarhere]</p> </li> <li> <p>Do not close provided file handles with libtiff <a href="https://redirect.github.com/python-pillow/Pillow/issues/7199">#7199</a> [radarhere]</p> </li> <li> <p>Convert to HSV if mode is HSV in getcolor() <a href="https://redirect.github.com/python-pillow/Pillow/issues/7226">#7226</a> [radarhere]</p> </li> <li> <p>Added alpha_only argument to getbbox() <a href="https://redirect.github.com/python-pillow/Pillow/issues/7123">#7123</a> [radarhere. 
hugovk]</p> </li> <li> <p>Prioritise speed in <em>repr_png</em> <a href="https://redirect.github.com/python-pillow/Pillow/issues/7242">#7242</a> [radarhere]</p> </li> <li> <p>Do not use CFFI access by default on PyPy <a href="https://redirect.github.com/python-pillow/Pillow/issues/7236">#7236</a> [radarhere]</p> </li> <li> <p>Limit size even if one dimension is zero in decompression bomb check <a href="https://redirect.github.com/python-pillow/Pillow/issues/7235">#7235</a> [radarhere]</p> </li> <li> <p>Use --config-settings instead of deprecated --global-option <a href="https://redirect.github.com/python-pillow/Pillow/issues/7171">#7171</a> [radarhere]</p> </li> <li> <p>Better C integer definitions <a href="https://redirect.github.com/python-pillow/Pillow/issues/6645">#6645</a> [Yay295, hugovk]</p> </li> <li> <p>Fixed finding dependencies on Cygwin <a href="https://redirect.github.com/python-pillow/Pillow/issues/7175">#7175</a> [radarhere]</p> </li> <li> <p>Changed grabclipboard() to use PNG instead of JPG compression on macOS <a href="https://redirect.github.com/python-pillow/Pillow/issues/7219">#7219</a> [abey79, radarhere]</p> </li> </ul> <!-- raw HTML omitted --> </blockquote> <p>... (truncated)</p> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/python-pillow/Pillow/commit/e34d346f10c0b1c814661e662a3e0c1ef084cf1c"><code>e34d346</code></a> Updated order</li> <li><a href="https://github.com/python-pillow/Pillow/commit/a62f2402a6bcf11a0a1670542216725a3f9190e0"><code>a62f240</code></a> 10.0.1 version bump</li> <li><a href="https://github.com/python-pillow/Pillow/commit/d50250d9eab741ae3ddd592d8910cfd7973b9d35"><code>d50250d</code></a> Added release notes for 10.0.1</li> <li><a href="https://github.com/python-pillow/Pillow/commit/b4c7d4b8b2710b7af6cc944a804902eb75fd9056"><code>b4c7d4b</code></a> Update CHANGES.rst [ci skip]</li> <li><a href="https://github.com/python-pillow/Pillow/commit/730f74600e8215ab510f71bb1fbb49d906c4356b"><code>730f746</code></a> Updated libwebp to 1.3.2</li> <li><a href="https://github.com/python-pillow/Pillow/commit/b0e28048d692effadfe7a4268a03e1d20e0198bb"><code>b0e2804</code></a> Updated zlib to 1.3</li> <li><a href="https://github.com/python-pillow/Pillow/commit/6e28ed1f36d0eb74053af54e1eddc9c29db698cd"><code>6e28ed1</code></a> 10.0.0 version bump</li> <li><a href="https://github.com/python-pillow/Pillow/commit/c827f3b30f50bf04fd65daeeba6bbfd56fc7b50e"><code>c827f3b</code></a> Merge pull request <a href="https://redirect.github.com/python-pillow/Pillow/issues/7246">#7246</a> from radarhere/deallocate</li> <li><a href="https://github.com/python-pillow/Pillow/commit/39a3b1d83edcf826c3864e26bedff5b4e4dd331b"><code>39a3b1d</code></a> Fixed deallocating mask images</li> <li><a href="https://github.com/python-pillow/Pillow/commit/8c1dc819fd91471825da01976ac0e0bc8789590f"><code>8c1dc81</code></a> Update CHANGES.rst [ci skip]</li> <li>Additional commits viewable in <a href="https://github.com/python-pillow/Pillow/compare/9.3.0...10.0.1">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=pillow&package-manager=pip&previous-version=9.3.0&new-version=10.0.1)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. 
You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts). </details>
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26580/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26580/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26580", "html_url": "https://github.com/huggingface/transformers/pull/26580", "diff_url": "https://github.com/huggingface/transformers/pull/26580.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26580.patch", "merged_at": 1696413166000 }
https://api.github.com/repos/huggingface/transformers/issues/26579
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26579/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26579/comments
https://api.github.com/repos/huggingface/transformers/issues/26579/events
https://github.com/huggingface/transformers/pull/26579
1,925,147,424
PR_kwDOCUB6oc5b2Cu0
26,579
Fix TypicalLogitsWarper tensor OOB indexing edge case
{ "login": "njhill", "id": 16958488, "node_id": "MDQ6VXNlcjE2OTU4NDg4", "avatar_url": "https://avatars.githubusercontent.com/u/16958488?v=4", "gravatar_id": "", "url": "https://api.github.com/users/njhill", "html_url": "https://github.com/njhill", "followers_url": "https://api.github.com/users/njhill/followers", "following_url": "https://api.github.com/users/njhill/following{/other_user}", "gists_url": "https://api.github.com/users/njhill/gists{/gist_id}", "starred_url": "https://api.github.com/users/njhill/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/njhill/subscriptions", "organizations_url": "https://api.github.com/users/njhill/orgs", "repos_url": "https://api.github.com/users/njhill/repos", "events_url": "https://api.github.com/users/njhill/events{/privacy}", "received_events_url": "https://api.github.com/users/njhill/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I'm going to be honest - I don't fully understand the code here!\r\n\r\nThe array `last_ind` is created as:\r\n`last_ind = (cumulative_probs < self.mass).sum(dim=1)`\r\n\r\nThis is the sum of a boolean array, which should be strictly nonnegative, because boolean arrays only contain 0 and 1 values. Therefore, I don't understand why the original line `last_ind[last_ind < 0] = 0` or the replacement using `torch.clamp_` are necessary - I don't see how you'd get negative values without an integer overflow, and if we're getting an integer overflow we should be using bigger integer dtypes, not fixing it with value clamping.\r\n\r\nDo you know why this is necessary in the first place?", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26579). All of your documentation changes will be reflected on that endpoint.", "@Rocketknight1 good point, I hadn't considered whether the existing check was valid/redundant, I guess it's [always been there](https://github.com/huggingface/transformers/pull/15504).\r\n\r\nI guess it would make more sense if the prior line was `last_ind = (cumulative_probs < self.mass).sum(dim=1) - 1`, perhaps that was originally intended but left out. I'll update this PR accordingly.", "I guess this will actually mean a subtle change in behaviour, but I'm fairly sure it's what was originally intended. Not sure whether this is ok though w.r.t. transformers policies around this kind of thing...", "cc @gante here - I think you might know better than me what the code is doing!", "Thanks @gante @Rocketknight1, I've now rebased and added a commit with the explicit `min` arg suggestion.", "Thank you for the fix @njhill 💪 " ]
1,696
1,698
1,698
CONTRIBUTOR
null
The out-of-bounds indexing edge case in `TypicalLogitsWarper` can be triggered fairly quickly at low precision, e.g. with `bfloat16` and `typical_p=0.99`. @gante
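As a purely illustrative sketch of the failure mode (not the exact code merged in this PR), the snippet below mimics the `last_ind` computation discussed in the review comments: when low-precision rounding keeps every cumulative probability below `mass`, the boolean sum equals the vocabulary size, so subtracting one and clamping keeps the value a valid gather index:

```python
import torch

# Hypothetical vocab of 4 tokens whose cumulative probabilities all stay
# below mass=0.99 because of bfloat16 rounding.
cumulative_probs = torch.tensor([[0.25, 0.50, 0.75, 0.985]], dtype=torch.bfloat16)
mass = 0.99

last_ind = (cumulative_probs < mass).sum(dim=1) - 1  # index of the last kept token
last_ind.clamp_(min=0)                               # guards the all-False case

print(last_ind)  # tensor([3]) -> in bounds; without the -1 it would be 4 (OOB)
```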
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26579/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26579/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26579", "html_url": "https://github.com/huggingface/transformers/pull/26579", "diff_url": "https://github.com/huggingface/transformers/pull/26579.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26579.patch", "merged_at": 1698230203000 }
https://api.github.com/repos/huggingface/transformers/issues/26578
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26578/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26578/comments
https://api.github.com/repos/huggingface/transformers/issues/26578/events
https://github.com/huggingface/transformers/issues/26578
1,925,137,998
I_kwDOCUB6oc5yv0ZO
26,578
No `past_key_values` argument for RobertaForMaskedLM, RobertaForTokenClassification, etc.
{ "login": "simonlevine", "id": 50503513, "node_id": "MDQ6VXNlcjUwNTAzNTEz", "avatar_url": "https://avatars.githubusercontent.com/u/50503513?v=4", "gravatar_id": "", "url": "https://api.github.com/users/simonlevine", "html_url": "https://github.com/simonlevine", "followers_url": "https://api.github.com/users/simonlevine/followers", "following_url": "https://api.github.com/users/simonlevine/following{/other_user}", "gists_url": "https://api.github.com/users/simonlevine/gists{/gist_id}", "starred_url": "https://api.github.com/users/simonlevine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/simonlevine/subscriptions", "organizations_url": "https://api.github.com/users/simonlevine/orgs", "repos_url": "https://api.github.com/users/simonlevine/repos", "events_url": "https://api.github.com/users/simonlevine/events{/privacy}", "received_events_url": "https://api.github.com/users/simonlevine/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@ArthurZucker I'm not sure if this is just a miss-out (maybe it's not as `MaskedLMOutput/TokenClassifierOutput` does not have a `past-key-value` arg), but `past_key_values` are of course necessary for PEFT, a quick fix would be to pass `past_key_values` through forward. \r\nhttps://github.com/huggingface/transformers/blob/2f3ea08a077ba3133fa8a604b22436cad250b055/src/transformers/models/roberta/modeling_roberta.py#L1059-L1073\r\nand through the model as well.\r\nhttps://github.com/huggingface/transformers/blob/2f3ea08a077ba3133fa8a604b22436cad250b055/src/transformers/models/roberta/modeling_roberta.py#L1084-L1095\r\nwith changes to *modeling_output*.\r\nhttps://github.com/huggingface/transformers/blob/2f3ea08a077ba3133fa8a604b22436cad250b055/src/transformers/modeling_outputs.py#L725-L728\r\n", "Hey! I am a bit confused here, `past_key_values` are usually only needed when you perform **generation** using the `generate` function with either an *encoder-decoder* or a *decoder*. Pretty sure the task you are referring to do not need this no? ", "> Hey! I am a bit confused here, `past_key_values` are usually only needed when you perform **generation** using the `generate` function with either an *encoder-decoder* or a *decoder*. Pretty sure the task you are referring to do not need this no? \n\nYes, but I think one should be able to use PEFT with bidirectional models for feature extraction etc. so either the PEFT implementation, documentation (mentioning BERT/RoBERTa), or function signature is incorrect. If the past key values aren't needed then the inspection of the base model's forward shouldn't check for it.", "As @ArthurZucker said, `past_key_values` is not for the models you mentioned.\r\n\r\n> If the past key values aren't needed then the inspection of the base model's forward shouldn't check for it.\r\nCould you point us to the place of the `inspection` you mean here? \r\n\r\n\r\n", "See [https://github.com/huggingface/peft/blob/dbd40d96a15d9b8b04c3582bb9ea00ae24f56348/src/peft/peft_model.py#L828](https://github.com/huggingface/peft/blob/dbd40d96a15d9b8b04c3582bb9ea00ae24f56348/src/peft/peft_model.py#L828).\n\nAlso, the forward arguments for RobertaSelfAttention in the non-causal decoder case (\"generate\") [here](https://github.com/huggingface/transformers/blob/75a33d60f25d99ff8cdd657d6ba685dc4336a0d1/src/transformers/models/roberta/modeling_roberta.py#L213C13-L213C13). Maybe it's the case that PEFT shouldn't be using this in the first place even if it's useful for causal models.", "This is indeed something to be addressed on PEFT side, as it uses\r\n```\r\nsignature(self.base_model.forward)\r\n```\r\nby looking the `base_model`, which can take more arguments (as there are different types of models with head on top of the base model).\r\n" ]
1,696
1,696
1,696
NONE
null
### System Info @ArthurZucker @younesbelkada RoBERTa's encoder allows for `past_key_values` to be passed, but RobertaForMaskedLM / RobertaForTokenClassification `forward()` doesn't. As a result, prefix tuning (in PEFT) fails as the forward signature lacks `past_key_values`. I'd suggest adding encoder kwargs or similar. ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Trying to run Prefix Tuning on RoBERTa (for feature extraction, etc.) results in this issue. ### Expected behavior Expect the example to run given that RoBERTa is supported.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26578/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26578/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/26577
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26577/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26577/comments
https://api.github.com/repos/huggingface/transformers/issues/26577/events
https://github.com/huggingface/transformers/issues/26577
1,925,052,707
I_kwDOCUB6oc5yvfkj
26,577
Getting token probabilities of a caption given an image from BLIP2
{ "login": "snpushpi", "id": 55248448, "node_id": "MDQ6VXNlcjU1MjQ4NDQ4", "avatar_url": "https://avatars.githubusercontent.com/u/55248448?v=4", "gravatar_id": "", "url": "https://api.github.com/users/snpushpi", "html_url": "https://github.com/snpushpi", "followers_url": "https://api.github.com/users/snpushpi/followers", "following_url": "https://api.github.com/users/snpushpi/following{/other_user}", "gists_url": "https://api.github.com/users/snpushpi/gists{/gist_id}", "starred_url": "https://api.github.com/users/snpushpi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/snpushpi/subscriptions", "organizations_url": "https://api.github.com/users/snpushpi/orgs", "repos_url": "https://api.github.com/users/snpushpi/repos", "events_url": "https://api.github.com/users/snpushpi/events{/privacy}", "received_events_url": "https://api.github.com/users/snpushpi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey! I would like to work on this issue.\r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Hi @snpushpi, thanks for raising an issue! \r\n\r\nThis is a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,696
1,701
1,701
NONE
null
I was considering the BLIP2 model for getting the probability distribution of each token in the caption given an image. So basically if the words in a caption are w1,w2,w3,…wt then I want to get these estimates P(w1|image), P(w2|image,w1),P(w3|image,w1,w2) etc. So this is the approach I took - ` from PIL import Image import requests from transformers import Blip2Processor, Blip2ForConditionalGeneration import torch device = "cuda" processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b") model = Blip2Model.from_pretrained( "Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16 ) model.to(device) url = "http://images.cocodataset.org/val2017/000000485895.jpg" image = Image.open(requests.get(url, stream=True).raw) pixel_values = processor(images=image, return_tensors="pt").pixel_values pixel_values = pixel_values.to(device, torch.float16) sentence = 'A giraffe stares into the camera while standing on green grass in front of a shade tree.' input_ids = processor.tokenizer(sentence, return_tensors = 'pt').input_ids.to(device) output = model(pixel_values,input_ids = input_ids).logits.detach() considered_logits = output[:,32:,:] ` So the considered logits is the probability distribution of each token in the caption, am I doing it right?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26577/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26577/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/26576
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26576/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26576/comments
https://api.github.com/repos/huggingface/transformers/issues/26576/events
https://github.com/huggingface/transformers/pull/26576
1,924,730,717
PR_kwDOCUB6oc5b0n4A
26,576
Update tokenization_code_llama_fast.py
{ "login": "andyl98", "id": 31980222, "node_id": "MDQ6VXNlcjMxOTgwMjIy", "avatar_url": "https://avatars.githubusercontent.com/u/31980222?v=4", "gravatar_id": "", "url": "https://api.github.com/users/andyl98", "html_url": "https://github.com/andyl98", "followers_url": "https://api.github.com/users/andyl98/followers", "following_url": "https://api.github.com/users/andyl98/following{/other_user}", "gists_url": "https://api.github.com/users/andyl98/gists{/gist_id}", "starred_url": "https://api.github.com/users/andyl98/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/andyl98/subscriptions", "organizations_url": "https://api.github.com/users/andyl98/orgs", "repos_url": "https://api.github.com/users/andyl98/repos", "events_url": "https://api.github.com/users/andyl98/events{/privacy}", "received_events_url": "https://api.github.com/users/andyl98/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Added some unit tests :) ", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26576). All of your documentation changes will be reflected on that endpoint." ]
1,696
1,696
1,696
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes the issue: https://github.com/huggingface/transformers/issues/26575 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. https://github.com/huggingface/transformers/issues/26575 - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? @ArthurZucker Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26576/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26576/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26576", "html_url": "https://github.com/huggingface/transformers/pull/26576", "diff_url": "https://github.com/huggingface/transformers/pull/26576.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26576.patch", "merged_at": 1696582143000 }
https://api.github.com/repos/huggingface/transformers/issues/26575
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26575/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26575/comments
https://api.github.com/repos/huggingface/transformers/issues/26575/events
https://github.com/huggingface/transformers/issues/26575
1,924,714,840
I_kwDOCUB6oc5yuNFY
26,575
CodeLlama FastTokenizer Infilling Format Bug with `suffix_first=True`
{ "login": "andyl98", "id": 31980222, "node_id": "MDQ6VXNlcjMxOTgwMjIy", "avatar_url": "https://avatars.githubusercontent.com/u/31980222?v=4", "gravatar_id": "", "url": "https://api.github.com/users/andyl98", "html_url": "https://github.com/andyl98", "followers_url": "https://api.github.com/users/andyl98/followers", "following_url": "https://api.github.com/users/andyl98/following{/other_user}", "gists_url": "https://api.github.com/users/andyl98/gists{/gist_id}", "starred_url": "https://api.github.com/users/andyl98/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/andyl98/subscriptions", "organizations_url": "https://api.github.com/users/andyl98/orgs", "repos_url": "https://api.github.com/users/andyl98/repos", "events_url": "https://api.github.com/users/andyl98/events{/privacy}", "received_events_url": "https://api.github.com/users/andyl98/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,696
1,697
1,697
CONTRIBUTOR
null
### System Info Suppose the code is: ``` from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("codellama/codellama-7b-hf") input = tokenizer("# print hello world<FILL_ME>orld')", suffix_first=True) tokenizer.batch_decode([input["input_ids"]])[0] ``` The output will be ``` "<s> <PRE> <SUF> # print hello world <MID>orld')" ``` Which is incorrect because the prefix is put into the suffix field. The desired output should be `"<s> <PRE> <SUF>orld') <MID> # print hello world"` If we don't use the fast tokenizer ``` from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("codellama/codellama-7b-hf", use_fast=False) input = tokenizer("# print hello world<FILL_ME>orld')", suffix_first=True) tokenizer.batch_decode([input["input_ids"]])[0] ``` The output will be correct `"<s>▁<PRE>▁<SUF>orld')▁<MID> # print hello world"` The issue is on [this line](https://github.com/huggingface/transformers/blob/5af2c6269672cda01c24ad48fab13f14a3ffb746/src/transformers/models/code_llama/tokenization_code_llama_fast.py#L292) Where the `$A` and `$B` should be swapped. @ArthurZucker ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction Suppose the code is: ``` from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("codellama/codellama-7b-hf") input = tokenizer("# print hello world<FILL_ME>orld')", suffix_first=True) tokenizer.batch_decode([input["input_ids"]])[0] ``` The output will be ``` "<s> <PRE> <SUF> # print hello world <MID>orld')" ``` Which is incorrect because the prefix is put into the suffix field. The desired output should be `"<s> <PRE> <SUF>orld') <MID> # print hello world"` If we don't use the fast tokenizer ``` from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("codellama/codellama-7b-hf", use_fast=False) input = tokenizer("# print hello world<FILL_ME>orld')", suffix_first=True) tokenizer.batch_decode([input["input_ids"]])[0] ``` The output will be correct `"<s>▁<PRE>▁<SUF>orld')▁<MID> # print hello world"` ### Expected behavior Output should be `"<s> <PRE> <SUF>orld') <MID> # print hello world"` when using `suffix_first=True`.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26575/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26575/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/26574
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26574/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26574/comments
https://api.github.com/repos/huggingface/transformers/issues/26574/events
https://github.com/huggingface/transformers/pull/26574
1,924,608,122
PR_kwDOCUB6oc5b0NYa
26,574
[Tokenizers] Skip tests temporarily
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26574). All of your documentation changes will be reflected on that endpoint." ]
1,696
1,696
1,696
MEMBER
null
Skip tests temporarily so that `main` remains green.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26574/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26574/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26574", "html_url": "https://github.com/huggingface/transformers/pull/26574", "diff_url": "https://github.com/huggingface/transformers/pull/26574.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26574.patch", "merged_at": 1696355022000 }
https://api.github.com/repos/huggingface/transformers/issues/26573
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26573/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26573/comments
https://api.github.com/repos/huggingface/transformers/issues/26573/events
https://github.com/huggingface/transformers/pull/26573
1,924,601,301
PR_kwDOCUB6oc5b0L6Q
26,573
Add add_generation_prompt argument to apply_chat_template
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,696
1,696
1,696
MEMBER
null
This PR adds a new `add_generation_prompt` argument to `apply_chat_template`. We need this when we want to chat with a chat model - if the model has special tokens that indicate the start of a bot message, then we need to append these to the end of a generation prompt to indicate to the model that it should write a bot reply, and not continue the user message or something like that. Note that many prompts (e.g. LLaMA) don't include special tokens at the start of bot messages - this makes them very easy to generate for. This argument would have no effect for them, so I don't need to update their chat templates to support it. Fixes #26539
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26573/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26573/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26573", "html_url": "https://github.com/huggingface/transformers/pull/26573", "diff_url": "https://github.com/huggingface/transformers/pull/26573.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26573.patch", "merged_at": 1696428930000 }
https://api.github.com/repos/huggingface/transformers/issues/26572
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26572/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26572/comments
https://api.github.com/repos/huggingface/transformers/issues/26572/events
https://github.com/huggingface/transformers/pull/26572
1,924,580,423
PR_kwDOCUB6oc5b0HYS
26,572
F.scaled_dot_product_attention support
{ "login": "fxmarty", "id": 9808326, "node_id": "MDQ6VXNlcjk4MDgzMjY=", "avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fxmarty", "html_url": "https://github.com/fxmarty", "followers_url": "https://api.github.com/users/fxmarty/followers", "following_url": "https://api.github.com/users/fxmarty/following{/other_user}", "gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}", "starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions", "organizations_url": "https://api.github.com/users/fxmarty/orgs", "repos_url": "https://api.github.com/users/fxmarty/repos", "events_url": "https://api.github.com/users/fxmarty/events{/privacy}", "received_events_url": "https://api.github.com/users/fxmarty/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26572). All of your documentation changes will be reflected on that endpoint.", "As https://github.com/huggingface/transformers/pull/26792 was merged will get back to it this week, targeting next to next transformers release.", "I think it would be wise to put a requirement on `torch>=2.1.1` due to this issue https://github.com/pytorch/pytorch/issues/112577, but you are the judge. Happy to do a check on torch version and call `contiguous()` of we want to support `2.1.0`.", "This PR adds the support of SDPA for the following architectures, activated by default with `torch>=2.1.1`:\r\n* Llama\r\n* Falcon (support extended to alibi, bugfix with attention row of mask all masked)\r\n* GPT BigCode\r\n* Bart\r\n* Whisper\r\n\r\nThe other model files changed are the result of `Copied from`. None of them are removed.\r\n\r\nThe method `_unmask_unattended` added to `AttentionMaskConverter` is critical to avoid this bug https://github.com/pytorch/pytorch/issues/110213.\r\n\r\nThe addition of the methods `_prepare_4d_causal_attention_mask_for_sdpa` and `_prepare_4d_attention_mask_for_sdpa` are useful to dispatch to FA1/FA2 that can not handle a non `None` `attn_mask` argument.\r\n\r\nThe `\"default\"` in `ATTENTION_CLASSES` is replaced by `\"eager\"`, as the default will be dependent on PyTorch version. Maybe there is a better name to find than `\"eager\"`.\r\n\r\nI argue that this issue https://github.com/pytorch/pytorch/issues/112577 is a good enough argument to put a requirement on `torch>2.1.1` (not yet released).\r\n\r\nThere are likely still bugs - but I think it is a good time for review @patrickvonplaten @younesbelkada @amyeroberts @ArthurZucker @LysandreJik ", "@ArthurZucker @patrickvonplaten @LysandreJik @amyeroberts feel free to rereview, this should be in good shape", "@ArthurZucker thanks a lot for the great points raised in your review. They should be addressed.\r\n\r\n@patrickvonplaten @LysandreJik happy to hear your opinion as well", "Before merge (or whenever it's the time to check), please ping me to trigger CI on GPU. Thanks.", "For sure @ydshieh!", "> Looks good to me, but tend to think that the mask transformation might be better in the AttentionClass rather that in the model class (as we have if else for each implementation basically). It's probably the last blocker for me with regard to the modeling codes.\r\n\r\nYes that's a great point, and related to some discussions in https://github.com/huggingface/transformers/pull/26792 & https://huggingface.slack.com/archives/C060RADBR4J/p1697050725287089 (private, you should be in). My understanding is that @patrickvonplaten at first wanted to do exactly that, moving the attention_mask logic to `LlamaAttention` class, with a caching mechanism to avoid recomputing at each layer: https://github.com/huggingface/transformers/pull/26792#issuecomment-1771836936\r\n\r\nHowever, my understanding is the caching mechanism was not very elegant, which just motivated the if/else in a forward at the LlamaModel level. I agree it is not super super ideal. @patrickvonplaten what was the issue with https://github.com/huggingface/transformers/blob/ae3eb2e72aa0c0e8fad8df66e0b4199168c98619/src/transformers/models/llama/modeling_llama.py#L93? That `tensor.data_ptr()` is not enough and we need actually a `torch.equal()` to check that the mask is the same?", "What is the final API now that the user can use to enable SDPA? 
", "@patrickvonplaten \r\n> I propose to enable SDPA by default if torch>=2.1.1 (released 15 Nov. 2023), for the reasons written in the PR.\r\n\r\nApart from that, exposed API is `attn_implementation=\"sdpa\"`, `attn_implementation=\"eager\"` that may be used e.g. to disable SDPA if required (head_mask, output_attentions).\r\n\r\nWe may need to agree on the name `attn_implementation` and `eager`.", "`config.attn_implementation` is made private to `config._attn_implementation` in https://github.com/huggingface/transformers/pull/26572/commits/5c77b944749268b11f30c01a010d84fd692d670a as suggested", "`torch.jit.trace` is unhappy about our implementation https://github.com/pytorch/pytorch/issues/115262, making some tests fail. It's a tricky one.", "@fxmarty can we maybe just disable `torch.jit.trace` for testing now? Don't think it's super important tbh", "@patrickvonplaten Indeed unfortunately that's what I go for for now: one need `attn_implementation=\"eager\"` (or `torch<2.1.1`) to use `torch.jit.trace` successfully for architectures that support SDPA.\r\n\r\nThis PR should be in good shape.\r\n\r\nLeft to do:\r\n- [x] Falcon with torch==2.0 needs to use SDPA anymore by default for BC\r\n- [x] Make `torch.jit.trace` work with SDPA attention when an attention mask is provided. Tracing SDPA with torch.jit.trace does not work by default with models supporting SDPA and loaded with `torch>=2.1.1` (see https://github.com/pytorch/pytorch/issues/115262) when no attention_mask is passed. Models should be loaded with `attn_implementation=\"eager\"` to be exported with `torch.jit.trace`, or use `attention_mask` when tracing. This may be improved in the future.\r\n\r\nFor the record, the `is_causal` controlflow is needed due to https://github.com/pytorch/pytorch/issues/108108 & the fact that we want as much as possible be able to dispatch to flash attention (passing `attn_mask` makes it impossible).\r\n\r\nAs SDPA is still going through changes in PyTorch (see https://github.com/pytorch/pytorch/pull/114823 & https://github.com/pytorch/pytorch/issues/110681), I think we should keep the flexibility of the minimum `torch` version required to use those classes (e.g. we may want to bump to `torch>=2.2` in the future.", "It is ready. 
Here is a summary of the relevant CI.\r\n\r\n#### `RUN_SLOW=1 CUDA_VISIBLE_DEVICES=0 pytest tests/models/llama -s -vvvvv`\r\n\r\nFlacky (new):\r\n```\r\nFAILED tests/models/llama/test_modeling_llama.py::LlamaModelTest::test_eager_matches_sdpa_inference_1_bfloat16 - AssertionError: False is not true : padding_side=left, use_mask=False, batch_size=5, enable_kernels=True: mean relative difference: 8.728e-03, torch ato...\r\n```\r\n\r\nAlready failing on `main`:\r\n```\r\nFAILED tests/models/llama/test_modeling_llama.py::CodeLlamaIntegrationTest::test_model_7b_logits - AssertionError: Lists differ: ['<s>▁<PRE> def remove_non_ascii(s: str) -> st[893 chars]ID>'] != ['<s> <PRE> def remove_non_ascii(s: str) -> st[893 chars...\r\nFAILED tests/models/llama/test_tokenization_llama.py::LlamaIntegrationTest::test_conversion - AssertionError: '{\\n [964 chars]or\": {\\n \"type\": \"TemplateProcessing\",\\n [1795198 chars]}\\n}' != '{\\n [964 chars]or\": null,\\n \"decoder\": {\\n \"t...\r\n```\r\n\r\n#### `RUN_SLOW=1 CUDA_VISIBLE_DEVICES=0 pytest tests/models/whisper -s -vvvvv`\r\n\r\nAlready failing on `main`:\r\n```\r\nFAILED tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_large_generation_multilingual - FileNotFoundError: https://voice-prod-bundler-ee1969a6ce8178826482b88e843c335139bd3fb4.s3.amazonaws.com/cv-corpus-6.1-2020-12-11/ja.tar.gz\r\nFAILED tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_whisper_longform_multi_batch - assert [' While Porashaggy sits there, a cooing dove. He has gone, and gone for good,\" answered Polychrom, who had managed to squeeze into the room besi...\r\nFAILED tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_whisper_longform_multi_batch_hard - assert \" Folks, if you watch the show, you know, I spent a lot of time right over there. 
Patiently and astutely scrutinizing the boxwood and mahogany ch...\r\nFAILED tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_whisper_longform_single_batch - assert [\" Because you were sleeping instead of conquering, the lovely rose princess has become a fiddle without a bow, all poor ashaggy sits there, acco...\r\n```\r\n\r\nFlacky (on `main`):\r\n```\r\nFAILED tests/models/whisper/test_modeling_whisper.py::WhisperStandaloneDecoderModelTest::test_flash_attn_2_generate_left_padding - AssertionError: False is not true\r\nFAILED tests/models/whisper/test_modeling_whisper.py::WhisperStandaloneDecoderModelTest::test_flash_attn_2_inference - AssertionError: assert False\r\nFAILED tests/models/whisper/test_modeling_whisper.py::WhisperStandaloneDecoderModelTest::test_flash_attn_2_inference_padding_right - AssertionError: assert False\r\n```\r\n\r\n#### `RUN_SLOW=1 CUDA_VISIBLE_DEVICES=0 pytest tests/models/bart -s -vvvvv`\r\n\r\nFlacky on `main`:\r\n```\r\nAILED tests/models/bart/test_modeling_bart.py::BartStandaloneDecoderModelTest::test_cpu_offload - AssertionError: False is not true\r\n```\r\n\r\n#### `RUN_SLOW=1 CUDA_VISIBLE_DEVICES=0 pytest tests/models/falcon -s -vvvvv`\r\n\r\nFlacky (new):\r\n```\r\nFAILED tests/models/falcon/test_modeling_falcon.py::FalconModelTest::test_eager_matches_sdpa_inference_1_bfloat16 - AssertionError: False is not true : padding_side=left, use_mask=True, batch_size=5, enable_kernels=True: mean relative difference: 7.141e-03, torch atol...\r\n```\r\n\r\nFlacky (on `main`):\r\n```\r\nFAILED tests/models/falcon/test_modeling_falcon.py::FalconModelTest::test_flash_attn_2_generate_padding_right - AssertionError: False is not true\r\n```\r\n\r\n#### `RUN_SLOW=1 CUDA_VISIBLE_DEVICES=0 pytest tests/models/idefics -s -vvvvv`\r\n\r\nall pass\r\n\r\n#### `RUN_SLOW=1 CUDA_VISIBLE_DEVICES=0 pytest tests/models/bert -s -vvvvv`\r\n\r\nall pass\r\n\r\n#### `RUN_SLOW=1 CUDA_VISIBLE_DEVICES=0 pytest tests/models/gpt2 -s -vvvvv`\r\n\r\nall pass\r\n\r\n#### `RUN_SLOW=1 CUDA_VISIBLE_DEVICES=0 pytest tests/test_modeling_utils.py -s -vvvvv`\r\n\r\nAlready failing on `main`:\r\n```\r\nFAILED tests/test_modeling_utils.py::ModelUtilsTest::test_legacy_load_from_url - huggingface_hub.utils._validators.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': 'https://huggingface.co/hf-intern...\r\nFAILED tests/test_modeling_utils.py::ModelUtilsTest::test_load_from_one_file - huggingface_hub.utils._validators.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': '/tmp/tmp64wrpwyf'. 
Use `repo_typ...\r\nFAILED tests/test_modeling_utils.py::ModelUtilsTest::test_model_from_pretrained - AssertionError: 7 != 8\r\nERROR tests/test_modeling_utils.py::ModelOnTheFlyConversionTester::test_safetensors_on_the_fly_conversion - ValueError: Cannot run tests as secret isn't setup.\r\nERROR tests/test_modeling_utils.py::ModelOnTheFlyConversionTester::test_safetensors_on_the_fly_conversion_gated - ValueError: Cannot run tests as secret isn't setup.\r\nERROR tests/test_modeling_utils.py::ModelOnTheFlyConversionTester::test_safetensors_on_the_fly_conversion_private - ValueError: Cannot run tests as secret isn't setup.\r\nERROR tests/test_modeling_utils.py::ModelOnTheFlyConversionTester::test_safetensors_on_the_fly_sharded_conversion - ValueError: Cannot run tests as secret isn't setup.\r\nERROR tests/test_modeling_utils.py::ModelOnTheFlyConversionTester::test_safetensors_on_the_fly_sharded_conversion_gated - ValueError: Cannot run tests as secret isn't setup.\r\nERROR tests/test_modeling_utils.py::ModelOnTheFlyConversionTester::test_safetensors_on_the_fly_sharded_conversion_private - ValueError: Cannot run tests as secret isn't setup.\r\nERROR tests/test_modeling_utils.py::ModelOnTheFlyConversionTester::test_safetensors_on_the_fly_specific_revision - ValueError: Cannot run tests as secret isn't setup.\r\nERROR tests/test_modeling_utils.py::ModelOnTheFlyConversionTester::test_safetensors_on_the_fly_wrong_user_opened_pr - ValueError: Cannot run tests as secret isn't setup.\r\n```\r\n\r\n#### `RUN_SLOW=1 CUDA_VISIBLE_DEVICES=0 pytest tests/ -s -vvvvv -k \"flash or sdpa\"`\r\n\r\nFlacky (new):\r\n```\r\nFAILED tests/models/falcon/test_modeling_falcon.py::FalconModelTest::test_eager_matches_sdpa_inference_1_bfloat16 - AssertionError: False is not true : padding_side=left, use_mask=False, batch_size=1, enable_kernels=True: mean relative difference: 7.660e-03, torch ato...\r\n```\r\n\r\nAlready failing/flacky on `main`:\r\n```\r\nFAILED tests/models/bark/test_modeling_bark.py::BarkSemanticModelTest::test_flash_attn_2_from_config - ValueError: Unrecognized configuration class <class 'transformers.models.bark.configuration_bark.BarkSemanticConfig'> for this kind of AutoModel: AutoMo...\r\nFAILED tests/models/bark/test_modeling_bark.py::BarkSemanticModelTest::test_flash_attn_2_generate_padding_right - AssertionError: False is not true\r\nFAILED tests/models/bark/test_modeling_bark.py::BarkCoarseModelTest::test_flash_attn_2_from_config - ValueError: Unrecognized configuration class <class 'transformers.models.bark.configuration_bark.BarkCoarseConfig'> for this kind of AutoModel: AutoMode...\r\nFAILED tests/models/distilbert/test_modeling_distilbert.py::DistilBertModelTest::test_flash_attn_2_inference_padding_right - AssertionError: False is not true\r\nFAILED tests/models/gpt_bigcode/test_modeling_gpt_bigcode.py::GPTBigCodeModelTest::test_flash_attn_2_generate_padding_right - AssertionError: False is not true\r\nFAILED tests/models/gpt_neo/test_modeling_gpt_neo.py::GPTNeoModelTest::test_flash_attn_2_generate_padding_right - AssertionError: False is not true\r\nFAILED tests/models/gpt_neox/test_modeling_gpt_neox.py::GPTNeoXModelTest::test_flash_attn_2_generate_padding_right - AssertionError: False is not true\r\nFAILED tests/models/opt/test_modeling_opt.py::OPTModelTest::test_flash_attn_2_generate_padding_right - AssertionError: False is not true\r\nFAILED tests/models/whisper/test_modeling_whisper.py::WhisperStandaloneDecoderModelTest::test_flash_attn_2_inference - AssertionError: assert 
False\r\nFAILED tests/models/whisper/test_modeling_whisper.py::WhisperStandaloneDecoderModelTest::test_flash_attn_2_inference_padding_right - AssertionError: assert False\r\n```" ]
1,696
1,708
1,702
COLLABORATOR
null
As per title, this PR proposes to natively support `torch.nn.functional.scaled_dot_product_attention` in transformers. I propose to enable SDPA by default if `torch>=2.1.1` (released 15 Nov. 2023), for the reasons written in the PR. The support could then be extended using https://github.com/huggingface/optimum/blob/main/optimum/bettertransformer/models/attention.py. --- The introduced `_unmask_unattended` is a workaround for https://github.com/pytorch/pytorch/issues/110213. It behaves as follows: If attention_mask is ``` [[0, 0, 1] [1, 1, 1] [0, 1, 1]] ``` and expanded_mask is (e.g. here left-padding case) ``` [[[[0, 0, 0], [0, 0, 0], [0, 0, 1]]], [[[1, 0, 0], [1, 1, 0], [1, 1, 1]]], [[[0, 0, 0], [0, 1, 0], [0, 1, 1]]]] ``` then the modified expanded_mask will be ``` [[[[1, 1, 1], <-- modified [1, 1, 1], <-- modified [0, 0, 1]]], [[[1, 0, 0], [1, 1, 0], [1, 1, 1]]], [[[1, 1, 1], <-- modified [0, 1, 0], [0, 1, 1]]]] ``` Modifying the attention mask in this way is fine given that we modify it only for pad tokens on the `-2` dimension. Softmax is computed on the `-1` dimension, and thus there is no change for the relevant non-padding tokens.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26572/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26572/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26572", "html_url": "https://github.com/huggingface/transformers/pull/26572", "diff_url": "https://github.com/huggingface/transformers/pull/26572.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26572.patch", "merged_at": 1702067894000 }
https://api.github.com/repos/huggingface/transformers/issues/26571
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26571/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26571/comments
https://api.github.com/repos/huggingface/transformers/issues/26571/events
https://github.com/huggingface/transformers/issues/26571
1,924,575,154
I_kwDOCUB6oc5ytq-y
26,571
Can't release memory occupied by model after trainer.train() with del model and gc.collect().
{ "login": "hanrui4248", "id": 81265961, "node_id": "MDQ6VXNlcjgxMjY1OTYx", "avatar_url": "https://avatars.githubusercontent.com/u/81265961?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hanrui4248", "html_url": "https://github.com/hanrui4248", "followers_url": "https://api.github.com/users/hanrui4248/followers", "following_url": "https://api.github.com/users/hanrui4248/following{/other_user}", "gists_url": "https://api.github.com/users/hanrui4248/gists{/gist_id}", "starred_url": "https://api.github.com/users/hanrui4248/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hanrui4248/subscriptions", "organizations_url": "https://api.github.com/users/hanrui4248/orgs", "repos_url": "https://api.github.com/users/hanrui4248/repos", "events_url": "https://api.github.com/users/hanrui4248/events{/privacy}", "received_events_url": "https://api.github.com/users/hanrui4248/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "muellerzr", "id": 7831895, "node_id": "MDQ6VXNlcjc4MzE4OTU=", "avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muellerzr", "html_url": "https://github.com/muellerzr", "followers_url": "https://api.github.com/users/muellerzr/followers", "following_url": "https://api.github.com/users/muellerzr/following{/other_user}", "gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}", "starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions", "organizations_url": "https://api.github.com/users/muellerzr/orgs", "repos_url": "https://api.github.com/users/muellerzr/repos", "events_url": "https://api.github.com/users/muellerzr/events{/privacy}", "received_events_url": "https://api.github.com/users/muellerzr/received_events", "type": "User", "site_admin": false }
[ { "login": "muellerzr", "id": 7831895, "node_id": "MDQ6VXNlcjc4MzE4OTU=", "avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muellerzr", "html_url": "https://github.com/muellerzr", "followers_url": "https://api.github.com/users/muellerzr/followers", "following_url": "https://api.github.com/users/muellerzr/following{/other_user}", "gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}", "starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions", "organizations_url": "https://api.github.com/users/muellerzr/orgs", "repos_url": "https://api.github.com/users/muellerzr/repos", "events_url": "https://api.github.com/users/muellerzr/events{/privacy}", "received_events_url": "https://api.github.com/users/muellerzr/received_events", "type": "User", "site_admin": false } ]
[ "WDYT @muellerzr @pacman100 ?", "I'm not sure we really *can* reduce it all to zero, due to PyTorch itself. Take the below example, which removes all major transformers and accelerate code and does everything in pure python, bare freeing of memory (it's the same thing as what you do manually there). **No matter what**, we are still left with 8.125 in allocated, 20.0 in reserved. Also: **this only shows up after getting an output from the model**.\r\n\r\nScript in question:\r\n\r\n```python\r\nimport torch\r\nfrom datasets import load_dataset\r\nfrom torch.utils.data import DataLoader\r\nfrom transformers import AutoTokenizer, AutoModel\r\nfrom accelerate.utils import release_memory, send_to_device\r\n\r\nconfig = {\"lr\": 2e-5, \"num_epochs\": 3, \"seed\": 42, \"batch_size\": 16}\r\n\r\nMAX_GPU_BATCH_SIZE = 16\r\nEVAL_BATCH_SIZE = 32\r\n\r\ndef memory_stats():\r\n return torch.cuda.memory_summary()\r\n\r\n\r\ndef get_dataloader(batch_size: int = 16):\r\n tokenizer = AutoTokenizer.from_pretrained(\"bert-base-cased\")\r\n train_dataset = load_dataset(\"glue\", \"mrpc\", split=\"train[:64]\")\r\n\r\n def tokenize_function(examples):\r\n outputs = tokenizer(examples[\"sentence1\"], examples[\"sentence2\"], truncation=True, max_length=None)\r\n return outputs\r\n\r\n tokenized_datasets = train_dataset.map(\r\n tokenize_function,\r\n batched=True,\r\n remove_columns=[\"idx\", \"sentence1\", \"sentence2\"],\r\n )\r\n\r\n tokenized_datasets = tokenized_datasets.rename_column(\"label\", \"labels\")\r\n\r\n def collate_fn(examples):\r\n max_length = None\r\n pad_to_multiple_of = None\r\n\r\n return tokenizer.pad(\r\n examples,\r\n padding=\"longest\",\r\n max_length=max_length,\r\n pad_to_multiple_of=pad_to_multiple_of,\r\n return_tensors=\"pt\",\r\n )\r\n\r\n train_dataloader = DataLoader(\r\n tokenized_datasets, shuffle=True, collate_fn=collate_fn, batch_size=batch_size, drop_last=True\r\n )\r\n\r\n return train_dataloader\r\n\r\nbatch_size = int(config[\"batch_size\"])\r\ntrain_dataloader = get_dataloader(batch_size)\r\nmodel = AutoModel.from_pretrained(\"bert-base-cased\")\r\nmodel = model.to(\"cuda\")\r\nmodel.eval()\r\nwith torch.inference_mode():\r\n batch = next(iter(train_dataloader))\r\n batch = batch.to(\"cuda\")\r\n out = model.forward(batch[\"input_ids\"], batch[\"attention_mask\"])\r\n out = send_to_device(out, \"cpu\")\r\n\r\nmodel.cpu()\r\n\r\nmodel, batch = release_memory(model, batch)\r\nprint(\r\n f\"Memory allocated: {torch.cuda.memory_allocated()/1024**2}\\nMemory reserved: {torch.cuda.memory_reserved()/1024**2}\"\r\n)\r\n```\r\n\r\nTo make sure this is actually pytorch and not something to do with transformers, I then checked with a basic pytorch model:\r\n\r\n```python\r\nimport torch\r\nfrom accelerate.utils import release_memory\r\n\r\ndef memory_stats():\r\n return torch.cuda.memory_summary()\r\n\r\nclass TinyModel(torch.nn.Module):\r\n\r\n def __init__(self):\r\n super(TinyModel, self).__init__()\r\n\r\n self.linear1 = torch.nn.Linear(100, 200)\r\n self.activation = torch.nn.ReLU()\r\n self.linear2 = torch.nn.Linear(200, 10)\r\n self.softmax = torch.nn.Softmax(dim=0)\r\n\r\n def forward(self, x):\r\n x = self.linear1(x)\r\n x = self.activation(x)\r\n x = self.linear2(x)\r\n x = self.softmax(x)\r\n return x\r\n \r\nmodel = TinyModel().cuda()\r\nbatch = torch.rand(64,100).cuda()\r\n_ = model(batch)\r\nmodel, batch = release_memory(model, batch)\r\nprint(\r\n f\"Memory allocated: {torch.cuda.memory_allocated()/1024**2}\\nMemory reserved: 
{torch.cuda.memory_reserved()/1024**2}\"\r\n)\r\n```\r\n\r\nIf you run this you will find that yet again, we have a similar leftover memory allocation.\r\n\r\nSo I'm not 100% convinced that this is a problem we can solve. \r\n\r\nIf you can release *those* memory allocations then we can work with that solution, but after extensive research it is impossible I have found to free up all of it entirely *after the model has been ran on an input*. Likely this is some intermediate activations that somehow are still able to be allocated and never be freed. \r\n\r\nNote: including `inference_mode/no_grad` and `model.eval()` did not change those end allocation results", "Official response from the torch team:\r\n\r\n> The memory used by the CUDA Context itself will still be there. So you won't be able to get the GPU back to 0 I'm afraid.\r\n\r\nSo, that 16/40 will always remain and there isn't anything we can do else aside from that", "> Official response from the torch team:\r\n> \r\n> > The memory used by the CUDA Context itself will still be there. So you won't be able to get the GPU back to 0 I'm afraid.\r\n> \r\n> So, that 16/40 will always remain and there isn't anything we can do else aside from that\r\n\r\nThank you for your response!\r\nI ran your code and obtained the same result. It's completely acceptable to have 16/40 of the memory remaining. However, as I mentioned, I can't release the memory occupied by the model itself after executing trainer.train(). To do so, I would need to customize the source code by deleting a specific line in `transformers/trainer.py` at line [1988](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/trainer.py#L1988C4-L1988C4) as as shown below: \r\n```\r\nif self.control.should_epoch_stop or self.control.should_training_stop:\r\n break\r\n```\r\nWith the inclusion of this line, I'm unable to release the memory occupied by the model:\r\n```\r\nmemory after release:\r\nmemory allocated: 223.77490234375\r\nmemory reserved: 288.0\r\n``` \r\nAfter deleting this line I got the expected behavior:\r\n```\r\nmemory after release:\r\nmemory allocated: 16.25\r\nmemory reserved: 40\r\n``` \r\n@muellerzr Could you please provide any suggestions or make changes to `trainer.py` if this is indeed a bug? It will be much appreciated.\r\n", "You need to fully remove the model off CUDA, yes", "> You need to fully remove the model off CUDA, yes\r\n\r\nDo you mean using `model.cpu()` transition the model from CUDA to CPU? While this did free up more memory, but it wasn't sufficient. When I scaled the model to the XXL version and then applied `model.cpu()` along with `release_memory`, it still has about 8000 memory allocated remaining.\r\n\r\nHowever, when I delete line [1988](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/trainer.py#L1988C4-L1988C4) in `trainer.py `and then call `release_memory`, it always has only 16.25 memory allocated remaining no matter the size of model.\r\n\r\nHow can I fully remove the model off CUDA? Could you please recheck this? Thank you! @muellerzr ", "@hanrui4248 I was successful after the following:\r\n\r\n```python\r\n...\r\ntrainer.train()\r\ndel model, trainer\r\ngc.collect()\r\ntorch.cuda.empty_cache()\r\n```\r\nThis got me to the tiny amount of memory allocated after. 
My full script:\r\n\r\n```python\r\nimport gc\r\nimport torch\r\n\r\nfrom datasets import load_dataset\r\nfrom transformers import AutoTokenizer\r\nfrom transformers import DataCollatorWithPadding\r\n\r\nfrom transformers import AutoModelForSequenceClassification, TrainingArguments, Trainer\r\n\r\ndef memory_stats():\r\n return f\"Memory allocated: {torch.cuda.memory_allocated()/1024**2}\\nMemory reserved: {torch.cuda.memory_reserved()/1024**2}\"\r\n\r\nimdb = load_dataset(\"imdb\")\r\ntokenizer = AutoTokenizer.from_pretrained(\"distilbert-base-uncased\")\r\n\r\n\r\ndef preprocess_function(examples):\r\n return tokenizer(examples[\"text\"], truncation=True)\r\n\r\ntokenized_imdb = imdb.map(preprocess_function, batched=True)\r\ndata_collator = DataCollatorWithPadding(tokenizer=tokenizer)\r\n\r\n\r\nid2label = {0: \"NEGATIVE\", 1: \"POSITIVE\"}\r\nlabel2id = {\"NEGATIVE\": 0, \"POSITIVE\": 1}\r\n\r\n\r\nmodel = AutoModelForSequenceClassification.from_pretrained(\r\n \"distilbert-base-uncased\", num_labels=2, id2label=id2label, label2id=label2id\r\n)\r\n\r\nprint('Model memory:',memory_stats())\r\n\r\ntraining_args = TrainingArguments(\r\n output_dir=\"my_awesome_model\",\r\n learning_rate=2e-5,\r\n per_device_train_batch_size=16,\r\n gradient_accumulation_steps=1,\r\n max_steps=10,\r\n weight_decay=0.01,\r\n save_strategy=\"no\",\r\n)\r\n\r\ntrainer = Trainer(\r\n model=model,\r\n args=training_args,\r\n train_dataset=tokenized_imdb[\"train\"],\r\n tokenizer=tokenizer,\r\n data_collator=data_collator,\r\n)\r\n\r\ntrainer.train()\r\n\r\nprint('\\nMemory stats before release:',memory_stats())\r\n\r\ndel trainer\r\ndel model\r\ngc.collect()\r\ntorch.cuda.empty_cache()\r\n\r\nprint('\\nMemory stats after release:',memory_stats())\r\n```\r\n\r\nPrint statements:\r\n```\r\nMemory stats before release: Memory allocated: 786.18212890625\r\nMemory reserved: 6290.0\r\n```\r\n```\r\nMemory stats after release: Memory allocated: 17.13671875\r\nMemory reserved: 44.0\r\n```\r\n", "> @hanrui4248 I was successful after the following:\r\n> \r\n> ```python\r\n> ...\r\n> trainer.train()\r\n> del model, trainer\r\n> gc.collect()\r\n> torch.cuda.empty_cache()\r\n> ```\r\n\r\nThank you!\r\n\r\nBut it didn't work with my script. Could this be because I used peft and lora in it? I think I've tried every possible way to release the memory, but I still can't free up the entire model's memory. After lot of tries, here are my conclusions:\r\n1.only use del and `gc.collect()` doesn't work\r\n2.Combining del and `gc.collect()` with `model.cpu()` can release more memory, but a significant amount of memory still remains .\r\n3.By deleting line [1988](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/trainer.py#L1988C4-L1988C4) in `trainer.py ` and then do the same memory release operation in 1. can completely remove the model from CUDA.\r\n\r\nHere is my script. 
Could you confirm my conclusion by executing it?@muellerzr Thank you!\r\n```\r\nimport torch\r\nimport gc\r\nfrom functools import partial\r\nfrom datasets import Dataset, DatasetDict, IterableDataset, IterableDatasetDict, load_dataset\r\nfrom datasets.formatting.formatting import LazyBatch\r\nfrom transformers import (\r\n AutoModelForSeq2SeqLM, \r\n AutoTokenizer, \r\n DataCollatorForSeq2Seq, \r\n Seq2SeqTrainingArguments, \r\n Seq2SeqTrainer, \r\n PreTrainedTokenizer, \r\n PreTrainedTokenizerFast\r\n)\r\nfrom peft import prepare_model_for_int8_training, LoraConfig, get_peft_model\r\nfrom metrics import compute_metrics\r\n\r\ndef memory_stats():\r\n print(\"memory allocated: \", torch.cuda.memory_allocated()/1024**2)\r\n print(\"memory reserved: \", torch.cuda.memory_reserved()/1024**2)\r\n\r\ndef get_processed_ordalie_dataset(\r\n tokenizer: PreTrainedTokenizer | PreTrainedTokenizerFast,\r\n max_length: int,\r\n seed: int,\r\n) -> DatasetDict | Dataset | IterableDatasetDict | IterableDataset:\r\n # load dataset\r\n dataset = load_dataset(\"OrdalieTech/baby-ordalie\")\r\n # since this dataset doesn't have validation split, create it manually.\r\n test_val_split = dataset[\"train\"].train_test_split(test_size=len(dataset[\"test\"]), seed=seed)\r\n dataset[\"train\"] = test_val_split[\"train\"]\r\n dataset[\"validation\"] = test_val_split[\"test\"]\r\n\r\n # Process data\r\n def process_data_to_model_inputs(examples: LazyBatch) -> LazyBatch:\r\n model_inputs = tokenizer(\r\n examples[\"input\"],\r\n max_length=max_length,\r\n truncation=True,\r\n )\r\n labels = tokenizer(examples[\"output\"])\r\n model_inputs[\"labels\"] = labels[\"input_ids\"]\r\n return model_inputs\r\n\r\n tokenized_datasets = dataset.map(process_data_to_model_inputs, batched=True)\r\n tokenized_datasets.set_format(type=\"torch\", columns=[\"input_ids\", \"attention_mask\", \"labels\"])\r\n\r\n # Remove unnecessary columns\r\n tokenized_datasets = tokenized_datasets.remove_columns(dataset[\"train\"].column_names)\r\n\r\n return tokenized_datasets\r\n\r\n\r\nmodel_name = \"google/flan-t5-small\"\r\n\r\nmodel = AutoModelForSeq2SeqLM.from_pretrained(model_name, load_in_8bit=True)\r\n\r\nprint(\"model's memory:\")\r\nmemory_stats()\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(model_name)\r\nlora_config = LoraConfig(\r\n r=16, lora_alpha=32, target_modules=[\"q\", \"v\"], lora_dropout=0.05, bias=\"none\", task_type=\"SEQ_2_SEQ_LM\"\r\n)\r\n\r\nmodel = prepare_model_for_int8_training(model)\r\n\r\nmodel = get_peft_model(model, lora_config)\r\n\r\nargs = Seq2SeqTrainingArguments(\r\n \"temp\", \r\n evaluation_strategy=\"epoch\",\r\n learning_rate=5.6e-5,\r\n gradient_accumulation_steps=12,\r\n per_device_train_batch_size=64,\r\n per_device_eval_batch_size=64,\r\n num_train_epochs=1,\r\n save_strategy = \"no\",\r\n predict_with_generate=True, \r\n)\r\n\r\ndata_collator = DataCollatorForSeq2Seq(tokenizer, model=model)\r\n\r\ndataset = get_processed_ordalie_dataset(\r\n tokenizer,\r\n 512,\r\n 42,\r\n )\r\n\r\ntrainer = Seq2SeqTrainer(\r\n model,\r\n args,\r\n train_dataset=dataset[\"train\"],\r\n eval_dataset=dataset[\"validation\"],\r\n data_collator=data_collator,\r\n tokenizer=tokenizer,\r\n compute_metrics=partial(compute_metrics, tokenizer=tokenizer),\r\n)\r\n\r\n\r\ntrainer.train()\r\n\r\nprint(\"memory before release:\")\r\nmemory_stats()\r\n\r\ndel trainer\r\ndel model\r\ngc.collect()\r\ntorch.cuda.empty_cache()\r\nprint(\"memory after release:\")\r\n\r\nmemory_stats()\r\n```\r\n\r\n\r\n", "In that case the issue 
stems from peft, so I'd recommend migrating/opening this issue to there as I'm not sure what it could be :) ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,696
1,699
1,699
NONE
null
### System Info - torch==2.0.1 - transformers==4.31.0 - peft==0.4.0 - accelerate==0.20.3 - bitsandbytes==0.41.1 - gpu : Quadro RTX 8000 ### Who can help? @muellerzr @pacman1 @ArthurZucker @younesbelkada ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I'm using lora and flan-t5 small model for summarization task, and I want to release memory occupied by model. However, it didn't work even though I tried using `del model` and `gc.collect()`, following is my code: ``` import torch import gc from functools import partial from peft import prepare_model_for_int8_training from peft import LoraConfig, get_peft_model from transformers import Seq2SeqTrainingArguments from transformers import Seq2SeqTrainer from transformers import DataCollatorForSeq2Seq from transformers import AutoModelForSeq2SeqLM, AutoTokenizer from metrics import compute_metrics from ordalie_dataset import get_processed_ordalie_dataset def memory_stats(): print("memory allocated: ", torch.cuda.memory_allocated()/1024**2) print("memory reserved: ", torch.cuda.memory_reserved()/1024**2) model_name = "google/flan-t5-small" model = AutoModelForSeq2SeqLM.from_pretrained(model_name, load_in_8bit=True) print("model's memory:") memory_stats() tokenizer = AutoTokenizer.from_pretrained(model_name) lora_config = LoraConfig( r=16, lora_alpha=32, target_modules=["q", "v"], lora_dropout=0.05, bias="none", task_type="SEQ_2_SEQ_LM" ) model = prepare_model_for_int8_training(model) model = get_peft_model(model, lora_config) args = Seq2SeqTrainingArguments( "temp", evaluation_strategy="epoch", learning_rate=5.6e-5, gradient_accumulation_steps=12, per_device_train_batch_size=64, per_device_eval_batch_size=64, num_train_epochs=1, save_strategy = "no", predict_with_generate=True, ) data_collator = DataCollatorForSeq2Seq(tokenizer, model=model) dataset = get_processed_ordalie_dataset( tokenizer, 512, 42, ) trainer = Seq2SeqTrainer( model, args, train_dataset=dataset["train"], eval_dataset=dataset["validation"], data_collator=data_collator, tokenizer=tokenizer, compute_metrics=partial(compute_metrics, tokenizer=tokenizer), ) trainer.train() print("memory before release:") memory_stats() del model del data_collator del trainer gc.collect() torch.cuda.empty_cache() print("memory after release:") memory_stats() ``` Output: ``` model's memory: memory allocated: 130.3193359375 memory reserved: 136.0 memory before release: memory allocated: 244.77490234375 memory reserved: 18390.0 memory after release: memory allocated: 223.77490234375 memory reserved: 288.0 ``` output of nvidia-smi after training looks like: ``` Tue Oct 3 12:16:25 2023 +---------------------------------------------------------------------------------------+ | NVIDIA-SMI 535.86.10 Driver Version: 535.86.10 CUDA Version: 12.2 | |-----------------------------------------+----------------------+----------------------+ | GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. | | | | MIG M. 
| |=========================================+======================+======================| | 0 Quadro RTX 8000 On | 00000000:14:00.0 Off | Off* | | 34% 25C P8 16W / 260W | 497MiB / 49152MiB | 0% Default | | | | N/A | +-----------------------------------------+----------------------+----------------------+ +---------------------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=======================================================================================| | 0 N/A N/A 15268 C ...i.huang/.conda/envs/llms/bin/python 494MiB | +---------------------------------------------------------------------------------------+ ``` ### Expected behavior The output after release should be 0 for both allocated and reserved memory. I also tried move the release memory operations before the `trainer.train() `, as shown below: ``` trainer = Seq2SeqTrainer( model, args, train_dataset=dataset["train"], eval_dataset=dataset["validation"], data_collator=data_collator, tokenizer=tokenizer, compute_metrics=partial(compute_metrics, tokenizer=tokenizer), ) del model del data_collator del trainer print("memory after release:") memory_stats() trainer.train() ``` The memory is successfully released after I made this change: ``` memory after release: memory allocated: 0.0 memory reserved: 0.0 ``` This leads me to suspect that there might be some internal references to the model within trainer.train(). So I delved into the source code of trainer.train() and, by copying the entire original method and removing certain lines, I identified potential places that could cause memory leakage: 1. ` transformers/trainer.py` line [1988](https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L1988) ``` if self.control.should_epoch_stop or self.control.should_training_stop: break ``` 2. ` transformers/trainer.py` line [1891](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/trainer.py#L1891C17-L1891C21) ``` with self.accelerator.accumulate(model): tr_loss_step = self.training_step(model, inputs) ``` After removing only place 1, and placing the memory release operations after `trainer.train()`, the output is: ``` memory after release: memory allocated: 16.25 memory reserved: 40 ``` After removing both place 1 and place 2, and placing the memory release operations after `trainer.train()`, the output is:: ``` memory after release: memory allocated: 0.0 memory reserved: 0.0 ``` I'm trying to understand what's causing this behavior, but it seems so magical. Is this a bug? How can I fully release the memory? I need to instantiate and train the model multiple times, but I don't have enough memory to instantiate the XXL model twice (without releasing memory in between).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26571/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26571/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/26570
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26570/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26570/comments
https://api.github.com/repos/huggingface/transformers/issues/26570/events
https://github.com/huggingface/transformers/pull/26570
1,924,563,689
PR_kwDOCUB6oc5b0Dvc
26,570
[`Tokenizer`] Fix slow and fast serialization
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26570). All of your documentation changes will be reflected on that endpoint.", "I ran into the error below\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \".../src/train_flash_attn_2.py\", line 11, in <module>\r\n train()\r\n File \".../src/train.py\", line 157, in train\r\n tokenizer = transformers.AutoTokenizer.from_pretrained(\r\n File \".../lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py\", line 751, in from_pretrained\r\n return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)\r\n File \".../lib/python3.10/site-packages/transformers/tokenization_utils_base.py\", line 2017, in from_pretrained\r\n return cls._from_pretrained(\r\n File \".../lib/python3.10/site-packages/transformers/tokenization_utils_base.py\", line 2243, in _from_pretrained\r\n init_kwargs[key] = added_tokens_map.get(init_kwargs[key], init_kwargs[key])\r\nTypeError: unhashable type: 'dict'\r\n```\r\n\r\nSo I added some prints and get this intermediate values:\r\n\r\n```\r\ncls.SPECIAL_TOKENS_ATTRIBUTES: (list)['bos_token', 'eos_token', 'unk_token', 'sep_token', 'pad_token', 'cls_token', 'mask_token', 'additional_special_tokens']\r\nadded_tokens_map: (dict){'<unk>': AddedToken(\"<unk>\", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True), '<s>': AddedToken(\"<s>\", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True), '</s>': AddedToken(\"</s>\", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True)}\r\ninit_kwargs: (dict){'add_bos_token': True, 'add_eos_token': False, 'bos_token': {'__type': 'AddedToken', 'content': '<s>', 'lstrip': False, 'normalized': True, 'rstrip': False, 'single_word': False}, 'clean_up_tokenization_spaces': False, 'eos_token': {'__type': 'AddedToken', 'content': '</s>', 'lstrip': False, 'normalized': True, 'rstrip': False, 'single_word': False}, 'legacy': None, 'model_max_length': 1024, 'pad_token': None, 'sp_model_kwargs': {}, 'unk_token': {'__type': 'AddedToken', 'content': '<unk>', 'lstrip': False, 'normalized': True, 'rstrip': False, 'single_word': False}, 'vocab_file': '.../tokenizer.model', 'tokenizer_file': '.../tokenizer.json', 'name_or_path': '...'}\r\nkey: (streos_token\r\ninit_kwargs[key]: (dict){'__type': 'AddedToken', 'content': '</s>', 'lstrip': False, 'normalized': True, 'rstrip': False, 'single_word': False}\r\n```\r\n\r\nAccording to the output, I made a fix, which seemed to work out:\r\n\r\n```diff\r\n# Passing AddedTokens and not strings to the class to prevent it from casting the string to a different AddedToken\r\nfor key in cls.SPECIAL_TOKENS_ATTRIBUTES & init_kwargs.keys():\r\n if added_tokens_map != {} and init_kwargs[key] is not None:\r\n if key != \"additional_special_tokens\":\r\n # >>> debug\r\n def print_info(name, obj):\r\n print(f\"{name}: ({type(obj).__name__}){obj}\")\r\n print_info(\"cls.SPECIAL_TOKENS_ATTRIBUTES\", cls.SPECIAL_TOKENS_ATTRIBUTES)\r\n print_info(\"added_tokens_map\", added_tokens_map)\r\n print_info(\"init_kwargs\", init_kwargs)\r\n print_info(\"key\", key)\r\n print_info(\"init_kwargs[key]\", init_kwargs[key])\r\n # <<< debug\r\n- init_kwargs[key] = added_tokens_map.get(init_kwargs[key], init_kwargs[key])\r\n+ init_kwargs[key] = added_tokens_map.get(key, init_kwargs[key]) # fix\r\n```", "Could you share a reproducer? Would help me a lot as well! \r\n", "> Could you share a reproducer? 
Would help me a lot as well!\r\n\r\nSorry that I'm too busy to do so right now 😭\r\n\r\nBut this only happened when I loaded the tokenizer of [Llemma-7B](https://huggingface.co/EleutherAI/llemma_7b).\r\n\r\nI hope this description could help you reproduce the error." ]
1,696
1,698
1,697
COLLABORATOR
null
# What does this PR do? - sets the defaults for AddedToken instances where needed to match what is pushed to the hub - sets the default for AddedToken to not strip left and right to match the fast tokenizers - fixes the `added_tokens.json` file: a recent push made it save the entire added tokens encoder, but it should only save the indexes greater than the vocab size for forward compatibility. - fixes the list of `additional_special_tokens` that were added twice / overwritten - fixes `add_tokens`: if the added token is a string we check that it's not already in the added vocab instead of always defaulting to strip left or right. - fixes saving: the added_tokens_decoder should not add a `"__type": "AddedToken"` field to the added tokens, otherwise previous versions of transformers will try to load them. fixes #26732, fixes #26775, fixes #26773, fixes #26768, fixes #26859
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26570/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26570/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26570", "html_url": "https://github.com/huggingface/transformers/pull/26570", "diff_url": "https://github.com/huggingface/transformers/pull/26570.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26570.patch", "merged_at": 1697639454000 }
https://api.github.com/repos/huggingface/transformers/issues/26569
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26569/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26569/comments
https://api.github.com/repos/huggingface/transformers/issues/26569/events
https://github.com/huggingface/transformers/issues/26569
1,924,555,866
I_kwDOCUB6oc5ytmRa
26,569
text-generation results inconsistent depending on padding side
{ "login": "fxmarty", "id": 9808326, "node_id": "MDQ6VXNlcjk4MDgzMjY=", "avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fxmarty", "html_url": "https://github.com/fxmarty", "followers_url": "https://api.github.com/users/fxmarty/followers", "following_url": "https://api.github.com/users/fxmarty/following{/other_user}", "gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}", "starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions", "organizations_url": "https://api.github.com/users/fxmarty/orgs", "repos_url": "https://api.github.com/users/fxmarty/repos", "events_url": "https://api.github.com/users/fxmarty/events{/privacy}", "received_events_url": "https://api.github.com/users/fxmarty/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks for the report! Feel free to investigate @fxmarty, pinging @gante for when he's back from leave", "This is also a duplicate of #25921, and #26380! ", "@fxmarty The results are supposed to be different on all `generate`-compatible, decoder-only models :D \r\n\r\nIn the attention layer, the token that is used for the `query` to generate the next token is different -- `<s>` for right-padding and `!` for left-padding (using the original text example, a padded `Hi there!`). Different `query` -> different attention values -> different logits -> very likely different tokens.\r\n\r\nLlama 2 kinda works with right-padding, the results are sensible. Most models throw gibberish if left-padding is not used at generation time. Left-padding is our default recommendation for `generate`", "@gante Thank you! Not sure to get it. With left padding for `Hi there!`:\r\n\r\n```\r\n{\r\n 'input_ids': tensor([\r\n [1, 20628, 306, 626, 297, 3681, 322],\r\n [2, 2, 2, 1, 6324, 727, 29991]\r\n ]),\r\n 'attention_mask': tensor([\r\n [1, 1, 1, 1, 1, 1, 1],\r\n [0, 0, 0, 1, 1, 1, 1]\r\n ])\r\n}\r\n```\r\n\r\nWith right padding:\r\n\r\n```\r\n{\r\n 'input_ids': tensor([\r\n [1, 20628, 306, 626, 297, 3681, 322],\r\n [1, 6324, 727, 29991, 2, 2, 2]\r\n ]),\r\n 'attention_mask': tensor([\r\n [1, 1, 1, 1, 1, 1, 1],\r\n [1, 1, 1, 1, 0, 0, 0]\r\n ])\r\n}\r\n```\r\n\r\nWe can see that the attended tokens are identical.", "@fxmarty the attended tokens are equal according to the `attention_mask`, yes. \r\n\r\nHowever, what gets masked is the matmul product between the `query` (from the last token) and the `key` (from all tokens in the sequence) -- [here is the corresponding llama code](https://github.com/huggingface/transformers/blob/e893b1efbbce80b7eaaf24f9e0134450820782b5/src/transformers/models/llama/modeling_llama.py#L384). In other words, what gets masked is `query * key` in the positions where the attention mask is `0`.\r\n\r\nThis means that, despite `attention_mask` masking the position corresponding to the same tokens, we will observe different results depending on the padding side, as the `query` will be different in the two cases. The model is trained without padding halfway through a sentence, and this is why left-padding produces no distribution shift, and thus better generation results :)\r\n\r\n(Note: this is not a Llama-only property, it's a general property of decoder-only LLMs. We have it as one of the most common pitfalls when using LLMs in our [docs](https://huggingface.co/docs/transformers/llm_tutorial#wrong-padding-side).)", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,696
1,700
1,700
COLLABORATOR
null
### System Info transformers main (but same issue on transformers 4.33.3) ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Hi, this issue is probably a duplicate but at least here's a simple reproduction. ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch model_id = "meta-llama/Llama-2-7b-chat-hf" tokenizer = AutoTokenizer.from_pretrained(model_id) with torch.device("cuda"): model = AutoModelForCausalLM.from_pretrained(model_id) tokenizer.pad_token_id = tokenizer.eos_token_id tokenizer.padding_side = "right" inp = tokenizer(["Today I am in Paris and", "Hi there!"], padding=True, return_tensors="pt").to("cuda") res = model.generate(**inp, num_beams=1, do_sample=False, min_new_tokens=10, max_new_tokens=10) print(tokenizer.batch_decode(res)) ``` With `tokenizer.padding_side = "right"`: `['<s> Today I am in Paris and I am feeling very grateful for this opportunity to explore', "<s> Hi there!</s></s></s><s>\n\nI'm a software engineer with"]` With `tokenizer.padding_side = "left"`: `['<s> Today I am in Paris and I am feeling very grateful for this opportunity to explore', "</s></s></s><s> Hi there! I'm a software engineer with a passion for"]` You can see that `\n\n` is generated with padding side right, but not with padding side left. The same issue exist for GPTJ, so probably for many models: ``` ['Today I am in Paris and I am going to share with you my experience of', 'Hi there!<|endoftext|><|endoftext|><|endoftext|>Q: I have a question about the new version'] ['Today I am in Paris and I am going to share with you my experience of', "<|endoftext|><|endoftext|><|endoftext|>Hi there! I'm a newbie to the forum and I"] ``` ### Expected behavior No different tokens generated
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26569/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/transformers/issues/26569/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/26568
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26568/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26568/comments
https://api.github.com/repos/huggingface/transformers/issues/26568/events
https://github.com/huggingface/transformers/pull/26568
1,924,510,056
PR_kwDOCUB6oc5bz4R0
26,568
#26566 swin2 sr allow in out channels
{ "login": "marvingabler", "id": 51857438, "node_id": "MDQ6VXNlcjUxODU3NDM4", "avatar_url": "https://avatars.githubusercontent.com/u/51857438?v=4", "gravatar_id": "", "url": "https://api.github.com/users/marvingabler", "html_url": "https://github.com/marvingabler", "followers_url": "https://api.github.com/users/marvingabler/followers", "following_url": "https://api.github.com/users/marvingabler/following{/other_user}", "gists_url": "https://api.github.com/users/marvingabler/gists{/gist_id}", "starred_url": "https://api.github.com/users/marvingabler/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/marvingabler/subscriptions", "organizations_url": "https://api.github.com/users/marvingabler/orgs", "repos_url": "https://api.github.com/users/marvingabler/repos", "events_url": "https://api.github.com/users/marvingabler/events{/privacy}", "received_events_url": "https://api.github.com/users/marvingabler/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Good point, yes lets do that! Let me update the PR soon :)", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26568). All of your documentation changes will be reflected on that endpoint.", "Just realized that there are a couple of more changes required, as the `Swin2SRForImageSuperResolution` denormalizes based on the input images, while for the case of mapping from multiband images to single band, the mean&stds of inputs and outputs differ. Will add the changes soon." ]
1,696
1,696
1,696
CONTRIBUTOR
null
# What does this PR do? This PR adds the feature of accepting an arbitrary number of input and output channels when using the Swin2SR model. This allows performing super resolution from greyscale (1 channel) to color (RGB), or from low-resolution multi-band satellite imagery to high-resolution RGB satellite imagery. All examples and pretrained models are running as expected based on my tests. No new dependencies have been added. Just use it like ```python from transformers import Swin2SRForImageSuperResolution, Swin2SRConfig import torch config = Swin2SRConfig( num_channels_in=1, num_channels_out=3 ) model = Swin2SRForImageSuperResolution(config) with torch.no_grad(): # or use the image preprocessor per default out = model(pixel_values=torch.randn((1, 1, 264, 264))) ``` Fixes #26566. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Yes [here](https://github.com/huggingface/transformers/issues/26566) - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? No, tests were there already. ## Tagging the reviewers - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26568/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26568/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26568", "html_url": "https://github.com/huggingface/transformers/pull/26568", "diff_url": "https://github.com/huggingface/transformers/pull/26568.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26568.patch", "merged_at": 1696512039000 }
https://api.github.com/repos/huggingface/transformers/issues/26567
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26567/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26567/comments
https://api.github.com/repos/huggingface/transformers/issues/26567/events
https://github.com/huggingface/transformers/issues/26567
1,924,397,304
I_kwDOCUB6oc5ys_j4
26,567
V4.34 tokenizer incompatible with mistral
{ "login": "edmondja", "id": 11833428, "node_id": "MDQ6VXNlcjExODMzNDI4", "avatar_url": "https://avatars.githubusercontent.com/u/11833428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/edmondja", "html_url": "https://github.com/edmondja", "followers_url": "https://api.github.com/users/edmondja/followers", "following_url": "https://api.github.com/users/edmondja/following{/other_user}", "gists_url": "https://api.github.com/users/edmondja/gists{/gist_id}", "starred_url": "https://api.github.com/users/edmondja/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/edmondja/subscriptions", "organizations_url": "https://api.github.com/users/edmondja/orgs", "repos_url": "https://api.github.com/users/edmondja/repos", "events_url": "https://api.github.com/users/edmondja/events{/privacy}", "received_events_url": "https://api.github.com/users/edmondja/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @edmondja, if you run the following command, do you have an issue?\r\n\r\n```\r\npip install transformers==4.34.0 tokenizers==0.14.0\r\n```\r\n", "`transformers` version v4.34.0 brings a lot of significant improvements for tokenizers, but therefore requires tokenizers >= 0.14.0 ", "> Hey @edmondja, if you run the following command, do you have an issue?\r\n> \r\n> ```\r\n> pip install transformers==4.34.0 tokenizers==0.14.0\r\n> ```\r\n\r\n\r\nYes this did the job thanks\r\n\r\n", "Great :raised_hands:" ]
1,696
1,696
1,696
NONE
null
Hello, I dont know how much my problem is related to https://github.com/huggingface/transformers/issues/26455 but when I install the last version of transformers I have the correct version of tokenizers disappearing, like they can not coexist : ` Requirement already satisfied: transformers in c:\users\ext.edmond_jacoupeau\anaconda3\envs\py39\lib\site-packages (4.33.3) Collecting transformers Using cached transformers-4.34.0-py3-none-any.whl (7.7 MB) Requirement already satisfied: safetensors>=0.3.1 in c:\users\ext.edmond_jacoupeau\anaconda3\envs\py39\lib\site-packages (from transformers) (0.3.1) Requirement already satisfied: regex!=2019.12.17 in c:\users\ext.edmond_jacoupeau\anaconda3\envs\py39\lib\site-packages (from transformers) (2022.10.31) Requirement already satisfied: huggingface-hub<1.0,>=0.16.4 in c:\users\ext.edmond_jacoupeau\anaconda3\envs\py39\lib\site-packages (from transformers) (0.16.4) Requirement already satisfied: pyyaml>=5.1 in c:\users\ext.edmond_jacoupeau\anaconda3\envs\py39\lib\site-packages (from transformers) (6.0) Requirement already satisfied: tqdm>=4.27 in c:\users\ext.edmond_jacoupeau\anaconda3\envs\py39\lib\site-packages (from transformers) (4.64.1) Requirement already satisfied: filelock in c:\users\ext.edmond_jacoupeau\anaconda3\envs\py39\lib\site-packages (from transformers) (3.9.0) Requirement already satisfied: requests in c:\users\ext.edmond_jacoupeau\anaconda3\envs\py39\lib\site-packages (from transformers) (2.27.1) Requirement already satisfied: packaging>=20.0 in c:\users\ext.edmond_jacoupeau\anaconda3\envs\py39\lib\site-packages (from transformers) (23.1) Requirement already satisfied: numpy>=1.17 in c:\users\ext.edmond_jacoupeau\anaconda3\envs\py39\lib\site-packages (from transformers) (1.26.0) Collecting tokenizers<0.15,>=0.14 Using cached tokenizers-0.14.0-cp39-none-win_amd64.whl (2.2 MB) Requirement already satisfied: typing-extensions>=3.7.4.3 in c:\users\ext.edmond_jacoupeau\anaconda3\envs\py39\lib\site-packages (from huggingface-hub<1.0,>=0.16.4->transformers) (4.4.0) Requirement already satisfied: fsspec in c:\users\ext.edmond_jacoupeau\anaconda3\envs\py39\lib\site-packages (from huggingface-hub<1.0,>=0.16.4->transformers) (2022.11.0) Requirement already satisfied: colorama in c:\users\ext.edmond_jacoupeau\anaconda3\envs\py39\lib\site-packages (from tqdm>=4.27->transformers) (0.4.5) Requirement already satisfied: idna<4,>=2.5 in c:\users\ext.edmond_jacoupeau\anaconda3\envs\py39\lib\site-packages (from requests->transformers) (2.10) Requirement already satisfied: urllib3<1.27,>=1.21.1 in c:\users\ext.edmond_jacoupeau\anaconda3\envs\py39\lib\site-packages (from requests->transformers) (1.26.12) Requirement already satisfied: charset-normalizer~=2.0.0 in c:\users\ext.edmond_jacoupeau\anaconda3\envs\py39\lib\site-packages (from requests->transformers) (2.0.4) Requirement already satisfied: certifi>=2017.4.17 in c:\users\ext.edmond_jacoupeau\anaconda3\envs\py39\lib\site-packages (from requests->transformers) (2022.12.7) Installing collected packages: tokenizers, transformers Attempting uninstall: tokenizers Found existing installation: tokenizers 0.13.4rc2 Uninstalling tokenizers-0.13.4rc2: Successfully uninstalled tokenizers-0.13.4rc2 WARNING: Ignoring invalid distribution -rotobuf (c:\users\ext.edmond_jacoupeau\anaconda3\envs\py39\lib\site-packages) WARNING: Ignoring invalid distribution -rotobuf (c:\users\ext.edmond_jacoupeau\anaconda3\envs\py39\lib\site-packages) WARNING: Ignoring invalid distribution -rotobuf 
(c:\users\ext.edmond_jacoupeau\anaconda3\envs\py39\lib\site-packages) WARNING: Ignoring invalid distribution -rotobuf (c:\users\ext.edmond_jacoupeau\anaconda3\envs\py39\lib\site-packages) ERROR: Could not install packages due to an OSError: [WinError 5] Accès refusé: 'C:\\Users\\Ext.Edmond_Jacoupeau\\Anaconda3\\envs\\py39\\Lib\\site-packages\\~%kenizers\\tokenizers.cp39-win_amd64.pyd' Consider using the `--user` option or check the permissions. WARNING: Ignoring invalid distribution -rotobuf (c:\users\ext.edmond_jacoupeau\anaconda3\envs\py39\lib\site-packages) WARNING: Ignoring invalid distribution -rotobuf (c:\users\ext.edmond_jacoupeau\anaconda3\envs\py39\lib\site-packages) WARNING: Ignoring invalid distribution -rotobuf (c:\users\ext.edmond_jacoupeau\anaconda3\envs\py39\lib\site-packages) ` ### Reproduction install tokenizers 0.13.4rc2, then install transformers 4.34.0 ### Expected behavior tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1") should work
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26567/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26567/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/26566
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26566/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26566/comments
https://api.github.com/repos/huggingface/transformers/issues/26566/events
https://github.com/huggingface/transformers/issues/26566
1,924,280,297
I_kwDOCUB6oc5ysi_p
26,566
SWIN2SR: Allow to choose number of in_channels and out_channels
{ "login": "marvingabler", "id": 51857438, "node_id": "MDQ6VXNlcjUxODU3NDM4", "avatar_url": "https://avatars.githubusercontent.com/u/51857438?v=4", "gravatar_id": "", "url": "https://api.github.com/users/marvingabler", "html_url": "https://github.com/marvingabler", "followers_url": "https://api.github.com/users/marvingabler/followers", "following_url": "https://api.github.com/users/marvingabler/following{/other_user}", "gists_url": "https://api.github.com/users/marvingabler/gists{/gist_id}", "starred_url": "https://api.github.com/users/marvingabler/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/marvingabler/subscriptions", "organizations_url": "https://api.github.com/users/marvingabler/orgs", "repos_url": "https://api.github.com/users/marvingabler/repos", "events_url": "https://api.github.com/users/marvingabler/events{/privacy}", "received_events_url": "https://api.github.com/users/marvingabler/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,696
1,696
1,696
CONTRIBUTOR
null
### Feature request I'd like to be able to specify a different number of output and input channels for the Swin2sr superresolution model. The current [SWIN2SR](https://github.com/huggingface/transformers/blob/v4.33.3/src/transformers/models/swin2sr/modeling_swin2sr.py) implementation expects input and output images to have the same amount of channels (rgb). It's currently not possible to specify num_channels_in and num_channels_out in the model config. I propose to make in_channels = out_channels as default as most people will require this, but to give the user the possibility to specify a different number of out channels if required. There are some changes in the model logic required. After implementing the feature, the config constructor should change from ```python ### [...] def __init__( self, image_size=64, patch_size=1, num_channels=3, embed_dim=180, depths=[6, 6, 6, 6, 6, 6], num_heads=[6, 6, 6, 6, 6, 6], window_size=8, mlp_ratio=2.0, qkv_bias=True, hidden_dropout_prob=0.0, attention_probs_dropout_prob=0.0, drop_path_rate=0.1, hidden_act="gelu", use_absolute_embeddings=False, initializer_range=0.02, layer_norm_eps=1e-5, upscale=2, img_range=1.0, resi_connection="1conv", upsampler="pixelshuffle", **kwargs, ): super().__init__(**kwargs) self.image_size = image_size self.patch_size = patch_size self.num_channels = num_channels self.embed_dim = embed_dim self.depths = depths self.num_layers = len(depths) self.num_heads = num_heads self.window_size = window_size self.mlp_ratio = mlp_ratio self.qkv_bias = qkv_bias self.hidden_dropout_prob = hidden_dropout_prob self.attention_probs_dropout_prob = attention_probs_dropout_prob self.drop_path_rate = drop_path_rate self.hidden_act = hidden_act self.use_absolute_embeddings = use_absolute_embeddings self.layer_norm_eps = layer_norm_eps self.initializer_range = initializer_range self.upscale = upscale self.img_range = img_range self.resi_connection = resi_connection self.upsampler = upsampler ``` to something like ```python ```python ### [...] def __init__( self, image_size=64, patch_size=1, num_channels_in=3, num_channels_out=3, embed_dim=180, depths=[6, 6, 6, 6, 6, 6], num_heads=[6, 6, 6, 6, 6, 6], window_size=8, mlp_ratio=2.0, qkv_bias=True, hidden_dropout_prob=0.0, attention_probs_dropout_prob=0.0, drop_path_rate=0.1, hidden_act="gelu", use_absolute_embeddings=False, initializer_range=0.02, layer_norm_eps=1e-5, upscale=2, img_range=1.0, resi_connection="1conv", upsampler="pixelshuffle", **kwargs, ): super().__init__(**kwargs) self.image_size = image_size self.patch_size = patch_size self.num_channels_in = num_channels_in self.num_channels_out= num_channels_out self.embed_dim = embed_dim self.depths = depths self.num_layers = len(depths) self.num_heads = num_heads self.window_size = window_size self.mlp_ratio = mlp_ratio self.qkv_bias = qkv_bias self.hidden_dropout_prob = hidden_dropout_prob self.attention_probs_dropout_prob = attention_probs_dropout_prob self.drop_path_rate = drop_path_rate self.hidden_act = hidden_act self.use_absolute_embeddings = use_absolute_embeddings self.layer_norm_eps = layer_norm_eps self.initializer_range = initializer_range self.upscale = upscale self.img_range = img_range self.resi_connection = resi_connection self.upsampler = upsampler ``` ### Motivation Having in=out in channels is totally fine when working with classical images. However when dealing with super resolution tasks in the context of earth observations, you often want to have different amounts of input and output channels, e.g. 
when performing super resolution from low res multi band satellite images to high res rgb band visible satellite. Other use cases I see is e.g. to predict from low res grayscale to high res colorscale. ### Your contribution Happy to submit a PR for this one.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26566/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26566/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/26565
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26565/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26565/comments
https://api.github.com/repos/huggingface/transformers/issues/26565/events
https://github.com/huggingface/transformers/issues/26565
1,924,271,902
I_kwDOCUB6oc5ysg8e
26,565
Why is assisted decoding slower than the paper-reported performance?
{ "login": "cliangyu", "id": 45140242, "node_id": "MDQ6VXNlcjQ1MTQwMjQy", "avatar_url": "https://avatars.githubusercontent.com/u/45140242?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cliangyu", "html_url": "https://github.com/cliangyu", "followers_url": "https://api.github.com/users/cliangyu/followers", "following_url": "https://api.github.com/users/cliangyu/following{/other_user}", "gists_url": "https://api.github.com/users/cliangyu/gists{/gist_id}", "starred_url": "https://api.github.com/users/cliangyu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cliangyu/subscriptions", "organizations_url": "https://api.github.com/users/cliangyu/orgs", "repos_url": "https://api.github.com/users/cliangyu/repos", "events_url": "https://api.github.com/users/cliangyu/events{/privacy}", "received_events_url": "https://api.github.com/users/cliangyu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I'm also confused about this, specially the following piece:\r\n\r\n```python\r\n# 4. Compare the argmax from the original model logits with the assistant forecasted tokens. We can keep\r\n# the assistant forecasted tokens until the first mismatch, or until the max length is reached.\r\ncandidate_new_tokens = candidate_input_ids[:, -candidate_length:]\r\nn_matches = ((~(candidate_new_tokens == selected_tokens[:, :-1])).cumsum(dim=-1) < 1).sum()\r\n```\r\n\r\nWhile it determines `n_matches` based on **the equality between tokens**, the speculative decoding paper determines it based on **the value of logits**.\r\n\r\nTo highlight the difference, we can imagine that we choose the original model itself as the assistant model, so the logits from original model and assistant model are always the same. In this case, The output from assistant model will always be accepted if we implement speculative decoding following the paper, and will be likely rejected because of the randomness in sampling if we implement it following the hf way.", "Hey @cliangyu @daquexian 👋 \r\n\r\nFirst of all, let me start the discussion by highlighting that assisted generation was originally designed without the knowledge of speculative decoding -- our internal development started way before speculative decoding got onto our radar :) The two techniques rest on the same principle, that GPU memory bandwidth is the bottleneck, and using a smaller model to generate candidate sequences can alleviate that. \r\n\r\nWe decided to follow this path because it was the simplest to bring the concept of a smaller model assisting a larger one to life. We also dynamically adapt the query size to the assistant model, where speculative decoding does not -- this is because we noticed high variability of the optimal speculation size across tasks. These are the two main differences between the methods. As a result, speculative decoding is superior for tasks with sampling (because of the better sampling strategy), assisted generation is better for greedy approaches (because of the dynamic size of the assistant query).\r\n\r\nBecause speculative decoding is indeed better with sampling (and thus with most LLM tasks), we want to bring it to our assisted generation + sampling code path. I'm out of bandwidth for the next weeks, but we will gladly accept contributions! 🤗 ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,696
1,701
1,701
NONE
null
### System Info - `transformers` version: 4.34.0.dev0 - Platform: Linux-6.2.0-26-generic-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.3 - Accelerate version: 0.22.0 - Accelerate config: - compute_environment: LOCAL_MACHINE - distributed_type: DEEPSPEED - mixed_precision: bf16 - use_cpu: False - debug: False - num_processes: 2 - machine_rank: 0 - num_machines: 1 - rdzv_backend: static - same_network: True - main_training_function: main - deepspeed_config: {'gradient_accumulation_steps': 1, 'offload_optimizer_device': 'cpu', 'offload_param_device': 'cpu', 'zero3_init_flag': False, 'zero_stage': 2} - downcast_bf16: no - tpu_use_cluster: False - tpu_use_sudo: False - tpu_env: [] - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @gante ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1.We are not sure why the speculative decoding boosting performance as reported in the [blog](https://huggingface.co/blog/assisted-generation) falls largely behind the reported values in the DeepMind and Google papers. 2. The HF implementation uses the argmax token of the large model probability. However, in the papers the authors sampled (instead of argmax). Why is it so? Because argmax is faster than sampling? ### Expected behavior The speculative decoding should be as fast as the reported in papers.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26565/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26565/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/26564
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26564/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26564/comments
https://api.github.com/repos/huggingface/transformers/issues/26564/events
https://github.com/huggingface/transformers/issues/26564
1,924,264,182
I_kwDOCUB6oc5ysfD2
26,564
Flax T5 model - code typo during AutoRegressive decoding?
{ "login": "giganttheo", "id": 71786646, "node_id": "MDQ6VXNlcjcxNzg2NjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/71786646?v=4", "gravatar_id": "", "url": "https://api.github.com/users/giganttheo", "html_url": "https://github.com/giganttheo", "followers_url": "https://api.github.com/users/giganttheo/followers", "following_url": "https://api.github.com/users/giganttheo/following{/other_user}", "gists_url": "https://api.github.com/users/giganttheo/gists{/gist_id}", "starred_url": "https://api.github.com/users/giganttheo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/giganttheo/subscriptions", "organizations_url": "https://api.github.com/users/giganttheo/orgs", "repos_url": "https://api.github.com/users/giganttheo/repos", "events_url": "https://api.github.com/users/giganttheo/events{/privacy}", "received_events_url": "https://api.github.com/users/giganttheo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @giganttheo - this indeed looks like a bug! The attention mask should be overridden, as in Flax GPT Neo: https://github.com/huggingface/transformers/blob/2aef9a96011133f6b399b598fd69cfeca936eb37/src/transformers/models/gpt_neo/modeling_flax_gpt_neo.py#L219\r\n\r\nWould you like to open a PR to correct this? We can check whether we get different outputs after the fix by running the slow tests: https://github.com/huggingface/transformers/blob/57f44dc4288a3521bd700405ad41e90a4687abc0/tests/models/t5/test_modeling_flax_t5.py#L761\r\n\r\nWhich you can do so with:\r\n```\r\nRUN_SLOW=1 pytest -sv tests/models/t5/test_modeling_flax_t5.py::FlaxT5ModelIntegrationTests\r\n```", "Thank you for your response, I ran the slow tests on google colab with a T4 GPU and the outputs give:\r\n\r\n```python\r\n=============================== warnings summary ===============================\r\n../../usr/local/lib/python3.10/dist-packages/_pytest/config/__init__.py:1373\r\n /usr/local/lib/python3.10/dist-packages/_pytest/config/__init__.py:1373: PytestConfigWarning: Unknown config option: doctest_glob\r\n \r\n self._warn_or_fail_if_strict(f\"Unknown config option: {key}\\n\")\r\n\r\ntests/models/t5/test_modeling_flax_t5.py::FlaxT5ModelIntegrationTests::test_small_generation\r\ntests/models/t5/test_modeling_flax_t5.py::FlaxT5ModelIntegrationTests::test_small_generation_bfloat16\r\n /content/transformers/src/transformers/generation/flax_utils.py:322: UserWarning: You have modified the pretrained model configuration to control generation. This is a deprecated strategy to control generation and will be removed soon, in a future version. Please use and modify the model generation configuration (see https://huggingface.co/docs/transformers/generation_strategies#default-text-generation-configuration )\r\n warnings.warn(\r\n\r\ntests/models/t5/test_modeling_flax_t5.py::FlaxT5ModelIntegrationTests::test_summarization\r\n /content/transformers/src/transformers/models/t5/tokenization_t5.py:238: FutureWarning: This tokenizer was incorrectly instantiated with a model max length of 512 which will be corrected in Transformers v5.\r\n For now, this behavior is kept to avoid breaking backwards compatibility when padding/encoding with `truncation is True`.\r\n - Be aware that you SHOULD NOT rely on t5-base automatically truncating your input to 512 when padding/encoding.\r\n - If you want to encode/pad to sequences longer than 512 you can either instantiate this tokenizer with `model_max_length` or pass `max_length` when encoding/padding.\r\n - To avoid this warning, please instantiate this tokenizer with `model_max_length` set to your preferred value.\r\n warnings.warn(\r\n\r\n-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html\r\n================== 6 passed, 4 warnings in 242.20s (0:04:02) ===================\r\n```\r\n\r\nI am not very familiar with the contribution process so I am not sure what the next steps are for this, I tried following the huggingface contribution guide (https://huggingface.co/transformers/v4.2.2/contributing.html ), do you have any other resources to learn the basics of contribution guides for open source projects and transformers? I might have other contributions to do in the near future :)", "That looks good - all the slow tests have passed 👍 We can see this from the last line on the print out:\r\n```python\r\n================== 6 passed, 4 warnings in 242.20s (0:04:02) ===================\r\n```\r\n\r\nThat means we get correct results with the changes. 
The next steps would be [opening a pull request (PR)](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request) with the correction. Feel free to tag me on the PR and I'll get you a review asap. Once reviewed, we can merge into `main`. I would mention on the PR that you made the fix and then subsequently ran the slow tests and they passed - this will inform the reviewer that the correct T5 behaviour is maintained" ]
1,696
1,696
1,696
CONTRIBUTOR
null
### System Info - `transformers` version: 4.33.3 - Platform: Linux-5.15.120+-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.17.3 - Safetensors version: 0.3.3 - Accelerate version: not installed - Accelerate config: not found - PyTorch version (GPU?): 2.0.1+cu118 (False) - Tensorflow version (GPU?): 2.13.0 (False) - Flax version (CPU?/GPU?/TPU?): 0.7.4 (cpu) - Jax version: 0.4.10 - JaxLib version: 0.4.10 - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? @sanchit-gandhi ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction The discussed file is transformers/src/transformers/models/t5/modeling_flax_t5.py at line 408 https://github.com/huggingface/transformers/blob/2aef9a96011133f6b399b598fd69cfeca936eb37/src/transformers/models/t5/modeling_flax_t5.py#L408C1-L409C1 ### Expected behavior Hi, During autoregressive decoding, keys and values are computed one token at a time and cache is used to recover the keys and values from previous calls. In the `_concatenate_to_cache` method of the Attention module, an attention mask is computed in order for the new query to only attend to the previous key positions and not the remaining zero elements. This is what is explained in the comments in this function. However, the new attention mask is not used afterwards, because its name is `attention_attention_mask` and not `attention_mask` which is the one being used in every other line. From my understanding, this is likely a typo and I am not sure how it changes the behavior of the model, if at all.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26564/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26564/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/26563
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26563/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26563/comments
https://api.github.com/repos/huggingface/transformers/issues/26563/events
https://github.com/huggingface/transformers/issues/26563
1,924,260,807
I_kwDOCUB6oc5ysePH
26,563
FIXED POSITION OF NAVBAR
{ "login": "kiranugale2o", "id": 141510294, "node_id": "U_kgDOCG9Glg", "avatar_url": "https://avatars.githubusercontent.com/u/141510294?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kiranugale2o", "html_url": "https://github.com/kiranugale2o", "followers_url": "https://api.github.com/users/kiranugale2o/followers", "following_url": "https://api.github.com/users/kiranugale2o/following{/other_user}", "gists_url": "https://api.github.com/users/kiranugale2o/gists{/gist_id}", "starred_url": "https://api.github.com/users/kiranugale2o/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kiranugale2o/subscriptions", "organizations_url": "https://api.github.com/users/kiranugale2o/orgs", "repos_url": "https://api.github.com/users/kiranugale2o/repos", "events_url": "https://api.github.com/users/kiranugale2o/events{/privacy}", "received_events_url": "https://api.github.com/users/kiranugale2o/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I would Like to take this issue.", "@kiranugale2o can please assign this issue to me ?", "Might be of interest to @mishig25 ", "i am not working on this anymore, @iamadisinghal would be a better choice for this issue", "Could you elaborate on what do you mean by \"Fixed position in navbar\"?", "> Could you elaborate on what do you mean by \"Fixed position in navbar\"?\n\nSet Navbar position to \nposition:fixed;", "> @kiranugale2o can please assign this issue to me ?\n\nYes", "Could you please assign this to me? I would love to work on it :)", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,696
1,699
1,699
NONE
null
### Feature request FIXED POSITION OF NAVBAR ![Oh My Zsh - a delightful open source framework for Zsh and 5 more pages - Personal - Microsoft​ Edge 03-10-2023 19_48_08](https://github.com/huggingface/transformers/assets/141510294/a8537122-78a4-4bf6-b84a-cdee12b2aa1d) ![Oh My Zsh - a delightful open source framework for Zsh and 5 more pages - Personal - Microsoft​ Edge 03-10-2023 19_48_17](https://github.com/huggingface/transformers/assets/141510294/e62fd861-9077-4290-946e-2df9f19a3af7) ### Motivation FIXED POSITION OF NAVBAR ### Your contribution FIXED POSITION OF NAVBAR
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26563/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26563/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/26562
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26562/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26562/comments
https://api.github.com/repos/huggingface/transformers/issues/26562/events
https://github.com/huggingface/transformers/pull/26562
1,924,258,551
PR_kwDOCUB6oc5bzBq4
26,562
[`Nougat`] from transformers import *
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26562). All of your documentation changes will be reflected on that endpoint." ]
1,696
1,696
1,696
COLLABORATOR
null
# What does this PR do? Doing `python -c "from transformers import *"` on a fresh env fails
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26562/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26562/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26562", "html_url": "https://github.com/huggingface/transformers/pull/26562", "diff_url": "https://github.com/huggingface/transformers/pull/26562.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26562.patch", "merged_at": 1696343533000 }
https://api.github.com/repos/huggingface/transformers/issues/26561
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26561/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26561/comments
https://api.github.com/repos/huggingface/transformers/issues/26561/events
https://github.com/huggingface/transformers/pull/26561
1,923,997,358
PR_kwDOCUB6oc5byIxL
26,561
Fixed inconsistency in several fast tokenizers
{ "login": "Towdo", "id": 13337196, "node_id": "MDQ6VXNlcjEzMzM3MTk2", "avatar_url": "https://avatars.githubusercontent.com/u/13337196?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Towdo", "html_url": "https://github.com/Towdo", "followers_url": "https://api.github.com/users/Towdo/followers", "following_url": "https://api.github.com/users/Towdo/following{/other_user}", "gists_url": "https://api.github.com/users/Towdo/gists{/gist_id}", "starred_url": "https://api.github.com/users/Towdo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Towdo/subscriptions", "organizations_url": "https://api.github.com/users/Towdo/orgs", "repos_url": "https://api.github.com/users/Towdo/repos", "events_url": "https://api.github.com/users/Towdo/events{/privacy}", "received_events_url": "https://api.github.com/users/Towdo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "> Thanks a lot for these changes! Let's add a small test in the test_tokenization_common 😉\r\n\r\nI changed the test_build_inputs_with_special_tokens to test this edge case :)", "Thanks 😉 ", "> Very good thanks, I would just like 1 hard coded expected value, otherwise if both are broken we are not testing anything! Thanks for the fix\r\n\r\nCan you give me some guidance on how to do this?\r\nBecause the behaviour differs from tokenizer to tokenizer, I can't/shouldn't do it in test_tokenization_common, right?\r\nSo instead I hard code one test in each test_tokenization_(tokenizer)?", "Oups my bad on this one! \r\nIt's alright like this 😉 I'll merge " ]
1,696
1,696
1,696
CONTRIBUTOR
null
Fixed case where behavior of BertTokenizer and BertTokenizerFast is different. An empty list will be evaluated to `False` but not to `is None`. (I mistakenly closed my first merge request) Fixes #26123 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @ArthurZucker <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26561/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26561/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26561", "html_url": "https://github.com/huggingface/transformers/pull/26561", "diff_url": "https://github.com/huggingface/transformers/pull/26561.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26561.patch", "merged_at": 1696581647000 }
https://api.github.com/repos/huggingface/transformers/issues/26560
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26560/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26560/comments
https://api.github.com/repos/huggingface/transformers/issues/26560/events
https://github.com/huggingface/transformers/pull/26560
1,923,939,386
PR_kwDOCUB6oc5bx8Ys
26,560
[`FA2`] Cast to correct dtype
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26560). All of your documentation changes will be reflected on that endpoint.", "I agree we should make all hacks / changes with respect to FA2 modules inside them. However this might introduce multiple patches and other hacks for quantized modules. I think for now this approach is fine, but I agree we should go for a better one, as this would unblock some users for the next release, I left it as a TODO! ", "cc @hiyouga are you able to Fine-tune in bf16 with this branch?", "I also thought that we could retrieve the data type from `self.config.torch_dtype` of `LlamaFlashAttention2`.\r\n\r\nPS. Although it may fail in such an edge case as Lysandre said, we usually have a consistent data type in training.", "Closing this PR in favor of https://github.com/huggingface/transformers/pull/26846" ]
1,696
1,697
1,697
CONTRIBUTOR
null
# What does this PR do? Fixes: https://github.com/huggingface/transformers/issues/26451 Currently, performing bf16 fine-tuning with FA-2 leads to hidden states silently being cast to float16. As it is challenging to retrieve the original dtype of the model when the model is quantized, I propose storing that dtype in a private attribute so it can be retrieved conveniently, without any sort of hack to work out the correct dtype for a quantized model
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26560/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26560/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26560", "html_url": "https://github.com/huggingface/transformers/pull/26560", "diff_url": "https://github.com/huggingface/transformers/pull/26560.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26560.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/26559
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26559/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26559/comments
https://api.github.com/repos/huggingface/transformers/issues/26559/events
https://github.com/huggingface/transformers/pull/26559
1,923,868,100
PR_kwDOCUB6oc5bxtBp
26,559
[`PEFT`] Final fixes
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@younesbelkada thanks for this, does this allow pushing 4-bit nf4 models to hub? thanks", "Hi @RonanKMcGovern \r\nFor pushing 4-bit weights on the Hub, please refer to this PR: https://github.com/huggingface/transformers/pull/26037 \r\nThe current PR enables pushing adapter weights with 4bit base models" ]
1,696
1,696
1,696
CONTRIBUTOR
null
# What does this PR do? This PR fixes multiple bugs with PEFT and some corner cases such as: - loading an adapter model with `token` argument that currently fails - logger.warning that errors out since it does not seem to accept the argument `FutureWarning` - Saving that fails with 4-bit quantized models Added some nice tests cc @LysandreJik
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26559/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26559/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26559", "html_url": "https://github.com/huggingface/transformers/pull/26559", "diff_url": "https://github.com/huggingface/transformers/pull/26559.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26559.patch", "merged_at": 1696337589000 }
https://api.github.com/repos/huggingface/transformers/issues/26558
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26558/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26558/comments
https://api.github.com/repos/huggingface/transformers/issues/26558/events
https://github.com/huggingface/transformers/pull/26558
1,923,851,456
PR_kwDOCUB6oc5bxpcj
26,558
[WIP] Adding BLIVA model
{ "login": "rafaelpadilla", "id": 31217453, "node_id": "MDQ6VXNlcjMxMjE3NDUz", "avatar_url": "https://avatars.githubusercontent.com/u/31217453?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rafaelpadilla", "html_url": "https://github.com/rafaelpadilla", "followers_url": "https://api.github.com/users/rafaelpadilla/followers", "following_url": "https://api.github.com/users/rafaelpadilla/following{/other_user}", "gists_url": "https://api.github.com/users/rafaelpadilla/gists{/gist_id}", "starred_url": "https://api.github.com/users/rafaelpadilla/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rafaelpadilla/subscriptions", "organizations_url": "https://api.github.com/users/rafaelpadilla/orgs", "repos_url": "https://api.github.com/users/rafaelpadilla/repos", "events_url": "https://api.github.com/users/rafaelpadilla/events{/privacy}", "received_events_url": "https://api.github.com/users/rafaelpadilla/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
open
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26558). All of your documentation changes will be reflected on that endpoint." ]
1,696
1,696
null
CONTRIBUTOR
null
# What does this PR do? Adds BLIVA to transformers. * Original repo: https://github.com/mlpc-ucsd/BLIVA * Paper: https://arxiv.org/abs/2308.09936 Fixes #26629 - issue with new model request
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26558/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26558/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26558", "html_url": "https://github.com/huggingface/transformers/pull/26558", "diff_url": "https://github.com/huggingface/transformers/pull/26558.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26558.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/26557
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26557/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26557/comments
https://api.github.com/repos/huggingface/transformers/issues/26557/events
https://github.com/huggingface/transformers/issues/26557
1,923,825,147
I_kwDOCUB6oc5yqz37
26,557
Native support of `torch.nn.functional.scaled_dot_product_attention`
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "fxmarty", "id": 9808326, "node_id": "MDQ6VXNlcjk4MDgzMjY=", "avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fxmarty", "html_url": "https://github.com/fxmarty", "followers_url": "https://api.github.com/users/fxmarty/followers", "following_url": "https://api.github.com/users/fxmarty/following{/other_user}", "gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}", "starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions", "organizations_url": "https://api.github.com/users/fxmarty/orgs", "repos_url": "https://api.github.com/users/fxmarty/repos", "events_url": "https://api.github.com/users/fxmarty/events{/privacy}", "received_events_url": "https://api.github.com/users/fxmarty/received_events", "type": "User", "site_admin": false }
[ { "login": "fxmarty", "id": 9808326, "node_id": "MDQ6VXNlcjk4MDgzMjY=", "avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fxmarty", "html_url": "https://github.com/fxmarty", "followers_url": "https://api.github.com/users/fxmarty/followers", "following_url": "https://api.github.com/users/fxmarty/following{/other_user}", "gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}", "starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions", "organizations_url": "https://api.github.com/users/fxmarty/orgs", "repos_url": "https://api.github.com/users/fxmarty/repos", "events_url": "https://api.github.com/users/fxmarty/events{/privacy}", "received_events_url": "https://api.github.com/users/fxmarty/received_events", "type": "User", "site_admin": false }, { "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false } ]
[ "@younesbelkada, is there a reason why `torch.nn.functionnal.scaled_dot_product_attention` is not always integrated ? For instance the `LlamaAttention` class does not use it (see [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/modeling_llama.py#L264))", "Hi @SimJeg !\r\nYou can benefit from it already through the `BetterTransformer` API\r\n\r\n```bash\r\npip install transformers optimum\r\n```\r\n\r\nThen once you load the model call:\r\n```python\r\nmodel = model.to_bettertransformer()\r\n```\r\n\r\nThe goal in the future, as mentioned in the issue is to add a native support of SDPA", "Here is a WIP PR https://github.com/huggingface/transformers/pull/26572\r\n\r\n@SimJeg I think it is mostly about Transformers handling padding with a padding mask, which PyTorch SDPA used to not support (until recently) for the optimized paths. Having the code offloaded at first was probably a way to showcase that SDPA indeed works well and that a native integration is worth it!", "@younesbelkada @patrickvonplaten - Hi team, I was looking at the attention implementation in transformers for the various LLMs vs. the attention implementation in diffusers and am a bit confused by the use (or lack of use) with PyTorch SDPA.\r\n\r\nIs it correct that the transformers is not using PyTorch SDPA because it cannot not handle padded inputs? If so, how are we able to use [Pytorch SDPA in diffusers](https://github.com/huggingface/diffusers/blob/7271f8b7170923944e193dac5b8513bbbb3b883f/src/diffusers/models/attention_processor.py#L1028-L1032) without running into the same issues? \r\n\r\nMy understanding is that padding isn't necessary for the self-attention layers of common text-to-image models like Stable Diffusion, but is likely being used in the cross-attention layers, since text prompts are of differing lengths.", "> SDPA makes model inference faster and more memory efficient, and supports multiple hardwares (CPU, GPU, CUDA, AMD...)\r\n\r\nIs SDPA inference only, or could it be used during training as an alternative to something like Flash Attention or xformers for the folks who use ROCm? The FA2-ROCm is still a WIP and CDNA2 only.", "@xzuyn SDPA is a wrapper around xformers and Flash Attention kernels, so yes, it can be used for training as well (and is probably even more interesting there). Unfortunately, as far as my knowledge goes, FA is not upstreamed in PyTorch on RoCm systems as of PyTorch 2.1. I believe AMD folks are working towards that though, feel free to open an issue in PyTorch repo to track the progress.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "not stale", "Fixed in https://github.com/huggingface/transformers/pull/26572, see the release notes https://github.com/huggingface/transformers/releases/tag/v4.36.0 & https://huggingface.co/docs/transformers/perf_infer_gpu_one#flashattention-and-memory-efficient-attention-through-pytorchs-scaleddotproductattention" ]
1,696
1,702
1,702
CONTRIBUTOR
null
### Feature request PyTorch has released `torch.nn.functional.scaled_dot_product_attention` since its 2.0 version, which supports more memory-efficient attention computation. Official documentation [here](https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html). Currently three implementations are available in that method, making it possible to dispatch the SDPA kernel to - the C++ math implementation - Flash Attention 1 - xformers memory-efficient attention In addition, in upcoming versions PyTorch will add support for Flash Attention 2 (https://github.com/pytorch/pytorch/pull/105602), which is already available in the PyTorch nightlies. SDPA makes model inference faster and more memory efficient, and supports multiple hardware backends (CPU, GPU, CUDA, AMD, ...). Users can already benefit from SDPA through the `BetterTransformer` API of optimum ```python # pip install optimum model = model.to_bettertransformer() ``` As SDPA is already quite stable and performant, we should migrate the `BetterTransformer` API into the native transformers codebase to support out-of-the-box model acceleration and memory efficiency. cc @LysandreJik @fxmarty ### Motivation Make LLMs faster, out of the box, by just updating the PyTorch version ### Your contribution Help implement this in the next versions
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26557/reactions", "total_count": 5, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 5, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26557/timeline
completed
null
null
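For context on the API that issue 26557 above asks transformers to adopt natively, here is a minimal sketch of calling PyTorch's SDPA directly (requires torch >= 2.0). The tensor shapes are arbitrary illustrations and this is not the transformers integration itself.

```python
import torch
import torch.nn.functional as F

# Toy shapes: (batch, num_heads, seq_len, head_dim)
q = torch.randn(2, 8, 16, 64)
k = torch.randn(2, 8, 16, 64)
v = torch.randn(2, 8, 16, 64)

# PyTorch dispatches to the math, FlashAttention, or memory-efficient kernel
# depending on device, dtype, and mask; is_causal=True applies a causal mask.
out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
print(out.shape)  # torch.Size([2, 8, 16, 64])
```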
https://api.github.com/repos/huggingface/transformers/issues/26556
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26556/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26556/comments
https://api.github.com/repos/huggingface/transformers/issues/26556/events
https://github.com/huggingface/transformers/issues/26556
1,923,738,390
I_kwDOCUB6oc5yqesW
26,556
Training failure with `--use_cpu` option when GPU memory is saturated
{ "login": "qgallouedec", "id": 45557362, "node_id": "MDQ6VXNlcjQ1NTU3MzYy", "avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4", "gravatar_id": "", "url": "https://api.github.com/users/qgallouedec", "html_url": "https://github.com/qgallouedec", "followers_url": "https://api.github.com/users/qgallouedec/followers", "following_url": "https://api.github.com/users/qgallouedec/following{/other_user}", "gists_url": "https://api.github.com/users/qgallouedec/gists{/gist_id}", "starred_url": "https://api.github.com/users/qgallouedec/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/qgallouedec/subscriptions", "organizations_url": "https://api.github.com/users/qgallouedec/orgs", "repos_url": "https://api.github.com/users/qgallouedec/repos", "events_url": "https://api.github.com/users/qgallouedec/events{/privacy}", "received_events_url": "https://api.github.com/users/qgallouedec/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "It still needs to be addressed ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "It's not a big deal, but it's still an issue that needs to be addressed.\r\nI can quickly submit a PR if I have approval.", "A PR with the solution would be great! However I think the solution eventually should be on the Accelerate side since we handle the dataloader creation eventually/wrapping and we can do this under the hood when accelerate-ing the dataloaders", "In fact, it seems that the proposed solution solves the issue in both cases (using accelerate or not):\r\n\r\n(Context: GPU memory saturated)\r\nWithout the fix:\r\n```\r\naccelerate launch 26556.py --output_dir tmp --use_cpu # fails\r\n26556.py --output_dir tmp --use_cpu # fails\r\n```\r\n\r\nWith the fix:\r\n```\r\naccelerate launch 26556.py --output_dir tmp --use_cpu # ok\r\n26556.py --output_dir tmp --use_cpu # ok\r\n```\r\n\r\n`26556.py`:\r\n```python\r\nfrom transformers import HfArgumentParser, Trainer, TrainingArguments\r\nfrom torch import nn\r\nfrom datasets import Dataset\r\nimport numpy as np\r\n\r\n\r\nclass MyModel(nn.Module):\r\n def __init__(self):\r\n super().__init__()\r\n self.linear = nn.Linear(100_000, 2)\r\n\r\n def forward(self, x, y):\r\n z = self.linear(x)\r\n loss = nn.CrossEntropyLoss()(z, y)\r\n return {\"loss\": loss}\r\n\r\n\r\ndef main():\r\n parser = HfArgumentParser(TrainingArguments)\r\n training_args = parser.parse_args_into_dataclasses()[0]\r\n model = MyModel()\r\n dataset = Dataset.from_dict({\"x\": np.random.rand(100, 100_000), \"y\": np.random.randint(0, 2, (100,))})\r\n trainer = Trainer(model=model, args=training_args, train_dataset=dataset)\r\n trainer.train()\r\n\r\n\r\nif __name__ == \"__main__\":\r\n main()\r\n```\r\n", "Trainer uses accelerate under the hood, so that’s not surprising. Go ahead and open a PR and cc me @qgallouedec if you’d like, and I’ll look at handling it on the accelerate side!" ]
1,696
1,701
1,701
CONTRIBUTOR
null
### System Info Transformers: 4.33.2 Torch 2.0.1 Python 3.9.12 ### Who can help? @muellerzr and @pacman100 ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction The issue arises in a very specific context but needs to be reported nonetheless: - The GPU memory is saturated. - Another training is launched with the `--use_cpu` argument. in this setting, the second training fails, raising a "CUDA out of memory" error. This outcome is unexpected since the `--use_cpu` option is supposed to enforce the use of the CPU for training. **Root Cause:** This issue stems from the fact that, by default, `--dataloader_pin_memory` is set to `True`. At a certain point, the dataloader attempts to pin the batch to the GPU, even if the device is set to CPU. Roughly equivalent to: ```python import torchvision.transforms as transforms import torchvision.datasets as datasets from torch.utils.data import DataLoader # Download and load the training data dataset = datasets.MNIST("~/.pytorch/MNIST_data/", download=True, train=True, transform=transforms.ToTensor()) # Define a dataloader with pin_memory=True dataloader = DataLoader(dataset, pin_memory=True) # Get one batch of images and labels images, labels = next(iter(dataloader)) ``` ### Expected behavior To resolve the issue, the best solution would be to enforce `pin_memory` to `False` in `TrainingArguments.__post_init__` when `--use_cpu` is set to `True` ```python if self.use_cpu: self.dataloader_pin_memory = False ``` ### Alternative The alternative is of course to write in the docs that `--use_cpu` should be used with `dataloader_pin_memory` set to False, but AFAIK there is never a situation where you want to use pinned memory and a cpu device.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26556/reactions", "total_count": 3, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 2 }
https://api.github.com/repos/huggingface/transformers/issues/26556/timeline
completed
null
null
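Issue 26556 above proposes forcing `dataloader_pin_memory` off whenever `--use_cpu` is set. The snippet below is only a simplified sketch of that guard using a hypothetical `ToyTrainingArguments` dataclass; it is not the real `transformers.TrainingArguments` implementation.

```python
from dataclasses import dataclass


@dataclass
class ToyTrainingArguments:  # hypothetical stand-in, not transformers.TrainingArguments
    use_cpu: bool = False
    dataloader_pin_memory: bool = True  # mirrors the library default

    def __post_init__(self):
        # Proposed behavior: pinned memory touches CUDA, so it is pointless
        # (and can OOM on a saturated GPU) when training is forced to CPU.
        if self.use_cpu:
            self.dataloader_pin_memory = False


args = ToyTrainingArguments(use_cpu=True)
assert args.dataloader_pin_memory is False
```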
https://api.github.com/repos/huggingface/transformers/issues/26555
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26555/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26555/comments
https://api.github.com/repos/huggingface/transformers/issues/26555/events
https://github.com/huggingface/transformers/pull/26555
1,923,725,192
PR_kwDOCUB6oc5bxOUY
26,555
Enable testing against mi250
{ "login": "mfuntowicz", "id": 2241520, "node_id": "MDQ6VXNlcjIyNDE1MjA=", "avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mfuntowicz", "html_url": "https://github.com/mfuntowicz", "followers_url": "https://api.github.com/users/mfuntowicz/followers", "following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}", "gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}", "starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions", "organizations_url": "https://api.github.com/users/mfuntowicz/orgs", "repos_url": "https://api.github.com/users/mfuntowicz/repos", "events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}", "received_events_url": "https://api.github.com/users/mfuntowicz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "[**Update**]\r\n\r\nI might be able to do minimal changes to make the slack report work with the CI having 2 GPU favors like this PR. I will have to try though.\r\n\r\n----------------------------------------------\r\n\r\n\r\nAs testing itself, this change is OK.\r\n\r\nHowever, this will **break the slack reporting**, as both `mi210` and `mi250` jobs will produce the same artifacts. The file in `utils/notification_service.py` (slack reproting) is designed to work with a workflow run that has run the CI once. (In this PR, it's kinda twice).\r\n\r\nAnd even if we have different artifact names (together some changes in `utils/notification_service.py` so we can handle them), we still need to **reproduce the tables for both** `mi210` and `mi250` together with all the **failing tests as slack replies**. This is going to difficult to navigate.\r\n\r\n- It also makes the workflow run (a bit) harder to navigate as now it contains twice more jobs\r\n- Make change to `utils/notification_service.py` will make that file being harder to understand/maintain.\r\n\r\nA quick, simple way (despite not ideal) is to create another workflow for `mi250`. I know it's duplicated (and we don't like), but everything has advantage and disadvantage.\r\n\r\n@mfuntowicz We will have 3 GPU favors or even more?\r\n@LysandreJik : WDYK? Duplicated workflow file so we don't need to rework on slack reporting script, or another direction?\r\n\r\n", "Yes 3 flavors, nothing more expected so far 🙂\r\n\r\nFrom my POV it seems a bit overkill to duplicate the whole workflow file \"just\" to handle multiple instance type (i'm pretty sure in the future you'll also want to test T4/H100/etc.). Also the increased maintenance cost and accounting for the probability to forgot to update one of the file 😁.\r\n\r\nMy 2cents 🙂.", "Your points are also valid :-) I have to make the changes to the slack reporting script. Could this PR wait a bit?\r\n\r\n", "~~@mfuntowicz In order not to block this PR, let's go for duplicating. Eventually, I can rework when we decide a change is necessary at some point.~~\r\n\r\nI might have some way to do it better. Let me check", "For @mfuntowicz to check #26634. If it is fine, we can incoperate the changes into this PR.", " @mfuntowicz any comment for #26634 ?", "Just approved it, thanks a lot @ydshieh 🙏🏻.\r\n\r\nLMK if there is anything i need to take a look at to merge these PRs :) ", "Close in favor of #26634. Will add mfuntowicz as the contributor.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26555). All of your documentation changes will be reflected on that endpoint." ]
1,696
1,697
1,697
MEMBER
null
Currently we test only against AMD MI210 devices; this PR enables testing on MI250 as well 🤗
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26555/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26555/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26555", "html_url": "https://github.com/huggingface/transformers/pull/26555", "diff_url": "https://github.com/huggingface/transformers/pull/26555.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26555.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/26554
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26554/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26554/comments
https://api.github.com/repos/huggingface/transformers/issues/26554/events
https://github.com/huggingface/transformers/pull/26554
1,923,369,656
PR_kwDOCUB6oc5bwDOo
26,554
Bump urllib3 from 1.26.9 to 1.26.17 in /examples/research_projects/decision_transformer
{ "login": "dependabot[bot]", "id": 49699333, "node_id": "MDM6Qm90NDk2OTkzMzM=", "avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dependabot%5Bbot%5D", "html_url": "https://github.com/apps/dependabot", "followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers", "following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}", "gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}", "starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions", "organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs", "repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos", "events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}", "received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events", "type": "Bot", "site_admin": false }
[ { "id": 1905493434, "node_id": "MDU6TGFiZWwxOTA1NDkzNDM0", "url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies", "name": "dependencies", "color": "0366d6", "default": false, "description": "Pull requests that update a dependency file" } ]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,696
1,696
1,696
CONTRIBUTOR
null
[//]: # (dependabot-start) ⚠️ **Dependabot is rebasing this PR** ⚠️ Rebasing might not happen immediately, so don't worry if this takes some time. Note: if you make any changes to this PR yourself, they will take precedence over the rebase. --- [//]: # (dependabot-end) Bumps [urllib3](https://github.com/urllib3/urllib3) from 1.26.9 to 1.26.17. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/urllib3/urllib3/releases">urllib3's releases</a>.</em></p> <blockquote> <h2>1.26.17</h2> <ul> <li>Added the <code>Cookie</code> header to the list of headers to strip from requests when redirecting to a different host. As before, different headers can be set via <code>Retry.remove_headers_on_redirect</code>. (GHSA-v845-jxx5-vc9f)</li> </ul> <h2>1.26.16</h2> <ul> <li>Fixed thread-safety issue where accessing a <code>PoolManager</code> with many distinct origins would cause connection pools to be closed while requests are in progress (<a href="https://redirect.github.com/urllib3/urllib3/issues/2954">#2954</a>)</li> </ul> <h2>1.26.15</h2> <ul> <li>Fix socket timeout value when HTTPConnection is reused (<a href="https://redirect.github.com/urllib3/urllib3/issues/2645">urllib3/urllib3#2645</a>)</li> <li>Remove &quot;!&quot; character from the unreserved characters in IPv6 Zone ID parsing (<a href="https://redirect.github.com/urllib3/urllib3/issues/2899">urllib3/urllib3#2899</a>)</li> <li>Fix IDNA handling of 'x80' byte (<a href="https://redirect.github.com/urllib3/urllib3/issues/2901">urllib3/urllib3#2901</a>)</li> </ul> <h2>1.26.14</h2> <ul> <li>Fixed parsing of port 0 (zero) returning None, instead of 0 (<a href="https://redirect.github.com/urllib3/urllib3/issues/2850">#2850</a>)</li> <li>Removed deprecated <code>HTTPResponse.getheaders()</code> calls in <code>urllib3.contrib</code> module.</li> </ul> <h2>1.26.13</h2> <ul> <li>Deprecated the <code>HTTPResponse.getheaders()</code> and <code>HTTPResponse.getheader()</code> methods.</li> <li>Fixed an issue where parsing a URL with leading zeroes in the port would be rejected even when the port number after removing the zeroes was valid.</li> <li>Fixed a deprecation warning when using cryptography v39.0.0.</li> <li>Removed the <code>&lt;4</code> in the <code>Requires-Python</code> packaging metadata field.</li> </ul> <h2>1.26.12</h2> <ul> <li>Deprecated the <code>urllib3[secure]</code> extra and the <code>urllib3.contrib.pyopenssl</code> module. Both will be removed in v2.x. 
See this <a href="https://redirect.github.com/urllib3/urllib3/issues/2680">GitHub issue</a> for justification and info on how to migrate.</li> </ul> <h2>1.26.11</h2> <p><strong>If you or your organization rely on urllib3 consider supporting us via <a href="https://github.com/sponsors/urllib3">GitHub Sponsors</a>.</strong></p> <p>:warning: <strong>urllib3 v2.0 will drop support for Python 2</strong>: <a href="https://urllib3.readthedocs.io/en/latest/v2-roadmap.html">Read more in the v2.0 Roadmap</a></p> <ul> <li>Fixed an issue where reading more than 2 GiB in a call to HTTPResponse.read would raise an OverflowError on Python 3.9 and earlier.</li> </ul> <h2>1.26.10</h2> <p><strong>If you or your organization rely on urllib3 consider supporting us via <a href="https://github.com/sponsors/urllib3">GitHub Sponsors</a>.</strong></p> <p>:warning: <strong>urllib3 v2.0 will drop support for Python 2</strong>: <a href="https://urllib3.readthedocs.io/en/latest/v2-roadmap.html">Read more in the v2.0 Roadmap</a></p> <p>:closed_lock_with_key: <strong>This is the first release to be signed with Sigstore!</strong> You can verify the distributables using the <code>.sig</code> and <code>.crt</code> files included on this release.</p> <ul> <li>Removed support for Python 3.5</li> <li>Fixed an issue where a <code>ProxyError</code> recommending configuring the proxy as HTTP instead of HTTPS could appear even when an HTTPS proxy wasn't configured.</li> </ul> </blockquote> </details> <details> <summary>Changelog</summary> <p><em>Sourced from <a href="https://github.com/urllib3/urllib3/blob/main/CHANGES.rst">urllib3's changelog</a>.</em></p> <blockquote> <h1>1.26.17 (2023-10-02)</h1> <ul> <li>Added the <code>Cookie</code> header to the list of headers to strip from requests when redirecting to a different host. As before, different headers can be set via <code>Retry.remove_headers_on_redirect</code>. (<code>[#3139](https://github.com/urllib3/urllib3/issues/3139) &lt;https://github.com/urllib3/urllib3/pull/3139&gt;</code>_)</li> </ul> <h1>1.26.16 (2023-05-23)</h1> <ul> <li>Fixed thread-safety issue where accessing a <code>PoolManager</code> with many distinct origins would cause connection pools to be closed while requests are in progress (<code>[#2954](https://github.com/urllib3/urllib3/issues/2954) &lt;https://github.com/urllib3/urllib3/pull/2954&gt;</code>_)</li> </ul> <h1>1.26.15 (2023-03-10)</h1> <ul> <li>Fix socket timeout value when <code>HTTPConnection</code> is reused (<code>[#2645](https://github.com/urllib3/urllib3/issues/2645) &lt;https://github.com/urllib3/urllib3/issues/2645&gt;</code>__)</li> <li>Remove &quot;!&quot; character from the unreserved characters in IPv6 Zone ID parsing (<code>[#2899](https://github.com/urllib3/urllib3/issues/2899) &lt;https://github.com/urllib3/urllib3/issues/2899&gt;</code>__)</li> <li>Fix IDNA handling of '\x80' byte (<code>[#2901](https://github.com/urllib3/urllib3/issues/2901) &lt;https://github.com/urllib3/urllib3/issues/2901&gt;</code>__)</li> </ul> <h1>1.26.14 (2023-01-11)</h1> <ul> <li>Fixed parsing of port 0 (zero) returning None, instead of 0. (<code>[#2850](https://github.com/urllib3/urllib3/issues/2850) &lt;https://github.com/urllib3/urllib3/issues/2850&gt;</code>__)</li> <li>Removed deprecated getheaders() calls in contrib module. Fixed the type hint of <code>PoolKey.key_retries</code> by adding <code>bool</code> to the union. 
(<code>[#2865](https://github.com/urllib3/urllib3/issues/2865) &lt;https://github.com/urllib3/urllib3/issues/2865&gt;</code>__)</li> </ul> <h1>1.26.13 (2022-11-23)</h1> <ul> <li>Deprecated the <code>HTTPResponse.getheaders()</code> and <code>HTTPResponse.getheader()</code> methods.</li> <li>Fixed an issue where parsing a URL with leading zeroes in the port would be rejected even when the port number after removing the zeroes was valid.</li> <li>Fixed a deprecation warning when using cryptography v39.0.0.</li> <li>Removed the <code>&lt;4</code> in the <code>Requires-Python</code> packaging metadata field.</li> </ul> <h1>1.26.12 (2022-08-22)</h1> <ul> <li>Deprecated the <code>urllib3[secure]</code> extra and the <code>urllib3.contrib.pyopenssl</code> module. Both will be removed in v2.x. See this <code>GitHub issue &lt;https://github.com/urllib3/urllib3/issues/2680&gt;</code>_ for justification and info on how to migrate.</li> </ul> <h1>1.26.11 (2022-07-25)</h1> <ul> <li>Fixed an issue where reading more than 2 GiB in a call to <code>HTTPResponse.read</code> would raise an <code>OverflowError</code> on Python 3.9 and earlier.</li> </ul> <h1>1.26.10 (2022-07-07)</h1> <!-- raw HTML omitted --> </blockquote> <p>... (truncated)</p> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/urllib3/urllib3/commit/c9016bf464751a02b7e46f8b86504f47d4238784"><code>c9016bf</code></a> Release 1.26.17</li> <li><a href="https://github.com/urllib3/urllib3/commit/01220354d389cd05474713f8c982d05c9b17aafb"><code>0122035</code></a> Backport GHSA-v845-jxx5-vc9f (<a href="https://redirect.github.com/urllib3/urllib3/issues/3139">#3139</a>)</li> <li><a href="https://github.com/urllib3/urllib3/commit/e63989f97d206e839ab9170c8a76e3e097cc60e8"><code>e63989f</code></a> Fix installing <code>brotli</code> extra on Python 2.7</li> <li><a href="https://github.com/urllib3/urllib3/commit/2e7a24d08713a0131f0b3c7197889466d645cc49"><code>2e7a24d</code></a> [1.26] Configure OS for RTD to fix building docs</li> <li><a href="https://github.com/urllib3/urllib3/commit/57181d6ea910ac7cb2ff83345d9e5e0eb816a0d0"><code>57181d6</code></a> [1.26] Improve error message when calling urllib3.request() (<a href="https://redirect.github.com/urllib3/urllib3/issues/3058">#3058</a>)</li> <li><a href="https://github.com/urllib3/urllib3/commit/3c0148048a523325819377b23fc67f8d46afc3aa"><code>3c01480</code></a> [1.26] Run coverage even with failed jobs</li> <li><a href="https://github.com/urllib3/urllib3/commit/d94029b7e2193ff47b627906a70e06377a09aae8"><code>d94029b</code></a> Release 1.26.16</li> <li><a href="https://github.com/urllib3/urllib3/commit/18e92145e9cddbabdf51c98f54202aa37fd5d4c8"><code>18e9214</code></a> Use trusted publishing for PyPI</li> <li><a href="https://github.com/urllib3/urllib3/commit/d25cf83bbae850a290fe34ed1610ae55c0558b36"><code>d25cf83</code></a> [1.26] Fix invalid test_ssl_failure_midway_through_conn</li> <li><a href="https://github.com/urllib3/urllib3/commit/25cca389496b86ee809c21e5b641aeaa74809263"><code>25cca38</code></a> [1.26] Fix test_ssl_object_attributes</li> <li>Additional commits viewable in <a href="https://github.com/urllib3/urllib3/compare/1.26.9...1.26.17">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility 
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=urllib3&package-manager=pip&previous-version=1.26.9&new-version=1.26.17)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts). </details>
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26554/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26554/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26554", "html_url": "https://github.com/huggingface/transformers/pull/26554", "diff_url": "https://github.com/huggingface/transformers/pull/26554.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26554.patch", "merged_at": 1696316112000 }
https://api.github.com/repos/huggingface/transformers/issues/26553
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26553/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26553/comments
https://api.github.com/repos/huggingface/transformers/issues/26553/events
https://github.com/huggingface/transformers/issues/26553
1,923,358,618
I_kwDOCUB6oc5ypB-a
26,553
Implement StreamingLLM/Windowed Attention with Attention Sinks
{ "login": "tomaarsen", "id": 37621491, "node_id": "MDQ6VXNlcjM3NjIxNDkx", "avatar_url": "https://avatars.githubusercontent.com/u/37621491?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tomaarsen", "html_url": "https://github.com/tomaarsen", "followers_url": "https://api.github.com/users/tomaarsen/followers", "following_url": "https://api.github.com/users/tomaarsen/following{/other_user}", "gists_url": "https://api.github.com/users/tomaarsen/gists{/gist_id}", "starred_url": "https://api.github.com/users/tomaarsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tomaarsen/subscriptions", "organizations_url": "https://api.github.com/users/tomaarsen/orgs", "repos_url": "https://api.github.com/users/tomaarsen/repos", "events_url": "https://api.github.com/users/tomaarsen/events{/privacy}", "received_events_url": "https://api.github.com/users/tomaarsen/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @tomaarsen, very cool feature and implementation!\r\n\r\nThis definitely looks like a good fit for `transformers`, or at least it should be of very high value for the community to have access to attention sinks very easily.\r\n\r\nKeeping a drop-in implementation up to date on the long term is hard to do, so I would recommend we move towards a utility function for now that could eventually be upstreamed into `transformers` once it has developed a bit more.\r\n\r\nSo instead of the current\r\n\r\n```py\r\nfrom attention_sinks import AutoModel\r\n\r\nmodel = AutoModel.from_pretrained(\"meta-llama/Llama-2-7b-hf\", device_map=\"auto\")\r\n```\r\n\r\nhow about something like\r\n\r\n```python\r\nfrom attention_sinks import convert_model_attention_to_sinks\r\nfrom transformers import AutoModel\r\n\r\nmodel = AutoModel.from_pretrained(\"meta-llama/Llama-2-7b-hf\", device_map=\"auto\")\r\nmodel = convert_model_attention_to_sinks(model)\r\n```\r\n?\r\n\r\n---\r\n\r\nEventually, the API could take two different directions:\r\n1. Either we develop it similarly to the existing `BetterTransformers` support -> It depends on the `optimum` library being installed in the environment, and offers the method `model.to_bettertransformers()` to convert the model to the right format\r\n2. Either we add support for it directly in the `from_pretrained` method like we do for Flash Attention: `AutoModel.from_pretrained(\"meta-llama/Llama-2-7b-hf\", use_flash_attention_2=True)`\r\n\r\nThe first path is likely the most scalable; we would work a bit on the model definition to enable \"plugins\" from third-party library, enabling support for many third-party tools. The second one would offer support in core transformers directly, but for this we would really want validation across many models first.\r\n\r\ncc @ArthurZucker @younesbelkada @patrickvonplaten @ydshieh ", "Hello!\r\n\r\nI'm glad that you've open to the idea of adding this to `transformers`! I think it would be of enormous value.\r\nFirst of all, I agree that the implementation of \r\n```python\r\nfrom attention_sinks import AutoModel\r\n\r\nmodel = AutoModel.from_pretrained(\"meta-llama/Llama-2-7b-hf\", device_map=\"auto\")\r\n```\r\nis not workable long-term, but I think a solution similar to BetterTransformers in `optimum` is viable. People can approach the third-party application (e.g. optimum in the case of BetterTransformers, or `attention_sinks`), and propose the conversion method to add Attention Sinks to whatever architecture isn't supported yet. The goal/scope of the third party would then essentially be to act as a dictionary mapping architectures to conversion functions (rather than also providing `AutoModel`, `AutoModelForCausalLM`, `LlamaModel`, etc.).\r\n\r\nOn `transformers` we would have a conversion method (e.g. `add_attention_sinks`), which applies the conversion from the third party, if it exists. This might be preferable from an API perspective to your option 2, as this method can be given args and kwargs, such as the `attention_sink_size` (e.g. the first 4 tokens) and `window_size` (e.g. 1020 tokens). Adding more args and kwargs is more scalable in this way, as we can't just willy-nilly add these kwargs to `transformers` `AutoModel.from_pretrained`. 
This is important to consider, as the research on this is extremely new - so we might require more arguments in the future as the research expands.\r\n\r\nI'm curious to hear your thoughts on this.\r\n\r\nFor reference, today I will be adding support for MPT, Falcon, Pythia alongside my existing Llama support to [`attention_sinks`](https://github.com/tomaarsen/attention_sinks).\r\n\r\n- Tom Aarsen", "[Brainstorming] I'm wondering whether we could use this issue as a catalyst to improve our cache / past key value design we have in Transformers as it needs to be updated anyways soon (cc @gante as well).\r\n\r\n@tomaarsen do you think we could support StreamingLLM to every model just by defining a \"StreamingLLM/AttentionSink\" cache that can be passed to the forward method (as `past_key_values`) and that would then take care of correctly creating the past key values at each step.\r\n\r\nHere a GitHub gist of what I'm thinking of: https://gist.github.com/patrickvonplaten/7411f84b8a2cca3bc8e63df315d7d618\r\n\r\nIn short, this would entail some more fundamental changes to Transformers (essentially that every attention layer would call `cache.update(...)` if past_key_values is an object of type Cache), but I think this is something we want to do anyways to allow for torch.compile to work better. Also we would then give generate a new function argument `generate(..., cache=cache)` that can be optionally be passed.\r\n\r\nWould be curious to hear what you think about this idea! At this stage is definitely still pure brainstorming, but I think this could be a cool long-term solution that would also be quite easy to implement", "@patrickvonplaten \r\n\r\nI'm afraid that your proposal would not quite be enough to implement the AttentionSink approach in all models. In addition to the cache, the approach requires that the position IDs are shifted in the window. To give a toy example: 4 attention sink tokens, window size of 6, and the text is just a space separated alphabet, then the model sees:\r\n```\r\nA\r\nA B\r\nA B C\r\nA B C D\r\nA B C D E\r\nA B C D E F\r\nA B C D E F G\r\nA B C D E F G H \r\nA B C D E F G H I\r\nA B C D E F G H I J\r\nA B C D F G H I J K\r\nA B C D G H I J K L\r\nA B C D H I J K L M\r\n...\r\n```\r\nWith these position IDs:\r\n```\r\n0\r\n0 1\r\n0 1 2\r\n0 1 2 3\r\n0 1 2 3 4\r\n0 1 2 3 4 5\r\n0 1 2 3 4 5 6\r\n0 1 2 3 4 5 6 7\r\n0 1 2 3 4 5 6 7 8\r\n0 1 2 3 4 5 6 7 8 9\r\n0 1 2 3 4 5 6 7 8 9\r\n0 1 2 3 4 5 6 7 8 9\r\n0 1 2 3 4 5 6 7 8 9\r\n...\r\n```\r\ni.e. the position IDs get shifted (or rather, they don't get shifted) as the window moves. \r\n\r\nOr from the paper itself (Section 3.2, page 5):\r\n> When determining the relative distance and adding positional information to tokens, StreamingLLM\r\nfocuses on positions within the cache rather than those in the original text. This distinction is crucial\r\nfor StreamingLLM’s performance. For instance, if the current cache has tokens [0, 1, 2, 3, 6, 7, 8]\r\nand is in the process of decoding the 9th token, the positions assigned are [0, 1, 2, 3, 4, 5, 6, 7], rather\r\nthan the positions in the original text, which would be [0, 1, 2, 3, 6, 7, 8, 9].\r\n\r\n---\r\n\r\nIn practice, this is somewhat simple. 
For Mistral it requires changing this rotary position embedding application here:\r\nhttps://github.com/huggingface/transformers/blob/2c7b26f5083becb429bdae4c919feca28fdf5699/src/transformers/models/mistral/modeling_mistral.py#L273\r\nInto one that only updates the query_states, e.g.\r\n```python\r\nquery_states = apply_rotary_pos_emb_single(query_states, cos, sin, position_ids)\r\n```\r\nwith\r\n```python\r\ndef apply_rotary_pos_emb_single(x, cos, sin, position_ids):\r\n # The first two dimensions of cos and sin are always 1, so we can `squeeze` them.\r\n cos = cos.squeeze(1).squeeze(0) # [seq_len, dim]\r\n sin = sin.squeeze(1).squeeze(0) # [seq_len, dim]\r\n cos = cos[position_ids].unsqueeze(1) # [bs, 1, seq_len, dim]\r\n sin = sin[position_ids].unsqueeze(1) # [bs, 1, seq_len, dim]\r\n x_embed = (x * cos) + (rotate_half(x) * sin)\r\n return x_embed\r\n```\r\nThen, we update the key and value states using the cache, followed by an update to the cache. Only after that's done, do we update the `key_states` with \"faked\" position IDs:\r\n```python\r\nkey_position_ids = torch.arange(kv_seq_len, device=position_ids.device).unsqueeze(0)\r\nkey_states = apply_rotary_pos_emb_single(key_states, cos, sin, key_position_ids)\r\n```\r\n\r\nI took these snippets from my `attention_sinks` [here](https://github.com/tomaarsen/attention_sinks/blob/c6f78f3d88cb151915cd744059091ee7949f10af/attention_sinks/models/mistral/pos_shift.py#L51-L66) and [here](https://github.com/tomaarsen/attention_sinks/blob/c6f78f3d88cb151915cd744059091ee7949f10af/attention_sinks/models/mistral/pos_shift.py#L17C1-L24C19). I'd recommend checking out these sources as these snippets might be confusing without their context.\r\n\r\n---\r\n\r\nThe tl:dr is essentially that we need 2 changes to implement Attention Sink correctly:\r\n1. Update the model architecture to shift the position IDs.\r\n2. Update the Attention Sink KV Cache using the `past_key_values` from every `...Model.forward` call.\r\n\r\nYour proposal would be a very elegant solution for the second part of the implementation, but not yet the former. 
I do the former in my `pos_shift.py` files for [Mistral](https://github.com/tomaarsen/attention_sinks/blob/c6f78f3d88cb151915cd744059091ee7949f10af/attention_sinks/models/mistral/pos_shift.py), [Falcon](https://github.com/tomaarsen/attention_sinks/blob/c6f78f3d88cb151915cd744059091ee7949f10af/attention_sinks/models/falcon/pos_shift.py), [GPT-NeoX](https://github.com/tomaarsen/attention_sinks/blob/c6f78f3d88cb151915cd744059091ee7949f10af/attention_sinks/models/gpt_neox/pos_shift.py) and [Llama](https://github.com/tomaarsen/attention_sinks/blob/c6f78f3d88cb151915cd744059091ee7949f10af/attention_sinks/models/llama/pos_shift.py).\r\n\r\nSidenote: I added support for Mistral, GPT-NeoX, Falcon and MPT to [`attention_sinks`](https://github.com/tomaarsen/attention_sinks) 🎉\r\nIf the model perplexities are anything to go by, then it works great for everything that I've tried:\r\n\r\n<details><summary>Perplexity & VRAM plots</summary>\r\n\r\n| Llama 2 7B | Falcon 7B |\r\n|:-------------:|:-------------:|\r\n| ![llama_2_7b_ppl_vram_plotted](https://github.com/tomaarsen/attention_sinks/assets/37621491/8d2e5b88-7158-41ac-8b3a-5a7abe38020d) | ![falcon_7b_ppl_vram_plotted](https://github.com/tomaarsen/attention_sinks/assets/37621491/1be07370-6de7-4a7e-b5ab-3092a5ecb412) |\r\n| **MPT 7B** | **Pythia 6.9B** |\r\n| ![mpt_7b_ppl_vram_plotted](https://github.com/mit-han-lab/streaming-llm/assets/37621491/c96cff66-92a3-43ab-bc21-40232f2740a0) | ![pythia_6 8b_ppl_vram_plotted](https://github.com/tomaarsen/attention_sinks/assets/37621491/b0fee168-fa5a-457d-9e27-8395eb6dfb38) |\r\n| **Mistral 7B** | |\r\n| ![mistral_7b_ppl_vram_plotted](https://github.com/microsoft/torchscale/assets/37621491/3a4c5634-cc1b-42d1-a35a-afb376a4f970) | |\r\n</details>\r\n\r\n- Tom Aarsen", "Great point about the position_ids, I indeed didn't think about this enough. \r\n\r\nAlso super nice to see that the other LLMs also work great with StreamingLLM's approach! Very encouraging! \r\n\r\nTaking a step back here, I guess there are different levels of support we could offer for StreamingLLM:\r\n\r\n---------------------\r\n\r\n- 1.) No real native support in Transformers\r\nThis corresponds a bit to what we have now and what is proposed [here](https://github.com/huggingface/transformers/issues/26553#issuecomment-1744384031) . The advantage is that we don't need to do any changes to Transformers and that your package can nicely be leveraged. However, keeping it up to date is challenging.\r\n\r\n---------------------\r\n- 2.) Native support in Transformers but only for a \"model.forward-level\". In the end of the day `generate` is just a method that calls forward multiple times and many libraries that depend on Transformers implement their own generate method.\r\nSo in a first step it would be great to support StreamingLLM for every model's forward method so that one only has to change the generate method. \r\n\r\nWe can achieve this by following the design as described [here](https://github.com/huggingface/transformers/issues/26553#issuecomment-1745349158) \r\n\r\nfor the cache, *i.e.*:\r\n\r\n> 2. Update the Attention Sink KV Cache using the past_key_values from every ...Model.forward call.\r\n\r\nNow for:\r\n\r\n> 1. 
Update the model architecture to shift the position IDs.\r\n\r\nit's indeed trickier!\r\n\r\nOne approach to handle this here could be to add an optional `key_position_ids` function argument here: https://github.com/huggingface/transformers/blob/2f3ea08a077ba3133fa8a604b22436cad250b055/src/transformers/models/falcon/modeling_falcon.py#L429 \r\n\r\nThis would then propagate all the way to `apply_rotary_pos_emb`: https://github.com/huggingface/transformers/blob/2f3ea08a077ba3133fa8a604b22436cad250b055/src/transformers/models/llama/modeling_llama.py#L207 \r\n\r\nthat would default to `position_ids` if not specified. This way the user could at every forward call for generate pass the correct, but different position_ids for query and key respectively. For the user this could then look as follows:\r\n\r\n```py\r\ncache = SinkCache(window_length=256, num_sink_tokens=3)\r\n\r\nquery_pos_ids = ...\r\nkey_pos_ids = ....\r\nmodel(input_ids, position_ids=query_pos_ids, key_position_ids=key_pos_ids, cache=cache)\r\n```\r\n\r\n---------------------\r\n\r\n- 3.) The last step would then be to support streamingLLM natively in Transformers with `generate`. If we have finished 2.) This could also be done relatively easily e.g. we could just allow the user to pass:\r\n\r\n```py\r\ncache = SinkCache(window_length=256, num_sink_tokens=3)\r\n```\r\n\r\nto generate:\r\n\r\n```py\r\nmodel.generate(\"....\", cache=cache)\r\n```\r\n\r\nand regarding the position_ids it would also require only a small change in https://github.com/huggingface/transformers/blob/2f3ea08a077ba3133fa8a604b22436cad250b055/src/transformers/models/llama/modeling_llama.py#L1093 to correct the position ids for generation. \r\n\r\n**Questions**:\r\n\r\nChanging all the position_ids logic just for StreamingLLM might be a tough sell given that the method is still relatively new, but if it can be nicely done I think it'd be ok to extend the forward and `prepare_inputs_for_generation` method. \r\n\r\nWhat do you think here @tomaarsen ? \r\n\r\nIn that regard some questions:\r\n- a) It would be great to also give the user a simple way to support Windowed Attention and Sliding Windowed Attention with recomputation in Transformes - do you know how the `position_ids` would need to be treated here? In a similar way as it's done for StreamingLLM? \r\n- b) As far as I can see streaming LLM is implemented only for models that use RoPE position scaling. Do you think it should also work with Alibi (Falcon's default is Alibi position ids) or GPT2 (vanilla position ids)? \r\nIt would be cool to gauge if the changes to the position ids needed here could be solved in a similar design as described above.", "Apologies for the delay, it a busy day at work today. \r\nI'll go over each of your options:\r\n\r\n> 1.) No real native support in Transformers\r\n\r\nAlthough I'm definitely open to maintaining a third party package, it is not feasible for `transformers` as it stands right now. For each architecture I have to:\r\n1. ✅ Wrap the forward of the `...Model` with a cache update, which I can implement fairly elegantly.\r\n2. ❌ Completely replace the entire forward method of all `...Attention` classes to update the position IDs.\r\n\r\nThis requires me to completely pin each `attention_sinks` version to a very specific `transformers` version, which is not really viable, as much as I think it would be fun to maintain a plugin of `transformers`.\r\n\r\n---\r\n\r\n> 2.) 
Native support in Transformers but only for a \"model.forward-level\".\r\n\r\nThis is my personal preference intuitively. \r\n\r\nBeyond that, generation doesn't work out of the box even with the forward methods correctly updated. I've encountered two noteworthy problems:\r\n1. The `attention_mask` in `_update_model_kwargs_for_generation` grows with a \"1\" for every token generated. Once the Sink KV Cache starts removing samples then this causes a shape mismatch. Easy fix here: https://github.com/tomaarsen/attention_sinks/blob/f46e63101fa74c6095e986c33284217c34a9fd88/attention_sinks/generation/utils.py#L38-L41\r\n\r\n2. The `model.generate` method does not return the `past_key_values`, preventing any form of multi-step generation (which is the primary use case of the Attention Sink approach: being able to keep prompting your model over and over and over without it losing fluency). If we update the cache like discussed prior, then this problem could be resolved by the user passing a Cache instance to `model.generate` which holds the updated `past_key_values`. This cache instance can then be reused for future `model.generate` calls.\r\n\r\n---\r\n\r\nI think that the key_position_ids idea should work. An alternative is that rotating and caching is implemented in a method, so that only this method can be overridden by a third party (i.e. \"attention_sinks\") to provide this functionality.\r\n\r\nEdit: Another alternative is a parameter on the `cache` class for `cache_before_rotate` which determines whether to cache before (like in Attention Sink) or after (normal) rotating.\r\n\r\n---\r\n\r\nAs for your questions, I also invite @Guangxuan-Xiao to answer here, but I'll do my best to answer:\r\n\r\n* a) I've implemented the window attention in the exact same way as attention_sinks, but just with using 0 sink tokens. That said, I'm not confident that this is the correct approach, as there's a chance that the position IDs should not be shifted for window attention. Perhaps @Guangxuan-Xiao can comment on this. \r\nAlso, for Sliding Window Attention, is that the form where each layer of the model can see a slightly different window?\r\n\r\n* b) Yes, it works for ALiBi. In fact, the MPT implementation is simplest of all - I don't need to override any forward method at all, I just need to call the cache update after every MPTModel.forward. Some proof:\r\n> StreamingLLM’ design is versatile and can be seamlessly incorporated into any autoregressive language model that employs relative positional encoding, such as RoPE (Su et al., 2021) and ALiBi (Press et al., 2022).\r\n\r\nAnd some more info on the implementation:\r\n> For encoding like RoPE, we cache the Keys of tokens prior to introducing the rotary transformation. Then, we apply position transformation to the keys in the rolling cache at each decoding phase. On the other hand, integrating with ALiBi is more direct. Here, the contiguous linear bias is applied instead of a ’jumping’ bias to the attention scores. This method of assigning positional embedding within the cache is crucial to StreamingLLM’s functionality, ensuring that the model operates efficiently even beyond its pre-training attention window size.\r\n\r\nFor GPT2, I'd have to have a quick look at the implementation. I see it's a bit different than the modern LLMs, e.g. with `GPT2LMHeadModel` instead of `GPT2ForCausalLM`. 
However, I think we need rotary embeddings.\r\n\r\n---\r\n\r\nQuasi-related: I've been pointed to a similar paper that does something very similar: https://arxiv.org/abs/2308.16137\r\n> It involves only a Λ-shaped attention mask (to avoid excessive attended tokens) and a distance limit (to avoid unseen distances) while requiring no parameter updates or learning. We find it applicable to a variety of LLMs using relative-position encoding methods. LM-Infinite is computationally efficient with O(n) time and space, and demonstrates consistent text generation fluency and quality to as long as 128k tokens on ArXiv and OpenWebText2 datasets, with 2.72x decoding speedup. We will make the codes publicly available following publication. \r\n\r\nThis \"Λ-shaped attention mask\" is kind of like always attending to the first tokens (i.e. the sink tokens) and \"a distance limit\" sounds like a window size.\r\n\r\n- Tom Aarsen", "Thanks for mentioning our work (https://arxiv.org/abs/2308.16137) \"LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models\" a month ago! I also noticed the striking similarities between the two methods: (1) we both use a $\\Lambda$-shaped attention mask, which is equivalent to \"sink tokens\" + nearest tokens, and (2) we both re-arrange the distance, which we referred to as a \"distance limit\" while they refer to as \"When determining the relative distance and adding positional information to tokens, StreamingLLM focuses on positions within the cache rather than those in the original text\" in Section 3.2.\r\n\r\nWe are happy to share an implementation here: https://github.com/Glaciohound/LM-Infinite, which you might be interested in having a check.\r\n\r\nSomewhat surprisingly, in the StreamingLLM's implementation, even when doing context encoding (such as calculating perplexity of a sequence), they feed tokens one by one (as can be observed [here](https://github.com/tomaarsen/attention_sinks/blob/c6f78f3d88cb151915cd744059091ee7949f10af/benchmark/perplexity.py#L56-L62) and [here](https://github.com/mit-han-lab/streaming-llm/blob/main/examples/eval_long_ppl.py#L76-L81)). In the contrary, our implementation offers a \"sequence\" mode encoding functionality just as normal language models, which avoids looping through the sequence and provide a great computational efficiency. This is thanks to our [specialized attention kernel implementation](https://github.com/Glaciohound/LM-Infinite/blob/main/models/lambda_attention.py).\r\n\r\nI am also very interested in helping to integrate these papers in HuggingFace Transformers. If you need any further information or help from technical side, please do not hesitate to let me know.", "Also @gante ", "@patrickvonplaten I've created a draft PR in #26681 using the `Cache` that you envisioned. The implementation for the Attention Sink Cache should be fairly simple then. 
\r\n\r\nAlso, I ran more experiments over the weekend:\r\n* The method does not trivially work for vanilla attention, as the next token still needs to have the \"next\" position, which means that the entire KV cache would have to be recomputed with the position IDs shifted to the left to make \"space\" for the next token.\r\n* I have very strong suspicions, though I haven't tested it yet, that Windowed Attention with Attention Sinks/StreamingLLM should work with Flash Attention 2 for Llama 🎉 \r\n\r\nI have a Hugging Face blogpost with more experiments on Attention Sinks coming out soon.\r\n\r\n- Tom Aarsen", "@tomaarsen @patrickvonplaten \r\n\r\nThis is awesome! In this way, the PR provides a general cache module reusable for other models as well, which is of great help to the whole community and future developers for other models.\r\n\r\nWhat is left to be done is compatibility with `backward` and sequence forwarding/classification support for long sequences, which I am more than happy to help on! Current implementation here is optimized for generation. To also let users forwarding and backwarding long sequences (such as encoding long contexts or for classification on long document, an inevitable need when users do large-scale pre-training or deployment) without token-by-token forwards, [our code snippet used in LM-Switch](https://github.com/huggingface/transformers/pull/26667) can serve as a starting point for encoding (happy to merge our codes!). After that, Sinks/StreamingLLM can continue using the cached features (theoretically compatible) for generation.\r\n\r\n- Chi Han", "I'd love to continue working on this.", "Of course @tomaarsen .\r\n\r\nYou can delete the bot comment (I guess you know it 😄 ) - and welcome to the team!\r\n", "Thank you! ❤️", "@tomaarsen @Glaciohound \r\nHi! Thanks for all the efforts you have put into making this work.\r\nI was wondering if there have been any updates regarding this issue, particularly about forwarding long sequences.\r\n\r\nThanks in advance!" ]
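For readers skimming the thread above: a minimal, illustrative sketch of the two ingredients discussed in these comments — which KV entries a sink cache keeps, and the in-cache position ids used to re-rotate the kept keys. This is not the `attention_sinks` or `transformers` implementation; the helper names are invented, and here `window_length` counts the sink tokens as part of the window, which may differ from other implementations.

```python
import torch

def keep_indices(seq_len: int, num_sink_tokens: int = 4, window_length: int = 1024) -> torch.Tensor:
    # Keep the first `num_sink_tokens` tokens (the attention sinks) plus the most
    # recent tokens, so the KV cache never holds more than `window_length` entries.
    if seq_len <= window_length:
        return torch.arange(seq_len)
    recent = torch.arange(seq_len - (window_length - num_sink_tokens), seq_len)
    return torch.cat([torch.arange(num_sink_tokens), recent])

def in_cache_position_ids(num_cached: int) -> torch.Tensor:
    # Positions are assigned within the cache rather than within the original text,
    # e.g. cached tokens [0, 1, 2, 3, 6, 7, 8] get positions [0, 1, 2, 3, 4, 5, 6],
    # which is why the kept keys must be re-rotated with these "faked" positions.
    return torch.arange(num_cached)

# Toy example: 4 sink tokens, a cache of 10 entries, 13 tokens seen so far.
kept = keep_indices(13, num_sink_tokens=4, window_length=10)
print(kept.tolist())                              # [0, 1, 2, 3, 7, 8, 9, 10, 11, 12]
print(in_cache_position_ids(len(kept)).tolist())  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```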
1,696
1,704
1,702
MEMBER
null
### Feature request Hello! I would love to see StreamingLLM / Windowed Attention with Attention Sinks implemented, as proposed in https://arxiv.org/abs/2309.17453. The primary author (@Guangxuan-Xiao) has also released the code here: https://github.com/mit-han-lab/streaming-llm And I've adapted that code to a drop-in replacement of `transformers` to allow people to use it: https://github.com/tomaarsen/attention_sinks (e.g. ```python from attention_sinks import AutoModel model = AutoModel.from_pretrained("meta-llama/Llama-2-7b-hf", device_map="auto") ``` ) --- <img width="1905" alt="schemes" src="https://github.com/huggingface/transformers/assets/37621491/26aae4d4-9f12-4ff1-b746-e3576491f6a6"> The paper shows that adapting windowed attention such that the first 4 tokens of the input sequence are always in the window allows any tested LLM (Llama 2, MPT, Falcon, Pythia) to scale to endless inputs without catastrophic perplexity increases. All without doing any form of retraining. In other words, scaling any pretrained LLM to infinite sequence length is as simple as: 1. Converting the attention to windowed attention. 2. Using a special cache for the windowed attention that always keeps the first 4 (by default) tokens in the cache. Using this elementary approach, the authors were able to keep various LLM models stable when feeding them with (!) 4 million tokens. ![image](https://github.com/huggingface/transformers/assets/37621491/1bb658ee-329b-4631-9114-172ab6a11915) ### Motivation Maximum sequence lengths have been an important topic for a while now, with solutions ranging from RoPE to LongLoRA to YaRN, but each of these has its limits, and some also require retraining/additional training. This windowed attention with attention sinks seems to completely solve this problem, and it would be an extremely valuable addition. I can vouch for the results in the paper. I've gotten these results for Llama 2 7B using [my own implementation](https://github.com/tomaarsen/attention_sinks): ![llama_2_7b_ppl_vram](https://github.com/tomaarsen/attention_sinks/assets/37621491/1b99f29e-8d8d-4677-bef6-6a6e041776f6) ### Your contribution Yes. I would love to help implement this into core transformers rather than in [my drop-in implementation](https://github.com/tomaarsen/attention_sinks). However, I would like to discuss: 1. Whether this feature is a good fit for `transformers`. 2. Where we store the code for converting each model (e.g. Llama, Pythia, Falcon) to windowed attention. See e.g. [this file](https://github.com/tomaarsen/attention_sinks/blob/main/attention_sinks/models/llama/pos_shift.py) for an example. 3. Where we store the code for applying the Attention Sink KV Cache after a forward call. See e.g. [this file](https://github.com/tomaarsen/attention_sinks/blob/main/attention_sinks/models/llama/modeling_llama.py) for an example. The primary author of the paper has also expressed interest in a `transformers` implementation [here](https://github.com/mit-han-lab/streaming-llm/issues/5#issuecomment-1744116440). - Tom Aarsen
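The "first few tokens always stay in the window" behaviour described above is the same idea as the Λ-shaped attention mask mentioned in the comments. A rough sketch of such a boolean mask, purely for illustration — the `lambda_mask` helper is made up and is not how either codebase builds its masks:

```python
import torch

def lambda_mask(seq_len: int, num_sink_tokens: int = 4, window: int = 6) -> torch.Tensor:
    # mask[i, j] is True when query position i may attend to key position j:
    # causal, limited to the `window` most recent keys, but the first
    # `num_sink_tokens` keys stay visible forever (the "sink" arm of the Λ).
    i = torch.arange(seq_len).unsqueeze(1)
    j = torch.arange(seq_len).unsqueeze(0)
    causal = j <= i
    recent = (i - j) < window
    sinks = j < num_sink_tokens
    return causal & (recent | sinks)

print(lambda_mask(12, num_sink_tokens=4, window=6).int())
```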
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26553/reactions", "total_count": 18, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 7, "rocket": 0, "eyes": 11 }
https://api.github.com/repos/huggingface/transformers/issues/26553/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/26552
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26552/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26552/comments
https://api.github.com/repos/huggingface/transformers/issues/26552/events
https://github.com/huggingface/transformers/pull/26552
1,923,298,624
PR_kwDOCUB6oc5bv0ED
26,552
Bump urllib3 from 1.26.5 to 1.26.17 in /examples/research_projects/visual_bert
{ "login": "dependabot[bot]", "id": 49699333, "node_id": "MDM6Qm90NDk2OTkzMzM=", "avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dependabot%5Bbot%5D", "html_url": "https://github.com/apps/dependabot", "followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers", "following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}", "gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}", "starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions", "organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs", "repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos", "events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}", "received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events", "type": "Bot", "site_admin": false }
[ { "id": 1905493434, "node_id": "MDU6TGFiZWwxOTA1NDkzNDM0", "url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies", "name": "dependencies", "color": "0366d6", "default": false, "description": "Pull requests that update a dependency file" } ]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,696
1,696
1,696
CONTRIBUTOR
null
Bumps [urllib3](https://github.com/urllib3/urllib3) from 1.26.5 to 1.26.17. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/urllib3/urllib3/releases">urllib3's releases</a>.</em></p> <blockquote> <h2>1.26.17</h2> <ul> <li>Added the <code>Cookie</code> header to the list of headers to strip from requests when redirecting to a different host. As before, different headers can be set via <code>Retry.remove_headers_on_redirect</code>. (GHSA-v845-jxx5-vc9f)</li> </ul> <h2>1.26.16</h2> <ul> <li>Fixed thread-safety issue where accessing a <code>PoolManager</code> with many distinct origins would cause connection pools to be closed while requests are in progress (<a href="https://redirect.github.com/urllib3/urllib3/issues/2954">#2954</a>)</li> </ul> <h2>1.26.15</h2> <ul> <li>Fix socket timeout value when HTTPConnection is reused (<a href="https://redirect.github.com/urllib3/urllib3/issues/2645">urllib3/urllib3#2645</a>)</li> <li>Remove &quot;!&quot; character from the unreserved characters in IPv6 Zone ID parsing (<a href="https://redirect.github.com/urllib3/urllib3/issues/2899">urllib3/urllib3#2899</a>)</li> <li>Fix IDNA handling of 'x80' byte (<a href="https://redirect.github.com/urllib3/urllib3/issues/2901">urllib3/urllib3#2901</a>)</li> </ul> <h2>1.26.14</h2> <ul> <li>Fixed parsing of port 0 (zero) returning None, instead of 0 (<a href="https://redirect.github.com/urllib3/urllib3/issues/2850">#2850</a>)</li> <li>Removed deprecated <code>HTTPResponse.getheaders()</code> calls in <code>urllib3.contrib</code> module.</li> </ul> <h2>1.26.13</h2> <ul> <li>Deprecated the <code>HTTPResponse.getheaders()</code> and <code>HTTPResponse.getheader()</code> methods.</li> <li>Fixed an issue where parsing a URL with leading zeroes in the port would be rejected even when the port number after removing the zeroes was valid.</li> <li>Fixed a deprecation warning when using cryptography v39.0.0.</li> <li>Removed the <code>&lt;4</code> in the <code>Requires-Python</code> packaging metadata field.</li> </ul> <h2>1.26.12</h2> <ul> <li>Deprecated the <code>urllib3[secure]</code> extra and the <code>urllib3.contrib.pyopenssl</code> module. Both will be removed in v2.x. 
See this <a href="https://redirect.github.com/urllib3/urllib3/issues/2680">GitHub issue</a> for justification and info on how to migrate.</li> </ul> <h2>1.26.11</h2> <p><strong>If you or your organization rely on urllib3 consider supporting us via <a href="https://github.com/sponsors/urllib3">GitHub Sponsors</a>.</strong></p> <p>:warning: <strong>urllib3 v2.0 will drop support for Python 2</strong>: <a href="https://urllib3.readthedocs.io/en/latest/v2-roadmap.html">Read more in the v2.0 Roadmap</a></p> <ul> <li>Fixed an issue where reading more than 2 GiB in a call to HTTPResponse.read would raise an OverflowError on Python 3.9 and earlier.</li> </ul> <h2>1.26.10</h2> <p><strong>If you or your organization rely on urllib3 consider supporting us via <a href="https://github.com/sponsors/urllib3">GitHub Sponsors</a>.</strong></p> <p>:warning: <strong>urllib3 v2.0 will drop support for Python 2</strong>: <a href="https://urllib3.readthedocs.io/en/latest/v2-roadmap.html">Read more in the v2.0 Roadmap</a></p> <p>:closed_lock_with_key: <strong>This is the first release to be signed with Sigstore!</strong> You can verify the distributables using the <code>.sig</code> and <code>.crt</code> files included on this release.</p> <ul> <li>Removed support for Python 3.5</li> <li>Fixed an issue where a <code>ProxyError</code> recommending configuring the proxy as HTTP instead of HTTPS could appear even when an HTTPS proxy wasn't configured.</li> </ul> <h2>1.26.9</h2> <p><strong>If you or your organization rely on urllib3 consider supporting us via <a href="https://github.com/sponsors/urllib3">GitHub Sponsors</a>.</strong></p> <p>:warning: <strong>urllib3 v2.0 will drop support for Python 2</strong>: <a href="https://urllib3.readthedocs.io/en/latest/v2-roadmap.html">Read more in the v2.0 Roadmap</a></p> <!-- raw HTML omitted --> </blockquote> <p>... (truncated)</p> </details> <details> <summary>Changelog</summary> <p><em>Sourced from <a href="https://github.com/urllib3/urllib3/blob/main/CHANGES.rst">urllib3's changelog</a>.</em></p> <blockquote> <h1>1.26.17 (2023-10-02)</h1> <ul> <li>Added the <code>Cookie</code> header to the list of headers to strip from requests when redirecting to a different host. As before, different headers can be set via <code>Retry.remove_headers_on_redirect</code>. (<code>[#3139](https://github.com/urllib3/urllib3/issues/3139) &lt;https://github.com/urllib3/urllib3/pull/3139&gt;</code>_)</li> </ul> <h1>1.26.16 (2023-05-23)</h1> <ul> <li>Fixed thread-safety issue where accessing a <code>PoolManager</code> with many distinct origins would cause connection pools to be closed while requests are in progress (<code>[#2954](https://github.com/urllib3/urllib3/issues/2954) &lt;https://github.com/urllib3/urllib3/pull/2954&gt;</code>_)</li> </ul> <h1>1.26.15 (2023-03-10)</h1> <ul> <li>Fix socket timeout value when <code>HTTPConnection</code> is reused (<code>[#2645](https://github.com/urllib3/urllib3/issues/2645) &lt;https://github.com/urllib3/urllib3/issues/2645&gt;</code>__)</li> <li>Remove &quot;!&quot; character from the unreserved characters in IPv6 Zone ID parsing (<code>[#2899](https://github.com/urllib3/urllib3/issues/2899) &lt;https://github.com/urllib3/urllib3/issues/2899&gt;</code>__)</li> <li>Fix IDNA handling of '\x80' byte (<code>[#2901](https://github.com/urllib3/urllib3/issues/2901) &lt;https://github.com/urllib3/urllib3/issues/2901&gt;</code>__)</li> </ul> <h1>1.26.14 (2023-01-11)</h1> <ul> <li>Fixed parsing of port 0 (zero) returning None, instead of 0. 
(<code>[#2850](https://github.com/urllib3/urllib3/issues/2850) &lt;https://github.com/urllib3/urllib3/issues/2850&gt;</code>__)</li> <li>Removed deprecated getheaders() calls in contrib module. Fixed the type hint of <code>PoolKey.key_retries</code> by adding <code>bool</code> to the union. (<code>[#2865](https://github.com/urllib3/urllib3/issues/2865) &lt;https://github.com/urllib3/urllib3/issues/2865&gt;</code>__)</li> </ul> <h1>1.26.13 (2022-11-23)</h1> <ul> <li>Deprecated the <code>HTTPResponse.getheaders()</code> and <code>HTTPResponse.getheader()</code> methods.</li> <li>Fixed an issue where parsing a URL with leading zeroes in the port would be rejected even when the port number after removing the zeroes was valid.</li> <li>Fixed a deprecation warning when using cryptography v39.0.0.</li> <li>Removed the <code>&lt;4</code> in the <code>Requires-Python</code> packaging metadata field.</li> </ul> <h1>1.26.12 (2022-08-22)</h1> <ul> <li>Deprecated the <code>urllib3[secure]</code> extra and the <code>urllib3.contrib.pyopenssl</code> module. Both will be removed in v2.x. See this <code>GitHub issue &lt;https://github.com/urllib3/urllib3/issues/2680&gt;</code>_ for justification and info on how to migrate.</li> </ul> <h1>1.26.11 (2022-07-25)</h1> <ul> <li>Fixed an issue where reading more than 2 GiB in a call to <code>HTTPResponse.read</code> would raise an <code>OverflowError</code> on Python 3.9 and earlier.</li> </ul> <h1>1.26.10 (2022-07-07)</h1> <!-- raw HTML omitted --> </blockquote> <p>... (truncated)</p> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/urllib3/urllib3/commit/c9016bf464751a02b7e46f8b86504f47d4238784"><code>c9016bf</code></a> Release 1.26.17</li> <li><a href="https://github.com/urllib3/urllib3/commit/01220354d389cd05474713f8c982d05c9b17aafb"><code>0122035</code></a> Backport GHSA-v845-jxx5-vc9f (<a href="https://redirect.github.com/urllib3/urllib3/issues/3139">#3139</a>)</li> <li><a href="https://github.com/urllib3/urllib3/commit/e63989f97d206e839ab9170c8a76e3e097cc60e8"><code>e63989f</code></a> Fix installing <code>brotli</code> extra on Python 2.7</li> <li><a href="https://github.com/urllib3/urllib3/commit/2e7a24d08713a0131f0b3c7197889466d645cc49"><code>2e7a24d</code></a> [1.26] Configure OS for RTD to fix building docs</li> <li><a href="https://github.com/urllib3/urllib3/commit/57181d6ea910ac7cb2ff83345d9e5e0eb816a0d0"><code>57181d6</code></a> [1.26] Improve error message when calling urllib3.request() (<a href="https://redirect.github.com/urllib3/urllib3/issues/3058">#3058</a>)</li> <li><a href="https://github.com/urllib3/urllib3/commit/3c0148048a523325819377b23fc67f8d46afc3aa"><code>3c01480</code></a> [1.26] Run coverage even with failed jobs</li> <li><a href="https://github.com/urllib3/urllib3/commit/d94029b7e2193ff47b627906a70e06377a09aae8"><code>d94029b</code></a> Release 1.26.16</li> <li><a href="https://github.com/urllib3/urllib3/commit/18e92145e9cddbabdf51c98f54202aa37fd5d4c8"><code>18e9214</code></a> Use trusted publishing for PyPI</li> <li><a href="https://github.com/urllib3/urllib3/commit/d25cf83bbae850a290fe34ed1610ae55c0558b36"><code>d25cf83</code></a> [1.26] Fix invalid test_ssl_failure_midway_through_conn</li> <li><a href="https://github.com/urllib3/urllib3/commit/25cca389496b86ee809c21e5b641aeaa74809263"><code>25cca38</code></a> [1.26] Fix test_ssl_object_attributes</li> <li>Additional commits viewable in <a href="https://github.com/urllib3/urllib3/compare/1.26.5...1.26.17">compare view</a></li> </ul> </details> 
<br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=urllib3&package-manager=pip&previous-version=1.26.5&new-version=1.26.17)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts). </details>
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26552/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26552/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26552", "html_url": "https://github.com/huggingface/transformers/pull/26552", "diff_url": "https://github.com/huggingface/transformers/pull/26552.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26552.patch", "merged_at": 1696316101000 }
https://api.github.com/repos/huggingface/transformers/issues/26551
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26551/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26551/comments
https://api.github.com/repos/huggingface/transformers/issues/26551/events
https://github.com/huggingface/transformers/pull/26551
1,923,293,309
PR_kwDOCUB6oc5bvy6P
26,551
Bump urllib3 from 1.26.5 to 1.26.17 in /examples/research_projects/lxmert
{ "login": "dependabot[bot]", "id": 49699333, "node_id": "MDM6Qm90NDk2OTkzMzM=", "avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dependabot%5Bbot%5D", "html_url": "https://github.com/apps/dependabot", "followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers", "following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}", "gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}", "starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions", "organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs", "repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos", "events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}", "received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events", "type": "Bot", "site_admin": false }
[ { "id": 1905493434, "node_id": "MDU6TGFiZWwxOTA1NDkzNDM0", "url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies", "name": "dependencies", "color": "0366d6", "default": false, "description": "Pull requests that update a dependency file" } ]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,696
1,696
1,696
CONTRIBUTOR
null
Bumps [urllib3](https://github.com/urllib3/urllib3) from 1.26.5 to 1.26.17. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/urllib3/urllib3/releases">urllib3's releases</a>.</em></p> <blockquote> <h2>1.26.17</h2> <ul> <li>Added the <code>Cookie</code> header to the list of headers to strip from requests when redirecting to a different host. As before, different headers can be set via <code>Retry.remove_headers_on_redirect</code>. (GHSA-v845-jxx5-vc9f)</li> </ul> <h2>1.26.16</h2> <ul> <li>Fixed thread-safety issue where accessing a <code>PoolManager</code> with many distinct origins would cause connection pools to be closed while requests are in progress (<a href="https://redirect.github.com/urllib3/urllib3/issues/2954">#2954</a>)</li> </ul> <h2>1.26.15</h2> <ul> <li>Fix socket timeout value when HTTPConnection is reused (<a href="https://redirect.github.com/urllib3/urllib3/issues/2645">urllib3/urllib3#2645</a>)</li> <li>Remove &quot;!&quot; character from the unreserved characters in IPv6 Zone ID parsing (<a href="https://redirect.github.com/urllib3/urllib3/issues/2899">urllib3/urllib3#2899</a>)</li> <li>Fix IDNA handling of 'x80' byte (<a href="https://redirect.github.com/urllib3/urllib3/issues/2901">urllib3/urllib3#2901</a>)</li> </ul> <h2>1.26.14</h2> <ul> <li>Fixed parsing of port 0 (zero) returning None, instead of 0 (<a href="https://redirect.github.com/urllib3/urllib3/issues/2850">#2850</a>)</li> <li>Removed deprecated <code>HTTPResponse.getheaders()</code> calls in <code>urllib3.contrib</code> module.</li> </ul> <h2>1.26.13</h2> <ul> <li>Deprecated the <code>HTTPResponse.getheaders()</code> and <code>HTTPResponse.getheader()</code> methods.</li> <li>Fixed an issue where parsing a URL with leading zeroes in the port would be rejected even when the port number after removing the zeroes was valid.</li> <li>Fixed a deprecation warning when using cryptography v39.0.0.</li> <li>Removed the <code>&lt;4</code> in the <code>Requires-Python</code> packaging metadata field.</li> </ul> <h2>1.26.12</h2> <ul> <li>Deprecated the <code>urllib3[secure]</code> extra and the <code>urllib3.contrib.pyopenssl</code> module. Both will be removed in v2.x. 
See this <a href="https://redirect.github.com/urllib3/urllib3/issues/2680">GitHub issue</a> for justification and info on how to migrate.</li> </ul> <h2>1.26.11</h2> <p><strong>If you or your organization rely on urllib3 consider supporting us via <a href="https://github.com/sponsors/urllib3">GitHub Sponsors</a>.</strong></p> <p>:warning: <strong>urllib3 v2.0 will drop support for Python 2</strong>: <a href="https://urllib3.readthedocs.io/en/latest/v2-roadmap.html">Read more in the v2.0 Roadmap</a></p> <ul> <li>Fixed an issue where reading more than 2 GiB in a call to HTTPResponse.read would raise an OverflowError on Python 3.9 and earlier.</li> </ul> <h2>1.26.10</h2> <p><strong>If you or your organization rely on urllib3 consider supporting us via <a href="https://github.com/sponsors/urllib3">GitHub Sponsors</a>.</strong></p> <p>:warning: <strong>urllib3 v2.0 will drop support for Python 2</strong>: <a href="https://urllib3.readthedocs.io/en/latest/v2-roadmap.html">Read more in the v2.0 Roadmap</a></p> <p>:closed_lock_with_key: <strong>This is the first release to be signed with Sigstore!</strong> You can verify the distributables using the <code>.sig</code> and <code>.crt</code> files included on this release.</p> <ul> <li>Removed support for Python 3.5</li> <li>Fixed an issue where a <code>ProxyError</code> recommending configuring the proxy as HTTP instead of HTTPS could appear even when an HTTPS proxy wasn't configured.</li> </ul> <h2>1.26.9</h2> <p><strong>If you or your organization rely on urllib3 consider supporting us via <a href="https://github.com/sponsors/urllib3">GitHub Sponsors</a>.</strong></p> <p>:warning: <strong>urllib3 v2.0 will drop support for Python 2</strong>: <a href="https://urllib3.readthedocs.io/en/latest/v2-roadmap.html">Read more in the v2.0 Roadmap</a></p> <!-- raw HTML omitted --> </blockquote> <p>... (truncated)</p> </details> <details> <summary>Changelog</summary> <p><em>Sourced from <a href="https://github.com/urllib3/urllib3/blob/main/CHANGES.rst">urllib3's changelog</a>.</em></p> <blockquote> <h1>1.26.17 (2023-10-02)</h1> <ul> <li>Added the <code>Cookie</code> header to the list of headers to strip from requests when redirecting to a different host. As before, different headers can be set via <code>Retry.remove_headers_on_redirect</code>. (<code>[#3139](https://github.com/urllib3/urllib3/issues/3139) &lt;https://github.com/urllib3/urllib3/pull/3139&gt;</code>_)</li> </ul> <h1>1.26.16 (2023-05-23)</h1> <ul> <li>Fixed thread-safety issue where accessing a <code>PoolManager</code> with many distinct origins would cause connection pools to be closed while requests are in progress (<code>[#2954](https://github.com/urllib3/urllib3/issues/2954) &lt;https://github.com/urllib3/urllib3/pull/2954&gt;</code>_)</li> </ul> <h1>1.26.15 (2023-03-10)</h1> <ul> <li>Fix socket timeout value when <code>HTTPConnection</code> is reused (<code>[#2645](https://github.com/urllib3/urllib3/issues/2645) &lt;https://github.com/urllib3/urllib3/issues/2645&gt;</code>__)</li> <li>Remove &quot;!&quot; character from the unreserved characters in IPv6 Zone ID parsing (<code>[#2899](https://github.com/urllib3/urllib3/issues/2899) &lt;https://github.com/urllib3/urllib3/issues/2899&gt;</code>__)</li> <li>Fix IDNA handling of '\x80' byte (<code>[#2901](https://github.com/urllib3/urllib3/issues/2901) &lt;https://github.com/urllib3/urllib3/issues/2901&gt;</code>__)</li> </ul> <h1>1.26.14 (2023-01-11)</h1> <ul> <li>Fixed parsing of port 0 (zero) returning None, instead of 0. 
(<code>[#2850](https://github.com/urllib3/urllib3/issues/2850) &lt;https://github.com/urllib3/urllib3/issues/2850&gt;</code>__)</li> <li>Removed deprecated getheaders() calls in contrib module. Fixed the type hint of <code>PoolKey.key_retries</code> by adding <code>bool</code> to the union. (<code>[#2865](https://github.com/urllib3/urllib3/issues/2865) &lt;https://github.com/urllib3/urllib3/issues/2865&gt;</code>__)</li> </ul> <h1>1.26.13 (2022-11-23)</h1> <ul> <li>Deprecated the <code>HTTPResponse.getheaders()</code> and <code>HTTPResponse.getheader()</code> methods.</li> <li>Fixed an issue where parsing a URL with leading zeroes in the port would be rejected even when the port number after removing the zeroes was valid.</li> <li>Fixed a deprecation warning when using cryptography v39.0.0.</li> <li>Removed the <code>&lt;4</code> in the <code>Requires-Python</code> packaging metadata field.</li> </ul> <h1>1.26.12 (2022-08-22)</h1> <ul> <li>Deprecated the <code>urllib3[secure]</code> extra and the <code>urllib3.contrib.pyopenssl</code> module. Both will be removed in v2.x. See this <code>GitHub issue &lt;https://github.com/urllib3/urllib3/issues/2680&gt;</code>_ for justification and info on how to migrate.</li> </ul> <h1>1.26.11 (2022-07-25)</h1> <ul> <li>Fixed an issue where reading more than 2 GiB in a call to <code>HTTPResponse.read</code> would raise an <code>OverflowError</code> on Python 3.9 and earlier.</li> </ul> <h1>1.26.10 (2022-07-07)</h1> <!-- raw HTML omitted --> </blockquote> <p>... (truncated)</p> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/urllib3/urllib3/commit/c9016bf464751a02b7e46f8b86504f47d4238784"><code>c9016bf</code></a> Release 1.26.17</li> <li><a href="https://github.com/urllib3/urllib3/commit/01220354d389cd05474713f8c982d05c9b17aafb"><code>0122035</code></a> Backport GHSA-v845-jxx5-vc9f (<a href="https://redirect.github.com/urllib3/urllib3/issues/3139">#3139</a>)</li> <li><a href="https://github.com/urllib3/urllib3/commit/e63989f97d206e839ab9170c8a76e3e097cc60e8"><code>e63989f</code></a> Fix installing <code>brotli</code> extra on Python 2.7</li> <li><a href="https://github.com/urllib3/urllib3/commit/2e7a24d08713a0131f0b3c7197889466d645cc49"><code>2e7a24d</code></a> [1.26] Configure OS for RTD to fix building docs</li> <li><a href="https://github.com/urllib3/urllib3/commit/57181d6ea910ac7cb2ff83345d9e5e0eb816a0d0"><code>57181d6</code></a> [1.26] Improve error message when calling urllib3.request() (<a href="https://redirect.github.com/urllib3/urllib3/issues/3058">#3058</a>)</li> <li><a href="https://github.com/urllib3/urllib3/commit/3c0148048a523325819377b23fc67f8d46afc3aa"><code>3c01480</code></a> [1.26] Run coverage even with failed jobs</li> <li><a href="https://github.com/urllib3/urllib3/commit/d94029b7e2193ff47b627906a70e06377a09aae8"><code>d94029b</code></a> Release 1.26.16</li> <li><a href="https://github.com/urllib3/urllib3/commit/18e92145e9cddbabdf51c98f54202aa37fd5d4c8"><code>18e9214</code></a> Use trusted publishing for PyPI</li> <li><a href="https://github.com/urllib3/urllib3/commit/d25cf83bbae850a290fe34ed1610ae55c0558b36"><code>d25cf83</code></a> [1.26] Fix invalid test_ssl_failure_midway_through_conn</li> <li><a href="https://github.com/urllib3/urllib3/commit/25cca389496b86ee809c21e5b641aeaa74809263"><code>25cca38</code></a> [1.26] Fix test_ssl_object_attributes</li> <li>Additional commits viewable in <a href="https://github.com/urllib3/urllib3/compare/1.26.5...1.26.17">compare view</a></li> </ul> </details> 
<br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=urllib3&package-manager=pip&previous-version=1.26.5&new-version=1.26.17)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts). </details>
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26551/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26551/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26551", "html_url": "https://github.com/huggingface/transformers/pull/26551", "diff_url": "https://github.com/huggingface/transformers/pull/26551.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26551.patch", "merged_at": 1696316090000 }
https://api.github.com/repos/huggingface/transformers/issues/26550
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26550/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26550/comments
https://api.github.com/repos/huggingface/transformers/issues/26550/events
https://github.com/huggingface/transformers/issues/26550
1,923,256,580
I_kwDOCUB6oc5yopEE
26,550
[i18n-<languageCode>] Translating docs to <languageName>
{ "login": "Lena43", "id": 107714082, "node_id": "U_kgDOBmuWIg", "avatar_url": "https://avatars.githubusercontent.com/u/107714082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Lena43", "html_url": "https://github.com/Lena43", "followers_url": "https://api.github.com/users/Lena43/followers", "following_url": "https://api.github.com/users/Lena43/following{/other_user}", "gists_url": "https://api.github.com/users/Lena43/gists{/gist_id}", "starred_url": "https://api.github.com/users/Lena43/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Lena43/subscriptions", "organizations_url": "https://api.github.com/users/Lena43/orgs", "repos_url": "https://api.github.com/users/Lena43/repos", "events_url": "https://api.github.com/users/Lena43/events{/privacy}", "received_events_url": "https://api.github.com/users/Lena43/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @Lena43, what language would you like to help kickstart a translation in?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,696
1,699
1,699
NONE
null
<!-- Note: Please search to see if an issue already exists for the language you are trying to translate. --> Hi! Let's bring the documentation to all the <languageName>-speaking community 🌐 (currently 0 out of 267 complete) Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list. Some notes: * Please translate using an informal tone (imagine you are talking with a friend about transformers 🤗). * Please translate in a gender-neutral way. * Add your translations to the folder called `<languageCode>` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source). * Register your translation in `<languageCode>/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml). * Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @stevhliu and @MKhalusova for review. * 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/). ## Get Started section - [ ] [index.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.md) https://github.com/huggingface/transformers/pull/20180 - [ ] [quicktour.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.md) (waiting for initial PR to go through) - [ ] [installation.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.md). ## Tutorial section - [ ] [pipeline_tutorial.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/pipeline_tutorial.md) - [ ] [autoclass_tutorial.md](https://github.com/huggingface/transformers/blob/master/docs/source/autoclass_tutorial.md) - [ ] [preprocessing.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/preprocessing.md) - [ ] [training.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/training.md) - [ ] [accelerate.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/accelerate.md) - [ ] [model_sharing.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_sharing.md) - [ ] [multilingual.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/multilingual.md) <!-- Keep on adding more as you go 🔥 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26550/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26550/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/26549
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26549/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26549/comments
https://api.github.com/repos/huggingface/transformers/issues/26549/events
https://github.com/huggingface/transformers/issues/26549
1,922,476,707
I_kwDOCUB6oc5ylqqj
26,549
Update resuming with FSDP to work with a different number of nodes
{ "login": "jmzeng", "id": 5641698, "node_id": "MDQ6VXNlcjU2NDE2OTg=", "avatar_url": "https://avatars.githubusercontent.com/u/5641698?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmzeng", "html_url": "https://github.com/jmzeng", "followers_url": "https://api.github.com/users/jmzeng/followers", "following_url": "https://api.github.com/users/jmzeng/following{/other_user}", "gists_url": "https://api.github.com/users/jmzeng/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmzeng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmzeng/subscriptions", "organizations_url": "https://api.github.com/users/jmzeng/orgs", "repos_url": "https://api.github.com/users/jmzeng/repos", "events_url": "https://api.github.com/users/jmzeng/events{/privacy}", "received_events_url": "https://api.github.com/users/jmzeng/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @muellerzr @pacman100 ", "cc @pacman100 ", "Hello @jmzeng, what is the FSDP config and launch command(s) along with the library versions? \r\n\r\nYou can save the whole model ckpt instead of the sharded ckpts and then load it when changing the distributed setup from 1 nodes to 3 nodes for example. Did you try that?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,696
1,702
1,702
NONE
null
### Feature request It would be great if it were possible to resume an FSDP run started on a single node on two or three nodes instead. When I try it currently, it gives the following warning and then OOMs: `Didn't find an RNG file for process 8, if you are resuming a training that wasn't launched in a distributed fashion, reproducibility is not guaranteed.` Would it be possible to add a flag to Trainer to redistribute the model if a particular RNG file is missing? ### Motivation This would be a great feature for testing on a single machine and then scaling up the run on multiple machines. ### Your contribution I'm currently exploring an approach for this.
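Regarding the suggestion in the comments above to save a whole (unsharded) checkpoint so it can be reloaded on a different number of nodes: a minimal sketch using PyTorch's own FSDP API. It assumes `model` is the already FSDP-wrapped module and that a distributed process group is initialized; this is not necessarily the exact code path Trainer itself takes.

```python
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.distributed.fsdp import StateDictType, FullStateDictConfig

# `model` is assumed to be the FSDP-wrapped model (e.g. the wrapped trainer.model).
# Gather a full, unsharded state dict on rank 0 so the checkpoint is independent
# of the current world size and can be loaded again on 1, 2 or 3 nodes.
full_cfg = FullStateDictConfig(offload_to_cpu=True, rank0_only=True)
with FSDP.state_dict_type(model, StateDictType.FULL_STATE_DICT, full_cfg):
    state_dict = model.state_dict()

if dist.get_rank() == 0:
    torch.save(state_dict, "full_model.bin")
```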
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26549/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26549/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/26548
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26548/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26548/comments
https://api.github.com/repos/huggingface/transformers/issues/26548/events
https://github.com/huggingface/transformers/issues/26548
1,922,451,593
I_kwDOCUB6oc5ylkiJ
26,548
Trainer errors out when concatenating different sequence length batches with distributed training and IterableDataset
{ "login": "ssharpe42", "id": 8136905, "node_id": "MDQ6VXNlcjgxMzY5MDU=", "avatar_url": "https://avatars.githubusercontent.com/u/8136905?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ssharpe42", "html_url": "https://github.com/ssharpe42", "followers_url": "https://api.github.com/users/ssharpe42/followers", "following_url": "https://api.github.com/users/ssharpe42/following{/other_user}", "gists_url": "https://api.github.com/users/ssharpe42/gists{/gist_id}", "starred_url": "https://api.github.com/users/ssharpe42/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ssharpe42/subscriptions", "organizations_url": "https://api.github.com/users/ssharpe42/orgs", "repos_url": "https://api.github.com/users/ssharpe42/repos", "events_url": "https://api.github.com/users/ssharpe42/events{/privacy}", "received_events_url": "https://api.github.com/users/ssharpe42/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "muellerzr", "id": 7831895, "node_id": "MDQ6VXNlcjc4MzE4OTU=", "avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muellerzr", "html_url": "https://github.com/muellerzr", "followers_url": "https://api.github.com/users/muellerzr/followers", "following_url": "https://api.github.com/users/muellerzr/following{/other_user}", "gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}", "starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions", "organizations_url": "https://api.github.com/users/muellerzr/orgs", "repos_url": "https://api.github.com/users/muellerzr/repos", "events_url": "https://api.github.com/users/muellerzr/events{/privacy}", "received_events_url": "https://api.github.com/users/muellerzr/received_events", "type": "User", "site_admin": false }
[ { "login": "muellerzr", "id": 7831895, "node_id": "MDQ6VXNlcjc4MzE4OTU=", "avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muellerzr", "html_url": "https://github.com/muellerzr", "followers_url": "https://api.github.com/users/muellerzr/followers", "following_url": "https://api.github.com/users/muellerzr/following{/other_user}", "gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}", "starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions", "organizations_url": "https://api.github.com/users/muellerzr/orgs", "repos_url": "https://api.github.com/users/muellerzr/repos", "events_url": "https://api.github.com/users/muellerzr/events{/privacy}", "received_events_url": "https://api.github.com/users/muellerzr/received_events", "type": "User", "site_admin": false } ]
[ "FYI, I think this was a regression introduced between transformers 4.30.x and 4.31.x (I haven't bisected the specific release, but 4.30.x will run the example here correctly)", "Think I found it, this is where the IterableDatasetShard option is removed for IterableDatasets and it just relies on accelerate prepare. https://github.com/huggingface/transformers/commit/ebd94b0f6f215f6bc0f70e61eba075eb9196f9ef", "This issue (https://github.com/huggingface/transformers/issues/26541) also occurs due to the same changes. ", "Thanks @ssharpe42 and @dwyatte for flagging and diagnosing the issue here. I just wanted to echo that this is a major issue -- this problem is surprisingly hard to diagnose, not documented, and likely to affect many users who have no idea it is even a problem (for example [here](https://discuss.huggingface.co/t/sizes-of-tensors-must-match-except-in-dimension-0/56333) is one HF post on this topic with no responses as of the time of this post, but seemingly affected by the same problem).\r\n\r\nAlso, it seems quite likely that folks using `Trainer` with distributed training will be using `IterableDataset`s (I've also seen lots of uses of `Trainer` avoiding `IterableDataset`, maybe this is why?) -- the kinds of large training runs that require distributed training will very often also require large datasets that are best processed as `IterableDataset`s.\r\n\r\nThanks to the HF team, and hopefully this can be escalated; otherwise this is a complete blocker for many users!", "also, confirming that downgrading to transformers 4.30 is a temporary workaround (also requires downgrading to accelerate==0.20.3) when using the Trainer API (doesn't appear to work outside the Trainer API with these versions)", "We're aware of it and working towards a solution", "I just ran into this one this evening. Thanks for staying on top of it!\r\n\r\nMy project for the weekend was to switch from using Dataset, using explicit sharding , to an IterableDataset for training; the dataset in question is huge, so it's one or the other. Switching to an IterableDataset is needed for switching from DDP to FSDP mode for training a larger model, as I can no longer get away with reinstantiating the Trainer after each shard, while reusing the same optimizer and lr-scheduler.\r\n\r\nAnyhow, a quick and dirty work-around, until this is properly fixed, is to pad everything to the same length. This is hardly ideal, but it appears to work. Hopefully this is useful to someone.", "Another piece of information here: I noticed that IterableDataset + DDP/FSDP works fine with the Trainer API, but not outside the Trainer API in my own training loop. I decided to dig into this a bit more to see what Trainer is doing differently. \r\n\r\nLooking through the Trainer code in transformers version 4.30.2 where it constructs the DataLoader in a distributed setting [(this code block)](https://github.com/huggingface/transformers/blob/66fd3a8d626a32989f4569260db32785c6cbf42a/src/transformers/trainer.py#L911), I noticed that the Trainer API never calls accelerator.prepare(data_loader)!\r\n\r\nI tried this in my own training loop which was previously failing with the above error, and it works with FSDP over 8 40GB A40 GPUs, for both a tiny model and Llama 2. \r\n\r\nSo, another workaround here: for users not using the Trainer API, *don't call accelerator.prepare() on the DataLoaders* (but still call accelerator.prepare() for everything else; i.e. model, optimizer, scheduler). 
This probably requires some extra logic to do the things accelerator.prepare might -- especially casting data types etc. -- but maybe it really is this simple, since it's exactly what the Trainer API was doing in transformers==4.30.2...\r\n\r\nI would appreciate if the maintainers could provide any insights here into what the downsides of this approach might be (keeping in mind that this is transformers==4.30.2 and accelerate==0.20.3).", "@jpgard that hasn't been the case for a while? See here where we call prepare: https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L829\r\n\r\nRegardless, will be looking into this next week + before the holiday, sorry for the delay!", "Any update right here? I came across the same issue when running DPO trainer if I use streaming datasets that have different sequence length.", "Appreciate for the maintainers' efforts on this issue. Is there any update or temporal solution to fix this issue?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "My temporal solution is to warp the original `get_train_dataloader() `function of transformers.Trainer to skip execute `self.accelerator.prepare() `for iterable datasets:\r\n\r\n```\r\ndef get_train_dataloader(self) -> DataLoader:\r\n print(\"Inject Hacker dataloader prepare\")\r\n if self.train_dataset is None:\r\n raise ValueError(\"Trainer: training requires a train_dataset.\")\r\n train_dataset = self.train_dataset\r\n data_collator = self.data_collator\r\n if isinstance(train_dataset, datasets.Dataset):\r\n train_dataset = self._remove_unused_columns(train_dataset, description=\"training\")\r\n else:\r\n data_collator = self._get_collator_with_removed_columns(data_collator, description=\"training\")\r\n\r\n dataloader_params = {\r\n \"batch_size\": self._train_batch_size,\r\n \"collate_fn\": data_collator,\r\n \"num_workers\": self.args.dataloader_num_workers,\r\n \"pin_memory\": self.args.dataloader_pin_memory,\r\n }\r\n\r\n if not isinstance(train_dataset, torch.utils.data.IterableDataset):\r\n dataloader_params[\"sampler\"] = self._get_train_sampler()\r\n dataloader_params[\"drop_last\"] = self.args.dataloader_drop_last\r\n dataloader_params[\"worker_init_fn\"] = seed_worker\r\n\r\n return self.accelerator.prepare(DataLoader(train_dataset, **dataloader_params))\r\n return DataLoader(train_dataset, **dataloader_params)\r\n```\r\n\r\nIt works well for me. But not sure the potential impact on efficiency for skipping the `accelerator.prepare()`.", "Gently pinging @muellerzr and @pacman100 ", "Hi everyone, the issue is not directly linked to `IterableDataset` but more on the `dispatch_batches` arg that is set to `True` when passing a `IterableDataset`. \r\n\r\nWith `dispatch_batches=True`, we process the data on the main process and broadcast to the other processes. This is more reliable + less compute since we do it on only one process. However, we indeed have an issue when the data doesn't have the same size as we are trying to concat tensors that might not have the same size. \r\n\r\nAs a temporary solution, you can either:\r\n- pass `dispatch_batches=False` in `TrainingArguments`. This is the default behavior of Trainer before accelerate integration. 
It will use `IterableDatasetShard` instead of `DataLoaderDispatcher`. \r\n- pass `split_batches=True`. It will split a full batch into `self.num_process` parts. So, it requires that the batch size ( `per_device_train_batch_size` ) of the `dataloader` to be a round multiple of the number of processes. For example, if you set `per_device_train_batch_size = 16` and you have 4 processes, each process will have a batch_size of 4. \r\n\r\nRelated code: \r\n\r\n```python \r\nif self.split_batches:\r\n # One batch of the main iterator is dispatched and split.\r\n batch = next(iterator)\r\nelse:\r\n # num_processes batches of the main iterator are concatenated then dispatched and split.\r\n # We add the batches one by one so we have the remainder available when drop_last=False.\r\n batches = []\r\n for _ in range(self.state.num_processes):\r\n batches.append(next(iterator))\r\n # The issue is here since the batches do not necessarily have the same size. \r\n batch = concatenate(batches, dim=0)\r\n```\r\n", "We've released a patch in Accelerate which will give a more clear error on what's going on, and will be doing similar into the Trainer here shortly to give direct instructions on what to do :) ", "thanks for the attention to this!\r\n\r\nit is not entirely clear to me whether this issue is resolved or not in the current versions of transformers/accelerate? should we now expect IterableDataset to work in distributed training (both with Trainer, and without Trainer), by following one of the two options @SunMarc mentioned?", "They always have, there's no change needed in the source code. You need to do either of Marc's two solutions, which have existed for a large number of months :) \r\n\r\nYes. ", "> They always have\r\n\r\nthat is not my experience, and is why the issue exists in the first place :) thanks for pointing us to the workaround\r\n", "@SunMarc Thank you for the explainations and suggested workarounds! \r\nBy the way, may I ask if `per_device_train_batch_size` refers to the batch size per GPU? If it is, when using `split_batches` for a training setting with 4 GPUs and 32 images per GPU, should I set it as 32 or 32*4? I checked `trainer.py` and it seems that the trainer did not handle the different batch size with different values of `split_batches` automatically.", "Yes, from my experiments, you should set it as `per_device_train_batch_size = 32*4`. ", "If you were using dispatch_batches=False and you couldn't do epoch based training, @muellerzr added this so now that route is feasible with accelerate + iterable datasets https://github.com/huggingface/accelerate/pull/2066", "Hello, I'm using the streaming dataloader with dispatch_batches=False. Although the training can run for some steps, it is common that the training process is stuck where 1 GPU's util is 0% but others are 100%. After 30 minutes, training fails with an NCCL timeout error.\r\n\r\nMy batch size per device is 1 but each sequence may have different lengths. Some are very long and some are very short. I found it is very easy to confront the above problem (in contrast, if I pad all sequences to the same length, it becomes rare to confront the problem).\r\n\r\nCould I know the reason? And is there any way to address this problem?", "This issue has been automatically marked as stale because it has not had recent activity. 
If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,696
1,707
1,707
NONE
null
### System Info - `transformers` version: 4.33.3 - Platform: Linux-5.10.186-179.751.amzn2.x86_64-x86_64-with-glibc2.10 - Python version: 3.8.17 - Huggingface_hub version: 0.17.3 - Safetensors version: 0.3.3 - Accelerate version: 0.23.0 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: A100 - Using distributed or parallel set-up in script?: torchrun --nproc-per-node 2 script.py ### Who can help? @muellerzr, @pacman100 ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ``` import torch from torch.utils.data import IterableDataset from transformers import ( AutoModelForMaskedLM, AutoTokenizer, DataCollatorForLanguageModeling, Trainer, TrainingArguments, ) data = [ { "input_ids": torch.tensor([101, 2040, 2001, 1999, 14936, 102]), "token_type_ids": torch.tensor([0, 0, 0, 0, 0, 0]), "attention_mask": torch.tensor([1, 1, 1, 1, 1, 1]), }, { "input_ids": torch.tensor([101, 2040, 102]), "token_type_ids": torch.tensor([0, 0, 0]), "attention_mask": torch.tensor([1, 1, 1]), }, { "input_ids": torch.tensor([101, 2040, 2001, 1999]), "token_type_ids": torch.tensor([0, 0, 0, 0]), "attention_mask": torch.tensor([1, 1, 1, 1]), }, { "input_ids": torch.tensor([101, 2040, 2001, 1999, 14936, 102]), "token_type_ids": torch.tensor([0, 0, 0, 0, 0, 0]), "attention_mask": torch.tensor([1, 1, 1, 1, 1, 1]), }, { "input_ids": torch.tensor([101]), "token_type_ids": torch.tensor([00]), "attention_mask": torch.tensor([1]), }, { "input_ids": torch.tensor([101]), "token_type_ids": torch.tensor([00]), "attention_mask": torch.tensor([1]), }, ] class ExampleDataset(IterableDataset): def __init__(self, data): super().__init__() self.data = data * 20 def __iter__(self): for x in self.data: yield x def __len__(self): return len(self.data) tokenizer = AutoTokenizer.from_pretrained("bert-base-cased") model = AutoModelForMaskedLM.from_pretrained("bert-base-cased") train_args = TrainingArguments( output_dir="output", num_train_epochs=3, per_device_train_batch_size=2, ) dc = DataCollatorForLanguageModeling(tokenizer=tokenizer) trainer = Trainer( train_dataset=ExampleDataset(data), model=model, args=train_args, data_collator=dc, ) trainer.train() ``` I run the above script with the command `torchrun --nproc-per-node 2 script.py`. This results in the following error. 
``` Traceback (most recent call last): File "fm_model/data/scratch.py", line 242, in <module> trainer.train() File "/opt/conda/envs/fmmodel/lib/python3.8/site-packages/transformers/trainer.py", line 1556, in train return inner_training_loop( File "/opt/conda/envs/fmmodel/lib/python3.8/site-packages/transformers/trainer.py", line 1816, in _inner_training_loop for step, inputs in enumerate(epoch_iterator): File "/opt/conda/envs/fmmodel/lib/python3.8/site-packages/accelerate/data_loader.py", line 597, in __iter__ next_batch, next_batch_info = self._fetch_batches(main_iterator) File "/opt/conda/envs/fmmodel/lib/python3.8/site-packages/accelerate/data_loader.py", line 528, in _fetch_batches batch = concatenate(batches, dim=0) File "/opt/conda/envs/fmmodel/lib/python3.8/site-packages/accelerate/utils/operations.py", line 496, in concatenate return type(data[0])({k: concatenate([d[k] for d in data], dim=dim) for k in data[0].keys()}) File "/opt/conda/envs/fmmodel/lib/python3.8/site-packages/accelerate/utils/operations.py", line 496, in <dictcomp> return type(data[0])({k: concatenate([d[k] for d in data], dim=dim) for k in data[0].keys()}) File "/opt/conda/envs/fmmodel/lib/python3.8/site-packages/accelerate/utils/operations.py", line 499, in concatenate return torch.cat(data, dim=dim) RuntimeError: Sizes of tensors must match except in dimension 0. Expected size 1 but got size 6 for tensor number 1 in the list. ``` This is due to the fact that in `Trainer` there are no arguments that can be passed to prepare the dataloader with [split_batches](https://github.com/huggingface/accelerate/blob/48d96319e0033fb8c8979072d97edf3995639029/src/accelerate/data_loader.py#L515) so this errors out when running this [line](https://github.com/huggingface/accelerate/blob/69e4c3c54da3201eda288b500d138761e7a5221c/src/accelerate/data_loader.py#L481). This occurs since there is no padding done across batches before these are concatenated together. In order to be able to use an iterable dataset with Trainer, something probably needs to be changed in accelerate or the Trainer to enable distributed dataloading when the batches end up being different lengths. ### Expected behavior 1. Automatic padding in accelerate when the batches produced have different lengths OR 2. A way to specify split_batches where a full batch is produced then split for all the different processes
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26548/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/transformers/issues/26548/timeline
completed
null
null
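The workaround described in the comments of the record above boils down to two `TrainingArguments` flags. A minimal sketch, assuming a transformers release in which `TrainingArguments` exposes `dispatch_batches`/`split_batches` (older releases may need the equivalent `Accelerator` settings instead); the output directory and batch size are placeholders:

```python
from transformers import TrainingArguments

# Either flag avoids concatenating ragged batches across processes when the
# train dataset is an IterableDataset.
args = TrainingArguments(
    output_dir="output",
    per_device_train_batch_size=2,
    dispatch_batches=False,  # shard the iterable dataset per process (the pre-accelerate-integration behaviour)
    # split_batches=True,    # alternative: build one full batch on the main process and split it;
    #                        # per_device_train_batch_size must then be a round multiple
    #                        # of the number of processes
)
```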
https://api.github.com/repos/huggingface/transformers/issues/26547
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26547/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26547/comments
https://api.github.com/repos/huggingface/transformers/issues/26547/events
https://github.com/huggingface/transformers/issues/26547
1,922,422,051
I_kwDOCUB6oc5yldUj
26,547
[SpeechT5] Decode function strips space after special token
{ "login": "xenova", "id": 26504141, "node_id": "MDQ6VXNlcjI2NTA0MTQx", "avatar_url": "https://avatars.githubusercontent.com/u/26504141?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xenova", "html_url": "https://github.com/xenova", "followers_url": "https://api.github.com/users/xenova/followers", "following_url": "https://api.github.com/users/xenova/following{/other_user}", "gists_url": "https://api.github.com/users/xenova/gists{/gist_id}", "starred_url": "https://api.github.com/users/xenova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xenova/subscriptions", "organizations_url": "https://api.github.com/users/xenova/orgs", "repos_url": "https://api.github.com/users/xenova/repos", "events_url": "https://api.github.com/users/xenova/events{/privacy}", "received_events_url": "https://api.github.com/users/xenova/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[ { "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false } ]
[ "Thanks for reporting! This is happening because:\r\n```python \r\n def convert_tokens_to_string(self, tokens):\r\n \"\"\"Converts a sequence of tokens (string) in a single string.\"\"\"\r\n current_sub_tokens = []\r\n out_string = \"\"\r\n for token in tokens:\r\n # make sure that special tokens are not decoded using sentencepiece model\r\n if token in self.all_special_tokens:\r\n out_string += self.sp_model.decode(current_sub_tokens) + token\r\n current_sub_tokens = []\r\n else:\r\n current_sub_tokens.append(token)\r\n out_string += self.sp_model.decode(current_sub_tokens)\r\n return out_string.strip()\r\n```\r\n\r\npasses the inputs to the sentencepiece model after they are split, thus what the `self.sp_model` sees is the following:\r\n1. ['▁', 'a', '▁']\r\n2. ['▁', 'b']\r\nand thus the prefix space will be removed for both. \r\nThis needs a fix 🎐 " ]
1,696
1,705
1,705
CONTRIBUTOR
null
### System Info - `transformers` version: 4.34.0.dev0 - Platform: Windows-10-10.0.22621-SP0 - Python version: 3.8.1 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.3 - Accelerate version: 0.23.0 - Accelerate config: not found - PyTorch version (GPU?): 1.12.1+cu116 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @ArthurZucker ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. First load the speecht5 tokenizer ```py from transformers import SpeechT5Tokenizer tokenizer = SpeechT5Tokenizer.from_pretrained('microsoft/speecht5_tts') ids = tokenizer.encode("a = b") # [4, 7, 4, 3, 4, 25, 2] (3 = unknown token, 4 = metaspace) ``` 2. Convert ids to tokens, showing that metaspace is added before and after the unknown token ```py tokenizer.convert_ids_to_tokens(ids) # ['▁', 'a', '▁', '<unk>', '▁', 'b', '</s>'] (metaspace before and after unknown) ``` 3. Decode, showing the space being removed after the unknown token. ```py tokenizer.decode(ids) # "a <unk>b</s>" (no space after <unk>) ``` Seems to be caused by this `strip`: https://github.com/huggingface/transformers/blob/9ed538f2e67ee10323d96c97284cf83d44f0c507/src/transformers/models/speecht5/tokenization_speecht5.py#L192 Related to https://github.com/huggingface/tokenizers/issues/826 ### Expected behavior The decoded string should be `"a <unk> b</s>"` (w/ a space after <unk>)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26547/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26547/timeline
completed
null
null
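A hedged sketch of one possible direction for the fix flagged in the comment above: a standalone variant of `convert_tokens_to_string` (the helper below takes the sentencepiece model and the special-token list as arguments purely for illustration; this is not the actual method signature, nor necessarily the fix that was merged) that re-inserts the leading space sentencepiece drops when a chunk starting with the metaspace character follows a special token:

```python
SPIECE_UNDERLINE = "▁"

def convert_tokens_to_string(tokens, sp_model, all_special_tokens):
    """Illustrative sketch only: decode chunk by chunk as before, but put back the
    leading space that sentencepiece strips when a chunk starts with "▁" and text
    has already been emitted (e.g. right after a special token such as <unk>)."""

    def flush(chunk, out):
        decoded = sp_model.decode(chunk)
        if out and chunk and chunk[0].startswith(SPIECE_UNDERLINE):
            decoded = " " + decoded
        return out + decoded

    out_string, current_sub_tokens = "", []
    for token in tokens:
        if token in all_special_tokens:
            # decode the accumulated sub-tokens, then append the special token verbatim
            out_string = flush(current_sub_tokens, out_string) + token
            current_sub_tokens = []
        else:
            current_sub_tokens.append(token)
    return flush(current_sub_tokens, out_string)
```

With the tokens from the reproduction, `['▁', 'a', '▁', '<unk>', '▁', 'b', '</s>']`, this sketch would yield `"a <unk> b</s>"` instead of `"a <unk>b</s>"`.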
https://api.github.com/repos/huggingface/transformers/issues/26546
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26546/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26546/comments
https://api.github.com/repos/huggingface/transformers/issues/26546/events
https://github.com/huggingface/transformers/pull/26546
1,922,338,876
PR_kwDOCUB6oc5bsfUF
26,546
[docs] Update to scripts building index.md
{ "login": "MKhalusova", "id": 1065417, "node_id": "MDQ6VXNlcjEwNjU0MTc=", "avatar_url": "https://avatars.githubusercontent.com/u/1065417?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MKhalusova", "html_url": "https://github.com/MKhalusova", "followers_url": "https://api.github.com/users/MKhalusova/followers", "following_url": "https://api.github.com/users/MKhalusova/following{/other_user}", "gists_url": "https://api.github.com/users/MKhalusova/gists{/gist_id}", "starred_url": "https://api.github.com/users/MKhalusova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MKhalusova/subscriptions", "organizations_url": "https://api.github.com/users/MKhalusova/orgs", "repos_url": "https://api.github.com/users/MKhalusova/repos", "events_url": "https://api.github.com/users/MKhalusova/events{/privacy}", "received_events_url": "https://api.github.com/users/MKhalusova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "> Super nice, I think it's a lot cleaner like this! ✨\r\n> \r\n> It looks like some models (like Llama 2, Flan T5, DiT) are missing though.\r\n\r\nThe reason they are missing is that the table is based on the model classes. And in case of the models you mentioned, Llama 2 uses the same implementation as Llama, Flan T5 uses the same implementation as T5, and DiT's architecture is equivalent to that of BEiT. \r\nI can check if I can automatically find models like this, if not, I'll add them as \"special case\" constants.", "Yep, I think these models should be part of the table and we should make sure we can easily add more models there that share the same modeling file! ", "I've added a dict of models that use the same config as some \"base\" model. This way the table is complete. \r\nI couldn't find this mapping anywhere else, so it's hardcoded here. \r\nIf these models were included in `transformers_module.models.auto.configuration_auto.CONFIG_MAPPING_NAMES`, it would not need to be hardcoded here, however, that dictionary is used in so many places, I'm hesitant to modify it. \r\n\r\ncc @stevhliu " ]
1,696
1,696
1,696
CONTRIBUTOR
null
The `docs/index.md` file currently contains two auto-generated parts: the list of models (same as in README), and a table of models with supported frameworks. Due to the number of models available in transformers (200+), the list and the table have become quite large, and there have been internal discussions about removing the list of models from the `index.md`. This PR adds the following changes: - removes the autogenerated model list from the `index.md` and updates the script so it's no longer added - modifies the script that generates the table to make model names links to corresponding model_doc. The model lists in the main README and localized READMEs remain as is.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26546/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26546/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26546", "html_url": "https://github.com/huggingface/transformers/pull/26546", "diff_url": "https://github.com/huggingface/transformers/pull/26546.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26546.patch", "merged_at": 1696515642000 }
https://api.github.com/repos/huggingface/transformers/issues/26545
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26545/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26545/comments
https://api.github.com/repos/huggingface/transformers/issues/26545/events
https://github.com/huggingface/transformers/pull/26545
1,922,248,387
PR_kwDOCUB6oc5bsL1R
26,545
[RFC, Logging] Change warning to info
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,696
1,696
1,696
MEMBER
null
# What does this PR do? Change warning to info when adding a new text embedding. The reason is that this warning is triggered every time someone runs: which confuses users: https://github.com/huggingface/diffusers/issues/5212. I don't think we should use `pad_to_multiple` here as it would mean that we add 8 new tokens every time we call `load_textual_inversion`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26545/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26545/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26545", "html_url": "https://github.com/huggingface/transformers/pull/26545", "diff_url": "https://github.com/huggingface/transformers/pull/26545.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26545.patch", "merged_at": 1696316139000 }
https://api.github.com/repos/huggingface/transformers/issues/26544
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26544/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26544/comments
https://api.github.com/repos/huggingface/transformers/issues/26544/events
https://github.com/huggingface/transformers/issues/26544
1,922,208,467
I_kwDOCUB6oc5ykpLT
26,544
RWKV v4 not working with device_map auto and 4 GPUs
{ "login": "Epliz", "id": 63452361, "node_id": "MDQ6VXNlcjYzNDUyMzYx", "avatar_url": "https://avatars.githubusercontent.com/u/63452361?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Epliz", "html_url": "https://github.com/Epliz", "followers_url": "https://api.github.com/users/Epliz/followers", "following_url": "https://api.github.com/users/Epliz/following{/other_user}", "gists_url": "https://api.github.com/users/Epliz/gists{/gist_id}", "starred_url": "https://api.github.com/users/Epliz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Epliz/subscriptions", "organizations_url": "https://api.github.com/users/Epliz/orgs", "repos_url": "https://api.github.com/users/Epliz/repos", "events_url": "https://api.github.com/users/Epliz/events{/privacy}", "received_events_url": "https://api.github.com/users/Epliz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Maybe of interest to @SunMarc when back from leave :)", "Hi @Epliz\r\n\r\nI have ran:\r\n\r\n```python\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\r\n\r\ncheckpoint = \"RWKV/rwkv-4-169m-pile\"\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(checkpoint)\r\nmodel = AutoModelForCausalLM.from_pretrained(checkpoint, device_map=\"auto\")\r\n\r\n\r\nfrom transformers import pipeline\r\npipe = pipeline(task=\"text-generation\", model=model, tokenizer=tokenizer, do_sample=False,max_new_tokens=1024, use_cache=True)\r\n\r\nprint(pipe(\"What is life?\"))\r\n```\r\nOn 2x NVIDIA T4 GPUs and it seemed to work fine. I am using the same accelerate version as the one you shared. The only difference is that I am using transformers main branch - perhaps you can try to switch to transformers main and see if the issue persists?\r\n\r\nCan you also print `model.hf_device_map` and `set(model.hf_device_map.values())` ?", "Thank you @younesbelkada for commenting.\r\nI think we are getting somewhere.\r\nHere is what I observe: it doesn't work with the released & git versions of HuggingFace Transformers for 4 GPUs.\r\nWhat I can see from the device maps is that the GPU0 is not used? The device maps in both cases are the following:\r\n```\r\n>>>model.hf_device_map\r\n{'rwkv.embeddings': 1, 'rwkv.blocks.0': 1, 'rwkv.blocks.1': 2, 'rwkv.blocks.2': 2, 'rwkv.blocks.3': 2, 'rwkv.blocks.4': 2, 'rwkv.blocks.5': 2, 'rwkv.blocks.6': 2, 'rwkv.blocks.7': 3, 'rwkv.blocks.8': 3, 'rwkv.blocks.9': 3, 'rwkv.blocks.10': 3, 'rwkv.blocks.11': 3, 'rwkv.ln_out': 3, 'head': 3}\r\n\r\n>>>set(model.hf_device_map.values())\r\n{1, 2, 3}\r\n```\r\nWhen I use only 2 GPUs (by changing CUDA_VISIBLE_DEVICES), the generation works, and the device map uses GPU0:\r\n```\r\n>>> model.hf_device_map\r\n{'rwkv.embeddings': 0, 'rwkv.blocks.0': 0, 'rwkv.blocks.1': 0, 'rwkv.blocks.2': 1, 'rwkv.blocks.3': 1, 'rwkv.blocks.4': 1, 'rwkv.blocks.5': 1, 'rwkv.blocks.6': 1, 'rwkv.blocks.7': 1, 'rwkv.blocks.8': 1, 'rwkv.blocks.9': 1, 'rwkv.blocks.10': 1, 'rwkv.blocks.11': 1, 'rwkv.ln_out': 1, 'head': 1}\r\n>>> set(model.hf_device_map.values())\r\n{0, 1}\r\n```\r\nBut it seems to get weirder when trying to use 3 GPUs as the generation works, but the device map seems to show that GPU0 is not used:\r\n```\r\n>>> model.hf_device_map\r\n{'rwkv.embeddings': 1, 'rwkv.blocks.0': 1, 'rwkv.blocks.1': 1, 'rwkv.blocks.2': 1, 'rwkv.blocks.3': 2, 'rwkv.blocks.4': 2, 'rwkv.blocks.5': 2, 'rwkv.blocks.6': 2, 'rwkv.blocks.7': 2, 'rwkv.blocks.8': 2, 'rwkv.blocks.9': 2, 'rwkv.blocks.10': 2, 'rwkv.blocks.11': 2, 'rwkv.ln_out': 2, 'head': 2}\r\n>>> set(model.hf_device_map.values())\r\n{1, 2}\r\n```", "Even if the model is only on device 1 and 2, it should work. This is a problem on pytorch [side](https://github.com/pytorch/pytorch/issues/21819). If the model fits in two GPUs, I would suggest to only use 2 GPUs as there is no reason to split it across 3 GPUs. It will be faster because there will be less communication overhead. ", "Thank you @SunMarc for your reply.\r\n\r\nDo you think it might be a pytorch issue even though it works with transformers==4,29.1 but not 4.29.2 and afterwards (all other dependencies staying the same)?\r\nI see that in between those versions, there were changes to make sure RWKV CUDA kernels were included in the package.\r\n\r\nBest regards,\r\nEpliz", "Thanks for your investigation @Epliz. This is indeed strange. I will try to reproduce this with 4 gpus. Maybe the number of gpus matters. 
In the meantime, I would suggest using only two GPUs since the model is not that big. ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,696
1,700
1,700
NONE
null
### System Info - `transformers` version: 4.33.3 - Platform: Linux-4.18.0-477.13.1.el8_8.x86_64-x86_64-with-glibc2.2.5 - Python version: 3.8.16 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - Accelerate version: 0.23.0 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes, 4 GPUs (Nvidia A10) - Using distributed or parallel set-up in script?: nothing custom ### Who can help? @ArthurZucker , @younesbelkada , @pacman100 ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Try the following scripts, either loading the model with the first approach or the second: case 1 (works): ``` from transformers import AutoModelForCausalLM, AutoTokenizer checkpoint = "RWKV/rwkv-4-169m-pile" tokenizer = AutoTokenizer.from_pretrained(checkpoint) # Case 1: works model = AutoModelForCausalLM.from_pretrained(checkpoint) model.to("cuda") from transformers import pipeline pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer, do_sample=False,max_new_tokens=1024, use_cache=True, device=0) pipe("What is life?") ``` case 2 (doesn't work): ``` from transformers import AutoModelForCausalLM, AutoTokenizer checkpoint = "RWKV/rwkv-4-169m-pile" tokenizer = AutoTokenizer.from_pretrained(checkpoint) model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto") from transformers import pipeline pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer, do_sample=False,max_new_tokens=1024, use_cache=True) pipe("What is life?") ``` error (when setting CUDA_LAUNCH_BLOCKING): ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<stdin>", line 4, in time_func File "<stdin>", line 1, in <lambda> File "<DIR>venv38/lib64/python3.8/site-packages/transformers/pipelines/text_generation.py", line 205, in __call__ return super().__call__(text_inputs, **kwargs) File "<DIR>venv38/lib64/python3.8/site-packages/transformers/pipelines/base.py", line 1140, in __call__ return self.run_single(inputs, preprocess_params, forward_params, postprocess_params) File "<DIR>venv38/lib64/python3.8/site-packages/transformers/pipelines/base.py", line 1147, in run_single model_outputs = self.forward(model_inputs, **forward_params) File "<DIR>venv38/lib64/python3.8/site-packages/transformers/pipelines/base.py", line 1046, in forward model_outputs = self._forward(model_inputs, **forward_params) File "<DIR>venv38/lib64/python3.8/site-packages/transformers/pipelines/text_generation.py", line 268, in _forward generated_sequence = self.model.generate(input_ids=input_ids, attention_mask=attention_mask, **generate_kwargs) File "<DIR>venv38/lib64/python3.8/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context return func(*args, **kwargs) File "<DIR>venv38/lib64/python3.8/site-packages/transformers/generation/utils.py", line 1602, in generate return self.greedy_search( File "<DIR>venv38/lib64/python3.8/site-packages/transformers/generation/utils.py", line 2450, in greedy_search outputs = self( File "<DIR>venv38/lib64/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File 
"<DIR>venv38/lib64/python3.8/site-packages/accelerate/hooks.py", line 165, in new_forward output = old_forward(*args, **kwargs) File "<DIR>venv38/lib64/python3.8/site-packages/transformers/models/rwkv/modeling_rwkv.py", line 815, in forward rwkv_outputs = self.rwkv( File "<DIR>venv38/lib64/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "<DIR>venv38/lib64/python3.8/site-packages/transformers/models/rwkv/modeling_rwkv.py", line 690, in forward hidden_states, state, attentions = block( File "<DIR>venv38/lib64/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "<DIR>venv38/lib64/python3.8/site-packages/accelerate/hooks.py", line 165, in new_forward output = old_forward(*args, **kwargs) File "<DIR>venv38/lib64/python3.8/site-packages/transformers/models/rwkv/modeling_rwkv.py", line 384, in forward attention, state = self.attention(self.ln1(hidden), state=state, use_cache=use_cache) File "<DIR>venv38/lib64/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "<DIR>venv38/lib64/python3.8/site-packages/accelerate/hooks.py", line 165, in new_forward output = old_forward(*args, **kwargs) File "<DIR>venv38/lib64/python3.8/site-packages/transformers/models/rwkv/modeling_rwkv.py", line 320, in forward state[2][:, :, self.layer_id] = layer_state[0] RuntimeError: CUDA error: an illegal memory access was encountered Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions. ``` ### Expected behavior Both should work
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26544/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26544/timeline
completed
null
null
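Following the suggestion in the comments above to keep this small checkpoint on just two GPUs, a minimal sketch of both routes (the device indices and the "10GiB" caps are placeholders, not recommended values):

```python
import os

# Option A: restrict which GPUs the process can see at all (must be set before CUDA is initialised).
os.environ.setdefault("CUDA_VISIBLE_DEVICES", "0,1")

from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

checkpoint = "RWKV/rwkv-4-169m-pile"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

# Option B: keep all GPUs visible but cap memory so device_map="auto"
# only places weights on GPUs 0 and 1.
model = AutoModelForCausalLM.from_pretrained(
    checkpoint,
    device_map="auto",
    max_memory={0: "10GiB", 1: "10GiB"},
)

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, max_new_tokens=64)
print(pipe("What is life?"))
```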
https://api.github.com/repos/huggingface/transformers/issues/26543
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26543/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26543/comments
https://api.github.com/repos/huggingface/transformers/issues/26543/events
https://github.com/huggingface/transformers/issues/26543
1,922,188,156
I_kwDOCUB6oc5ykkN8
26,543
new T5 tokenisation unexpected behaviour immediately after added special tokens
{ "login": "mickeymickeymonkey", "id": 142171154, "node_id": "U_kgDOCHlcEg", "avatar_url": "https://avatars.githubusercontent.com/u/142171154?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mickeymickeymonkey", "html_url": "https://github.com/mickeymickeymonkey", "followers_url": "https://api.github.com/users/mickeymickeymonkey/followers", "following_url": "https://api.github.com/users/mickeymickeymonkey/following{/other_user}", "gists_url": "https://api.github.com/users/mickeymickeymonkey/gists{/gist_id}", "starred_url": "https://api.github.com/users/mickeymickeymonkey/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mickeymickeymonkey/subscriptions", "organizations_url": "https://api.github.com/users/mickeymickeymonkey/orgs", "repos_url": "https://api.github.com/users/mickeymickeymonkey/repos", "events_url": "https://api.github.com/users/mickeymickeymonkey/events{/privacy}", "received_events_url": "https://api.github.com/users/mickeymickeymonkey/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey! This is expected. If you use `tokenizer.tokenize` instead of encode decode you will have a better understanding of what is happening. The `legacy = False` behave properly. And the tokens are by default stripping left and right. \r\n\r\n<details>\r\n <summary> A small snippet: </summary>\r\n\r\n```python \r\nIn [2]: from transformers import AutoTokenizer\r\n ...: tokenizer1 = AutoTokenizer.from_pretrained('t5-base', use_fast=False, legacy=True)\r\n ...: tokenizer2 = AutoTokenizer.from_pretrained('t5-base', use_fast=False, legacy=False)\r\n ...: add_tokens = ['[EXT1]', '[test_test]']\r\n ...: words = ['terminal', 'many', 'sense']\r\n ...: texts = [ '{}', '[EXT1] {} [test_test]', '[EXT1] {} [test_test] {}', '[EXT1] {} [test_test] {} {}', '[EXT1] {}', '{} [EXT1]', '{} [EXT1] {}']\r\n ...: tokenizer1.add_tokens(add_tokens, special_tokens=True)\r\n ...: tokenizer2.add_tokens(add_tokens, special_tokens=True)\r\n ...: for xx, tok in [('Legacy', tokenizer1), ('New', tokenizer2)]:\r\n ...: print(xx + ' tokenizer')\r\n ...: for __text in texts:\r\n ...: for word in words:\r\n ...: try: text = __text.format(word)\r\n ...: except: text = __text.format(word, word, word)\r\n ...: else: text = __text.format(word, word)\r\n ...: print(f'\"{text}\" -->')\r\n ...: print('\\t\\t', tok.tokenize(text))\r\n ...: print()\r\n ...:\r\n/home/arthur/transformers/src/transformers/models/t5/tokenization_t5.py:237: FutureWarning: This tokenizer was incorrectly instantiated with a model max length of 512 which will be corrected in Transformers v5.\r\nFor now, this behavior is kept to avoid breaking backwards compatibility when padding/encoding with `truncation is True`.\r\n- Be aware that you SHOULD NOT rely on t5-base automatically truncating your input to 512 when padding/encoding.\r\n- If you want to encode/pad to sequences longer than 512 you can either instantiate this tokenizer with `model_max_length` or pass `max_length` when encoding/padding.\r\n- To avoid this warning, please instantiate this tokenizer with `model_max_length` set to your preferred value.\r\n warnings.warn(\r\nLegacy tokenizer\r\n\"terminal\" -->\r\n\t\t ['▁terminal']\r\n\"many\" -->\r\n\t\t ['▁many']\r\n\"sense\" -->\r\n\t\t ['▁sense']\r\n\"[EXT1] terminal [test_test]\" -->\r\n\t\t ['[EXT1]', '▁terminal', '[test_test]']\r\n\"[EXT1] many [test_test]\" -->\r\n\t\t ['[EXT1]', '▁many', '[test_test]']\r\n\"[EXT1] sense [test_test]\" -->\r\n\t\t ['[EXT1]', '▁sense', '[test_test]']\r\n\"[EXT1] terminal [test_test] terminal\" -->\r\n\t\t ['[EXT1]', '▁terminal', '[test_test]', '▁terminal']\r\n\"[EXT1] many [test_test] many\" -->\r\n\t\t ['[EXT1]', '▁many', '[test_test]', '▁many']\r\n\"[EXT1] sense [test_test] sense\" -->\r\n\t\t ['[EXT1]', '▁sense', '[test_test]', '▁sense']\r\n\"[EXT1] terminal [test_test] terminal terminal\" -->\r\n\t\t ['[EXT1]', '▁terminal', '[test_test]', '▁terminal', '▁terminal']\r\n\"[EXT1] many [test_test] many many\" -->\r\n\t\t ['[EXT1]', '▁many', '[test_test]', '▁many', '▁many']\r\n\"[EXT1] sense [test_test] sense sense\" -->\r\n\t\t ['[EXT1]', '▁sense', '[test_test]', '▁sense', '▁sense']\r\n\"[EXT1] terminal\" -->\r\n\t\t ['[EXT1]', '▁terminal']\r\n\"[EXT1] many\" -->\r\n\t\t ['[EXT1]', '▁many']\r\n\"[EXT1] sense\" -->\r\n\t\t ['[EXT1]', '▁sense']\r\n\"terminal [EXT1]\" -->\r\n\t\t ['▁terminal', '[EXT1]']\r\n\"many [EXT1]\" -->\r\n\t\t ['▁many', '[EXT1]']\r\n\"sense [EXT1]\" -->\r\n\t\t ['▁sense', '[EXT1]']\r\n\"terminal [EXT1] terminal\" -->\r\n\t\t ['▁terminal', '[EXT1]', '▁terminal']\r\n\"many [EXT1] many\" 
-->\r\n\t\t ['▁many', '[EXT1]', '▁many']\r\n\"sense [EXT1] sense\" -->\r\n\t\t ['▁sense', '[EXT1]', '▁sense']\r\n\r\nNew tokenizer\r\n\"terminal\" -->\r\n\t\t ['▁terminal']\r\n\"many\" -->\r\n\t\t ['▁many']\r\n\"sense\" -->\r\n\t\t ['▁sense']\r\n\"[EXT1] terminal [test_test]\" -->\r\n\t\t ['[EXT1]', 'termin', 'al', '[test_test]']\r\n\"[EXT1] many [test_test]\" -->\r\n\t\t ['[EXT1]', 'man', 'y', '[test_test]']\r\n\"[EXT1] sense [test_test]\" -->\r\n\t\t ['[EXT1]', 's', 'ense', '[test_test]']\r\n\"[EXT1] terminal [test_test] terminal\" -->\r\n\t\t ['[EXT1]', 'termin', 'al', '[test_test]', 'termin', 'al']\r\n\"[EXT1] many [test_test] many\" -->\r\n\t\t ['[EXT1]', 'man', 'y', '[test_test]', 'man', 'y']\r\n\"[EXT1] sense [test_test] sense\" -->\r\n\t\t ['[EXT1]', 's', 'ense', '[test_test]', 's', 'ense']\r\n\"[EXT1] terminal [test_test] terminal terminal\" -->\r\n\t\t ['[EXT1]', 'termin', 'al', '[test_test]', 'termin', 'al', '▁terminal']\r\n\"[EXT1] many [test_test] many many\" -->\r\n\t\t ['[EXT1]', 'man', 'y', '[test_test]', 'man', 'y', '▁many']\r\n\"[EXT1] sense [test_test] sense sense\" -->\r\n\t\t ['[EXT1]', 's', 'ense', '[test_test]', 's', 'ense', '▁sense']\r\n\"[EXT1] terminal\" -->\r\n\t\t ['[EXT1]', 'termin', 'al']\r\n\"[EXT1] many\" -->\r\n\t\t ['[EXT1]', 'man', 'y']\r\n\"[EXT1] sense\" -->\r\n\t\t ['[EXT1]', 's', 'ense']\r\n\"terminal [EXT1]\" -->\r\n\t\t ['▁terminal', '[EXT1]']\r\n\"many [EXT1]\" -->\r\n\t\t ['▁many', '[EXT1]']\r\n\"sense [EXT1]\" -->\r\n\t\t ['▁sense', '[EXT1]']\r\n\"terminal [EXT1] terminal\" -->\r\n\t\t ['▁terminal', '[EXT1]', 'termin', 'al']\r\n\"many [EXT1] many\" -->\r\n\t\t ['▁many', '[EXT1]', 'man', 'y']\r\n\"sense [EXT1] sense\" -->\r\n\t\t ['▁sense', '[EXT1]', 's', 'ense']\r\n```\r\n</details>\r\n\r\nReally inviting you to read the documentation on what is happening here! 🤗 \r\nA fix? \r\n```python \r\nfrom transformers import AutoTokenizer, AddedToken\r\ntokenizer1 = AutoTokenizer.from_pretrained('t5-base', use_fast=False, legacy=True)\r\ntokenizer2 = AutoTokenizer.from_pretrained('t5-base', use_fast=False, legacy=False)\r\nadd_tokens = [AddedToken('[EXT1]', rstrip = False, lstrip = False), AddedToken('[test_test]', rstrip = False, rstrip= False)]\r\ntokenizer1.add_tokens(add_tokens, special_tokens=True)\r\ntokenizer2.add_tokens(add_tokens, special_tokens=True)\r\n```", "ok this is fixed. you are right that the new T5 tokenisation works as expected. the problem is having sentencepiece installed at the same time. \r\n\r\nyour fix did not work for me at first, but when I tried on another environment it was ok. after looking, I found out that the 2nd environment where it worked did not have sentencepiece package installed. \r\n\r\nwhen I installed sentencepiece it asked for protobuf installation, and then the unexpected behaviour happened again. \r\n\r\nin summary: to solve i had to make sure I did not have sentencepiece installed. ", "That is very strange. If you do not have sentencepiece, you cannot use the `slow ` tokenizers. If you use the `fast` tokenizer, you will still have the issue (it's expected for now). \r\n`use_fast = False` would only work if you have sentencepiece, and the `legacy = False` is also only for sentencepiece. ", "you're right. I did not notice that. without sentencepiece, the use_fast = False argument is ignored and the fast tokenizer is loaded. \r\n\r\nthe AddedToken method is not working for me. it is a good suggestion and seems to be logical way to get such control if this behaviour for special tokens is desired. 
\r\n\r\nit seems to be because in tokenization_utils.py the Trie's split method returns the word coming immediately after added special tokens without the \"▁\" already, so the AddedToken check for rstrip/lstrip does not hit. Here's what I mean, e.g. \r\n\r\n```\r\n561 tokens = self.tokens_trie.split(text)\r\n562 # added\r\n563 print('\\t\\t Text:', text)\r\n564 print('\\t\\t Tokens:', tokens)\r\n``` \r\n\r\n```\r\nInput: \"terminal [EXT1] terminal\" -->\r\n\t\tText: ▁terminal [EXT1] terminal\r\n\t\tTokens: ['▁terminal ', '[EXT1]', ' terminal']\r\n\r\nInput: \"many [EXT1] many\" -->\r\n\t\tText: ▁many [EXT1] many\r\n\t\tTokens: ['▁many ', '[EXT1]', ' many']\r\n\r\nInput: \"sense [EXT1] sense\" -->\r\n\t\tText: ▁sense [EXT1] sense\r\n\t\tTokens: ['▁sense ', '[EXT1]', ' sense']\r\n```\r\n\r\n", "No when you use a fast tokenizer this tire is not used and the stripping / splitting logic is done in rust! But this issue was fixed by #26538 !", "(Note that T5 will always tokenize `'▁'` as `''` so you won't get the space before exist even if you do not strip", "ok I will install from source and try it. thank you for the explanations and the work on the tokenizer, you're amazing! 🥳", "🤗 thanks for bearing with the changes!", "no worries, great work 🤗. just tested with installation from source and it works for me (with slow, and legacy=False). thank you! 🙏" ]
1,696
1,696
1,696
NONE
null
### System Info System Info transformers version: 4.33.1 Python version: 3.8.16 PyTorch version (GPU?): 2.0.1+cu118 ### Who can help? @ArthurZucker ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ``` from transformers import AutoTokenizer tokenizer1 = AutoTokenizer.from_pretrained('t5-base', use_fast=False, legacy=True) tokenizer2 = AutoTokenizer.from_pretrained('t5-base', use_fast=False, legacy=False) add_tokens = ['[EXT1]', '[test_test]'] words = ['terminal', 'many', 'sense'] texts = [ '{}', '[EXT1] {} [test_test]', '[EXT1] {} [test_test] {}', '[EXT1] {} [test_test] {} {}', '[EXT1] {}', '{} [EXT1]', '{} [EXT1] {}'] tokenizer1.add_tokens(add_tokens, special_tokens=True) tokenizer2.add_tokens(add_tokens, special_tokens=True) for xx, tok in [('Legacy', tokenizer1), ('New', tokenizer2)]: print(xx + ' tokenizer') for __text in texts: for word in words: try: text = __text.format(word) except: text = __text.format(word, word, word) else: text = __text.format(word, word) print(f'"{text}" -->') print('\t\t', [tok.decode(t) for t in tok.encode(text)]) print() ``` ``` Legacy tokenizer "terminal" --> ['terminal', '</s>'] "many" --> ['many', '</s>'] "sense" --> ['sense', '</s>'] "[EXT1] terminal [test_test]" --> ['[EXT1]', 'terminal', '[test_test]', '</s>'] "[EXT1] many [test_test]" --> ['[EXT1]', 'many', '[test_test]', '</s>'] "[EXT1] sense [test_test]" --> ['[EXT1]', 'sense', '[test_test]', '</s>'] "[EXT1] terminal [test_test] terminal" --> ['[EXT1]', 'terminal', '[test_test]', 'terminal', '</s>'] "[EXT1] many [test_test] many" --> ['[EXT1]', 'many', '[test_test]', 'many', '</s>'] "[EXT1] sense [test_test] sense" --> ['[EXT1]', 'sense', '[test_test]', 'sense', '</s>'] "[EXT1] terminal [test_test] terminal terminal" --> ['[EXT1]', 'terminal', '[test_test]', 'terminal', 'terminal', '</s>'] "[EXT1] many [test_test] many many" --> ['[EXT1]', 'many', '[test_test]', 'many', 'many', '</s>'] "[EXT1] sense [test_test] sense sense" --> ['[EXT1]', 'sense', '[test_test]', 'sense', 'sense', '</s>'] "[EXT1] terminal" --> ['[EXT1]', 'terminal', '</s>'] "[EXT1] many" --> ['[EXT1]', 'many', '</s>'] "[EXT1] sense" --> ['[EXT1]', 'sense', '</s>'] "terminal [EXT1]" --> ['terminal', '[EXT1]', '</s>'] "many [EXT1]" --> ['many', '[EXT1]', '</s>'] "sense [EXT1]" --> ['sense', '[EXT1]', '</s>'] "terminal [EXT1] terminal" --> ['terminal', '[EXT1]', 'terminal', '</s>'] "many [EXT1] many" --> ['many', '[EXT1]', 'many', '</s>'] "sense [EXT1] sense" --> ['sense', '[EXT1]', 'sense', '</s>'] New tokenizer "terminal" --> ['terminal', '</s>'] "many" --> ['many', '</s>'] "sense" --> ['sense', '</s>'] "[EXT1] terminal [test_test]" --> ['[EXT1]', 'termin', 'al', '[test_test]', '</s>'] "[EXT1] many [test_test]" --> ['[EXT1]', 'man', 'y', '[test_test]', '</s>'] "[EXT1] sense [test_test]" --> ['[EXT1]', 's', 'ense', '[test_test]', '</s>'] "[EXT1] terminal [test_test] terminal" --> ['[EXT1]', 'termin', 'al', '[test_test]', 'termin', 'al', '</s>'] "[EXT1] many [test_test] many" --> ['[EXT1]', 'man', 'y', '[test_test]', 'man', 'y', '</s>'] "[EXT1] sense [test_test] sense" --> ['[EXT1]', 's', 'ense', '[test_test]', 's', 'ense', '</s>'] "[EXT1] terminal [test_test] terminal terminal" --> ['[EXT1]', 'termin', 'al', '[test_test]', 'termin', 'al', 'terminal', '</s>'] "[EXT1] many [test_test] many many" --> ['[EXT1]', 'man', 'y', '[test_test]', 'man', 
'y', 'many', '</s>'] "[EXT1] sense [test_test] sense sense" --> ['[EXT1]', 's', 'ense', '[test_test]', 's', 'ense', 'sense', '</s>'] "[EXT1] terminal" --> ['[EXT1]', 'termin', 'al', '</s>'] "[EXT1] many" --> ['[EXT1]', 'man', 'y', '</s>'] "[EXT1] sense" --> ['[EXT1]', 's', 'ense', '</s>'] "terminal [EXT1]" --> ['terminal', '[EXT1]', '</s>'] "many [EXT1]" --> ['many', '[EXT1]', '</s>'] "sense [EXT1]" --> ['sense', '[EXT1]', '</s>'] "terminal [EXT1] terminal" --> ['terminal', '[EXT1]', 'termin', 'al', '</s>'] "many [EXT1] many" --> ['many', '[EXT1]', 'man', 'y', '</s>'] "sense [EXT1] sense" --> ['sense', '[EXT1]', 's', 'ense', '</s>'] ``` ### Expected behavior the non-legacy T5 tokenizer has unexpected behaviour for words immediately after added special tokens. these were one full token in the legacy tokenizer. e.g. "many", "terminal", "sense", but now become sequence of subwords. but after the first word following special token, it returns to expected behaviour, see the examples. Expected behaviour: tokenization should be the same for words after special tokens. this new behaviour affects ability to use the pretrained knowledge of the LM. at least a warning should be added for people who want to use their own special tokens seperators.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26543/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26543/timeline
completed
null
null
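For reference, a sketch of the `AddedToken` workaround proposed in the comments above (the quoted snippet passes `rstrip=` twice; the intent is presumably `lstrip=False, rstrip=False`). This assumes sentencepiece is installed so that `use_fast=False` really loads the slow tokenizer; the thread notes the behaviour was only fully corrected by the fix in #26538, so exact outputs depend on the installed version:

```python
from transformers import AutoTokenizer, AddedToken

tokenizer = AutoTokenizer.from_pretrained("t5-base", use_fast=False, legacy=False)
tokenizer.add_tokens(
    [
        AddedToken("[EXT1]", lstrip=False, rstrip=False),
        AddedToken("[test_test]", lstrip=False, rstrip=False),
    ],
    special_tokens=True,
)

# With the fix in place, the word following [EXT1] should tokenize as a single
# word-level piece again rather than splitting into sub-words (see the discussion above).
print(tokenizer.tokenize("terminal [EXT1] terminal"))
```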
https://api.github.com/repos/huggingface/transformers/issues/26542
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26542/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26542/comments
https://api.github.com/repos/huggingface/transformers/issues/26542/events
https://github.com/huggingface/transformers/pull/26542
1,922,073,470
PR_kwDOCUB6oc5brmAs
26,542
fix CLIPImageProcessor returns NaNs/Infs when input is a float tensor…
{ "login": "mtuan4i", "id": 13715393, "node_id": "MDQ6VXNlcjEzNzE1Mzkz", "avatar_url": "https://avatars.githubusercontent.com/u/13715393?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mtuan4i", "html_url": "https://github.com/mtuan4i", "followers_url": "https://api.github.com/users/mtuan4i/followers", "following_url": "https://api.github.com/users/mtuan4i/following{/other_user}", "gists_url": "https://api.github.com/users/mtuan4i/gists{/gist_id}", "starred_url": "https://api.github.com/users/mtuan4i/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mtuan4i/subscriptions", "organizations_url": "https://api.github.com/users/mtuan4i/orgs", "repos_url": "https://api.github.com/users/mtuan4i/repos", "events_url": "https://api.github.com/users/mtuan4i/events{/privacy}", "received_events_url": "https://api.github.com/users/mtuan4i/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "would you have a look at this @amyeroberts? or would you please let me know why it was ignored", "Hi @mtuan4i, apologies for the delay. I was off last month and so didn't see this PR. I'll review now. ", "Thank you. I think your fix is more reasonable so I will close this" ]
1,696
1,698
1,698
NONE
null
… or np.array filled with 0s or 1s, and do_rescale=False # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This PR fixes a bug that causes image processors like CLIPImageProcessor return NaN/Inf values when an input image is a float tensor/np.array filled with 0s or 1s, and `do_rescale=False` Reproduction code: ``` from transformers import CLIPImageProcessor import numpy as np processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-large-patch14") image = np.random.randint(0,2,(3,3,3)).astype(np.float32) print(processor(image, do_rescale=False)) ``` With such an input, `image_transform.resize` will return a uint8 image. Due to `do_rescale=False`, the input image for `image_transform.normalize` is also uint8, which makes `std=[0,0,0]` at https://github.com/huggingface/transformers/blob/1b8decb04c246ec8e1c4ba7f2749043d0876d24e/src/transformers/image_transforms.py#L391 This leads to division-by-zero when the image is normalized This PR fixes this bug simply by making `image_transform.resize` return a float image for this case ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Could you please review this PR @amyeroberts?
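To make the failure mode concrete, here is a standalone NumPy illustration of the dtype collapse described above (the mean/std values are taken to be CLIP's usual normalization constants; this is a sketch of the arithmetic only, not the library code): ```python import numpy as np

# Float normalization constants truncate to 0 when cast to a uint8 image dtype,
# so the subsequent (image - mean) / std produces inf/nan.
image = np.random.randint(0, 2, (3, 3, 3)).astype(np.uint8)  # resize output ends up uint8 in this case
mean = np.array([0.48145466, 0.4578275, 0.40821073])
std = np.array([0.26862954, 0.26130258, 0.27577711])

std_cast = std.astype(image.dtype)
print(std_cast)                   # -> [0 0 0]
print((image - mean) / std_cast)  # RuntimeWarning: divide by zero -> inf / nan
```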
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26542/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26542/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26542", "html_url": "https://github.com/huggingface/transformers/pull/26542", "diff_url": "https://github.com/huggingface/transformers/pull/26542.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26542.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/26541
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26541/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26541/comments
https://api.github.com/repos/huggingface/transformers/issues/26541/events
https://github.com/huggingface/transformers/issues/26541
1,922,003,916
I_kwDOCUB6oc5yj3PM
26,541
Using IterableDataset, Trainer never calls `set_epoch` to increment and resets the epochs to 0 at the beginning of each epoch.
{ "login": "ssharpe42", "id": 8136905, "node_id": "MDQ6VXNlcjgxMzY5MDU=", "avatar_url": "https://avatars.githubusercontent.com/u/8136905?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ssharpe42", "html_url": "https://github.com/ssharpe42", "followers_url": "https://api.github.com/users/ssharpe42/followers", "following_url": "https://api.github.com/users/ssharpe42/following{/other_user}", "gists_url": "https://api.github.com/users/ssharpe42/gists{/gist_id}", "starred_url": "https://api.github.com/users/ssharpe42/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ssharpe42/subscriptions", "organizations_url": "https://api.github.com/users/ssharpe42/orgs", "repos_url": "https://api.github.com/users/ssharpe42/repos", "events_url": "https://api.github.com/users/ssharpe42/events{/privacy}", "received_events_url": "https://api.github.com/users/ssharpe42/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "muellerzr", "id": 7831895, "node_id": "MDQ6VXNlcjc4MzE4OTU=", "avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muellerzr", "html_url": "https://github.com/muellerzr", "followers_url": "https://api.github.com/users/muellerzr/followers", "following_url": "https://api.github.com/users/muellerzr/following{/other_user}", "gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}", "starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions", "organizations_url": "https://api.github.com/users/muellerzr/orgs", "repos_url": "https://api.github.com/users/muellerzr/repos", "events_url": "https://api.github.com/users/muellerzr/events{/privacy}", "received_events_url": "https://api.github.com/users/muellerzr/received_events", "type": "User", "site_admin": false }
[ { "login": "muellerzr", "id": 7831895, "node_id": "MDQ6VXNlcjc4MzE4OTU=", "avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muellerzr", "html_url": "https://github.com/muellerzr", "followers_url": "https://api.github.com/users/muellerzr/followers", "following_url": "https://api.github.com/users/muellerzr/following{/other_user}", "gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}", "starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions", "organizations_url": "https://api.github.com/users/muellerzr/orgs", "repos_url": "https://api.github.com/users/muellerzr/repos", "events_url": "https://api.github.com/users/muellerzr/events{/privacy}", "received_events_url": "https://api.github.com/users/muellerzr/received_events", "type": "User", "site_admin": false } ]
[ "Hi, thanks for your patience. https://github.com/huggingface/accelerate/pull/2057 will be aiming to fix this" ]
1,696
1,698
1,698
NONE
null
### System Info I am using a Huggingface implementation of IterableDataset with the `set_epoch` method with the standard Trainer class. However, during training the `_epoch` attribute of the dataset is never changed.(https://github.com/huggingface/datasets/blob/0cc77d7f45c73698c31eab4f8cfff901044d0020/src/datasets/iterable_dataset.py#L1829) In the Trainer docs, it says for an IterableDataset to "have a `set_epoch()` method that internally sets the seed of the RNGs used". Im not sure how to use this if Trainer doesn't internally call this at every epoch. Additionally, the IterableDatasetShard implementation in `accelerate` (https://github.com/huggingface/accelerate/blob/48d96319e0033fb8c8979072d97edf3995639029/src/accelerate/data_loader.py#L220) is different from `transformers` (https://github.com/huggingface/transformers/blob/bffac926ca6bc6c965a92bfbfd00c567a2c0fb90/src/transformers/trainer_pt_utils.py#L731) in the fact that it doesn't have the seed setting at the beginning of the iterator call. ``` if ( not hasattr(self.dataset, "set_epoch") and hasattr(self.dataset, "generator") and isinstance(self.dataset.generator, torch.Generator) ): self.dataset.generator.manual_seed(self.seed + self.epoch) ``` `set_epoch` used to be called in the Trainer [here](https://github.com/huggingface/transformers/blob/75b13f82e91d03bed88bf6cf0e2efb85346fb311/src/transformers/trainer.py#L1354C20-L1354C20) until this recent change by @muellerzr in June https://github.com/huggingface/transformers/commit/ebd94b0f6f215f6bc0f70e61eba075eb9196f9ef This official example assumes it is taken care of in the Trainer https://github.com/huggingface/transformers/blob/v4.33.3/examples/research_projects/robust-speech-event/run_speech_recognition_ctc_streaming.py @pacman100 ### Who can help? @pacman100 @muellerzr ### Information - [x] The official example scripts - [X] My own modified scripts ### Tasks - [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction ``` import torch from torch.utils.data import IterableDataset from transformers import ( AutoModelForMaskedLM, AutoTokenizer, DataCollatorForLanguageModeling, Trainer, TrainingArguments, ) data = [ { "input_ids": torch.tensor([101, 2040, 2001, 1999, 14936, 102]), "token_type_ids": torch.tensor([0, 0, 0, 0, 0, 0]), "attention_mask": torch.tensor([1, 1, 1, 1, 1, 1]), } ] class ExampleDataset(IterableDataset): def __init__(self, data): super().__init__() self.data = data * 20 self.epoch = 0 def set_epoch(self, epoch): self.epoch = epoch def __iter__(self): print("\nThis is the epoch: ", self.epoch, "\n") for x in self.data: yield x def __len__(self): return len(self.data) tokenizer = AutoTokenizer.from_pretrained("bert-base-cased") model = AutoModelForMaskedLM.from_pretrained("bert-base-cased") train_args = TrainingArguments( output_dir="output", num_train_epochs=3, per_device_train_batch_size=5, # gradient_accumulation_steps=1, # dataloader_num_workers=3, ) dc = DataCollatorForLanguageModeling(tokenizer=tokenizer) trainer = Trainer( train_dataset=ExampleDataset(data), model=model, args=train_args, data_collator=dc, ) trainer.train() ``` Output: ``` This is the epoch: 0 You're using a BertTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding. 
/opt/conda/envs/fmmodel/lib/python3.8/site-packages/torch/nn/parallel/_functions.py:68: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector. warnings.warn('Was asked to gather along dimension 0, but all ' 33%|██████████████████████████████████████████████ | 7/21 [00:02<00:02, 4.75it/s] This is the epoch: 0 62%|████████████████████████████████████████████████████████████████████████████████████▊ | 13/21 [00:02<00:00, 9.72it/s] This is the epoch: 0 ``` ### Expected behavior Based on the docstring in trainer describing the IterableDataset behavior I would expect "This is the epoch: 0" to be incremented to "This is the epoch: 1" and "This is the epoch: 2"
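Until this is handled inside Trainer/accelerate again, one possible stop-gap is a small callback that forwards the epoch to the dataset. This is only a sketch of a workaround, not the upstream fix (which the maintainers later pointed to in accelerate): ```python from transformers import TrainerCallback

class SetEpochCallback(TrainerCallback):
    """Forward the current epoch to an IterableDataset exposing `set_epoch`."""

    def __init__(self, dataset):
        self.dataset = dataset

    def on_epoch_begin(self, args, state, control, **kwargs):
        # state.epoch is a float (roughly 0.0, 1.0, ...) at the start of each epoch
        self.dataset.set_epoch(int(round(state.epoch or 0)))

# trainer = Trainer(..., callbacks=[SetEpochCallback(train_dataset)])
```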
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26541/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26541/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/26540
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26540/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26540/comments
https://api.github.com/repos/huggingface/transformers/issues/26540/events
https://github.com/huggingface/transformers/pull/26540
1,921,918,241
PR_kwDOCUB6oc5brEdu
26,540
typo: list to tuple
{ "login": "pavloshushkov", "id": 23094949, "node_id": "MDQ6VXNlcjIzMDk0OTQ5", "avatar_url": "https://avatars.githubusercontent.com/u/23094949?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pavloshushkov", "html_url": "https://github.com/pavloshushkov", "followers_url": "https://api.github.com/users/pavloshushkov/followers", "following_url": "https://api.github.com/users/pavloshushkov/following{/other_user}", "gists_url": "https://api.github.com/users/pavloshushkov/gists{/gist_id}", "starred_url": "https://api.github.com/users/pavloshushkov/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pavloshushkov/subscriptions", "organizations_url": "https://api.github.com/users/pavloshushkov/orgs", "repos_url": "https://api.github.com/users/pavloshushkov/repos", "events_url": "https://api.github.com/users/pavloshushkov/events{/privacy}", "received_events_url": "https://api.github.com/users/pavloshushkov/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello @pavloshushkov \r\n\r\nWhat is this change?", "@LysandreJik Using \"in\" with sets is more efficient than using it with lists or other non-set containers.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,696
1,699
1,699
NONE
null
Using a tuple in this case is more efficient
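For context on the container choice being discussed: membership tests on a small `list`/`tuple` are O(n), while a `set` is O(1), though for a handful of strings the difference is usually negligible. An illustrative micro-benchmark with made-up key names (not code from the library): ```python import timeit

keys = ["attention_mask", "token_type_ids", "position_ids", "head_mask"]
containers = {"list": list(keys), "tuple": tuple(keys), "set": set(keys)}

for name, container in containers.items():
    t = timeit.timeit(lambda: "head_mask" in container, number=200_000)
    print(f"{name:>5}: {t:.4f}s")
```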
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26540/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26540/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26540", "html_url": "https://github.com/huggingface/transformers/pull/26540", "diff_url": "https://github.com/huggingface/transformers/pull/26540.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26540.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/26539
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26539/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26539/comments
https://api.github.com/repos/huggingface/transformers/issues/26539/events
https://github.com/huggingface/transformers/issues/26539
1,921,884,940
I_kwDOCUB6oc5yjaMM
26,539
Chat Template Upgrades
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "can u assign it to me", "I'll probably be taking this PR myself, @draksham, but thanks for the offer!", "Here's my current best plan:\r\n\r\n1) Add a `add_generation_prompt` kwarg to `apply_chat_template`. \r\n2) Update our default chat templates to use this kwarg. When the kwarg is `True`, we add a prompt for a bot message (like `<|im_start|>bot`) at the end of the formatted string\r\n3) Update our existing `default_chat_template` settings to use the kwarg - we can check in Jinja is the variable is defined, and default to `False` if it isn't to retain backward compatibility.\r\n4) Update the documentation to explain the kwarg and encourage users to include support for it in their templates.", "Hi. Thanks for adding this. \r\n\r\nWhen I seem to use `add_generation_prompt` for a tokenizer, it seems to have no effect. Example below:\r\n\r\n```\r\n>>> from transformers import AutoTokenizer\r\n>>> chat_history = [\r\n {\"role\": \"user\", \"content\": \"Hi, this is a user message.\"},\r\n {\"role\": \"assistant\", \"content\": \"Hi, this is a bot reply.\"},\r\n {\"role\": \"user\", \"content\": \"Oh I see!\"}\r\n]\r\n>>> tokenizer = AutoTokenizer.from_pretrained(\"HuggingFaceH4/zephyr-7b-alpha\")\r\n>>> tokenizer.apply_chat_template(chat_history, tokenize=False, add_generation_prompt=True)\r\n\"\"\"<|user|>\r\nHi, this is a user message.</s>\r\n<|assistant|>\r\nHi, this is a bot reply.</s>\r\n<|user|>\r\nOh I see!</s>\r\n\"\"\"\r\n```\r\n\r\nWhen I checked the chat_template of the tokenizer, I got:\r\n```\r\n\"{% for message in messages %}\\n{% if message['role'] == 'user' %}\\n{{ '<|user|>\\n' + message['content'] + eos_token }}\\n{% elif message['role'] == 'system' %}\\n{{ '<|system|>\\n' + message['content'] + eos_token }}\\n{% elif message['role'] == 'assistant' %}\\n{{ '<|assistant|>\\n' + message['content'] + eos_token }}\\n{% endif %}\\n{% if loop.last and add_generation_prompt %}\\n{{ '<|assistant|>' }}\\n{% endif %}\\n{% endfor %}\"\r\n```\r\n\r\nSo, the logic for `add_generation_prompt` seems to be there. But somehow it is not getting used.", "Hi @AdirthaBorgohain, this feature has only been added very recently and is only working on `main` right now. We're working on a patch release ASAP, but in the meantime, you can install from `main` with `pip install --upgrade git+https://github.com/huggingface/transformers.git`" ]
1,696
1,697
1,696
MEMBER
null
Hey all, I was thinking about the chat templates API and I realized something important. When you're generating responses from the model, you want the prompt to include the message history, but **also the tokens that indicate the start of a bot response**. That makes sure that the model actually replies to you, instead of continuing the user response or something like that. However, there are other cases when we don't want the template to do that. For example, when formatting messages for training you don't want to add any extra generation prompt at the end. A big goal of chat templates was to be useful for both generation and training, so it's important we have some way to support both use-cases! ## Example Consider the standard ChatML template: ``` <|im_start|>user Hi, this is a user message. <|im_start|>bot Hi, this is a bot reply. <|im_start|>user Hi, this is the next user message. ``` If the user wants to generate a bot reply, however, the actual input they prompt with should end with `<|im_start|>bot`: ``` <|im_start|>user Hi, this is a user message. <|im_start|>bot Hi, this is a bot reply. <|im_start|>user Hi, this is the next user message. <|im_start|>bot ``` The reason for this is you want the bot to **write a bot response** and not continue a user message, or write some other special tokens, or any other weird thing like that. However, when using `apply_chat_template` to format chat data for training, you don't want to add `<|im_start|>bot to the end, because you don't want to generate further text. Because the prompt that indicates the start of a bot message varies between models, it has to be part of a template, and we can't just hardcode it in `ConversationPipeline` or something like that. Therefore, we need some way for our templates to flexibly include this prompt or not! ## Possible solutions 1) We add a kwarg to `apply_chat_template` to indicate whether the bot message start tokens should be appended at the end: ```python >>> tokenizer.apply_chat_template(messages, tokenize=False) """<|im_start|>user Hi, this is a user message. <|im_start|>bot Hi, this is a bot reply. <|im_start|>user Hi, this is the next user message.""" >>> tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False) """<|im_start|>user Hi, this is a user message. <|im_start|>bot Hi, this is a bot reply. <|im_start|>user Hi, this is the next user message. <|im_start|>bot""" ``` 2: We assume that chat histories that end in a **user** message are for generation, and chat histories that end in a **bot** message are for training, since this is usually how it works. In this option, we could automatically add the bot message start tokens at the end of any input when the final message has `role == 'user'`. ```python >>> chat_history = [ {"role": "user", "content": "Hi, this is a user message."}, {"role": "bot", "content": "Hi, this is a bot reply."} ] >>> tokenizer.apply_chat_template(chat_history) """<|im_start|>user Hi, this is a user message. <|im_start|>bot Hi, this is a bot reply.""" >>> chat_history.append( {"role": "user", "content": "Hi, this is the next user message."} ) # Now the conversation ends with a user message, so a generation prompt is added >>> tokenizer.apply_chat_template(chat_history) """<|im_start|>user Hi, this is a user message. <|im_start|>bot Hi, this is a bot reply. <|im_start|>user Hi, this is the next user message. 
<|im_start|>bot""" ``` Alternatively, we could combine those solutions - add the kwarg from option 1, but with a default value of `None`, and if the kwarg is set to `None` then we follow the rules in option 2.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26539/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26539/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/26538
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26538/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26538/comments
https://api.github.com/repos/huggingface/transformers/issues/26538/events
https://github.com/huggingface/transformers/pull/26538
1,921,857,897
PR_kwDOCUB6oc5bq3dQ
26,538
Nit-added-tokens
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "A small benchmark on the `get_added_vocab()`:\r\n```python \r\nfrom transformers import AutoTokenizer\r\nimport time \r\ntokenizer = AutoTokenizer.from_pretrained(\"facebook/nllb-moe-54b\")\r\nstart = time.time();tokenizer.get_added_vocab();print(time.time()-start)\r\n>>> 0.17021536827087402\r\n\r\nstart = time.time();{k.content: v for v, k in sorted(tokenizer.added_tokens_decoder.items(), key=lambda item: item[0])};print(time.time()-start)\r\n>>> 0.0054759979248046875\r\n\r\nstart = time.time();tokenizer.added_tokens_decoder;print(time.time()-start)\r\n0.0007669925689697266\r\n```\r\nwill update rust to make `tokenizer.added_tokens_encoder` available. ", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26538). All of your documentation changes will be reflected on that endpoint." ]
1,696
1,696
1,696
COLLABORATOR
null
# What does this PR do? Fixes #26500, fixes #26536
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26538/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26538/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26538", "html_url": "https://github.com/huggingface/transformers/pull/26538", "diff_url": "https://github.com/huggingface/transformers/pull/26538.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26538.patch", "merged_at": 1696328627000 }
https://api.github.com/repos/huggingface/transformers/issues/26537
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26537/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26537/comments
https://api.github.com/repos/huggingface/transformers/issues/26537/events
https://github.com/huggingface/transformers/pull/26537
1,921,794,233
PR_kwDOCUB6oc5bqprU
26,537
[`PEFT`] Protect `adapter_kwargs` check
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,696
1,696
1,696
CONTRIBUTOR
null
# What does this PR do? Protect `adapter_kwargs` check in case it is explicitly None Addresses: https://github.com/huggingface/transformers/pull/26488#issuecomment-1742821190 cc @LysandreJik The fix could also be to pop with an empty dict here: https://github.com/huggingface/transformers/blob/main/src/transformers/models/auto/auto_factory.py#L467
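As a toy illustration of the two guards mentioned (a hypothetical `load` helper, not the actual `auto_factory` code): ```python def load(**kwargs):
    # Treat both "not passed" and an explicit adapter_kwargs=None the same way.
    adapter_kwargs = kwargs.pop("adapter_kwargs", None) or {}
    return adapter_kwargs

print(load())                                     # {}
print(load(adapter_kwargs=None))                  # {} instead of crashing later on None
print(load(adapter_kwargs={"revision": "main"}))  # passed value is kept
``` Popping with a `{}` default alone would still leave `None` in place when a caller passes it explicitly, which is why the extra check is needed.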
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26537/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26537/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26537", "html_url": "https://github.com/huggingface/transformers/pull/26537", "diff_url": "https://github.com/huggingface/transformers/pull/26537.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26537.patch", "merged_at": 1696251565000 }
https://api.github.com/repos/huggingface/transformers/issues/26536
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26536/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26536/comments
https://api.github.com/repos/huggingface/transformers/issues/26536/events
https://github.com/huggingface/transformers/issues/26536
1,921,765,113
I_kwDOCUB6oc5yi875
26,536
Tokenizer AddedToken load from file bug
{ "login": "kai01ai", "id": 140378742, "node_id": "U_kgDOCF4Cdg", "avatar_url": "https://avatars.githubusercontent.com/u/140378742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kai01ai", "html_url": "https://github.com/kai01ai", "followers_url": "https://api.github.com/users/kai01ai/followers", "following_url": "https://api.github.com/users/kai01ai/following{/other_user}", "gists_url": "https://api.github.com/users/kai01ai/gists{/gist_id}", "starred_url": "https://api.github.com/users/kai01ai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kai01ai/subscriptions", "organizations_url": "https://api.github.com/users/kai01ai/orgs", "repos_url": "https://api.github.com/users/kai01ai/repos", "events_url": "https://api.github.com/users/kai01ai/events{/privacy}", "received_events_url": "https://api.github.com/users/kai01ai/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey8 Thanks for reporting! That's indeed a bug. It appears in \r\n```python\r\n # 4. If some of the special tokens are not part of the vocab, we add them, at the end.\r\n # the order of addition is the same as self.SPECIAL_TOKENS_ATTRIBUTES following `tokenizers`\r\n self._add_tokens(self.all_special_tokens_extended, special_tokens=True)\r\n```\r\nsince the `self.all_special_tokens_extended` is populated with `['<|im_start|>', '<|im_end|>']` which is a list of string converted to the default AddedTokens structure, they are back to `r/lstrip = True`. Will open a PR for a fix ! " ]
1,696
1,696
1,696
CONTRIBUTOR
null
### System Info - `transformers` version: 4.34.0.dev0 - Platform: macOS-13.6-arm64-arm-64bit - Python version: 3.9.6 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.2 - Accelerate version: 0.23.0 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1 (False) ### Who can help? @ArthurZucker and @younesbelkada ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ```python import tokenizers from transformers import ( AutoTokenizer ) START_TOKEN = "<|im_start|>" END_TOKEN = "<|im_end|>" tokenizer_path = 'model/Llama-2-7b-hf' complete_tokenizer_path = 'model/complete_tokenizer' use_fast = False legacy = True tokenizer = AutoTokenizer.from_pretrained( tokenizer_path, use_fast=use_fast, legacy=legacy, ) tokenizer.add_tokens([ tokenizers.AddedToken(START_TOKEN, lstrip=False, rstrip=False, normalized=False, special=True), tokenizers.AddedToken(END_TOKEN, lstrip=False, rstrip=False, normalized=False, special=True), ]) before = tokenizer.encode(f'{START_TOKEN}\n') tokenizer.save_pretrained('model/complete_tokenizer') tokenizer = AutoTokenizer.from_pretrained( complete_tokenizer_path, use_fast=use_fast, legacy=legacy, model_max_length=2048, ) after = tokenizer.encode(f'{START_TOKEN}\n') assert before == after, (before, after) ``` ### Expected behavior The saved tokenizer files looks good but after `from_pretrained`, the customized AddedToken settings for rstrp and lstrip set to True are not being recognized as expected. The saved tokenizer_config file: ``` json { "add_bos_token": true, "add_eos_token": false, "added_tokens_decoder": { "0": { "content": "<unk>", "lstrip": false, "normalized": true, "rstrip": false, "single_word": false, "special": true }, "1": { "content": "<s>", "lstrip": false, "normalized": true, "rstrip": false, "single_word": false, "special": true }, "2": { "content": "</s>", "lstrip": false, "normalized": true, "rstrip": false, "single_word": false, "special": true }, "32000": { "content": "<|im_start|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true }, "32001": { "content": "<|im_end|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true } }, "additional_special_tokens": [ "<|im_start|>", "<|im_end|>" ], "bos_token": "<s>", "clean_up_tokenization_spaces": false, "eos_token": "</s>", "legacy": true, "model_max_length": 1600, "pad_token": null, "padding_side": "right", "sp_model_kwargs": {}, "spaces_between_special_tokens": false, "tokenizer_class": "LlamaTokenizer", "tokenizer_file": null, "unk_token": "<unk>", "use_default_system_prompt": true } ``` print(tokenizer) will show : ``` LlamaTokenizer(name_or_path='model/complete_tokenizer', vocab_size=32000, model_max_length=2048, is_fast=False, padding_side='right', truncation_side='right', special_tokens={'bos_token': '<s>', 'eos_token': '</s>', 'unk_token': '<unk>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>']}, clean_up_tokenization_spaces=False), added_tokens_decoder={ 0: AddedToken("<unk>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True), 1: AddedToken("<s>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True), 2: AddedToken("</s>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True), 32000: AddedToken("<|im_start|>", rstrip=True, 
lstrip=True, single_word=False, normalized=False, special=True), 32001: AddedToken("<|im_end|>", rstrip=True, lstrip=True, single_word=False, normalized=False, special=True), } ``` This only affects the slow tokenizer.
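A quick way to confirm what got persisted, reusing the variable names from the reproduction above (illustrative only): ```python reloaded = AutoTokenizer.from_pretrained(complete_tokenizer_path, use_fast=use_fast, legacy=legacy)
for idx, tok in sorted(reloaded.added_tokens_decoder.items()):
    print(idx, repr(tok.content), "lstrip:", tok.lstrip, "rstrip:", tok.rstrip)
# Expected: lstrip/rstrip stay False for 32000/32001, matching tokenizer_config.json;
# the bug is that the reloaded slow tokenizer reports them as True.
```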
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26536/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26536/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/26535
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26535/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26535/comments
https://api.github.com/repos/huggingface/transformers/issues/26535/events
https://github.com/huggingface/transformers/pull/26535
1,921,705,529
PR_kwDOCUB6oc5bqWtS
26,535
bnb_8bit, gptq generation tests updated
{ "login": "poedator", "id": 24738311, "node_id": "MDQ6VXNlcjI0NzM4MzEx", "avatar_url": "https://avatars.githubusercontent.com/u/24738311?v=4", "gravatar_id": "", "url": "https://api.github.com/users/poedator", "html_url": "https://github.com/poedator", "followers_url": "https://api.github.com/users/poedator/followers", "following_url": "https://api.github.com/users/poedator/following{/other_user}", "gists_url": "https://api.github.com/users/poedator/gists{/gist_id}", "starred_url": "https://api.github.com/users/poedator/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/poedator/subscriptions", "organizations_url": "https://api.github.com/users/poedator/orgs", "repos_url": "https://api.github.com/users/poedator/repos", "events_url": "https://api.github.com/users/poedator/events{/privacy}", "received_events_url": "https://api.github.com/users/poedator/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "in issue discussion https://github.com/huggingface/transformers/issues/26533 @younesbelkada suggested that this could be caused by Ampere vs Turing architecture differences. \r\nWhile this fix is not really pressing, it may make life easier for developers with Ampere cards, so that they see fewer failed tests and focus on important issues.", "@poedator thanks! I agree this would make life easier for developpers\r\nDo you know if there is a way to check with any torch utility method (or whatever) the architecture of the GPU? If that's the case you could maybe convert this PR to a simple checks that keeps the current expected generations for turing GPUs and yours for ampere. What do you think?", "> @younesbelkada wrote: \r\n> Do you know if there is a way to check with any torch utility method (or whatever) the architecture of the GPU?\r\n\r\nthere is `torch.cuda.get_device_capability()` which returns `8.x` for Ampere and `7.x` for Turing. [More details here at nVidia site.](https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#compute-capabilities). But separating tests by compute level may be an overkill. How likely is it to have an error in generation code ran on Ampere that gives some output expected from Turing? I see that this package already allows [multiple correct answers in GPTQ tests.](https://github.com/huggingface/transformers/blob/6824461f2a35546a3d781fe60576e00f6db7bedf/tests/quantization/gptq/test_gptq.py#L87-L93). Besides, there may be more factors than just hardware, like torch version. With too few results seen bu myself in the generation tests, I am hesitant to claim that mine are the only correct ones for Ampere cards. Could you still consider this PR as is? ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,696
1,699
1,699
CONTRIBUTOR
null
## What does this PR do? Updating tests to (presumably) adjust for minor packages/models changes elsewhere. See Issue #26533 for details. ## Who can review? - generate: @gante - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada ## Testing I tested this PR locally with RUN_SLOW=1 and two GPUs. Only ran quantization tests. The generation tests now pass, nothing is broken (unless broken before, see the issue for the [remaining problem in model class mismatch](https://github.com/huggingface/transformers/issues/26533#issuecomment-1742764417)).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26535/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26535/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26535", "html_url": "https://github.com/huggingface/transformers/pull/26535", "diff_url": "https://github.com/huggingface/transformers/pull/26535.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26535.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/26534
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26534/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26534/comments
https://api.github.com/repos/huggingface/transformers/issues/26534/events
https://github.com/huggingface/transformers/pull/26534
1,921,602,709
PR_kwDOCUB6oc5bqAfp
26,534
Add CLIP resources
{ "login": "eenzeenee", "id": 71638597, "node_id": "MDQ6VXNlcjcxNjM4NTk3", "avatar_url": "https://avatars.githubusercontent.com/u/71638597?v=4", "gravatar_id": "", "url": "https://api.github.com/users/eenzeenee", "html_url": "https://github.com/eenzeenee", "followers_url": "https://api.github.com/users/eenzeenee/followers", "following_url": "https://api.github.com/users/eenzeenee/following{/other_user}", "gists_url": "https://api.github.com/users/eenzeenee/gists{/gist_id}", "starred_url": "https://api.github.com/users/eenzeenee/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eenzeenee/subscriptions", "organizations_url": "https://api.github.com/users/eenzeenee/orgs", "repos_url": "https://api.github.com/users/eenzeenee/repos", "events_url": "https://api.github.com/users/eenzeenee/events{/privacy}", "received_events_url": "https://api.github.com/users/eenzeenee/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "LGTM! Thanks for adding resources for CLIP. \r\nBy the way, please fix `Part of #20555` to `Part of #20055` since 20055 is the issue of `Model resources contribution`.", "Thank you for reviewing!! I fixed it!", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26534). All of your documentation changes will be reflected on that endpoint.", "> Thanks for fixing, can you also include the [training script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/contrastive-image-text) and [blog post](https://huggingface.co/blog/fine-tune-clip-rsicd) @NielsRogge linked to?\r\n\r\nSorry for the late reply. It seems to appear in 86 and 87 lines. Do you want me to change the description to something else?", "> Do you want me to change the description to something else?\r\n\r\nShould be good then! 👍" ]
1,696
1,697
1,697
CONTRIBUTOR
null
# What does this PR do? Adds resources of CLIP according to [this issue](https://github.com/huggingface/transformers/issues/20055.) <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Part of #20055 ## Before submitting - [x ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @stevhliu, @jungnerd, @wonhyeongseo may you please review this PR? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26534/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26534/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26534", "html_url": "https://github.com/huggingface/transformers/pull/26534", "diff_url": "https://github.com/huggingface/transformers/pull/26534.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26534.patch", "merged_at": 1697220780000 }
https://api.github.com/repos/huggingface/transformers/issues/26533
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26533/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26533/comments
https://api.github.com/repos/huggingface/transformers/issues/26533/events
https://github.com/huggingface/transformers/issues/26533
1,921,570,774
I_kwDOCUB6oc5yiNfW
26,533
Broken generation tests for quantized models
{ "login": "poedator", "id": 24738311, "node_id": "MDQ6VXNlcjI0NzM4MzEx", "avatar_url": "https://avatars.githubusercontent.com/u/24738311?v=4", "gravatar_id": "", "url": "https://api.github.com/users/poedator", "html_url": "https://github.com/poedator", "followers_url": "https://api.github.com/users/poedator/followers", "following_url": "https://api.github.com/users/poedator/following{/other_user}", "gists_url": "https://api.github.com/users/poedator/gists{/gist_id}", "starred_url": "https://api.github.com/users/poedator/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/poedator/subscriptions", "organizations_url": "https://api.github.com/users/poedator/orgs", "repos_url": "https://api.github.com/users/poedator/repos", "events_url": "https://api.github.com/users/poedator/events{/privacy}", "received_events_url": "https://api.github.com/users/poedator/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "### Model class mismatch: possible causes and solutions\r\n[TODO] research on causes and propose solution\r\nError message with class names side-by-side:\r\n```\r\n(model has \r\n<class 'transformers_modules.mosaicml.mpt-7b.0b57768f52b7775563f7cc78c4724e407b39593b.configuration_mpt.MPTConfig'> \r\nand you passed \r\n<class 'transformers_modules.mosaicml.mpt-7b.72e5f594ce36f9cabfa2a9fd8f58b491eb467ee7.configuration_mpt.MPTConfig'>.\r\n```\r\nthere is a mismatch in the snapshot hashes. And my cache has both:\r\n```\r\n*********$ ls ~/.cache/huggingface/hub/models--mosaicml--mpt-7b/snapshots/ -lah\r\ntotal 16K\r\ndrwxrwxr-x 4 optimus dpt_ext_searchportal_dep45091 4.0K Oct 1 16:41 .\r\ndrwxrwxr-x 5 optimus dpt_ext_searchportal_dep45091 4.0K Oct 1 17:01 ..\r\ndrwxrwxr-x 2 optimus dpt_ext_searchportal_dep45091 4.0K Oct 1 17:01 0b57768f52b7775563f7cc78c4724e407b39593b\r\ndrwxrwxr-x 2 optimus dpt_ext_searchportal_dep45091 4.0K Oct 1 16:41 72e5f594ce36f9cabfa2a9fd8f58b491eb467ee7\r\n```\r\nAfter deleting cache, both snapshots re-appear and the error happens again.\r\n\r\nApparently, the model version was updated recently [link to commit 0b5776](https://huggingface.co/mosaicml/mpt-7b/commit/0b57768f52b7775563f7cc78c4724e407b39593b) and the changes only included an extra link in readme. So for this issue purposes it may be enough to update the model commit in the tests. With the commit change, the tests pass OK. \r\n\r\nUPDATE: the fix has been made in https://github.com/huggingface/transformers/pull/26431 - see below\r\nYet it leaves open the question why `revision=` argument caused error in the first place. It may deserve a separate issue to be opened.", "Hi @poedator \r\n\r\nThanks a lot for the deep dive and the proposed fix;\r\nwith respect to the failing MPT test, the fix has been made in https://github.com/huggingface/transformers/pull/26431 \r\n\r\nRegarding the other issues, I can see that our test report did not contain the failing tests you have mentioned, I can also see that you are using an A100. in my experience you can have some differences when using bnb on an A100 and T4 (ampere vs turing) - note the tests are run on a T4 GPU. Some other factors can play a role here but as long as new failing tests do not pop on our daily CI report we don't consider that a fix is needed.\r\n \r\nRegarding your specific usecase, I believe you can continue developing with the changes values for generation tests to match generations on a A100, and before merging any of your PR that touches quantization related stuff I will run the slow tests myself on the same hardware that we use for testing\r\n\r\nThanks!", "Hi, @younesbelkada \r\nThanks for the feedback. I will stop worrying about these tests while developing the quantization-related stuff. But if you see more developers seeing same results with Ampere, please consider updating the tests then.\r\n\r\nOn MPT test - I removed that commit from my PR.", "OK perfect, thank you @poedator ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,696
1,699
1,699
CONTRIBUTOR
null
### System Info Ubuntu-20, A100 Python 3.10.13, pytest-7.4.2, pluggy-1.3.0, torch==2.0.1, cuda=11.7 quite fresh transformers from repo ### Who can help - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada - generate: @gante ### problem description When working on PR on quantization, I noticed that some tests fail. Those tests also failed in `main` branch. The tests are: - 5 generation-related tests in BNB with slight deviations from the target phrases. - 2 tests with `config_class` mismatch in BnB [FIXED - see below] - 11 generation-related tests in GPTQ with slight deviations from the target phrases. ``` =========================== BNB TESTS - GENERATION ============================ 7b.0b57768f52b7775563f7cc78c4724e407b39593b.configuration_mpt.MPTConfig'> a... FAILED tests/quantization/bnb/test_mixed_int8.py::MixedInt8GPT2Test::test_generate_quality - AssertionError: "Hello my name is John Doe, and I'm a fan of the" != "Hello my name is John Doe, and I'm a big fan of" FAILED tests/quantization/bnb/test_mixed_int8.py::MixedInt8GPT2Test::test_generate_quality_config - AssertionError: "Hello my name is John Doe, and I'm a fan of the" != "Hello my name is John Doe, and I'm a big fan of" 7b.0b57768f52b7775563f7cc78c4724e407b39593b.configuration_mpt.MPTConfig'> a... FAILED tests/quantization/bnb/test_mixed_int8.py::MixedInt8GPT2Test::test_int8_from_pretrained - AssertionError: "Hello my name is John Doe, and I'm a fan of the" != "Hello my name is John Doe, and I'm a big fan of" FAILED tests/quantization/bnb/test_mixed_int8.py::MixedInt8GPT2Test::test_int8_serialization - AssertionError: "Hello my name is John Doe, and I'm a fan of the" != "Hello my name is John Doe, and I'm a big fan of" FAILED tests/quantization/bnb/test_mixed_int8.py::MixedInt8GPT2Test::test_int8_serialization_sharded - AssertionError: "Hello my name is John Doe, and I'm a fan of the" != "Hello my name is John Doe, and I'm a big fan of" ============ BNB MODEL CLASS MISMATCH ============= FAILED tests/quantization/bnb/test_mixed_int8.py::MixedInt8Test::test_get_keys_to_not_convert - ValueError: The model class you are passing has a `config_class` attribute that is not consistent with the config class you passed (model has <class 'transformers_modules.mosaicml.mpt- FAILED tests/quantization/bnb/test_mixed_int8.py::MixedInt8GPT2Test::test_get_keys_to_not_convert - ValueError: The model class you are passing has a `config_class` attribute that is not consistent with the config class you passed (model has <class 'transformers_modules.mosaicml.mpt- =========================== GPTQ TESTS ============================ FAILED tests/quantization/gptq/test_gptq.py::GPTQTest::test_change_loading_attributes - AssertionError: 'Hello my name is A.I. and I am a student of' not found in {'Hello my name is Alyson, I am a student in the', 'Hello my name is John, I am a professional photographer and I', 'Hello my name is Alyson and I am a very sweet,', 'Hello my name is ... FAILED tests/quantization/gptq/test_gptq.py::GPTQTest::test_generate_quality - AssertionError: 'Hello my name is Alyson and I am a beautiful, sexy' not found in {'Hello my name is Alyson, I am a student in the', 'Hello my name is John, I am a professional photographer and I', 'Hello my name is Alyson and I am a very sweet,', 'Hello my n... 
FAILED tests/quantization/gptq/test_gptq.py::GPTQTest::test_serialization - AssertionError: 'Hello my name is Alyson and I am a beautiful, sexy' not found in {'Hello my name is Alyson, I am a student in the', 'Hello my name is John, I am a professional photographer and I', 'Hello my name is Alyson and I am a very sweet,', 'Hello my n... FAILED tests/quantization/gptq/test_gptq.py::GPTQTest::test_serialization_big_model_inference - AssertionError: 'Hello my name is Alyson and I am a beautiful, sexy' not found in {'Hello my name is Alyson, I am a student in the', 'Hello my name is John, I am a professional photographer and I', 'Hello my name is Alyson and I am a very sweet,', 'Hello my n... FAILED tests/quantization/gptq/test_gptq.py::GPTQTestDeviceMap::test_change_loading_attributes - AssertionError: 'Hello my name is A.I.I.I.I.' not found in {'Hello my name is Alyson, I am a student in the', 'Hello my name is John, I am a professional photographer and I', 'Hello my name is Alyson and I am a very sweet,', 'Hello my name is John, I am a stu... FAILED tests/quantization/gptq/test_gptq.py::GPTQTestDeviceMap::test_generate_quality - AssertionError: 'Hello my name is Alyson and I am a beautiful, sexy' not found in {'Hello my name is Alyson, I am a student in the', 'Hello my name is John, I am a professional photographer and I', 'Hello my name is Alyson and I am a very sweet,', 'Hello my n... FAILED tests/quantization/gptq/test_gptq.py::GPTQTestDeviceMap::test_serialization - AssertionError: 'Hello my name is Alyson and I am a beautiful, sexy' not found in {'Hello my name is Alyson, I am a student in the', 'Hello my name is John, I am a professional photographer and I', 'Hello my name is Alyson and I am a very sweet,', 'Hello my n... FAILED tests/quantization/gptq/test_gptq.py::GPTQTestDeviceMap::test_serialization_big_model_inference - AssertionError: 'Hello my name is Alyson and I am a beautiful, sexy' not found in {'Hello my name is Alyson, I am a student in the', 'Hello my name is John, I am a professional photographer and I', 'Hello my name is Alyson and I am a very sweet,', 'Hello my n... FAILED tests/quantization/gptq/test_gptq.py::GPTQTestDeviceMapExllama::test_generate_quality - AssertionError: 'Hello my name is Alyson and I am a beautiful, sexy' not found in {'Hello my name is Alyson, I am a student in the', 'Hello my name is John, I am a professional photographer and I', 'Hello my name is Alyson and I am a very sweet,', 'Hello my n... FAILED tests/quantization/gptq/test_gptq.py::GPTQTestDeviceMapExllama::test_serialization - AssertionError: 'Hello my name is A.I. and I am a student of' not found in {'Hello my name is Alyson, I am a student in the', 'Hello my name is John, I am a professional photographer and I', 'Hello my name is Alyson and I am a very sweet,', 'Hello my name is ... FAILED tests/quantization/gptq/test_gptq.py::GPTQTestDeviceMapExllama::test_serialization_big_model_inference - AssertionError: 'Hello my name is A.I.I.I.I.' not found in {'Hello my name is Alyson, I am a student in the', 'Hello my name is John, I am a professional photographer and I', 'Hello my name is Alyson and I am a very sweet,', 'Hello my name is John, I am a stu... ``` ### Generation: possible causes and solutions I tend to believe that generation problems are caused by slight modifications in models and packets versions or some other random factors, not controlled for. 
[See how in GPTQ tests there are already several valid answers](https://github.com/huggingface/transformers/blob/6824461f2a35546a3d781fe60576e00f6db7bedf/tests/quantization/gptq/test_gptq.py#L87-L93). I propose to validate the new generation results. See my PR #26535
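The shape of the proposed change, following the pattern the GPTQ tests already use (the strings are taken from the failure messages above; `decoded` stands in for the decoded generation inside the test): ```python EXPECTED_OUTPUTS = {
    "Hello my name is John Doe, and I'm a big fan of",   # current reference value (CI, T4 / Turing)
    "Hello my name is John Doe, and I'm a fan of the",   # variant observed locally on A100 / Ampere
}

decoded = "Hello my name is John Doe, and I'm a fan of the"  # e.g. tokenizer.decode(out[0], skip_special_tokens=True)
assert decoded in EXPECTED_OUTPUTS, decoded
```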
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26533/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26533/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/26532
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26532/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26532/comments
https://api.github.com/repos/huggingface/transformers/issues/26532/events
https://github.com/huggingface/transformers/pull/26532
1,921,513,298
PR_kwDOCUB6oc5bptb0
26,532
changed the import order of the model and configuration classes and a…
{ "login": "madhubabu147", "id": 113609366, "node_id": "U_kgDOBsWKlg", "avatar_url": "https://avatars.githubusercontent.com/u/113609366?v=4", "gravatar_id": "", "url": "https://api.github.com/users/madhubabu147", "html_url": "https://github.com/madhubabu147", "followers_url": "https://api.github.com/users/madhubabu147/followers", "following_url": "https://api.github.com/users/madhubabu147/following{/other_user}", "gists_url": "https://api.github.com/users/madhubabu147/gists{/gist_id}", "starred_url": "https://api.github.com/users/madhubabu147/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/madhubabu147/subscriptions", "organizations_url": "https://api.github.com/users/madhubabu147/orgs", "repos_url": "https://api.github.com/users/madhubabu147/repos", "events_url": "https://api.github.com/users/madhubabu147/events{/privacy}", "received_events_url": "https://api.github.com/users/madhubabu147/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "changed the import order of the model and configuration classes and added comment before model initialization line and Add configuration_[model_name].py to utils/documentation_tests.txt #19487", "Hi.\r\n\r\nIt's unclear to me the reason behind the changes in this PR.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,696
1,699
1,699
NONE
null
…dded comment before model initialization line and Add configuration_[model_name].py to utils/documentation_tests.txt # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26532/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26532/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26532", "html_url": "https://github.com/huggingface/transformers/pull/26532", "diff_url": "https://github.com/huggingface/transformers/pull/26532.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26532.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/26531
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26531/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26531/comments
https://api.github.com/repos/huggingface/transformers/issues/26531/events
https://github.com/huggingface/transformers/issues/26531
1,921,483,691
I_kwDOCUB6oc5yh4Or
26,531
Bark Text-to-Speech
{ "login": "ss8319", "id": 72968523, "node_id": "MDQ6VXNlcjcyOTY4NTIz", "avatar_url": "https://avatars.githubusercontent.com/u/72968523?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ss8319", "html_url": "https://github.com/ss8319", "followers_url": "https://api.github.com/users/ss8319/followers", "following_url": "https://api.github.com/users/ss8319/following{/other_user}", "gists_url": "https://api.github.com/users/ss8319/gists{/gist_id}", "starred_url": "https://api.github.com/users/ss8319/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ss8319/subscriptions", "organizations_url": "https://api.github.com/users/ss8319/orgs", "repos_url": "https://api.github.com/users/ss8319/repos", "events_url": "https://api.github.com/users/ss8319/events{/privacy}", "received_events_url": "https://api.github.com/users/ss8319/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @ylacombe ", "Hey @ss8319, thanks for your message! could you kindly provide a script to replicate your error? I believe that the script you provided here doesn't correspond to your issue:\r\n```python\r\nimport time\r\nfrom transformers import BarkModel, BarkProcessor\r\n\r\n# Load the model and processor\r\nmodel = BarkModel.from_pretrained(\"suno/bark-small\")\r\nprocessor = BarkProcessor.from_pretrained(\"suno/bark-small\", voice_preset=\"v2/en_speaker_3\")\r\n\r\n# Load speaker embeddings (assuming you have defined 'embeddings_dataset' elsewhere)\r\nspeaker_embeddings = torch.tensor(embeddings_dataset[7306][\"xvector\"]).unsqueeze(0)\r\n```\r\nCould you also provide an example of `test_txt` ? \r\nThanks!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "I believe this is a duplication of https://github.com/suno-ai/bark/issues/402#issuecomment-1702753684?" ]
1,696
1,701
1,699
NONE
null
### System Info **System Setup** Google Colab **Who can help?** @sanchit-gandhi @gante Hi guys. I am trying to test Bark (TTS) functionalities. I set up the code as from the audio course. https://huggingface.co/learn/audio-course/chapter6/pre-trained_models#bark. I am running into the error as indicated. The code keeps running without producing any speech output until I end it. Could you let me know what to try? ```python # Define the output directory where you want to save the generated speech output_dir = "/content/gdrive/MyDrive/Medical Speech, Transcription, and Intent/Bark" # Iterate through the generated speech and save it to files for i, txt in enumerate(test_txt): start_time = time.time() # Record the start time inputs = processor(text=txt, voice_preset="v2/en_speaker_3", return_tensors="pt") generated_speech = model.generate(**inputs).cpu().numpy() end_time = time.time() # Record the end time # Calculate the elapsed time for inference inference_time = end_time - start_time # Define the output file path (you can use a naming convention based on 'i' or 'txt' as needed) output_file_path = f"{output_dir}/generated_speech{i}.wav" # Save the generated speech as a WAV file sf.write(output_file_path, generated_speech.squeeze().numpy(), 22050) # Adjust the sample rate as needed print(f"Inference {i+1} took {inference_time:.2f} seconds. Generated speech saved to {output_file_path}") ``` **Output** The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results. Setting `pad_token_id` to `eos_token_id`:10000 for open-end generation. ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. Install the necessary libraries ```python ! pip install transformers ! pip install sentencepiece ! pip install datasets ``` 2. Run ```python import time from transformers import BarkModel, BarkProcessor # Load the model and processor model = BarkModel.from_pretrained("suno/bark-small") processor = BarkProcessor.from_pretrained("suno/bark-small", voice_preset="v2/en_speaker_3") # Load speaker embeddings (assuming you have defined 'embeddings_dataset' elsewhere) speaker_embeddings = torch.tensor(embeddings_dataset[7306]["xvector"]).unsqueeze(0) ``` ### Expected behavior Speech Output
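A hedged sketch of the generation loop from the report above, written against the same public `BarkModel`/`BarkProcessor` API the report uses; the text list, output directory and speaker preset are placeholders, the sample rate is read from the model's generation config with a fallback instead of being hard-coded, and the audio array is converted to NumPy only once before writing. This restates the reported loop; it is not a confirmed fix for the hang described in the issue.
```python
# Hedged sketch of the reported Bark generation loop; paths, texts and the preset
# are placeholders, not values from the issue. Requires transformers and soundfile.
import time

import soundfile as sf
from transformers import BarkModel, BarkProcessor

model = BarkModel.from_pretrained("suno/bark-small")
processor = BarkProcessor.from_pretrained("suno/bark-small")

test_txt = ["Hello, this is a Bark text-to-speech test."]  # placeholder inputs
output_dir = "."  # placeholder output directory

# Bark generates roughly 24 kHz audio; prefer the value from the generation config if present.
sample_rate = getattr(model.generation_config, "sample_rate", 24_000)

for i, txt in enumerate(test_txt):
    start_time = time.time()
    inputs = processor(text=txt, voice_preset="v2/en_speaker_3", return_tensors="pt")
    # The full processor output (input_ids and, if returned, the attention mask) is
    # forwarded to generate(); whether this silences the warning from the report
    # depends on the installed transformers version.
    speech = model.generate(**inputs).cpu().numpy().squeeze()
    print(f"Inference {i + 1} took {time.time() - start_time:.2f} seconds")
    # `speech` is already a NumPy array at this point, so it is written directly.
    sf.write(f"{output_dir}/generated_speech{i}.wav", speech, sample_rate)
```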
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26531/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26531/timeline
completed
null
null