url (stringlengths 62-66) | repository_url (stringclasses 1 value) | labels_url (stringlengths 76-80) | comments_url (stringlengths 71-75) | events_url (stringlengths 69-73) | html_url (stringlengths 50-56) | id (int64 377M-2.15B) | node_id (stringlengths 18-32) | number (int64 1-29.2k) | title (stringlengths 1-487) | user (dict) | labels (list) | state (stringclasses 2 values) | locked (bool 2 classes) | assignee (dict) | assignees (list) | comments (sequence) | created_at (int64 1.54k-1.71k) | updated_at (int64 1.54k-1.71k) | closed_at (int64 1.54k-1.71k ⌀) | author_association (stringclasses 4 values) | active_lock_reason (stringclasses 2 values) | body (stringlengths 0-234k ⌀) | reactions (dict) | timeline_url (stringlengths 71-75) | state_reason (stringclasses 3 values) | draft (bool 2 classes) | pull_request (dict) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/28155 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28155/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28155/comments | https://api.github.com/repos/huggingface/transformers/issues/28155/events | https://github.com/huggingface/transformers/issues/28155 | 2,049,695,852 | I_kwDOCUB6oc56K-Bs | 28,155 | What is the minimum video card with large memory required to run the mixtral-8x7b model | {
"login": "zysNLP",
"id": 45376689,
"node_id": "MDQ6VXNlcjQ1Mzc2Njg5",
"avatar_url": "https://avatars.githubusercontent.com/u/45376689?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zysNLP",
"html_url": "https://github.com/zysNLP",
"followers_url": "https://api.github.com/users/zysNLP/followers",
"following_url": "https://api.github.com/users/zysNLP/following{/other_user}",
"gists_url": "https://api.github.com/users/zysNLP/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zysNLP/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zysNLP/subscriptions",
"organizations_url": "https://api.github.com/users/zysNLP/orgs",
"repos_url": "https://api.github.com/users/zysNLP/repos",
"events_url": "https://api.github.com/users/zysNLP/events{/privacy}",
"received_events_url": "https://api.github.com/users/zysNLP/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @zysNLP, thanks for raising an issue! \r\n\r\nThis is a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,703 | 1,706 | 1,706 | NONE | null | I mean the model that just came out:mistralai/Mixtral-8x7B-Instruct-v0.1,looks like a lot of parameter files,what is the minimum nvidia graphics card video memory required? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28155/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28155/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28154 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28154/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28154/comments | https://api.github.com/repos/huggingface/transformers/issues/28154/events | https://github.com/huggingface/transformers/issues/28154 | 2,049,630,196 | I_kwDOCUB6oc56Kt_0 | 28,154 | ffmpeg_microphone does not use current input device on Mac/Darwin | {
"login": "ruisilvestre",
"id": 1216164,
"node_id": "MDQ6VXNlcjEyMTYxNjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1216164?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ruisilvestre",
"html_url": "https://github.com/ruisilvestre",
"followers_url": "https://api.github.com/users/ruisilvestre/followers",
"following_url": "https://api.github.com/users/ruisilvestre/following{/other_user}",
"gists_url": "https://api.github.com/users/ruisilvestre/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ruisilvestre/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ruisilvestre/subscriptions",
"organizations_url": "https://api.github.com/users/ruisilvestre/orgs",
"repos_url": "https://api.github.com/users/ruisilvestre/repos",
"events_url": "https://api.github.com/users/ruisilvestre/events{/privacy}",
"received_events_url": "https://api.github.com/users/ruisilvestre/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"cc @sanchit-gandhi @ylacombe ",
"Hey @ruisilvestre, sorry for the long delay, I unfortunately don't have a Mac to try this on.\r\nCould you open a PR and we'll discuss this with @sanchit-gandhi ?\r\n\r\nAlso cc @Narsil as you might have better experience with ffmpeg !"
] | 1,703 | 1,708 | null | NONE | null | While going through the HF tutorials for STT [here](https://huggingface.co/learn/audio-course/chapter7/voice-assistant), I found some unexpected behaviour with the ffmpeg_microphone_live function on my Mac. I also just found someone that might be having the same issue [here](https://github.com/huggingface/transformers/issues/25183#issuecomment-1778473797) but it's an issue related to sound in Colab env so I'm creating this separately.
The input device index used is always 0, but that might not match the current system input device. Using the current system input device would be the expected behaviour (also according to the other platforms' code that all specify `default` for input device). E.g. I was working with my laptop closed (just connected to the monitor) and wanted to capture sound with my headphones but couldn't.
The solution seems to be fairly simple. Based on the [ffmpeg devices documentation](https://ffmpeg.org/ffmpeg-devices.html#avfoundation) the value `default` is also supported for audio in avfoundation, and it will match the current system input device.
I've changed this manually in audio_utils.py ffmpeg_microphone(...) and it seems to work as expected.
```
elif system == "Darwin":
format_ = "avfoundation"
input_ = ":default"
```
Here's the [link](https://github.com/huggingface/transformers/blob/fb78769b9c053876ed7ae152ee995b0439a4462a/src/transformers/pipelines/audio_utils.py#L68) to the same line in the HF repo.
I can make a PR for it if you want. This could also go with adding a param for the device index to those microphone functions similar to how other audio libraries do for easier customisation, which just falls back to use the `default` input device.
## Additional Info
`transformers-cli env` output
- `transformers` version: 4.35.2
- Platform: macOS-14.2-arm64-arm-64bit
- Python version: 3.10.13
- other info not relevant for this issue
Code to reproduce is the snippet in the voice-assistant tutorial. In case the 0th device is not the one you want to listen with, the code will just fail since it won't capture any audio.
```
import sys
def transcribe(chunk_length_s=5.0, stream_chunk_s=1.0):
sampling_rate = transcriber.feature_extractor.sampling_rate
mic = ffmpeg_microphone_live(
sampling_rate=sampling_rate,
chunk_length_s=chunk_length_s,
stream_chunk_s=stream_chunk_s,
)
print("Start speaking...")
for item in transcriber(mic, generate_kwargs={"max_new_tokens": 128}):
sys.stdout.write("\033[K")
print(item["text"], end="\r")
if not item["partial"][0]:
break
return item["text"]
```
According to [ffmpeg devices documentation](https://ffmpeg.org/ffmpeg-devices.html#Examples) you can print out your system input devices using
`ffmpeg -f avfoundation -list_devices true -i ""`
For me this gives:
```
[...]
[AVFoundation indev @ 0x7fcc33004d00] AVFoundation video devices:
[AVFoundation indev @ 0x7fcc33004d00] [0] FaceTime HD Camera
[AVFoundation indev @ 0x7fcc33004d00] [1] Rui Silvestre’s iPhone Camera
[AVFoundation indev @ 0x7fcc33004d00] [2] Capture screen 0
[AVFoundation indev @ 0x7fcc33004d00] AVFoundation audio devices:
[AVFoundation indev @ 0x7fcc33004d00] [0] MacBook Pro Microphone
[AVFoundation indev @ 0x7fcc33004d00] [1] Rui Silvestre’s iPhone Microphone
[AVFoundation indev @ 0x7fcc33004d00] [2] AirPods Pro
[AVFoundation indev @ 0x7fcc33004d00] [3] Microsoft Teams Audio
```
The audio device at index 0 is my MacBook mic but I currently have my AirPods on and would want to use that as my input device. I've also noticed the indexes change fairly frequently depending on which devices are nearby.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28154/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28154/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28153 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28153/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28153/comments | https://api.github.com/repos/huggingface/transformers/issues/28153/events | https://github.com/huggingface/transformers/issues/28153 | 2,049,517,555 | I_kwDOCUB6oc56KSfz | 28,153 | Annotations not being transformed after padding on Deformable DETR preprocessing | {
"login": "Tengoles",
"id": 26772529,
"node_id": "MDQ6VXNlcjI2NzcyNTI5",
"avatar_url": "https://avatars.githubusercontent.com/u/26772529?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Tengoles",
"html_url": "https://github.com/Tengoles",
"followers_url": "https://api.github.com/users/Tengoles/followers",
"following_url": "https://api.github.com/users/Tengoles/following{/other_user}",
"gists_url": "https://api.github.com/users/Tengoles/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Tengoles/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Tengoles/subscriptions",
"organizations_url": "https://api.github.com/users/Tengoles/orgs",
"repos_url": "https://api.github.com/users/Tengoles/repos",
"events_url": "https://api.github.com/users/Tengoles/events{/privacy}",
"received_events_url": "https://api.github.com/users/Tengoles/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
] | [
"/assign",
"Hi @Tengoles, thanks for raising this issue! \r\n\r\nIndeed, there's an issue with the transformations that are happening in deformable detr and other detr models if `do_pad=True` as the box coordinates aren't rescaled to account for the new image height / width cc @NielsRogge. \r\n\r\nI'll open a PR to update these. "
] | 1,703 | 1,707 | 1,707 | NONE | null | ### System Info
@amyeroberts
Maybe I'm missing something but it seems like the annotations are not being transformed accordingly after applying pad to a batch of images:
https://github.com/huggingface/transformers/blob/v4.36.1/src/transformers/models/deformable_detr/image_processing_deformable_detr.py#L1330
Is this dealt with further down the train pipeline? when I render the output annotations of that method (encoded_inputs["labels"]) they are incorrect for the images of the batch that required to be padded.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
from transformers import AutoImageProcessor
processor = AutoImageProcessor.from_pretrained("SenseTime/deformable-detr")
encoding = processor(images=imgs, annotations=targets, return_tensors="pt",
do_pad=True)
### Expected behavior
Annotations may require transformation just like they are transformed accordingly when applying resize and rescale on previous lines within the same method. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28153/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28153/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28152 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28152/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28152/comments | https://api.github.com/repos/huggingface/transformers/issues/28152/events | https://github.com/huggingface/transformers/pull/28152 | 2,049,484,831 | PR_kwDOCUB6oc5iahED | 28,152 | remove cpu dockerfiles to fix #28148 | {
"login": "evelynmitchell",
"id": 1007591,
"node_id": "MDQ6VXNlcjEwMDc1OTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1007591?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/evelynmitchell",
"html_url": "https://github.com/evelynmitchell",
"followers_url": "https://api.github.com/users/evelynmitchell/followers",
"following_url": "https://api.github.com/users/evelynmitchell/following{/other_user}",
"gists_url": "https://api.github.com/users/evelynmitchell/gists{/gist_id}",
"starred_url": "https://api.github.com/users/evelynmitchell/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/evelynmitchell/subscriptions",
"organizations_url": "https://api.github.com/users/evelynmitchell/orgs",
"repos_url": "https://api.github.com/users/evelynmitchell/repos",
"events_url": "https://api.github.com/users/evelynmitchell/events{/privacy}",
"received_events_url": "https://api.github.com/users/evelynmitchell/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thank you @evelynmitchell for this PR.\r\n\r\nHowever, the issue author @ashahba had opened a PR #28148 earlier .",
"Thanks!"
] | 1,703 | 1,703 | 1,703 | NONE | null | # What does this PR do?
Removes unneeded cpu Dockerfiles.
Fixes ##28148
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
https://github.com/huggingface/transformers/issues/28148
- [x ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? - not needed removed unnecessary item.
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28152/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28152/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28152",
"html_url": "https://github.com/huggingface/transformers/pull/28152",
"diff_url": "https://github.com/huggingface/transformers/pull/28152.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28152.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28151 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28151/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28151/comments | https://api.github.com/repos/huggingface/transformers/issues/28151/events | https://github.com/huggingface/transformers/pull/28151 | 2,049,467,164 | PR_kwDOCUB6oc5iadTq | 28,151 | 4D mask documentation updates | {
"login": "poedator",
"id": 24738311,
"node_id": "MDQ6VXNlcjI0NzM4MzEx",
"avatar_url": "https://avatars.githubusercontent.com/u/24738311?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/poedator",
"html_url": "https://github.com/poedator",
"followers_url": "https://api.github.com/users/poedator/followers",
"following_url": "https://api.github.com/users/poedator/following{/other_user}",
"gists_url": "https://api.github.com/users/poedator/gists{/gist_id}",
"starred_url": "https://api.github.com/users/poedator/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/poedator/subscriptions",
"organizations_url": "https://api.github.com/users/poedator/orgs",
"repos_url": "https://api.github.com/users/poedator/repos",
"events_url": "https://api.github.com/users/poedator/events{/privacy}",
"received_events_url": "https://api.github.com/users/poedator/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Feel free to ping me for a review whenever this is ready 🤗 ",
"> Feel free to ping me for a review whenever this is ready 🤗\r\n\r\n@ArthurZucker , I only identified 3 applicable model classes and made changes. Please check my logic in classes selection in my big first message above.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,703 | 1,707 | null | CONTRIBUTOR | null | following https://github.com/huggingface/transformers/pull/27539 this PR adds updates to transformers documentation to reflect possibility of utilizing 4D masks.
Plan:
- add updates for Llama model docstring(s)
- identify other models that can use 4D masks in present form (which requires ability to accept custom `position_ids` argument) and updating their docstrings. Classes that need updates:
- Falcon Model
- [TODO identify more]
- update code comments that may need corrections, like cases where the mask may be either 2D or 4D now. one example is based on [this comment](https://github.com/huggingface/transformers/pull/27539#issuecomment-1863285474) by @shentianxiao
Update 20.12.2023:
to find out which models require docstring changes, I scanned all model classes in transformers insing inspect.
- excluded tf and jax classes
- excluded models without `position_ids` argument in `.forward()` - can't use 4D mask effectively
- excluded models that do not use `_prepare_4d_attention_mask` method - need different code change to use 4D mask
- excluded multi-modal models (clip, clvp, vit, bark, git)
what is left is LlamaModel, FalconModel and XGLMModel.
cc @ArthurZucker | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28151/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28151/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28151",
"html_url": "https://github.com/huggingface/transformers/pull/28151",
"diff_url": "https://github.com/huggingface/transformers/pull/28151.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28151.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28150 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28150/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28150/comments | https://api.github.com/repos/huggingface/transformers/issues/28150/events | https://github.com/huggingface/transformers/issues/28150 | 2,049,441,164 | I_kwDOCUB6oc56J_2M | 28,150 | Codellama will not stop generating at EOS | {
"login": "bin123apple",
"id": 99925255,
"node_id": "U_kgDOBfS9Bw",
"avatar_url": "https://avatars.githubusercontent.com/u/99925255?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bin123apple",
"html_url": "https://github.com/bin123apple",
"followers_url": "https://api.github.com/users/bin123apple/followers",
"following_url": "https://api.github.com/users/bin123apple/following{/other_user}",
"gists_url": "https://api.github.com/users/bin123apple/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bin123apple/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bin123apple/subscriptions",
"organizations_url": "https://api.github.com/users/bin123apple/orgs",
"repos_url": "https://api.github.com/users/bin123apple/repos",
"events_url": "https://api.github.com/users/bin123apple/events{/privacy}",
"received_events_url": "https://api.github.com/users/bin123apple/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey 🤗 thanks for opening an issue! We try to keep the github issues for bugs/feature requests. As this is more related to the way the model is trained / the dataset it is using, could you ask your question on the [forum](https://discuss.huggingface.co/) instead? I'm sure the community will be of help!\r\n\r\nThanks!\r\n\r\nI don't really have valuable tips here, but would probably just make sure to remove the enoftext tokens as they might appear a lot, count them or make sure I properly add </s> in the input ids that are fed to the model with add_special_tokens = True! ",
"OK, I think I have solved the issue. I first add a `</s>` in the end of each of my data pairs and re-finetuned the model. Then, I canceled the `skip_special_tokens` parameter. Then it is good now. \r\n\r\nIt seems that there might be some bugs for the `skip_special_tokens` parameter (_Not very sure_). Because no matter I set it to `True` or `False`, the final output will keep repeating. And it will be good when I deleted this parameter. \r\n\r\nAnd for the `<|endoftext|>` issue, I think it is due to the way that DeepSpeed's handling data method (_Also not very sure_). If there is no `</s>`, it will add a `<|endoftext|>` in the end of the text. So you have to manually add a `</s>` in the end of your text.\r\n\r\nThese comments are just for other people's reference if they meet the same problem.",
"Thanks for sharing your journey 🤗 "
] | 1,703 | 1,703 | 1,703 | NONE | null | ### System Info
- `transformers` version: 4.36.0.dev0
- Platform: Linux-5.4.0-132-generic-x86_64-with-glibc2.31
- Python version: 3.11.3
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.3.3
- Accelerate version: 0.25.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: A100
- Using distributed or parallel set-up in script?: DeepSpeed ZeRO Stage 3; 7 GPUs data parallelism training.
### Who can help?
@ArthurZucker @youn
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Hey! Could you help to check the reason for this very weird question? Thanks a lot!
I am using some GPT-4 generated answers to finetune the codellama-13b model.
One data example in my dataset looks like this (Others have the similar format):
` The original fortran code: program DRB093_doall2_collapse_orig_no\n use omp_lib\n use DRB093\n implicit none\n\n integer :: len, i, j\n len = 100\n\n allocate (a(len,len))\n\n !$omp parallel do collapse(2)\n do i = 1, len\n do j = 1, len\n a(i,j) = a(i,j)+1\n end do\n end do\n !$omp end parallel do\nend program. `
`The translated C++ code: #include <stdio.h>\nint a[100][100];\nint main()\n{\n int i,j;\n#pragma omp parallel for collapse(2)\n for (i=0;i<100;i++)\n for (j=0;j<100;j++)\n a[i][j]=a[i][j]+1;\n return 0;\n}\n\n`
I used these the supervised finetuning scripts from deepspeed: https://github.com/microsoft/DeepSpeedExamples/tree/master/applications/DeepSpeed-Chat/training/ to finetune the codellama-13b.
And my inference script looks like this:
```
from transformers import AutoModelForCausalLM, AutoConfig,CodeLlamaTokenizer
dump_device = f'cuda:{device_num}'
model_config = AutoConfig.from_pretrained(model_name_or_path)
model_class = AutoModelForCausalLM.from_config(model_config)
model = model_class.from_pretrained(model_name_or_path,
from_tf=bool(".ckpt" in model_name_or_path),
config=model_config).to(dump_device)
tokenizer = CodeLlamaTokenizer.from_pretrained(model_name_or_path,fast_tokenizer=True)
model.config.end_token_id = tokenizer.eos_token_id
model.config.pad_token_id = model.config.eos_token_id
model.resize_token_embeddings(len(tokenizer))
tokenizer.pad_token = tokenizer.eos_token
if len(input_prompt) < max_prompt_length:
if tokenizer.pad_token is None:
tokenizer.pad_token = tokenizer.eos_token
# attention_mask = input_ids.ne(tokenizer.pad_token_id).float().to(dump_device)
outputs = model.generate(input_ids, max_length=max_tokens,num_return_sequences=n,eos_token_id=tokenizer.eos_token_id)
generated_ans = []
for i in range(n):
predicted_answer = tokenizer.decode(outputs[i], skip_special_tokens=False)
generated_ans.append(predicted_answer[len(input_prompt):])
else:
print("This task is out of my max prompt length limitation!")
```
It looks that the codellama-13b does get some new knowledge about fortran to C++ translation by learning this dataset. It starts to give the correct translation instead of talking randomly. But it keeps repeating the same thing for its answer. One example looks like this:
My Question:
```
Please help me to translate the following fortran code to C++:
program DRB096_doall2_taskloop_collapse_orig_no\n use omp_lib
use DRB096\n implicit none\n\n integer :: len, i, j\n len = 100\n\n allocate (a(len,len))\n\n !$omp parallel\n !$omp single\n !$omp taskloop collapse(2)\n do i = 1, len\n do j = 1, len\n a(i,j) = a(i,j)+1\n end do\n end do\n !$omp end taskloop\n !$omp end single\n !$omp end parallel\n\n print 100, a(50,50)\n 100 format ('a(50,50) =',i3)\n\nend program
```
Its answer:
```
Translated C++ code:
#include <stdio.h>\n#if (_OPENMP<201511)\n#error "An OpenMP 4.5 compiler is needed to compile this test."\n#endif\n\n#include <stdio.h>\nint a[100][100];\nint main()\n{\n int i, j;\n#pragma omp parallel\n {\n#pragma omp single\n {\n#pragma omp taskloop collapse(2)\n for (i = 0; i < 100; i++)\n for (j = 0; j < 100; j++)\n a[i][j]+=1; \n }\n }\n printf ("a[50][50]=%d\n", a[50][50]);\n return 0;\n}\n
<|enoftext|>
Translated C++ code:
#include <stdio.h>\n#if (_OPENMP<201511)\n#error "An OpenMP 4.5 compiler is needed to compile this test."\n#endif\n\n#include <stdio.h>\nint a[100][100];\nint main()\n{\n int i, j;\n#pragma omp parallel\n {\n#pragma omp single\n {\n#pragma omp taskloop collapse(2)\n for (i = 0; i < 100; i++)\n for (j = 0; j < 100; j++)\n a[i][j]+=1; \n }\n }\n printf ("a[50][50]=%d\n", a[50][50]);\n return 0;\n}\n
<|enoftext|>
Translated C++ code:
#include <stdio.h>\n#if (_OPENMP<201511)\n#error "An OpenMP 4.5 compiler is needed to compile this test."\n#endif\n\n#include <stdio.h>\nin
```
It will include a `<|enoftext|>` at the end of the correct generated answer and keep repeating the answer again and again until reach the `max_length_limitation`.
This is very weird, because actually `<|enoftext|>` is not included inside the llama tokenizer, it is the EOS token for GPT-4. For the llama tokenizer the EOS token is `</s>`. In the beginning, I thought it maybe because my dataset includes a lot of `<|enoftext|>` tokens, but I check the whole dataset, there is actually no `<|enoftext|>` inside.... And even if there are some `<|enoftext|>` inside the dataset, I think the codellama should also generate `</s>` at the suitable place inside of repeating the same answer again and again. Does it mean that I have to add a `</s>` and the end of my dataset while finetuning the model? Or is there anything wrong inside my inference script? And could you help to explain where this `<|enoftext|>` come from? My dataset does not contain this token and it is also not inside the llama tokenizer... I am very confusing about it..
Thanks a lot for all the help!
### Expected behavior
I expect the codellama model stop at the correct place instead of repeating the same answer and include a `<|enoftext|>`
Expected answer:
```
Translated C++ code:
#include <stdio.h>\n#if (_OPENMP<201511)\n#error "An OpenMP 4.5 compiler is needed to compile this test."\n#endif\n\n#include <stdio.h>\nint a[100][100];\nint main()\n{\n int i, j;\n#pragma omp parallel\n {\n#pragma omp single\n {\n#pragma omp taskloop collapse(2)\n for (i = 0; i < 100; i++)\n for (j = 0; j < 100; j++)\n a[i][j]+=1; \n }\n }\n printf ("a[50][50]=%d\n", a[50][50]);\n return 0;\n}\n
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28150/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28150/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28149 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28149/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28149/comments | https://api.github.com/repos/huggingface/transformers/issues/28149/events | https://github.com/huggingface/transformers/pull/28149 | 2,049,424,841 | PR_kwDOCUB6oc5iaUBm | 28,149 | Remove deprecated CPU dockerfiles | {
"login": "ashahba",
"id": 12436063,
"node_id": "MDQ6VXNlcjEyNDM2MDYz",
"avatar_url": "https://avatars.githubusercontent.com/u/12436063?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ashahba",
"html_url": "https://github.com/ashahba",
"followers_url": "https://api.github.com/users/ashahba/followers",
"following_url": "https://api.github.com/users/ashahba/following{/other_user}",
"gists_url": "https://api.github.com/users/ashahba/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ashahba/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ashahba/subscriptions",
"organizations_url": "https://api.github.com/users/ashahba/orgs",
"repos_url": "https://api.github.com/users/ashahba/repos",
"events_url": "https://api.github.com/users/ashahba/events{/privacy}",
"received_events_url": "https://api.github.com/users/ashahba/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,703 | 1,703 | 1,703 | CONTRIBUTOR | null | This PR fixes #28148
Originally a PR was submitted here: https://github.com/huggingface/transformers/pull/28084 but per @ydshieh 's assessment, those Dockerfiles are no longer being maintained and should be removed.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28149/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28149/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28149",
"html_url": "https://github.com/huggingface/transformers/pull/28149",
"diff_url": "https://github.com/huggingface/transformers/pull/28149.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28149.patch",
"merged_at": 1703047896000
} |
https://api.github.com/repos/huggingface/transformers/issues/28148 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28148/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28148/comments | https://api.github.com/repos/huggingface/transformers/issues/28148/events | https://github.com/huggingface/transformers/issues/28148 | 2,049,419,124 | I_kwDOCUB6oc56J6d0 | 28,148 | CPU Dockerfile(s) are deprecated and need to be removed. | {
"login": "ashahba",
"id": 12436063,
"node_id": "MDQ6VXNlcjEyNDM2MDYz",
"avatar_url": "https://avatars.githubusercontent.com/u/12436063?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ashahba",
"html_url": "https://github.com/ashahba",
"followers_url": "https://api.github.com/users/ashahba/followers",
"following_url": "https://api.github.com/users/ashahba/following{/other_user}",
"gists_url": "https://api.github.com/users/ashahba/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ashahba/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ashahba/subscriptions",
"organizations_url": "https://api.github.com/users/ashahba/orgs",
"repos_url": "https://api.github.com/users/ashahba/repos",
"events_url": "https://api.github.com/users/ashahba/events{/privacy}",
"received_events_url": "https://api.github.com/users/ashahba/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,703 | 1,703 | 1,703 | CONTRIBUTOR | null | Please remove deprecated CPU Dockerfile(s) since they cause customer confusion.
_Originally posted by @ydshieh in https://github.com/huggingface/transformers/issues/28084#issuecomment-1862419041_
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28148/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28148/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28147 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28147/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28147/comments | https://api.github.com/repos/huggingface/transformers/issues/28147/events | https://github.com/huggingface/transformers/issues/28147 | 2,049,309,452 | I_kwDOCUB6oc56JfsM | 28,147 | logit too slow compared to generate | {
"login": "enochlev",
"id": 47466848,
"node_id": "MDQ6VXNlcjQ3NDY2ODQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/47466848?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/enochlev",
"html_url": "https://github.com/enochlev",
"followers_url": "https://api.github.com/users/enochlev/followers",
"following_url": "https://api.github.com/users/enochlev/following{/other_user}",
"gists_url": "https://api.github.com/users/enochlev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/enochlev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/enochlev/subscriptions",
"organizations_url": "https://api.github.com/users/enochlev/orgs",
"repos_url": "https://api.github.com/users/enochlev/repos",
"events_url": "https://api.github.com/users/enochlev/events{/privacy}",
"received_events_url": "https://api.github.com/users/enochlev/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @gante ",
"You seem to be generating 10 times 10 tokens with the for generate loop. The forward loop probably computes gradients as you did not wrap it up, and finally, you are doing multinomial sampling which is slower than greedy argmax used by default if you generate and the genration_config does not have `do_sample`. \r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,703 | 1,705 | 1,705 | NONE | null | ### System Info
I am trying to construct a library for constrained generation. The goal hopfully is to skip generating text if there is only one possible next token.
The problem I am having is the logits function is way too slow to allow constrained generation to be of any use. Is there a way to speed up logits?
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
here is an example, that might work (my actual working code is in neuronx).
```
import torch
from transformers import LlamaForCausalLM, AutoTokenizer
import time
# Load the model and tokenizer
model_name = "meta-llama/Llama-2-7b-hf"
model = LlamaForCausalLM.from_pretrained(model_name,device_map="cuda")
tokenizer = AutoTokenizer.from_pretrained(model_name)
import time
num_iterations = 10
start_time = time.time()
for _ in range(num_iterations):
logits = generator.neuron_model.forward(torch.tensor(generator.encode(input_prompt), dtype=torch.long)).squeeze()
softmax_probs = torch.nn.functional.softmax(logits, dim=-1)
next_token_index = torch.multinomial(softmax_probs, 1).item()
end_time = time.time()
logits_time = end_time - start_time
print(f"Time taken for generating text using logits: {logits_time / num_iterations} seconds")
# Timing the generation using the generate_text method
start_time = time.time()
generated_text = generator.generate(input_prompt=input_prompt,max_length=10)
end_time = time.time()
generate_time = end_time - start_time
print(f"Time taken for generating text using generate_text: {generate_time / num_iterations} seconds")
```
here is the contrained genertion code
```
neuron_model = LlamaForSampling.from_pretrained(model_path + 'llama-2-7b-vicuna', batch_size=1, tp_degree=6, amp='bf16', context_length_estimate=[4000], n_positions=4000)
neuron_model.to_neuron()
tokenizer = AutoTokenizer.from_pretrained(model_path + 'llama-2-7b-vicuna')
import torch
import torch.nn.functional as F
import numpy as np
class ConstrainedTextGenerator:
def __init__(self, sequences, neuron_model, eos_token_id=2):
self.neuron_model = neuron_model
self.eos_token_id = self.encode("</s>")
self.tree = self.preprocess(sequences)
def preprocess(self, sequences):
tree = {}
for sequence in sequences:
sequence_ids = self.encode(sequence)
current_tree = tree
for token in sequence_ids:
token_item = token.item() # Convert tensor to int
if token_item not in current_tree:
current_tree[token_item] = {}
current_tree = current_tree[token_item]
# Add </s> to mark the end of each sequence
eos_token = self.eos_token_id.item() # Convert tensor to int
if eos_token not in current_tree:
current_tree[eos_token] = {}
return tree
def encode(self, text):
# Replace this with your encoding logic, assuming it returns a list of token_ids
return tokenizer.encode(text, add_special_tokens=False, return_tensors="pt")[0]
def generate_text(self, input_prompt=""):
input_ids_list = [[]]
current_tree = self.tree
# Encode the input prompt
prompt_ids = self.encode(input_prompt)
# Append prompt_ids to input_ids_list
input_ids_list[0].extend(prompt_ids.tolist())
while True:
# Check if there are multiple options at the current position
if len(current_tree) > 1:
# Get the indices of the available tokens
available_indices = [list(current_tree.keys()).index(token) for token in current_tree.keys()]
# Choose the token based on the softmax probabilities
logits = self.neuron_model.forward(torch.tensor(input_ids_list, dtype=torch.long)).squeeze()
softmax_probs = torch.nn.functional.softmax(logits[available_indices], dim=-1)
# Sample from the softmax probabilities
next_token_index = torch.multinomial(softmax_probs, 1).item()
next_token = list(current_tree.keys())[available_indices[next_token_index]]
else:
# If there's only one option, skip forward and fill it in
next_token = list(current_tree.keys())[0]
input_ids_list[-1].append(next_token)
# Check if it's the end of a sequence
if next_token == self.eos_token_id.item():
break
else:
current_tree = current_tree.get(next_token, {})
# Remove the empty sequence at the end, if any
if not input_ids_list[-1]:
input_ids_list.pop()
input_ids = torch.tensor([token for seq in input_ids_list for token in seq], dtype=torch.long)
generated_text = ' '.join(map(str, input_ids.tolist()))
return input_ids
```
### Expected behavior
I expect logits and generate to have the same geneartion speed per token | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28147/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28147/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28146 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28146/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28146/comments | https://api.github.com/repos/huggingface/transformers/issues/28146/events | https://github.com/huggingface/transformers/pull/28146 | 2,049,257,155 | PR_kwDOCUB6oc5iZvBJ | 28,146 | Even more TF test fixes | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28146). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"This should be ready to go now, and finally fixes the remaining CI issues after the `build()` PR!"
] | 1,703 | 1,703 | 1,703 | MEMBER | null | This PR hopefully fixes the last remaining issues from the `build()` PR and gets the CI back to normal! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28146/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/28146/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28146",
"html_url": "https://github.com/huggingface/transformers/pull/28146",
"diff_url": "https://github.com/huggingface/transformers/pull/28146.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28146.patch",
"merged_at": 1703171687000
} |
https://api.github.com/repos/huggingface/transformers/issues/28145 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28145/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28145/comments | https://api.github.com/repos/huggingface/transformers/issues/28145/events | https://github.com/huggingface/transformers/pull/28145 | 2,049,240,601 | PR_kwDOCUB6oc5iZrVp | 28,145 | [docs] Trainer docs | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Test is passing! \r\n\r\nI rebased to incorporate the changes from #28135, but that didn't work for some reason, so I manually edited `tests/utils/test_doc_samples.py` to reflect the latest changes. Is this still ok since the change already exists on `main`?",
"@stevhliu Yep - it should be fine! "
] | 1,703 | 1,703 | 1,703 | MEMBER | null | Part 2 of #27986 to finish cleaning up the `Trainer` API docs. This includes:
- moving the CUDA extension installation problems to the performance and scalability debugging [doc](https://huggingface.co/docs/transformers/main/en/debugging) where it is more appropriate
- GPU selection has its own section in the multiple GPU training [doc](https://huggingface.co/docs/transformers/main/en/perf_train_gpu_many)
- spin out the FSDP sections into their own docs
- add a link from the Trainer guide to the FSDP guide | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28145/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28145/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28145",
"html_url": "https://github.com/huggingface/transformers/pull/28145",
"diff_url": "https://github.com/huggingface/transformers/pull/28145.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28145.patch",
"merged_at": 1703097443000
} |
https://api.github.com/repos/huggingface/transformers/issues/28144 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28144/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28144/comments | https://api.github.com/repos/huggingface/transformers/issues/28144/events | https://github.com/huggingface/transformers/pull/28144 | 2,049,149,365 | PR_kwDOCUB6oc5iZXQ_ | 28,144 | Fix ONNX export for causal LM sequence classifiers by removing reverse indexing | {
"login": "dwyatte",
"id": 2512762,
"node_id": "MDQ6VXNlcjI1MTI3NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2512762?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dwyatte",
"html_url": "https://github.com/dwyatte",
"followers_url": "https://api.github.com/users/dwyatte/followers",
"following_url": "https://api.github.com/users/dwyatte/following{/other_user}",
"gists_url": "https://api.github.com/users/dwyatte/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dwyatte/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dwyatte/subscriptions",
"organizations_url": "https://api.github.com/users/dwyatte/orgs",
"repos_url": "https://api.github.com/users/dwyatte/repos",
"events_url": "https://api.github.com/users/dwyatte/events{/privacy}",
"received_events_url": "https://api.github.com/users/dwyatte/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks @ArthurZucker @amyeroberts. I've unified the Bloom/Falcon/MPT implementations, but doing so triggered what looks to be an unrelated CI failure. Can someone take a look and fix/disable that test if it is indeed unrelated?\r\n\r\n```\r\nFAILED tests/models/seamless_m4t/test_modeling_seamless_m4t.py::SeamlessM4TModelWithTextInputTest::test_retain_grad_hidden_states_attentions - AttributeError: 'NoneType' object has no attribute 'retain_grad'\r\n```",
"@dwyatte Yep, that's a flaky test. A patch to skip it in the testing suite was recently merged into main to prevent it affecting unrelated PRs like this one :) Could you rebase to include recent updates and trigger a new CI run? ",
"@amyeroberts Hm, [b0db02c](https://github.com/huggingface/transformers/commit/b0db02c395cee1e8b2ea73077c798617d14be2b4) contains the latest commit on `main` (224ab70969d1ac6c549f0beb3a8a71e2222e50f7), so I think `tests/models/seamless_m4t/test_modeling_seamless_m4t.py::SeamlessM4TModelWithTextInputTest::test_retain_grad_hidden_states_attentions` is still broken/flaking there",
"@dwyatte hm, that's odd. The test shouldn't even be running as [it's explicitly skipped](https://github.com/huggingface/transformers/blob/224ab70969d1ac6c549f0beb3a8a71e2222e50f7/tests/models/seamless_m4t/test_modeling_seamless_m4t.py#L616). In your local, on this branch, do you see this skip condition in `test_modeling_seamless_m4t.py`?",
"@amyeroberts I see what's going on -- the failure is on `SeamlessM4TModelWithTextInputTest` but the explicit skip exists on `SeamlessM4TModelWithSpeechInputTest`. Let me know if I should add the same skip to `SeamlessM4TModelWithTextInputTest` on my branch or if you prefer a different fix/PR",
"@dwyatte Ah! Gotcha. Yes please, could you open another separate PR to skip the retain grad tests for all the SeamlessMT4 models? ",
"Ok @amyeroberts @ArthurZucker, after rebasing on the above, this is ready for merging. Thanks both!",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28144). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Why not just have a shared util for this, instead of repeating the code all over the place"
] | 1,703 | 1,707 | 1,703 | CONTRIBUTOR | null | # What does this PR do?
Follow-up to https://github.com/huggingface/transformers/pull/27450 and another step to fixing https://github.com/huggingface/optimum/issues/1527. ONNX implements indexing using a combination of its own operators and when using reverse indexing (e.g., -1 to indicate 1 element from the right side of an array), it can produce incorrect results (see [PyTorch's ONNX export code](https://github.com/pytorch/pytorch/blob/71bedc3a69e3203fd8f76a68ecf2bd7c58d2e13e/torch/onnx/symbolic_opset9.py#L5859-L5865)). In practice, this can cause the batch dimension to get shuffled
Causal LM sequence were previously using `-1` for the last token. Adding `sequence_lengths = torch.where(sequence_lengths >= 0, sequence_lengths, input_ids.shape[-1] - 1)` effectively removes reverse indexing
While this could be fixed in https://github.com/huggingface/optimum by forcing the inputs used to trace the graph to contain a pad token and avoiding reverse indexing, it seems better to fix in `transformers` with the added benefit of bringing the code in line with TensorFlow implementations of the same code (e.g., https://github.com/huggingface/transformers/pull/25085/files#diff-7c6fdd54ac4b8ce0c09bb17da15f176d3e5827df39dd8234fd802631e99ef38dR801-R804)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@ArthurZucker, @amyeroberts, @younesbelkada (CC @fxmarty)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28144/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28144/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28144",
"html_url": "https://github.com/huggingface/transformers/pull/28144",
"diff_url": "https://github.com/huggingface/transformers/pull/28144.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28144.patch",
"merged_at": 1703241224000
} |
https://api.github.com/repos/huggingface/transformers/issues/28143 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28143/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28143/comments | https://api.github.com/repos/huggingface/transformers/issues/28143/events | https://github.com/huggingface/transformers/pull/28143 | 2,049,060,725 | PR_kwDOCUB6oc5iZDjp | 28,143 | [docs] Fix mistral link in mixtral.md | {
"login": "aaronjimv",
"id": 67152883,
"node_id": "MDQ6VXNlcjY3MTUyODgz",
"avatar_url": "https://avatars.githubusercontent.com/u/67152883?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aaronjimv",
"html_url": "https://github.com/aaronjimv",
"followers_url": "https://api.github.com/users/aaronjimv/followers",
"following_url": "https://api.github.com/users/aaronjimv/following{/other_user}",
"gists_url": "https://api.github.com/users/aaronjimv/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aaronjimv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aaronjimv/subscriptions",
"organizations_url": "https://api.github.com/users/aaronjimv/orgs",
"repos_url": "https://api.github.com/users/aaronjimv/repos",
"events_url": "https://api.github.com/users/aaronjimv/events{/privacy}",
"received_events_url": "https://api.github.com/users/aaronjimv/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Happy to help 🤗"
] | 1,703 | 1,703 | 1,703 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fix the mistral link on the **`Mixtral`** docs page.
The link in this section generates a 404 error:
> The following implementation details are shared with Mistral AI’s first model [mistral](https://huggingface.co/docs/transformers/main/en/model_doc/~models/doc/mistral):
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
@stevhliu | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28143/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28143/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28143",
"html_url": "https://github.com/huggingface/transformers/pull/28143",
"diff_url": "https://github.com/huggingface/transformers/pull/28143.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28143.patch",
"merged_at": 1703010854000
} |
https://api.github.com/repos/huggingface/transformers/issues/28142 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28142/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28142/comments | https://api.github.com/repos/huggingface/transformers/issues/28142/events | https://github.com/huggingface/transformers/pull/28142 | 2,049,058,176 | PR_kwDOCUB6oc5iZC_e | 28,142 | Fix FA2 integration | {
"login": "pacman100",
"id": 13534540,
"node_id": "MDQ6VXNlcjEzNTM0NTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pacman100",
"html_url": "https://github.com/pacman100",
"followers_url": "https://api.github.com/users/pacman100/followers",
"following_url": "https://api.github.com/users/pacman100/following{/other_user}",
"gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pacman100/subscriptions",
"organizations_url": "https://api.github.com/users/pacman100/orgs",
"repos_url": "https://api.github.com/users/pacman100/repos",
"events_url": "https://api.github.com/users/pacman100/events{/privacy}",
"received_events_url": "https://api.github.com/users/pacman100/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> this could be added to the Llama.md as a tip ? (nit)\r\n\r\nDone.",
"So FSDP is saved?",
"I think so, from the experiment @pacman100 shared with me you could load a transformers model with FA-2 and train it with autocast (`fp16=True`) and the model was converging nicely",
"Hello @teknium1, to re-confirm, I ran the below experiment on 8 80GB GPUs to finetune Mistral 7B for the SFT task on Ultrachat 200K (1 epoch).\r\n\r\nCode: https://github.com/pacman100/DHS-LLM-Workshop/blob/main/chat_assistant/training/run_fsdp.sh\r\nConfig: https://github.com/pacman100/DHS-LLM-Workshop/blob/main/chat_assistant/training/configs/fsdp_config.yaml\r\nVersions:\r\n```\r\n- `transformers` version: 4.37.0.dev0\r\n- Platform: Linux-5.15.0-1048-aws-x86_64-with-glibc2.31\r\n- Python version: 3.10.13\r\n- Huggingface_hub version: 0.20.1\r\n- Safetensors version: 0.4.1\r\n- Accelerate version: 0.25.0.dev0\r\n- Accelerate config: \tnot found\r\n- PyTorch version (GPU?): 2.1.2+cu121 (True)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?: <fill in>\r\n- Using distributed or parallel set-up in script?: <fill in>\r\n- trl 0.7.8.dev0\r\n```\r\n\r\nPlots:\r\n\r\n\r\nObservations:\r\nPlot converges as expected similarly to the plot for Zephyr sft training [plots](https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta/tensorboard)\r\n"
] | 1,703 | 1,703 | 1,703 | CONTRIBUTOR | null | # What does this PR do?
1. Fix FA2 integration.
Issues with the current FA2 integration.
1. It makes providing `torch_dtype` to the `from_pretrained` class method mandatory. This leads to the whole model being loaded in half-precision which leads to unstable training because it would result in pure half precision training instead of mixed-precision training. Please refer https://github.com/huggingface/transformers/issues/26498#issuecomment-1812528717 for more details.
Currently, the main branch throws the error below when half precision is not passed to `torch_dtype`, which shouldn't be the case.
```bash
You are attempting to use Flash Attention 2.0 without specifying a torch dtype. This might lead to unexpected behaviour
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
...
File /raid/sourab/transformers/src/transformers/modeling_utils.py:1422, in PreTrainedModel._check_and_enable_flash_attn_2(cls, config, torch_dtype, device_map, check_device_map, hard_check_only)
1418 logger.warning(
1419 "You are attempting to use Flash Attention 2.0 without specifying a torch dtype. This might lead to unexpected behaviour"
1420 )
1421 elif torch_dtype is not None and torch_dtype not in [torch.float16, torch.bfloat16]:
-> 1422 raise ValueError(
1423 f"Flash Attention 2.0 only supports torch.float16 and torch.bfloat16 dtypes. You passed {torch_dtype}, this might lead to"
1424 " unexpected behaviour."
1425 )
1427 # The check `torch.empty(0).device.type != "cuda"` is needed as the model may be initialized after `torch.set_default_device` has been called,
1428 # or the model may be initialized under the context manager `with torch.device("cuda"):`.
1429 if check_device_map and device_map is None and torch.empty(0).device.type != "cuda":
ValueError: Flash Attention 2.0 only supports torch.float16 and torch.bfloat16 dtypes. You passed torch.float32, this might lead to unexpected behaviour.
```
2. As a workaround, one would pass `torch_dtype`, then recast the model to float32 and try to train but then end up getting error from Flash Attention library as given below:
```
File /raid/sourab/miniconda3/envs/hf/lib/python3.10/site-packages/flash_attn/flash_attn_interface.py:79, in _flash_attn_varlen_forward(q, k, v, cu_seqlens_q, cu_seqlens_k, max_seqlen_q, max_seqlen_k, dropout_p, softmax_scale, causal, window_size, return_softmax)
77 maybe_contiguous = lambda x: x.contiguous() if x.stride(-1) != 1 else x
78 q, k, v = [maybe_contiguous(x) for x in (q, k, v)]
---> 79 out, q, k, v, out_padded, softmax_lse, S_dmask, rng_state = flash_attn_cuda.varlen_fwd(
80 q,
81 k,
82 v,
83 None,
84 cu_seqlens_q,
85 cu_seqlens_k,
86 max_seqlen_q,
87 max_seqlen_k,
88 dropout_p,
89 softmax_scale,
90 False,
91 causal,
92 window_size[0],
93 window_size[1],
94 return_softmax,
95 None,
96 )
97 # if out.isnan().any() or softmax_lse.isnan().any():
98 # breakpoint()
99 return out, q, k, v, out_padded, softmax_lse, S_dmask, rng_state
RuntimeError: FlashAttention only support fp16 and bf16 data type
```
3. Now, to overcome that, one would need to cast the trainable params to float32 and all the other params to float16, which is only possible with PEFT approaches. For normal fine-tuning, things end here, leaving no way to use flash attention correctly. But even that change leads to unstable learning plateauing at a high loss, so there is no luck in the PEFT setup either.

All these issues are being resolved by this PR. Notice the above graph with the before and after PR logs. With this PR, the loss is similar to the case when not using FA2.
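For reference, a minimal sketch of the usage this unblocks (the checkpoint name is only an example, and `attn_implementation` assumes a recent `transformers` version): the model stays in float32 at load time and mixed precision is handled by the `Trainer` through `fp16=True`, instead of loading the whole model in half precision.

```python
import torch
from transformers import AutoModelForCausalLM, TrainingArguments

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",              # example checkpoint
    attn_implementation="flash_attention_2",
    torch_dtype=torch.float32,               # no longer forced to fp16/bf16 at load time
)

training_args = TrainingArguments(
    output_dir="out",
    fp16=True,  # autocast provides the half-precision compute FlashAttention requires
)
```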
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28142/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28142/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28142",
"html_url": "https://github.com/huggingface/transformers/pull/28142",
"diff_url": "https://github.com/huggingface/transformers/pull/28142.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28142.patch",
"merged_at": 1703062507000
} |
https://api.github.com/repos/huggingface/transformers/issues/28141 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28141/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28141/comments | https://api.github.com/repos/huggingface/transformers/issues/28141/events | https://github.com/huggingface/transformers/pull/28141 | 2,049,005,796 | PR_kwDOCUB6oc5iY3WB | 28,141 | Update VITS modeling to enable ONNX export | {
"login": "echarlaix",
"id": 80481427,
"node_id": "MDQ6VXNlcjgwNDgxNDI3",
"avatar_url": "https://avatars.githubusercontent.com/u/80481427?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/echarlaix",
"html_url": "https://github.com/echarlaix",
"followers_url": "https://api.github.com/users/echarlaix/followers",
"following_url": "https://api.github.com/users/echarlaix/following{/other_user}",
"gists_url": "https://api.github.com/users/echarlaix/gists{/gist_id}",
"starred_url": "https://api.github.com/users/echarlaix/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/echarlaix/subscriptions",
"organizations_url": "https://api.github.com/users/echarlaix/orgs",
"repos_url": "https://api.github.com/users/echarlaix/repos",
"events_url": "https://api.github.com/users/echarlaix/events{/privacy}",
"received_events_url": "https://api.github.com/users/echarlaix/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> For futur reference best practice is this only when it power of 2 ? does pow(x,2) work better (but is slower I think)\r\n\r\nIn all cases it'll be [casted to fp32](https://github.com/pytorch/pytorch/blob/v2.1.2/torch/onnx/symbolic_opset9.py#L3386) (no matter the exponent), using a [multiplication](https://github.com/onnx/onnx/blob/main/docs/Operators.md#mul) instead removes this constraint, and should be \"faster\" (not that it'll have any impact here)\r\n"
] | 1,703 | 1,704 | 1,704 | COLLABORATOR | null | This PR enables the ONNX export of VITS models in Optimum (https://github.com/huggingface/optimum/pull/1607), currently the export is failing due to [a cast operator added before the pow operator](https://github.com/pytorch/pytorch/blob/v2.1.2/torch/onnx/symbolic_opset9.py#L3382) in the model graph, resulting in an issue during the concatenation of two values of different data type
cc @xenova
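For illustration, a minimal standalone repro of the pattern involved (not the actual VITS code): in a half-precision graph the `Cast` inserted before `Pow` can change the dtype of one branch, while `x * x` keeps the input dtype, so the subsequent concatenation sees matching types.

```python
import torch

class PowThenCat(torch.nn.Module):
    def forward(self, x):
        # exported as Cast -> Pow, which may change the dtype of this branch
        return torch.cat([x, x ** 2], dim=-1)

class MulThenCat(torch.nn.Module):
    def forward(self, x):
        # exported as a plain Mul, so both branches keep the same dtype
        return torch.cat([x, x * x], dim=-1)

x = torch.randn(1, 4)
torch.onnx.export(MulThenCat(), (x,), "mul_then_cat.onnx", input_names=["x"])
```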
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28141/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28141/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28141",
"html_url": "https://github.com/huggingface/transformers/pull/28141",
"diff_url": "https://github.com/huggingface/transformers/pull/28141.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28141.patch",
"merged_at": 1704473552000
} |
https://api.github.com/repos/huggingface/transformers/issues/28140 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28140/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28140/comments | https://api.github.com/repos/huggingface/transformers/issues/28140/events | https://github.com/huggingface/transformers/issues/28140 | 2,048,820,728 | I_kwDOCUB6oc56HoX4 | 28,140 | GPU or MPS error when running run_clm.py | {
"login": "oscar-defelice",
"id": 49638680,
"node_id": "MDQ6VXNlcjQ5NjM4Njgw",
"avatar_url": "https://avatars.githubusercontent.com/u/49638680?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oscar-defelice",
"html_url": "https://github.com/oscar-defelice",
"followers_url": "https://api.github.com/users/oscar-defelice/followers",
"following_url": "https://api.github.com/users/oscar-defelice/following{/other_user}",
"gists_url": "https://api.github.com/users/oscar-defelice/gists{/gist_id}",
"starred_url": "https://api.github.com/users/oscar-defelice/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/oscar-defelice/subscriptions",
"organizations_url": "https://api.github.com/users/oscar-defelice/orgs",
"repos_url": "https://api.github.com/users/oscar-defelice/repos",
"events_url": "https://api.github.com/users/oscar-defelice/events{/privacy}",
"received_events_url": "https://api.github.com/users/oscar-defelice/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @oscar-defelice, thanks for raising an issue! \r\n\r\nWhen you say that you've tried varying the batch size, what values have you tested with. Are you able to run with a batch size of 1 e.g.:\r\n\r\n```\r\npython run_clm.py \\\r\n--model_name_or_path nferruz/ProtGPT2 \\\r\n--train_file data/fine_tune_data.txt \\\r\n--tokenizer_name nferruz/ProtGPT2 \\\r\n--do_train \\\r\n--output_dir models/ProtGPT/output \\\r\n--learning_rate 1e-06\r\n--per_device_train_batch_size 1 \\\r\n--per_device_eval_batch_size 1 \\\r\n```\r\n\r\nHow about running with a small model e.g.: \r\n```\r\npython run_clm.py \\\r\n--model_name_or_path gpt2 \\\r\n--train_file data/fine_tune_data.txt \\\r\n--tokenizer_name nferruz/ProtGPT2 \\\r\n--do_train \\\r\n--output_dir models/ProtGPT/output \\\r\n--learning_rate 1e-06\r\n--per_device_train_batch_size 1 \\\r\n--per_device_eval_batch_size 1 \\\r\n```\r\n?\r\n\r\nWhat is the size of the GPUs being used on the ubuntu machine? ",
"Hello @amyeroberts thank you for your reply. Even with both train and eval batch size equal 1 I got the same error.\r\n\r\nOn Ubuntu the transformer-cli env command gives\r\n\r\n```bash\r\n- `transformers` version: 4.37.0.dev0\r\n- Platform: Linux-6.2.0-39-generic-x86_64-with-glibc2.35\r\n- Python version: 3.10.12\r\n- Huggingface_hub version: 0.19.4\r\n- Safetensors version: 0.4.0\r\n- Accelerate version: 0.25.0\r\n- Accelerate config: \tnot found\r\n- PyTorch version (GPU?): 2.1.0+cu121 (True)\r\n```\r\n\r\nand for the GPUs \r\n\r\n```bash\r\ndisplay\r\n description: VGA compatible controller\r\n produit: NVIDIA Corporation\r\n fabricant: NVIDIA Corporation\r\n identifiant matériel: 0\r\n information bus: pci@0000:41:00.0\r\n version: a1\r\n bits: 64 bits\r\n horloge: 33MHz\r\n fonctionnalités: vga_controller bus_master cap_list rom\r\n configuration : driver=nvidia latency=0\r\n ressources : mémoireE/S:1100-10ff mémoireE/S:1180-117f irq:263 mémoire:f0000000-f0ffffff mémoire:11000000000-117ffffffff mémoire:11800000000-11801ffffff portE/S:4000(taille=128) mémoire:f1000000-f107ffff\r\n *-display\r\n description: VGA compatible controller\r\n produit: NVIDIA Corporation\r\n fabricant: NVIDIA Corporation\r\n identifiant matériel: 0\r\n information bus: pci@0000:61:00.0\r\n version: a1\r\n bits: 64 bits\r\n horloge: 33MHz\r\n fonctionnalités: vga_controller bus_master cap_list rom\r\n configuration : driver=nvidia latency=0\r\n ressources : mémoireE/S:1000-fff mémoireE/S:1080-107f irq:262 mémoire:f4000000-f4ffffff mémoire:10000000000-107ffffffff mémoire:10800000000-10801ffffff portE/S:f000(taille=128) mémoire:f5000000-f507ffff\r\n *-graphics\r\n produit: EFI VGA\r\n identifiant matériel: 2\r\n nom logique: /dev/fb0\r\n fonctionnalités: fb\r\n configuration : depth=32 resolution=1920,1080",
"@oscar-defelice Did you try running with a smaller model? Was that successful or did you hit the same issue? Do you see your GPUs being utilized when running the script? ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,702 | 1,706 | 1,706 | CONTRIBUTOR | null | ### System Info
## System Info
```bash
- `transformers` version: 4.37.0.dev0
- Platform: macOS-14.2-arm64-arm-64bit
- Python version: 3.11.7
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.1
- Accelerate version: 0.25.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
```
---
Even though I am pasting this output, I get the same issue when I run on Ubuntu with 2 GPUs.
### Who can help?
@ArthurZucker @muellerz
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I run
```bash
python run_clm.py --model_name_or_path nferruz/ProtGPT2 --train_file data/fine_tune_data.txt --tokenizer_name nferruz/ProtGPT2 --do_train --output_dir models/ProtGPT/output --learning_rate 1e-06
```
And no matter what I try with batch_size and learning rate I always get
```bash
RuntimeError: MPS backend out of memory (MPS allocated: 78.40 GB, other allocations: 2.98 GB, max allowed: 81.60 GB). Tried to allocate 320.00 MB on private pool. Use PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 to disable upper limit for memory allocations (may cause system failure).
```
### Expected behavior
It should work and finetune the model =) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28140/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28140/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28139 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28139/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28139/comments | https://api.github.com/repos/huggingface/transformers/issues/28139/events | https://github.com/huggingface/transformers/issues/28139 | 2,048,708,752 | I_kwDOCUB6oc56HNCQ | 28,139 | `from_pretrained` is extremely slow when deepspeed zero3 is enabled | {
"login": "Jingru",
"id": 4298653,
"node_id": "MDQ6VXNlcjQyOTg2NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4298653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Jingru",
"html_url": "https://github.com/Jingru",
"followers_url": "https://api.github.com/users/Jingru/followers",
"following_url": "https://api.github.com/users/Jingru/following{/other_user}",
"gists_url": "https://api.github.com/users/Jingru/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Jingru/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jingru/subscriptions",
"organizations_url": "https://api.github.com/users/Jingru/orgs",
"repos_url": "https://api.github.com/users/Jingru/repos",
"events_url": "https://api.github.com/users/Jingru/events{/privacy}",
"received_events_url": "https://api.github.com/users/Jingru/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @pacman100 ",
"I did some research, and found that all the ranks except rank0 tried to load model weight tensors to `meta device`. It looks like this behavior is extremely slow. \r\nhttps://github.com/huggingface/transformers/blob/bffac926ca6bc6c965a92bfbfd00c567a2c0fb90/src/transformers/modeling_utils.py#L485",
"Hello, the state dict is loaded loaded on rank 0 to avoid excessive memory usage and loading the model on meta on other devices should be the fastest as no materialization takes places. \r\n\r\nCode changes to your example as i can't access the private model:\r\n```diff\r\nimport deepspeed\r\n\r\nfrom transformers.deepspeed import HfDeepSpeedConfig\r\nfrom transformers import AutoModelForCausalLM\r\n\r\n\r\ndeepspeed.init_distributed()\r\n\r\nds_config = {\r\n \"train_batch_size\": 32,\r\n \"train_micro_batch_size_per_gpu\": 4,\r\n \"steps_per_print\": 10,\r\n \"zero_optimization\": {\r\n \"stage\": 3,\r\n \"offload_param\": {\"device\": \"cpu\"},\r\n \"offload_optimizer\": {\"device\": \"cpu\"},\r\n \"stage3_param_persistence_threshold\": 10000.0,\r\n \"stage3_max_live_parameters\": 30000000.0,\r\n \"stage3_prefetch_bucket_size\": 30000000.0,\r\n \"memory_efficient_linear\": False,\r\n },\r\n \"fp16\": {\"enabled\": True, \"loss_scale_window\": 100},\r\n \"gradient_clipping\": 1.0,\r\n \"prescale_gradients\": False,\r\n \"wall_clock_breakdown\": False,\r\n \"hybrid_engine\": {\r\n \"enabled\": True,\r\n \"max_out_tokens\": 512,\r\n \"inference_tp_size\": 1,\r\n \"release_inference_cache\": False,\r\n \"pin_parameters\": True,\r\n \"tp_gather_partition_size\": 8,\r\n },\r\n}\r\n\r\ndschf = HfDeepSpeedConfig(ds_config)\r\n\r\n- model = AutoModelForCausalLM.from_pretrained(\r\n- \"../llama_actor\", from_tf=False, trust_remote_code=False\r\n- )\r\n+ model = AutoModelForCausalLM.from_pretrained(\"meta-llama/Llama-2-7b-hf\")\r\n```\r\n\r\nlaunch command on 2 GPUs: \r\n```\r\ntime torchrun --nnodes 1 --nproc-per-node 2 --rdzv-endpoint=localhost:35000 issue_28139.py\r\n```\r\n\r\nResult:\r\n```\r\n[2023-12-19 15:08:18,112] torch.distributed.run: [WARNING] \r\n[2023-12-19 15:08:18,112] torch.distributed.run: [WARNING] *****************************************\r\n[2023-12-19 15:08:18,112] torch.distributed.run: [WARNING] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed. \r\n[2023-12-19 15:08:18,112] torch.distributed.run: [WARNING] *****************************************\r\n[2023-12-19 15:08:21,074] [INFO] [real_accelerator.py:161:get_accelerator] Setting ds_accelerator to cuda (auto detect)\r\n[2023-12-19 15:08:21,074] [INFO] [real_accelerator.py:161:get_accelerator] Setting ds_accelerator to cuda (auto detect)\r\n/raid/sourab/miniconda3/envs/hf/lib/python3.10/site-packages/transformers/utils/hub.py:123: FutureWarning: Using `TRANSFORMERS_CACHE` is deprecated and will be removed in v5 of Transformers. Use `HF_HOME` instead.\r\n warnings.warn(\r\n/raid/sourab/miniconda3/envs/hf/lib/python3.10/site-packages/transformers/utils/hub.py:123: FutureWarning: Using `TRANSFORMERS_CACHE` is deprecated and will be removed in v5 of Transformers. Use `HF_HOME` instead.\r\n warnings.warn(\r\n/raid/sourab/miniconda3/envs/hf/lib/python3.10/site-packages/transformers/deepspeed.py:23: FutureWarning: transformers.deepspeed module is deprecated and will be removed in a future version. Please import deepspeed modules directly from transformers.integrations\r\n warnings.warn(\r\n[2023-12-19 15:08:23,368] [INFO] [comm.py:637:init_distributed] cdb=None\r\n/raid/sourab/miniconda3/envs/hf/lib/python3.10/site-packages/transformers/deepspeed.py:23: FutureWarning: transformers.deepspeed module is deprecated and will be removed in a future version. 
Please import deepspeed modules directly from transformers.integrations\r\n warnings.warn(\r\n[2023-12-19 15:08:23,499] [INFO] [comm.py:637:init_distributed] cdb=None\r\n[2023-12-19 15:08:23,499] [INFO] [comm.py:668:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl\r\n[2023-12-19 15:08:31,060] [INFO] [partition_parameters.py:348:__exit__] finished initializing model - num_params = 291, num_elems = 6.74B\r\nLoading checkpoint shards: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:03<00:00, 1.86s/it]\r\nLoading checkpoint shards: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:03<00:00, 1.88s/it]\r\n\r\nreal\t0m21,615s\r\nuser\t0m19,894s\r\nsys\t0m11,505s\r\n```\r\n\r\nSo, checkpoint loaded and sharded across the 2 GPUs in 21 seconds for a 7B model which is inline with the expected behaviour.\r\n\r\n",
"I tried model \"openlm-research/open_llama_7b_v2\" and this model could be loaded as expected.\r\n\r\nDoes this mean that there is something abnormal with my finetuned model?\r\n\r\nI tried to override `map_location` mentioned above to 'cpu' for all the ranks, and this script could load my model in 300s. But the results of `model.generate` are nonsense.\r\n\r\nBesides, I tried to downgrade transformers to 4.31.0 and deepspeed to 0.11.1, and everything is fine. Model can be loaded in a few minutes and generated texts are normal.",
"hi @Jingru, same problem, did u solve this?",
"> hi @Jingru, same proble, did u solve this?\r\n\r\nI load the old checkpoint without deepspeed and `save_pretrained` again. The new checkpoint can be loaded with zero3 normally.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,702 | 1,707 | 1,707 | NONE | null | ### System Info
pytorch: 2.0.1+cu118
transformers: 4.33.3
deepspeed: 0.12.5
### Who can help?
@ArthurZucker @younesbelkada @pac
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. Run command `torchrun --nnodes 1 --nproc-per-node 8 --rdzv-endpoint=localhost:35000 test.py`
And my script `test.py` is as follows:
```
import deepspeed
from transformers.deepspeed import HfDeepSpeedConfig
from transformers import AutoModelForCausalLM
deepspeed.init_distributed()
ds_config = {
"train_batch_size": 32,
"train_micro_batch_size_per_gpu": 4,
"steps_per_print": 10,
"zero_optimization": {
"stage": 3,
"offload_param": {"device": "cpu"},
"offload_optimizer": {"device": "cpu"},
"stage3_param_persistence_threshold": 10000.0,
"stage3_max_live_parameters": 30000000.0,
"stage3_prefetch_bucket_size": 30000000.0,
"memory_efficient_linear": False,
},
"fp16": {"enabled": True, "loss_scale_window": 100},
"gradient_clipping": 1.0,
"prescale_gradients": False,
"wall_clock_breakdown": False,
"hybrid_engine": {
"enabled": True,
"max_out_tokens": 512,
"inference_tp_size": 1,
"release_inference_cache": False,
"pin_parameters": True,
"tp_gather_partition_size": 8,
},
}
dschf = HfDeepSpeedConfig(ds_config)
model = AutoModelForCausalLM.from_pretrained(
"../llama_actor", from_tf=False, trust_remote_code=False
)
```
In addition, the pretrained model is saved by `transformers==4.31.0`.
2. This command hangs for over 1800s and then fails with an NCCL timeout error.
### Expected behavior
Model is loaded in a few minutes and this command should not hang. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28139/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28139/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28138 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28138/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28138/comments | https://api.github.com/repos/huggingface/transformers/issues/28138/events | https://github.com/huggingface/transformers/pull/28138 | 2,048,585,744 | PR_kwDOCUB6oc5iXa1F | 28,138 | HF_ENDPOINT value affected in hub.py cached_file | {
"login": "fenglui",
"id": 141198,
"node_id": "MDQ6VXNlcjE0MTE5OA==",
"avatar_url": "https://avatars.githubusercontent.com/u/141198?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fenglui",
"html_url": "https://github.com/fenglui",
"followers_url": "https://api.github.com/users/fenglui/followers",
"following_url": "https://api.github.com/users/fenglui/following{/other_user}",
"gists_url": "https://api.github.com/users/fenglui/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fenglui/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fenglui/subscriptions",
"organizations_url": "https://api.github.com/users/fenglui/orgs",
"repos_url": "https://api.github.com/users/fenglui/repos",
"events_url": "https://api.github.com/users/fenglui/events{/privacy}",
"received_events_url": "https://api.github.com/users/fenglui/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@fenglui What is this PR supposed to fix? Can you show a snippet of code that doesn't work as expected? I'm asking before `hf_hub_download` is already supposed to read the `HF_ENDPOINT` environment variable (see [here](https://github.com/huggingface/huggingface_hub/blob/84d1b31901088e8261131f68323c0bee6b2e2f58/src/huggingface_hub/constants.py#L57)).",
"> @fenglui What is this PR supposed to fix? Can you show a snippet of code that doesn't work as expected? I'm asking before `hf_hub_download` is already supposed to read the `HF_ENDPOINT` environment variable (see [here](https://github.com/huggingface/huggingface_hub/blob/84d1b31901088e8261131f68323c0bee6b2e2f58/src/huggingface_hub/constants.py#L57)).\r\n\r\nBecause projects like LLaMA-Factory eg. use cached_file method as a standardalone function to download and load a model\r\n```python\r\nfrom transformers.utils import cached_file\r\n\r\nmapping = cached_file(\r\n path_or_repo_id = os.path.join(self.eval_args.task_dir, self.eval_args.task),\r\n filename=\"mapping.json\",\r\n cache_dir=self.model_args.cache_dir,\r\n **kwargs\r\n)\r\n```\r\n\r\nsee \"https://github.com/hiyouga/LLaMA-Factory/blob/db6cb2d0e78c1a9cab57f5067dd669ffd82ab20f/src/llmtuner/eval/evaluator.py#L13\" \r\n\r\nwhen user start the 3rd project with bash command “HF_ENDPOINT=https://hf-mirror.com python src/train_web.py --flash_attn”\r\nthe user defined var HF_ENDPOINT is still the origin default value.\r\n\r\nReproducing the issue do this:\r\n\r\nmodify transformers/transformers/utils/hub.py\r\nadd the output \r\n\r\n```python\r\nuser_agent = http_user_agent(user_agent)\r\n try:\r\n # Load from URL or cache if already cached\r\n # add lines\r\n print(F\"HUGGINGFACE_CO_RESOLVE_ENDPOINT is {HUGGINGFACE_CO_RESOLVE_ENDPOINT}\")\r\n endpoint = os.environ.get(\"HF_ENDPOINT\", HUGGINGFACE_CO_RESOLVE_ENDPOINT)\r\n print(F\"endpoint is {endpoint }\")\r\n\t\t\r\n resolved_file = hf_hub_download(\r\n```\r\n\r\nlauch the test project use cached_file method\r\n\r\n```bash\r\ngit clone --depth 1 [email protected]:hiyouga/LLaMA-Factory.git\r\ncd LLaMA-Factory\r\nconda create -n llama_factory python=3.10\r\nconda activate llama_factory\r\ncd LLaMA-Factory\r\npip install -r requirements.txt\r\nCUDA_VISIBLE_DEVICES=0 HF_ENDPOINT=https://hf-mirror.com python src/train_web.py\r\n```\r\nand with the ui, load a new model you don't have\r\n\r\nwith this pr, I think can fix that. And now with your newly commit, this issue has gone.\r\n",
"> @fenglui What is this PR supposed to fix? Can you show a snippet of code that doesn't work as expected? I'm asking before `hf_hub_download` is already supposed to read the `HF_ENDPOINT` environment variable (see [here](https://github.com/huggingface/huggingface_hub/blob/84d1b31901088e8261131f68323c0bee6b2e2f58/src/huggingface_hub/constants.py#L57)).\r\n\r\na snippet of code saved as test.py\r\n\r\n```python\r\n \r\nfrom transformers.utils import cached_file\r\n\r\nkwargs = {}\r\n\r\nmapping = cached_file(\r\n path_or_repo_id = \"Qwen/Qwen-1_8B-Chat\",\r\n filename=\"mapping.json\",\r\n cache_dir=\"/root/ML/\",\r\n **kwargs\r\n)\r\n\r\nprint(mapping)\r\n```\r\n\r\nmodify transformers/transformers/utils/hub.py\r\nadd the output \r\n\r\n```python\r\nuser_agent = http_user_agent(user_agent)\r\n try:\r\n # Load from URL or cache if already cached\r\n # add lines\r\n print(F\"HUGGINGFACE_CO_RESOLVE_ENDPOINT is {HUGGINGFACE_CO_RESOLVE_ENDPOINT}\") # add break point \r\n print(F\"endpoint is {endpoint }\")\r\n\t\t\r\n resolved_file = hf_hub_download(\r\n```\r\nbash \"HF_ENDPOINT=https://hf-mirror.com python test.py\" and debug run the test.py\r\n"
] | 1,702 | 1,703 | 1,703 | NONE | null | # What does this PR do?
Use the os.environ.get("HF_ENDPOINT", HUGGINGFACE_CO_RESOLVE_ENDPOINT) value as the endpoint param, so that the HF_ENDPOINT value takes effect when downloading files through the cached_file method.
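A minimal sketch of the intended behaviour (the repo and filename are only examples, and this assumes `hf_hub_download` accepts an `endpoint` argument, which is what this PR passes through):

```python
import os

from huggingface_hub import hf_hub_download
from transformers.utils.hub import HUGGINGFACE_CO_RESOLVE_ENDPOINT

# fall back to the default endpoint when HF_ENDPOINT is not set
endpoint = os.environ.get("HF_ENDPOINT", HUGGINGFACE_CO_RESOLVE_ENDPOINT)
path = hf_hub_download(repo_id="gpt2", filename="config.json", endpoint=endpoint)
print(path)
```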
Fixes # (issue)
## Before submitting
- [ ] os.environ["HF_ENDPOINT"]="https://hf-mirror.com" may not take effect
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28138/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28138/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28138",
"html_url": "https://github.com/huggingface/transformers/pull/28138",
"diff_url": "https://github.com/huggingface/transformers/pull/28138.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28138.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28137 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28137/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28137/comments | https://api.github.com/repos/huggingface/transformers/issues/28137/events | https://github.com/huggingface/transformers/issues/28137 | 2,048,495,611 | I_kwDOCUB6oc56GY_7 | 28,137 | Fail to upload models to hub | {
"login": "minghao-wu",
"id": 17817832,
"node_id": "MDQ6VXNlcjE3ODE3ODMy",
"avatar_url": "https://avatars.githubusercontent.com/u/17817832?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/minghao-wu",
"html_url": "https://github.com/minghao-wu",
"followers_url": "https://api.github.com/users/minghao-wu/followers",
"following_url": "https://api.github.com/users/minghao-wu/following{/other_user}",
"gists_url": "https://api.github.com/users/minghao-wu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/minghao-wu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/minghao-wu/subscriptions",
"organizations_url": "https://api.github.com/users/minghao-wu/orgs",
"repos_url": "https://api.github.com/users/minghao-wu/repos",
"events_url": "https://api.github.com/users/minghao-wu/events{/privacy}",
"received_events_url": "https://api.github.com/users/minghao-wu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @minghao-wu, thanks for raising this issue! \r\n\r\nIs this issue transient or have you seen in multiple times? \r\n\r\ncc @Wauplin the hub master 👑 ",
"Hi @amyeroberts, thanks for your reply.\r\n\r\nit's transient. The provided snippet was working smoothly until a few days ago. This is my first time to see this error.\r\n\r\nBTW, although `api.upload_folder` fails, `.push_to_hub` works on my slurm cluster now.😂",
"Hi @minghao-wu, could you share the README file you are trying to upload when this error is happening?\r\n\r\n`Bad request for commit endpoint: \"model-index[0].results[0].dataset.config\" must be a string` means that the server rejected your commit because the model card metadata is not correct. It's weird that it's a transient error given that the server always checks the model card metadata, no matter how you upload your files. Can it be that the README file is sometimes updated between retries?",
"Hi @Wauplin , \r\n\r\nThis is one of those [README.md](https://github.com/huggingface/transformers/files/13725273/README.md).\r\n\r\nPlease note that, for this README file, I am finetining some in-house model with LoRA. More interestingly, I recently found that the fully fine-tuned models have to be uploaded with `api.upload_folder`, while the lora models have to be uploaded by `model.push_to_hub()`. \r\n",
"Thanks for the quick response @minghao-wu. So in the readme file you've shared you can see\r\n \r\n```yml\r\n(...)\r\n dataset:\r\n name: monashnlp/iwslt2017_en_ar\r\n type: monashnlp/iwslt2017_en_ar\r\n config: null\r\n split: None\r\n(...)\r\n```\r\n\r\nwhich is not valid on the server. Maybe the error looked transient because you did that for different trainings, some of which the config was not `null`? The fix would be to remove the line when config is null in the method that generates the model card. I don't know if this part is owned by you or the library but fixing it should solves your problem.\r\n\r\n> I recently found that the fully fine-tuned models have to be uploaded with api.upload_folder, while the lora models have to be uploaded by model.push_to_hub().\r\n\r\nI'm no expert on how `model.push_to_hub` is implemented but it is most likely using `api.upload_folder` under the hood since it is the high-level method to create a commit on the Hub. For your info, every method to upload files will always end up calling `api.create_commit` under the hood (this is where real things are happening :) ). So if you witness a difference between 2 upload methods, the problem usually lies on which files are generated and committed rather than a difference of upload method.\r\n\r\nHope this makes it clearer for you!\r\n\r\n---\r\n\r\nFor the record, I opened an issue in `huggingface_hub` (the underlying library that makes the upload) to fail early in case of invalid model card: https://github.com/huggingface/huggingface_hub/issues/1927. This should avoid confusion in the future.",
"Hi @Wauplin , \r\n\r\nwell explained!\r\n\r\nThank you very much and I think this issue can be closed.",
"Perfect! Glad to hear that! :hugs: "
] | 1,702 | 1,703 | 1,703 | NONE | null | ### System Info
- `transformers` version: 4.34.1
- Platform: Linux-4.18.0-513.9.1.el8_9.x86_64-x86_64-with-glibc2.28
- Python version: 3.11.5
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.4.0
- Accelerate version: 0.24.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu117 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I was using the following snippet to push my models to the hub (I cannot successfully push my models using `.push_to_hub()` on my slurm cluster).
```
import huggingface_hub
huggingface_hub.login(token="XXX")
model_name = os.path.basename(os.path.dirname(args.ckpt))
repo_id = f"minghaowu/"+model_name
print("uploading to", repo_id)
api = huggingface_hub.HfApi()
api.create_repo(
repo_id=repo_id,
repo_type="model",
private=True,
exist_ok=True,
)
api.upload_folder(
folder_path=args.ckpt,
repo_id=repo_id,
repo_type="model",
)
```
### Expected behavior
The provided code snippet has been working smoothly for a few days, but today I got the error message as follows:
```
Traceback (most recent call last):
File "/home/minghaow/.conda/envs/upload/lib/python3.11/site-packages/huggingface_hub/utils/_errors.py", line 261, in hf_raise_for_statusenizer.json: 94%|████████████████████████████████████████████████████████████████████████▌ | 13.7M/14.5M [00:01<00:00, 11.3MB/s]
response.raise_for_status()██████████ | 1/5 [00:06<00:24, 6.19s/it]
File "/home/minghaow/.conda/envs/upload/lib/python3.11/site-packages/requests/models.py", line 1021, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://huggingface.co/api/models/minghaowu/docnmt-bloom-7b-lora-p4-en-fr/commit/main
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/minghaow/docnmtllm-project/docnmtllm/train_para/upload_model.py", line 44, in <module>
api.upload_folder(
File "/home/minghaow/.conda/envs/upload/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/minghaow/.conda/envs/upload/lib/python3.11/site-packages/huggingface_hub/hf_api.py", line 849, in _inner
return fn(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/minghaow/.conda/envs/upload/lib/python3.11/site-packages/huggingface_hub/hf_api.py", line 3748, in upload_folder
commit_info = self.create_commit(
^^^^^^^^^^^^^^^^^^^
File "/home/minghaow/.conda/envs/upload/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/minghaow/.conda/envs/upload/lib/python3.11/site-packages/huggingface_hub/hf_api.py", line 849, in _inner
return fn(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/minghaow/.conda/envs/upload/lib/python3.11/site-packages/huggingface_hub/hf_api.py", line 2967, in create_commit
hf_raise_for_status(commit_resp, endpoint_name="commit")
File "/home/minghaow/.conda/envs/upload/lib/python3.11/site-packages/huggingface_hub/utils/_errors.py", line 299, in hf_raise_for_status
raise BadRequestError(message, response=response) from e
huggingface_hub.utils._errors.BadRequestError: (Request ID: Root=1-65817fb9-7af65e20605305f129b7ad48;ddc7d2fa-2111-4a83-b540-25eda4ca6e86)
Bad request for commit endpoint:
"model-index[0].results[0].dataset.config" must be a string
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28137/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28137/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28136 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28136/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28136/comments | https://api.github.com/repos/huggingface/transformers/issues/28136/events | https://github.com/huggingface/transformers/pull/28136 | 2,048,467,063 | PR_kwDOCUB6oc5iXAnM | 28,136 | [Whisper] Make tokenizer normalization public | {
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28136). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,702 | 1,706 | 1,706 | CONTRIBUTOR | null | # What does this PR do?
Using the Whisper English normalizer is common practice when evaluating Whisper models on English ASR. Here, we have to normalize the predictions, e.g. using the argument `normalize=True` to the tokenizer `.decode` method:
https://github.com/huggingface/transformers/blob/5aec50ecaf9c1c039cde85881f0586110f845859/src/transformers/models/whisper/tokenization_whisper.py#L633
However, we also have to normalize the reference, which is most easily done by calling the **private** method `_normalize`: https://github.com/huggingface/transformers/blob/5aec50ecaf9c1c039cde85881f0586110f845859/src/transformers/models/whisper/tokenization_whisper.py#L509
This PR updates the tokenizer to use a **public** method for the second normalization step, the recommended design for exposed methods. Note that I have chosen here to deprecate the existing private method `_normalize`, rather than removing it blindly, since I anticipate that it has been accessed by some users already and want to prevent a hard breaking change. Happy to remove it in one go if we feel it's ok removing a private method. | {
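A short usage sketch of what this enables (the public method name follows this PR's intent and may differ slightly in the final API):

```python
from transformers import WhisperTokenizer

tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-tiny.en")

# prediction side: normalization applied while decoding
pred_ids = tokenizer("Mr. Quilter is the apostle of the middle classes!").input_ids
prediction = tokenizer.decode(pred_ids, skip_special_tokens=True, normalize=True)

# reference side: the same English normalizer, now exposed publicly
reference = tokenizer.normalize("Mr Quilter is the apostle of the middle classes.")
```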
"url": "https://api.github.com/repos/huggingface/transformers/issues/28136/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28136/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28136",
"html_url": "https://github.com/huggingface/transformers/pull/28136",
"diff_url": "https://github.com/huggingface/transformers/pull/28136.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28136.patch",
"merged_at": 1706544455000
} |
https://api.github.com/repos/huggingface/transformers/issues/28135 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28135/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28135/comments | https://api.github.com/repos/huggingface/transformers/issues/28135/events | https://github.com/huggingface/transformers/pull/28135 | 2,048,452,546 | PR_kwDOCUB6oc5iW9XH | 28,135 | Update split string in doctest to reflect #28087 | {
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28135). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,702 | 1,702 | 1,702 | COLLABORATOR | null | # What does this PR do?
Resolves current failing test `tests/utils/test_doc_samples.py::TestDocLists::test_sdpa_support_list` on main because the string used to split the doc string wasn't updated in line with #28087
cc @stevhliu | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28135/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28135/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28135",
"html_url": "https://github.com/huggingface/transformers/pull/28135",
"diff_url": "https://github.com/huggingface/transformers/pull/28135.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28135.patch",
"merged_at": 1702994109000
} |
https://api.github.com/repos/huggingface/transformers/issues/28134 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28134/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28134/comments | https://api.github.com/repos/huggingface/transformers/issues/28134/events | https://github.com/huggingface/transformers/issues/28134 | 2,048,244,765 | I_kwDOCUB6oc56Fbwd | 28,134 | Different intermediate results given different number of epochs | {
"login": "DolevAdas",
"id": 33514523,
"node_id": "MDQ6VXNlcjMzNTE0NTIz",
"avatar_url": "https://avatars.githubusercontent.com/u/33514523?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DolevAdas",
"html_url": "https://github.com/DolevAdas",
"followers_url": "https://api.github.com/users/DolevAdas/followers",
"following_url": "https://api.github.com/users/DolevAdas/following{/other_user}",
"gists_url": "https://api.github.com/users/DolevAdas/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DolevAdas/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DolevAdas/subscriptions",
"organizations_url": "https://api.github.com/users/DolevAdas/orgs",
"repos_url": "https://api.github.com/users/DolevAdas/repos",
"events_url": "https://api.github.com/users/DolevAdas/events{/privacy}",
"received_events_url": "https://api.github.com/users/DolevAdas/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @DolevAdas, thanks for raising an issue! \r\n\r\nThis is a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.\r\n\r\nFor the difference between 5 and 15 epochs, you will see that the learning rate at each step is different. This will be due to the learning rate scheduler.\r\n\r\nFor the difference between the same number of epochs and different seeds, this is likely due to randomness between runs. I would suggest running with the same seed to see if you observe the same loss values at each step, and then running with a few other seeds to see how different the loss values are across runs. ",
"Thank you ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,702 | 1,706 | 1,706 | NONE | null | ### System Info
- `transformers` version: 4.31.0
- Platform: Linux-4.18.0-477.15.1.el8_8.x86_64-x86_64-with-glibc2.28
- Python version: 3.9.6
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- Accelerate version: 0.21.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu117 (False)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?:no
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
We are using the Hugging Face API to fine-tune a pretrained model (BertForSequenceClassification).
We see differences in the first five epochs between the 5-epoch and 15-epoch runs and do not understand why they would not be (nearly) identical, given that only the number of epochs differs between those runs (the seed and all other parameters are the same).
**For example:**
### Seed 7
**5 epochs :**
,loss,learning_rate,epoch,step
0,**24.6558**,4.955555555555556e-05,0.04,500,,,,,,,,,
1,19.9439,4.9111111111111114e-05,0.09,1000,,,,,,,,,
2,19.2654,4.866666666666667e-05,0.13,1500,,,,,,,,,
3,20.4078,4.8222222222222225e-05,0.18,2000,,,,,,,,,
4,20.3372,4.7777777777777784e-05,0.22,2500,,,,,,,,,
5,20.0602,4.7333333333333336e-05,0.27,3000,,,,,,,,,
6,19.6761,4.6888888888888895e-05,0.31,3500,,,,,,,,,
7,20.193,4.644444444444445e-05,0.36,4000,,,,,,,,,
8,19.1265,4.600000000000001e-05,0.4,4500,,,,,,,,,
9,19.1949,4.555555555555556e-05,0.44,5000,,,,,,,,,
10,19.5078,4.511111111111112e-05,0.49,5500,,,,,,,,,
11,20.7165,4.466666666666667e-05,0.53,6000,,,,,,,,,
12,20.1907,4.422222222222222e-05,0.58,6500,,,,,,,,,
13,19.6967,4.377777777777778e-05,0.62,7000,,,,,,,,,
14,19.6693,4.3333333333333334e-05,0.67,7500,,,,,,,,,
15,20.011,4.2888888888888886e-05,0.71,8000,,,,,,,,,
16,19.516,4.2444444444444445e-05,0.76,8500,,,,,,,,,
17,18.9949,4.2e-05,0.8,9000,,,,,,,,,
**15 epochs:**
,loss,learning_rate,epoch,step
0,**18.9326**,4.9851851851851855e-05,0.04,500,,,,,,,,,
1,5.6773,4.970370370370371e-05,0.09,1000,,,,,,,,,
2,4.6515,4.955555555555556e-05,0.13,1500,,,,,,,,,
3,4.2881,4.940740740740741e-05,0.18,2000,,,,,,,,,
4,3.641,4.925925925925926e-05,0.22,2500,,,,,,,,,
5,3.2491,4.9111111111111114e-05,0.27,3000,,,,,,,,,
6,3.012,4.896296296296297e-05,0.31,3500,,,,,,,,,
7,2.8161,4.881481481481482e-05,0.36,4000,,,,,,,,,
8,2.7497,4.866666666666667e-05,0.4,4500,,,,,,,,,
9,2.6776,4.851851851851852e-05,0.44,5000,,,,,,,,,
10,2.5254,4.837037037037037e-05,0.49,5500,,,,,,,,,
11,2.6059,4.8222222222222225e-05,0.53,6000,,,,,,,,,
12,2.5966,4.807407407407408e-05,0.58,6500,,,,,,,,,
13,2.2252,4.792592592592593e-05,0.62,7000,,,,,,,,,
14,2.3321,4.7777777777777784e-05,0.67,7500,,,,,,,,,
15,2.23,4.762962962962963e-05,0.71,8000,,,,,,,,,
16,2.3754,4.7481481481481483e-05,0.76,8500,,,,,,,,,
### Seed 0:
**5 epochs:**
,loss,learning_rate,epoch,step
0,**17.7629**,4.955555555555556e-05,0.04,500,,,,,,,,,
1,5.6264,4.9111111111111114e-05,0.09,1000,,,,,,,,,
2,4.9429,4.866666666666667e-05,0.13,1500,,,,,,,,,
3,4.5756,4.8222222222222225e-05,0.18,2000,,,,,,,,,
4,4.4063,4.7777777777777784e-05,0.22,2500,,,,,,,,,
5,3.9688,4.7333333333333336e-05,0.27,3000,,,,,,,,,
6,3.6656,4.6888888888888895e-05,0.31,3500,,,,,,,,,
7,3.6779,4.644444444444445e-05,0.36,4000,,,,,,,,,
8,3.2495,4.600000000000001e-05,0.4,4500,,,,,,,,,
9,3.2306,4.555555555555556e-05,0.44,5000,,,,,,,,,
10,3.1333,4.511111111111112e-05,0.49,5500,,,,,,,,,
11,2.7543,4.466666666666667e-05,0.53,6000,,,,,,,,,
12,3.1086,4.422222222222222e-05,0.58,6500,,,,,,,,,
13,3.0666,4.377777777777778e-05,0.62,7000,,,,,,,,,
14,3.156,4.3333333333333334e-05,0.67,7500,,,,,,,,,
15,2.5553,4.2888888888888886e-05,0.71,8000,,,,,,,,,
16,2.7727,4.2444444444444445e-05,0.76,8500,,,,,,,,,
17,2.651,4.2e-05,0.8,9000,,,,,,,,,
**15 epochs:**
,loss,learning_rate,epoch,step
0,**14.8927**,4.9851851851851855e-05,0.04,500,,,,,,,,,
1,5.4558,4.970370370370371e-05,0.09,1000,,,,,,,,,
2,4.065,4.955555555555556e-05,0.13,1500,,,,,,,,,
3,3.8751,4.940740740740741e-05,0.18,2000,,,,,,,,,
4,3.4581,4.925925925925926e-05,0.22,2500,,,,,,,,,
5,3.1641,4.9111111111111114e-05,0.27,3000,,,,,,,,,
6,2.8896,4.896296296296297e-05,0.31,3500,,,,,,,,,
7,2.8967,4.881481481481482e-05,0.36,4000,,,,,,,,,
8,2.5912,4.866666666666667e-05,0.4,4500,,,,,,,,,
9,2.5563,4.851851851851852e-05,0.44,5000,,,,,,,,,
10,2.482,4.837037037037037e-05,0.49,5500,,,,,,,,,
11,2.1695,4.8222222222222225e-05,0.53,6000,,,,,,,,,
12,2.447,4.807407407407408e-05,0.58,6500,,,,,,,,,
13,2.4438,4.792592592592593e-05,0.62,7000,,,,,,,,,
14,2.2014,4.7777777777777784e-05,0.67,7500,,,,,,,,,
15,2.2,4.762962962962963e-05,0.71,8000,,,,,,,,,
The only difference in the experiments is the number of epochs.
We also saved the train and validation splits to disk and read them from there, to make sure we are reading the data in the same order.
**My environment**: python 3.9.6, cuda 12.2.0, pytorch 2.0.1
**Here is part of my code:**
```python
import os
import random

import datasets
import numpy as np
import torch
import torch.nn as nn
from transformers import (AutoTokenizer, DataCollatorWithPadding, TrainingArguments,
                          BertForSequenceClassification, Trainer, AutoConfig, set_seed)

# cseed, checkpoint, max_token_len, out_path, epochs_num, tokenized_datasets_path,
# data_collator and CustomTrainer are defined elsewhere in our script.
random.seed(cseed)
np.random.seed(cseed)
torch.manual_seed(cseed)
torch.cuda.manual_seed_all(cseed)
os.environ['CUBLAS_WORKSPACE_CONFIG'] = ":16:8"

tokenizer = AutoTokenizer.from_pretrained(checkpoint, model_max_length=max_token_len)
training_args = TrainingArguments(
    out_path,
    save_total_limit=10,
    # load_best_model_at_end=True,
    report_to=None,
    evaluation_strategy="steps",
    eval_steps=11250,
    do_eval=True,
    num_train_epochs=epochs_num,
    seed=cseed,
)
set_seed(cseed)

train_data_from_disk = datasets.Dataset.load_from_disk(tokenized_datasets_path + "/train", keep_in_memory=True)
validation_data_from_disk = datasets.Dataset.load_from_disk(tokenized_datasets_path + "/validation", keep_in_memory=True)
model = BertForSequenceClassification.from_pretrained(checkpoint, num_labels=1)
loss_fn = nn.MSELoss()
trainer = CustomTrainer(
    model,
    training_args,
    train_dataset=train_data_from_disk,
    eval_dataset=validation_data_from_disk,
    data_collator=data_collator,
    tokenizer=tokenizer,
)
training_results = trainer.train()
```
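To make the comparison between runs concrete, here is a minimal, self-contained determinism check. It is only a sketch: the model name, seed and dummy batch are placeholders rather than our real data, and `enable_full_determinism` is the library helper that also requests deterministic kernels.

```python
# Minimal sanity check: the same seed should give exactly the same first-step loss,
# regardless of how many epochs the full run is configured for. Placeholder model/data.
import torch
from transformers import AutoTokenizer, BertForSequenceClassification
from transformers.trainer_utils import enable_full_determinism

def first_step_loss(seed: int) -> float:
    enable_full_determinism(seed)  # seeds python/numpy/torch and requests deterministic kernels
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=1)
    batch = tokenizer(["a tiny determinism check"], return_tensors="pt")
    batch["labels"] = torch.tensor([[0.5]])  # num_labels=1 -> regression / MSE loss
    return model(**batch).loss.item()

print(first_step_loss(7), first_step_loss(7))  # expected to be identical
```

If the two printed values match but the full runs still diverge at step 500, the difference is likely coming from the learning-rate schedule (which depends on the total number of epochs) rather than from data order or initialization.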
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28134/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28134/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28133 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28133/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28133/comments | https://api.github.com/repos/huggingface/transformers/issues/28133/events | https://github.com/huggingface/transformers/pull/28133 | 2,048,116,832 | PR_kwDOCUB6oc5iVz5g | 28,133 | [`Mixtral` & `Mistral`] Add support for sdpa | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28133). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"> Thanks ! I don\"t see why sliding window attention shouldn't be supported with SDPA because the only difference vs the eager attention implementation is on the attention mask. Passing arbitrary attention masks in SDPA should be supported without any problem IMO\r\n\r\nI have the same problem here, why sdpa not support window attention? Is there any problems not been solved? @ArthurZucker ",
"@ehuaa the way the window attention is implemented in Mistral original code base is by changing the attention mask to a \"more custom\" attention mask to not attend to tokens that are before `sliding_windows`. Check out more by looking into the details of this method: https://github.com/huggingface/transformers/blob/d90acc16437e8c9e45e068fa1cc1a263b9a7208f/src/transformers/modeling_attn_mask_utils.py#L145\r\nThe point that I tried to convey is that passing that attention mask is supported I think in SDPA so you can implicitly get SDPA + sliding window attention by just passing that correct attention mask. Let me know if this makes sense to you!",
"> @ehuaa the way the window attention is implemented in Mistral original code base is by changing the attention mask to a \"more custom\" attention mask to not attend to tokens that are before `sliding_windows`. Check out more by looking into the details of this method:\r\n> \r\n> https://github.com/huggingface/transformers/blob/d90acc16437e8c9e45e068fa1cc1a263b9a7208f/src/transformers/modeling_attn_mask_utils.py#L145\r\n> \r\n> \r\n> The point that I tried to convey is that passing that attention mask is supported I think in SDPA so you can implicitly get SDPA + sliding window attention by just passing that correct attention mask. Let me know if this makes sense to you!\r\n\r\n@younesbelkada Thank you for your quick reply! Your solution above can pass a custom mask to sdpa, and i think this way is the same as passing sliding_window param to this function.\r\nhttps://github.com/huggingface/transformers/blob/main/src/transformers/models/mistral/modeling_mistral.py#L1006-L1023\r\n\r\n"
] | 1,702 | 1,707 | 1,703 | COLLABORATOR | null | # What does this PR do?
Adds the SDPA attention for both classes cc @younesbelkada for visibility 😉 Will help for fast LLava | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28133/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 2,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28133/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28133",
"html_url": "https://github.com/huggingface/transformers/pull/28133",
"diff_url": "https://github.com/huggingface/transformers/pull/28133.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28133.patch",
"merged_at": 1703158702000
} |
https://api.github.com/repos/huggingface/transformers/issues/28132 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28132/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28132/comments | https://api.github.com/repos/huggingface/transformers/issues/28132/events | https://github.com/huggingface/transformers/pull/28132 | 2,048,108,896 | PR_kwDOCUB6oc5iVyNU | 28,132 | [`Refactor Attention mask handling`] Moves attention mask processing to the Attention class | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"We added support for 4d attention mask inside the converter so should be alright but yeah will check related issues! ",
"Can you specify how this helps with the static cache?\r\n\r\nThe static cache should also work with the attention_mask being passed at every forward call (it'll always have the same shape). I don't think it's a good idea to have the `attention_mask` be a class variable. ",
"It will not be a class variable forgot to update but I'll follow what we do with jax.\r\nThis will help as the cache length is different from the number of tokens that are seen which you get when you are in the attention layer. ",
"Can give more details but basically new cache + attention was not behaving properly. This is gonna be my priority this week anyway! ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,702 | 1,707 | 1,707 | COLLABORATOR | null | # What does this PR do?
This is more aligned with our philosophy, and it also simplifies the code now and will simplify future work.
It will help a lot with the static cache.
The only way to share the mask is to call `LlamaAttention`, but if you have a better way I'll update it!
This makes the attention class self-contained, which is also pretty convenient for testing.
I ran the slow tests without FA2 and will run them again on a DGX once approved.
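To illustrate what "self-contained" means here, a purely illustrative sketch (not the actual diff in this PR) of an attention module that builds the mask it needs from the shapes it already sees, instead of receiving a pre-expanded mask from the base model:

```python
# Illustrative only, not the code in this PR: an attention block that owns its mask logic.
import torch
from torch import nn
from torch.nn.functional import scaled_dot_product_attention

class SelfContainedAttention(nn.Module):
    def __init__(self, hidden_size: int, num_heads: int):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = hidden_size // num_heads
        self.qkv = nn.Linear(hidden_size, 3 * hidden_size)
        self.o_proj = nn.Linear(hidden_size, hidden_size)

    def forward(self, hidden_states, padding_mask=None):
        bsz, q_len, _ = hidden_states.shape
        q, k, v = self.qkv(hidden_states).chunk(3, dim=-1)
        q, k, v = (t.view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2) for t in (q, k, v))
        # The causal mask is built *inside* the attention class from the shapes it already knows,
        # instead of being expanded once in the base model and threaded through every layer.
        causal = torch.tril(torch.ones(q_len, q_len, device=hidden_states.device)).bool()
        mask = causal[None, None, :, :]
        if padding_mask is not None:  # (bsz, q_len), 1 for real tokens
            mask = mask & padding_mask[:, None, None, :].bool()
        out = scaled_dot_product_attention(q, k, v, attn_mask=mask)
        return self.o_proj(out.transpose(1, 2).reshape(bsz, q_len, -1))
```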
cc @patrickvonplaten for visibility | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28132/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28132/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28132",
"html_url": "https://github.com/huggingface/transformers/pull/28132",
"diff_url": "https://github.com/huggingface/transformers/pull/28132.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28132.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28131 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28131/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28131/comments | https://api.github.com/repos/huggingface/transformers/issues/28131/events | https://github.com/huggingface/transformers/pull/28131 | 2,048,089,805 | PR_kwDOCUB6oc5iVuEK | 28,131 | [`Sdpa / Flash`] save the attention not a bool | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Actually #28132 will remove it entirely"
] | 1,702 | 1,702 | 1,702 | COLLABORATOR | null | # What does this PR do?
Just a small cleanup that shall be propagated | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28131/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28131/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28131",
"html_url": "https://github.com/huggingface/transformers/pull/28131",
"diff_url": "https://github.com/huggingface/transformers/pull/28131.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28131.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28130 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28130/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28130/comments | https://api.github.com/repos/huggingface/transformers/issues/28130/events | https://github.com/huggingface/transformers/issues/28130 | 2,047,968,862 | I_kwDOCUB6oc56EYZe | 28,130 | Mistral flash attention 2 is not work, training speed is equal to the original way which not use flash attn | {
"login": "FangxuLiu",
"id": 22525254,
"node_id": "MDQ6VXNlcjIyNTI1MjU0",
"avatar_url": "https://avatars.githubusercontent.com/u/22525254?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FangxuLiu",
"html_url": "https://github.com/FangxuLiu",
"followers_url": "https://api.github.com/users/FangxuLiu/followers",
"following_url": "https://api.github.com/users/FangxuLiu/following{/other_user}",
"gists_url": "https://api.github.com/users/FangxuLiu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FangxuLiu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FangxuLiu/subscriptions",
"organizations_url": "https://api.github.com/users/FangxuLiu/orgs",
"repos_url": "https://api.github.com/users/FangxuLiu/repos",
"events_url": "https://api.github.com/users/FangxuLiu/events{/privacy}",
"received_events_url": "https://api.github.com/users/FangxuLiu/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Hi @FangxuLiu \r\nHmm this is strange, FA-2 should lead to some speedup https://huggingface.co/docs/transformers/model_doc/mistral#expected-speedups\r\nCan you try on a larger bs / seq_len ? also what is the hardware you are using ? Are you sure you don't train with padding tokens? \r\nIf you use TRL library you can make sure to pack your input sentences using `packing=True` https://huggingface.co/docs/trl/sft_trainer#packing-dataset--constantlengthdataset-",
"With the same error. @FangxuLiu , have you solved this problem? ",
"Hi @FangxuLiu could you share your code\r\n",
"with the same problem",
"sam problem, any feedback,transformer 3.6, seq lenght 800, 4 a100, deepspeed zero 2, had no any speedup\r\ninstall command, pip install -U flash-attn",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,702 | 1,708 | null | NONE | null | ### System Info
transformers==4.36.2
torch==2.0
```python
model = transformers.AutoModelForCausalLM.from_pretrained(script_args.model_path, trust_remote_code=True, use_cache=False, attn_implementation="flash_attention_2", torch_dtype="auto")
```
I am pretraining a Mistral model with DeepSpeed ZeRO-2. When I use flash attention 2, the training speed does not improve.
Part of the log is shown below:
> You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.

So I want to know what I should do. @ArthurZucker @younesbelkada
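To check whether flash attention 2 changes anything at all outside of the DeepSpeed setup, I can run a small single-GPU timing comparison. This is only a sketch: the sequence length, batch size and iteration count are placeholders, and it reuses the local model path from my setup below.

```python
# Rough single-GPU forward/backward timing, eager vs flash_attention_2 (placeholder shapes).
import time
import torch
from transformers import AutoModelForCausalLM

MODEL_PATH = "/mnt/bn/ecom-nas-lfx/mrgt/models/Mistral-7B-v0.1"

def time_forward_backward(attn_implementation: str, seq_len: int = 2048, bsz: int = 1) -> float:
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_PATH,
        torch_dtype=torch.bfloat16,          # FA2 requires fp16/bf16
        attn_implementation=attn_implementation,
    ).to("cuda")                             # move to GPU after loading, as the warning suggests
    input_ids = torch.randint(0, model.config.vocab_size, (bsz, seq_len), device="cuda")
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(5):
        model(input_ids=input_ids, labels=input_ids).loss.backward()
        model.zero_grad(set_to_none=True)
    torch.cuda.synchronize()
    elapsed = (time.perf_counter() - start) / 5
    del model, input_ids
    torch.cuda.empty_cache()
    return elapsed

print("eager:", time_forward_backward("eager"))
print("flash_attention_2:", time_forward_backward("flash_attention_2"))
```

If this shows a clear gap but the DeepSpeed run does not, the bottleneck is probably elsewhere (data loading, communication, or padding-heavy batches), as suggested in the comments above.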
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
model = transformers.AutoModelForCausalLM.from_pretrained(script_args.model_path, trust_remote_code=True, use_cache=False, attn_implementation="flash_attention_2", torch_dtype="auto")
```

```bash
torchrun \
    --nnode 1 \
    --master_port 10000 \
    --nproc_per_node 4 \
    training/train_instruction.py \
    --model_path /mnt/bn/ecom-nas-lfx/mrgt/models/Mistral-7B-v0.1 \
    --train_data /mnt/bn/ecom-nas-lfx/mrgt/data/v12_1/v2code_train.jsonl \
    --output_dir /mnt/bn/ecom-nas-lfx/mrgt/models/mistral-v12-base-4gpu-flash-test \
    --max_length 2048 \
    --evaluation_strategy no \
    --per_device_train_batch_size 1 \
    --gradient_accumulation_steps 1 \
    --learning_rate 1e-5 \
    --weight_decay 0.1 \
    --optim adamw_torch \
    --num_train_epochs 2 \
    --max_steps -1 \
    --lr_scheduler_type cosine \
    --warmup_steps 100 \
    --logging_strategy steps \
    --logging_steps 1 \
    --save_strategy steps \
    --save_steps 2000 \
    --save_total_limit 1 \
    --seed 42 \
    --bf16 True \
    --report_to none \
    --deepspeed config/zero2.json
```
### Expected behavior
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28130/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28130/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28129 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28129/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28129/comments | https://api.github.com/repos/huggingface/transformers/issues/28129/events | https://github.com/huggingface/transformers/issues/28129 | 2,047,946,036 | I_kwDOCUB6oc56ES00 | 28,129 | LayerDrop support | {
"login": "EthanBnntt",
"id": 95309712,
"node_id": "U_kgDOBa5PkA",
"avatar_url": "https://avatars.githubusercontent.com/u/95309712?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/EthanBnntt",
"html_url": "https://github.com/EthanBnntt",
"followers_url": "https://api.github.com/users/EthanBnntt/followers",
"following_url": "https://api.github.com/users/EthanBnntt/following{/other_user}",
"gists_url": "https://api.github.com/users/EthanBnntt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/EthanBnntt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/EthanBnntt/subscriptions",
"organizations_url": "https://api.github.com/users/EthanBnntt/orgs",
"repos_url": "https://api.github.com/users/EthanBnntt/repos",
"events_url": "https://api.github.com/users/EthanBnntt/events{/privacy}",
"received_events_url": "https://api.github.com/users/EthanBnntt/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,702 | 1,702 | 1,702 | NONE | null | ### Feature request
Add support for LayerDrop in Transformers.
### Motivation
LayerDrop allows for faster training, regularization, and superior pruning after training.
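For reference, here is a minimal sketch of the technique from the LayerDrop paper (Fan et al., 2019). It is purely illustrative, not a proposed Transformers API; if I recall correctly, a few existing models (e.g. BART and Wav2Vec2) already expose a per-model `layerdrop` option, and the idea here is to make this available more generally.

```python
# Illustrative LayerDrop: during training, each layer is skipped with probability `layerdrop`,
# which regularizes the model and makes it possible to prune layers at inference time.
import torch
from torch import nn

class LayerDropStack(nn.Module):
    def __init__(self, layers: nn.ModuleList, layerdrop: float = 0.1):
        super().__init__()
        self.layers = layers
        self.layerdrop = layerdrop

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        for layer in self.layers:
            if self.training and torch.rand(1).item() < self.layerdrop:
                continue  # drop the whole layer; the residual path keeps hidden_states as-is
            hidden_states = layer(hidden_states)
        return hidden_states
```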
### Your contribution
This is a feature I will work on implementing. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28129/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28129/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28128 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28128/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28128/comments | https://api.github.com/repos/huggingface/transformers/issues/28128/events | https://github.com/huggingface/transformers/pull/28128 | 2,047,881,724 | PR_kwDOCUB6oc5iVCKI | 28,128 | bug fix: fix vocab_size being 0 for deepspeed zero3 | {
"login": "circlecrystal",
"id": 5665980,
"node_id": "MDQ6VXNlcjU2NjU5ODA=",
"avatar_url": "https://avatars.githubusercontent.com/u/5665980?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/circlecrystal",
"html_url": "https://github.com/circlecrystal",
"followers_url": "https://api.github.com/users/circlecrystal/followers",
"following_url": "https://api.github.com/users/circlecrystal/following{/other_user}",
"gists_url": "https://api.github.com/users/circlecrystal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/circlecrystal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/circlecrystal/subscriptions",
"organizations_url": "https://api.github.com/users/circlecrystal/orgs",
"repos_url": "https://api.github.com/users/circlecrystal/repos",
"events_url": "https://api.github.com/users/circlecrystal/events{/privacy}",
"received_events_url": "https://api.github.com/users/circlecrystal/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> Hello, please give a minimal reproducer of the issue first that this PR is meant to resolve.\r\n\r\nSorry but I encountered a bug related to these lines of codes in my company project. Since the code is proprietary, I cannot share it. Despite hoping to build a minimal example, I cannot find a simple way to achieve it. I encountered this error when running loss computation at this line:\r\nhttps://github.com/huggingface/transformers/blob/4edffda636fb2bf673282b31163e598b5872994e/src/transformers/models/llama/modeling_llama.py#L1209\r\nWhen using DeepSpeed Zero-3, without the provided fix, this line will report error (because self.config.vocab_size will be zero without the fix).",
"The reported error is not related to the provided fix. I think it's a mis-report.\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,702 | 1,706 | 1,706 | NONE | null | # What does this PR do?
This PR fixes the error encountered during model training with DeepSpeed Zero-3.
@pacman100 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28128/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28128/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28128",
"html_url": "https://github.com/huggingface/transformers/pull/28128",
"diff_url": "https://github.com/huggingface/transformers/pull/28128.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28128.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28127 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28127/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28127/comments | https://api.github.com/repos/huggingface/transformers/issues/28127/events | https://github.com/huggingface/transformers/pull/28127 | 2,047,754,831 | PR_kwDOCUB6oc5iUoUC | 28,127 | Update modeling_utils.py | {
"login": "mzelling",
"id": 36188891,
"node_id": "MDQ6VXNlcjM2MTg4ODkx",
"avatar_url": "https://avatars.githubusercontent.com/u/36188891?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mzelling",
"html_url": "https://github.com/mzelling",
"followers_url": "https://api.github.com/users/mzelling/followers",
"following_url": "https://api.github.com/users/mzelling/following{/other_user}",
"gists_url": "https://api.github.com/users/mzelling/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mzelling/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mzelling/subscriptions",
"organizations_url": "https://api.github.com/users/mzelling/orgs",
"repos_url": "https://api.github.com/users/mzelling/repos",
"events_url": "https://api.github.com/users/mzelling/events{/privacy}",
"received_events_url": "https://api.github.com/users/mzelling/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,702 | 1,703 | 1,703 | CONTRIBUTOR | null | In the docstring for PreTrainedModel.resize_token_embeddings, correct the definition of the new_num_tokens parameter to read "the new number of tokens" (meaning the new size of the vocab) rather than "the number of new tokens" (meaning the number of newly added tokens only). This is in agreement with what the code does (see source and docstring of function PreTrainedModel._get_resized_embeddings).
@stevhliu @MKhalusova
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28127/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28127/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28127",
"html_url": "https://github.com/huggingface/transformers/pull/28127",
"diff_url": "https://github.com/huggingface/transformers/pull/28127.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28127.patch",
"merged_at": 1703005678000
} |
https://api.github.com/repos/huggingface/transformers/issues/28126 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28126/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28126/comments | https://api.github.com/repos/huggingface/transformers/issues/28126/events | https://github.com/huggingface/transformers/pull/28126 | 2,047,716,623 | PR_kwDOCUB6oc5iUgXB | 28,126 | [gpt-neox] Add attention_bias config to support model trained without attention biases | {
"login": "dalgarak",
"id": 20063100,
"node_id": "MDQ6VXNlcjIwMDYzMTAw",
"avatar_url": "https://avatars.githubusercontent.com/u/20063100?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dalgarak",
"html_url": "https://github.com/dalgarak",
"followers_url": "https://api.github.com/users/dalgarak/followers",
"following_url": "https://api.github.com/users/dalgarak/following{/other_user}",
"gists_url": "https://api.github.com/users/dalgarak/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dalgarak/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dalgarak/subscriptions",
"organizations_url": "https://api.github.com/users/dalgarak/orgs",
"repos_url": "https://api.github.com/users/dalgarak/repos",
"events_url": "https://api.github.com/users/dalgarak/events{/privacy}",
"received_events_url": "https://api.github.com/users/dalgarak/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @ArthurZucker @younesbelkada ",
"Best of my knowledge, there are many(and almost) gpt-neox models are on the hub that uses attention bias:\r\n * pythia scailing suite family, e.g. EleutherAI/pythia-70m (https://huggingface.co/EleutherAI/pythia-70m),\r\n * polyglot korean LMs, e.g. EleutherAI/polyglot-ko-1.3b (https://huggingface.co/EleutherAI/polyglot-ko-1.3b)\r\n * and gpt-neox models, e.g. EleutherAI/gpt-neox-20b\r\n\r\nand not public released yet(because training is undergoing)- but we want to release a model to hub, which trained GPT-NeoX architecture **without** attention bias. \r\n\r\nTo check attention_bias=False configure, we upload a 'intermediate checkpoint' of model of ours to the hub; see https://huggingface.co/dalgarak/gnx_6.7b_no_attn_bias_test \r\n(NOTE: public, gated user access but will approve automatically. it will available until this-or related PRs- closed.)\r\n\r\n",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28126). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"oh, I found a mistake in argument documentation, fixed it now. Thanks!"
] | 1,702 | 1,703 | 1,703 | CONTRIBUTOR | null | # What does this PR do?
This PR adds an attention_bias configuration option to GPT-NeoX models. Currently released models all use a bias by default for the linear layers in the attention block, but the GPT-NeoX library allows us to train models without attention bias (they can be trained with use_bias_in_attn_linear=False).
For compatibility with existing models, we set the default value of attention_bias to True. I've done some testing and verified the behavior with attn_implementation="flash_attention_2".
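A short usage sketch of the new option (the repository id below is just an example of an existing GPT-NeoX checkpoint):

```python
# With this PR, a GPT-NeoX model trained with use_bias_in_attn_linear=False can be configured as:
from transformers import GPTNeoXConfig, GPTNeoXForCausalLM

config = GPTNeoXConfig.from_pretrained("EleutherAI/pythia-70m", attention_bias=False)
model = GPTNeoXForCausalLM(config)  # attention linear layers are created without bias terms

# Existing checkpoints are unaffected because attention_bias defaults to True.
```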
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28126/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28126/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28126",
"html_url": "https://github.com/huggingface/transformers/pull/28126",
"diff_url": "https://github.com/huggingface/transformers/pull/28126.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28126.patch",
"merged_at": 1703063132000
} |
https://api.github.com/repos/huggingface/transformers/issues/28125 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28125/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28125/comments | https://api.github.com/repos/huggingface/transformers/issues/28125/events | https://github.com/huggingface/transformers/issues/28125 | 2,047,659,948 | I_kwDOCUB6oc56DM-s | 28,125 | [Docs] Broken link in Kubernetes doc | {
"login": "dmsuehir",
"id": 13952606,
"node_id": "MDQ6VXNlcjEzOTUyNjA2",
"avatar_url": "https://avatars.githubusercontent.com/u/13952606?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dmsuehir",
"html_url": "https://github.com/dmsuehir",
"followers_url": "https://api.github.com/users/dmsuehir/followers",
"following_url": "https://api.github.com/users/dmsuehir/following{/other_user}",
"gists_url": "https://api.github.com/users/dmsuehir/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dmsuehir/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dmsuehir/subscriptions",
"organizations_url": "https://api.github.com/users/dmsuehir/orgs",
"repos_url": "https://api.github.com/users/dmsuehir/repos",
"events_url": "https://api.github.com/users/dmsuehir/events{/privacy}",
"received_events_url": "https://api.github.com/users/dmsuehir/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Thanks for reporting! It seems like the links don't work for Safari, but they're ok with the Arc/Chrome browsers. It seems like the link is being changed somehow from kubeflow to huggingface which is why it is giving the 404:\r\n\r\n```diff\r\n- https://www.kubeflow.org/docs/components/training/pytorch\r\n+ https://huggingface.co/docs/components/training/pytorch\r\n```\r\n\r\nI'll check with the frontend team and see what's wrong 🙂 ",
"@stevhliu Just checking in - is there any update on this?",
"cc @mishig25, any updates on this issue?"
] | 1,702 | 1,707 | null | CONTRIBUTOR | null | ### System Info
N/A
### Who can help?
@stevhliu
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I recently helped add kubernetes instructions to the documentation [here](https://github.com/huggingface/transformers/blob/main/docs/source/en/perf_train_cpu_many.md#usage-with-kubernetes), and I saw that with the recent patch, it's now posted at the huggingface.co docs site [here](https://huggingface.co/docs/transformers/perf_train_cpu_many#usage-with-kubernetes). However, at the docs site, it seems like links to non-Hugging Face pages are broken. For example, in the first sentence under the heading when it links "Kubeflow PyTorchJob training operator", that link doesn't work for me. What's also weird is that the link *does* work if I right click it and open it in a new tab, but regular click gives me a 404. The links also work fine from the GitHub.
### Expected behavior
Links should work as they do in GitHub from the .md | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28125/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28125/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28124 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28124/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28124/comments | https://api.github.com/repos/huggingface/transformers/issues/28124/events | https://github.com/huggingface/transformers/issues/28124 | 2,047,594,060 | I_kwDOCUB6oc56C85M | 28,124 | [Trainer.train] learning rate logging inconsistency: learning rate for the future step is logged | {
"login": "HanGuo97",
"id": 18187806,
"node_id": "MDQ6VXNlcjE4MTg3ODA2",
"avatar_url": "https://avatars.githubusercontent.com/u/18187806?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HanGuo97",
"html_url": "https://github.com/HanGuo97",
"followers_url": "https://api.github.com/users/HanGuo97/followers",
"following_url": "https://api.github.com/users/HanGuo97/following{/other_user}",
"gists_url": "https://api.github.com/users/HanGuo97/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HanGuo97/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HanGuo97/subscriptions",
"organizations_url": "https://api.github.com/users/HanGuo97/orgs",
"repos_url": "https://api.github.com/users/HanGuo97/repos",
"events_url": "https://api.github.com/users/HanGuo97/events{/privacy}",
"received_events_url": "https://api.github.com/users/HanGuo97/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [] | 1,702 | 1,707 | null | NONE | null | ### System Info
NA
### Who can help?
@muellerzr and @pacman100
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
[This](https://github.com/huggingface/transformers/blob/c52b515e948fc12ff58ad773a0385860d0162f61/src/transformers/trainer.py#L1913) line of code steps the LR scheduler forward before `_maybe_log_save_evaluate` is called. This means that the logged learning rate is the learning rate for the upcoming iteration.
For most use cases the difference between the two is small. However, in certain cases this has caused confusion.
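A stripped-down illustration of the ordering issue (toy scheduler, not the actual Trainer code): when the scheduler is stepped before logging, the logged value is the rate that will be applied at the *next* step.

```python
# Toy example of the logging-order issue; not the real Trainer loop.
import torch

param = torch.nn.Parameter(torch.zeros(1))
optimizer = torch.optim.SGD([param], lr=1.0)
scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lambda step: 1.0 / (step + 1))

for step in range(3):
    lr_used = scheduler.get_last_lr()[0]    # rate actually applied at this step
    optimizer.step()
    scheduler.step()
    lr_logged = scheduler.get_last_lr()[0]  # what gets logged if we read it *after* stepping
    print(step, "used:", lr_used, "logged:", lr_logged)
```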
### Expected behavior
The learning rate for the current iteration is logged. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28124/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28124/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28123 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28123/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28123/comments | https://api.github.com/repos/huggingface/transformers/issues/28123/events | https://github.com/huggingface/transformers/pull/28123 | 2,047,564,974 | PR_kwDOCUB6oc5iT-OF | 28,123 | [Doc] Fix token link in What 🤗 Transformers can do | {
"login": "aaronjimv",
"id": 67152883,
"node_id": "MDQ6VXNlcjY3MTUyODgz",
"avatar_url": "https://avatars.githubusercontent.com/u/67152883?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aaronjimv",
"html_url": "https://github.com/aaronjimv",
"followers_url": "https://api.github.com/users/aaronjimv/followers",
"following_url": "https://api.github.com/users/aaronjimv/following{/other_user}",
"gists_url": "https://api.github.com/users/aaronjimv/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aaronjimv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aaronjimv/subscriptions",
"organizations_url": "https://api.github.com/users/aaronjimv/orgs",
"repos_url": "https://api.github.com/users/aaronjimv/repos",
"events_url": "https://api.github.com/users/aaronjimv/events{/privacy}",
"received_events_url": "https://api.github.com/users/aaronjimv/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,702 | 1,702 | 1,702 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fix the tokens link in `What 🤗 Transformers can do`.
The link in this section generates a 404 error:
> Token classification
In any NLP task, text is preprocessed by separating the sequence of text into individual words or subwords. These are known as [tokens](https://huggingface.co/glossary#token). Token classification assigns each token a label from a predefined set of classes.
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
@stevhliu | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28123/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28123/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28123",
"html_url": "https://github.com/huggingface/transformers/pull/28123",
"diff_url": "https://github.com/huggingface/transformers/pull/28123.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28123.patch",
"merged_at": 1702940815000
} |
https://api.github.com/repos/huggingface/transformers/issues/28122 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28122/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28122/comments | https://api.github.com/repos/huggingface/transformers/issues/28122/events | https://github.com/huggingface/transformers/pull/28122 | 2,047,370,131 | PR_kwDOCUB6oc5iTS_q | 28,122 | Fix weights not properly initialized due to shape mismatch | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I will explain it a bit more tomorrow.",
"> only the weight matrices that need to be resized are randomly initialized.\r\n\r\nThat is a very good point, and I did think of it once. Here we have keys in checkpoint and keys in the model's `state_dcit()`.\r\nThey are not always identical, as could be seen [here](https://github.com/huggingface/transformers/blob/5aec50ecaf9c1c039cde85881f0586110f845859/src/transformers/modeling_utils.py#L4007-L4021).\r\n\r\nThose conversions are only inside `modeling_utils.py` and not exposed at all. So I don't have the way to do the suggested testing.\r\n\r\nIn most cases, I would say they are identical. I can still implement it, but I am a bit worried we also have edge cases to skip.\r\n\r\nWDYT?\r\n\r\n\r\n--------------------------------------------------\r\n\r\nBut the change in this PR adds initialization **before any actual weight loading**. So for those weights intended to be loaded, they are loaded in the same way as before this PR. So if there is any problem, it must be something wrong before this PR.\r\n\r\n ",
"@ydshieh Would it be possible to create a small dummy model with e.g. two linear layers - one we set to explicit values and the other we resize to test this? So we don't have to try to handle complex layer naming in the state dict? ",
"> @ydshieh Would it be possible to create a small dummy model with e.g. two linear layers - one we set to explicit values and the other we resize to test this? So we don't have to try to handle complex layer naming in the state dict?\r\n\r\nYes, that could be done, similar to [this](https://github.com/huggingface/transformers/blob/main/tests/test_modeling_common.py#L434-L436)\r\n\r\nI will do it. But TBH, the above test and the new one to be added are model class independent - and we should probably move them into a new test class rather than being inside `ModelTesterMixin`.\r\n\r\nLet me update this PR with your suggestion, and leave the moving of test methods in a follow up PR.",
"SGTM - thanks! ",
"A new test `def test_matched_shapes_have_loaded_weights_when_some_mismatched_shapes_exist(self)` is added",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28122). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,702 | 1,703 | 1,703 | COLLABORATOR | null | # What does this PR do?
Currently, if some weight shapes are mismatched between the model and the checkpoint, and if ignore_mismatched_sizes=True, those weights won't get initialized by the model's `_init_weights` method and can end up with extreme values like 1e37.
This can make training get a `nan` loss value from the very beginning (which `Trainer` then changes to `0.0`), so the training makes no progress (the loss stays at 0.0).
One example is running `src/transformers/modeling_utils.py` (with `ignore_mismatched_sizes=True` added).
We usually set `ignore_mismatched_sizes=True` when we want to perform classification with an existing model but on another task that has a different number of targets.
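For illustration (the checkpoint and label count below are placeholders), this is the typical pattern that runs into the problem:

```python
# Reuse a 2-label sequence classification checkpoint for a task with a different number of labels.
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased-finetuned-sst-2-english",  # checkpoint was trained with 2 labels
    num_labels=5,                     # new task -> classifier head shapes no longer match
    ignore_mismatched_sizes=True,     # mismatched weights are skipped during loading
)
# Without this fix, the newly created classifier weights may never be initialized and can contain
# extreme values, which produces nan losses from the very first training step.
```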
This PR aims to fix this issue.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28122/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28122/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28122",
"html_url": "https://github.com/huggingface/transformers/pull/28122",
"diff_url": "https://github.com/huggingface/transformers/pull/28122.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28122.patch",
"merged_at": 1703078402000
} |
https://api.github.com/repos/huggingface/transformers/issues/28121 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28121/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28121/comments | https://api.github.com/repos/huggingface/transformers/issues/28121/events | https://github.com/huggingface/transformers/issues/28121 | 2,047,216,945 | I_kwDOCUB6oc56Bg0x | 28,121 | Add StyleTTS 2 to HF Transformers Pipeline | {
"login": "fakerybakery",
"id": 76186054,
"node_id": "MDQ6VXNlcjc2MTg2MDU0",
"avatar_url": "https://avatars.githubusercontent.com/u/76186054?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fakerybakery",
"html_url": "https://github.com/fakerybakery",
"followers_url": "https://api.github.com/users/fakerybakery/followers",
"following_url": "https://api.github.com/users/fakerybakery/following{/other_user}",
"gists_url": "https://api.github.com/users/fakerybakery/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fakerybakery/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fakerybakery/subscriptions",
"organizations_url": "https://api.github.com/users/fakerybakery/orgs",
"repos_url": "https://api.github.com/users/fakerybakery/repos",
"events_url": "https://api.github.com/users/fakerybakery/events{/privacy}",
"received_events_url": "https://api.github.com/users/fakerybakery/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | [
"cc @sanchit-gandhi @ylacombe ",
"The model is indeed very performant (see [space](https://huggingface.co/spaces/styletts2/styletts2)) and has a permissive license: one of the packages is GNU licensed. Without this, it can be used with the MIT license\r\n\r\n=> on paper it looks like a great candidate for a model addition to Transformers/Diffusers. I wonder if we've missed the boat here with the community interest? WDYT @ylacombe @Vaibhavs10?",
"Hi, we can replace the GPL phonemizer with gruut to make it MIT",
"If it would be helpful I can create a demo later with fully MIT licensed components",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Just posting to keep this from going stale"
] | 1,702 | 1,707 | null | NONE | null | ### Feature request
Add [StyleTTS](https://github.com/yl4579/StyleTTS2) 2 to HF Transformers Pipeline
### Motivation
It would be great to have an easier way to run StyleTTS 2.
### Your contribution
I created a [fork](https://github.com/neuralvox/styletts2) with importable scripts | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28121/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28121/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28120 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28120/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28120/comments | https://api.github.com/repos/huggingface/transformers/issues/28120/events | https://github.com/huggingface/transformers/issues/28120 | 2,047,205,290 | I_kwDOCUB6oc56Bd-q | 28,120 | Add Tortoise TTS to HF Pipeline | {
"login": "fakerybakery",
"id": 76186054,
"node_id": "MDQ6VXNlcjc2MTg2MDU0",
"avatar_url": "https://avatars.githubusercontent.com/u/76186054?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fakerybakery",
"html_url": "https://github.com/fakerybakery",
"followers_url": "https://api.github.com/users/fakerybakery/followers",
"following_url": "https://api.github.com/users/fakerybakery/following{/other_user}",
"gists_url": "https://api.github.com/users/fakerybakery/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fakerybakery/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fakerybakery/subscriptions",
"organizations_url": "https://api.github.com/users/fakerybakery/orgs",
"repos_url": "https://api.github.com/users/fakerybakery/repos",
"events_url": "https://api.github.com/users/fakerybakery/events{/privacy}",
"received_events_url": "https://api.github.com/users/fakerybakery/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @sanchit-gandhi @ylacombe ",
"I saw [this issue](https://github.com/huggingface/diffusers/issues/3891), however personally I think it would be great to have a unified way to run TTS models. Transformer's `pipeline` has the most support right now, and pipelines in Diffusers are harder to use. Also, it would make it easier to switch between models without significant changes to code.",
"Hey @fakerybakery - Tortoise TTS is a pipeline composed of two transformer-based models and one diffusion one. `diffusers` pipelines are designed to work with a mix of transformer/diffusion models, whereas `transformers` pipelines are designed to work with single transformer-based model. Therefore, the pipeline is a better fit for `diffusers`, for which there is an on-going PR to add it: https://github.com/huggingface/diffusers/pull/4106\r\n\r\nNote that the pipeline will be still very easy to work with in `diffusers`, emulating a similar API to AudioLDM: https://huggingface.co/cvssp/audioldm-s-full-v2#text-to-audio",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,702 | 1,705 | 1,705 | NONE | null | ### Feature request
Hi,
Might it be possible to add [Tortoise TTS](https://github.com/neonbjb/tortoise-tts) to the `text-to-speech` pipeline?
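For reference, a minimal sketch of what the requested usage could look like through the existing `text-to-speech` pipeline; the model id below is a placeholder, since no Tortoise checkpoint is published for this pipeline:

```python
# Hypothetical usage sketch, assuming Tortoise were wired into the existing
# `text-to-speech` pipeline. The model id is a placeholder, not a real checkpoint.
from transformers import pipeline

tts = pipeline("text-to-speech", model="placeholder/tortoise-tts")  # assumed model id
speech = tts("Hello from a text-to-speech pipeline.")
# Existing text-to-speech pipelines return a dict with "audio" and "sampling_rate".
print(speech["sampling_rate"], speech["audio"].shape)
```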
### Motivation
Tortoise TTS is currently the highest-quality permissively licensed text-to-speech library available.
### Your contribution
Tortoise TTS is already packaged on pip, so it shouldn't be too hard to add. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28120/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28120/timeline | not_planned | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28119 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28119/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28119/comments | https://api.github.com/repos/huggingface/transformers/issues/28119/events | https://github.com/huggingface/transformers/issues/28119 | 2,047,169,168 | I_kwDOCUB6oc56BVKQ | 28,119 | Save model checkpoint error when multi-gpu training still happens on 4.36.1 | {
"login": "z7ye",
"id": 25996703,
"node_id": "MDQ6VXNlcjI1OTk2NzAz",
"avatar_url": "https://avatars.githubusercontent.com/u/25996703?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/z7ye",
"html_url": "https://github.com/z7ye",
"followers_url": "https://api.github.com/users/z7ye/followers",
"following_url": "https://api.github.com/users/z7ye/following{/other_user}",
"gists_url": "https://api.github.com/users/z7ye/gists{/gist_id}",
"starred_url": "https://api.github.com/users/z7ye/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/z7ye/subscriptions",
"organizations_url": "https://api.github.com/users/z7ye/orgs",
"repos_url": "https://api.github.com/users/z7ye/repos",
"events_url": "https://api.github.com/users/z7ye/events{/privacy}",
"received_events_url": "https://api.github.com/users/z7ye/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi @z7ye, thanks for raising this issue! \r\n\r\nCould you provide a minimal code snippet we can use to reproduce this error? \r\n\r\ncc @muellerzr @pacman100 ",
"And please upgrade to 4.36.2",
"> And please upgrade to 4.36.2\r\n\r\nThis problem occurs in training with multiple machines and multiple cards. Perhaps 4.36.2 did not solve this problem either, as 4.36.1 has already attempted to check for the presence of \"stagg_output_dir\" in \"main_process\".",
"Thanks, I’ll look into this",
"> > And please upgrade to 4.36.2\r\n> \r\n> This problem occurs in training with multiple machines and multiple cards. Perhaps 4.36.2 did not solve this problem either, as 4.36.1 has already attempted to check for the presence of \"stagg_output_dir\" in \"main_process\".\r\n\r\nYes, 4.36.2 also suffers from the same problem, even though #28078 has been updated.",
"https://github.com/huggingface/transformers/pull/27929#issuecomment-1853861756\r\n\r\nThis adhoc can fix the problem. It works in my case",
"@ShaneTian or @hieu-blackbox can you please try `pip install git+https://github.com/huggingface/transformers@muellerzr-multinode-save`? It's an alternative we can try as I agree I believe the issue exists only when we don't have a shared file system. ",
"I see the error on 4.36.2 version as well, and I have a shared file system across each node. Using 2 nodes with 8 H100 gpus on each nodes.\r\n",
"> 或者你能试试吗?这是我们可以尝试的替代方案,因为我同意我相信只有当我们没有共享文件系统时才存在问题。`pip install git+https://github.com/huggingface/transformers@muellerzr-multinode-save`\r\n\r\nAfter updating the code, deepspeed starts the cluster and saves the checkpoint named tmp checkpoint-10 from the node. The host point is checkpoint-10. After saving the checkpoint-10, Watchdog cause collective operation timeout occurs and the cluster training is interrupted\r\n\r\n 48%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████▍ | 10/21 [33:20<35:45, 195.01s/it]/root/anaconda3/envs/ljf_factory/lib/python3.10/site-packages/torch/nn/modules/module.py:1879: UserWarning: Positional args are being deprecated, use kwargs instead. Refer to https://pytorch.org/docs/master/generated/torch.nn.Module.html#torch.nn.Module.state_dict for details.\r\n warnings.warn(\r\n/root/anaconda3/envs/ljf_factory/lib/python3.10/site-packages/torch/nn/modules/module.py:1879: UserWarning: Positional args are being deprecated, use kwargs instead. Refer to https://pytorch.org/docs/master/generated/torch.nn.Module.html#torch.nn.Module.state_dict for details.\r\n warnings.warn(\r\n/root/anaconda3/envs/ljf_factory/lib/python3.10/site-packages/torch/nn/modules/module.py:1879: UserWarning: Positional args are being deprecated, use kwargs instead. Refer to https://pytorch.org/docs/master/generated/torch.nn.Module.html#torch.nn.Module.state_dict for details.\r\n warnings.warn(\r\n/root/anaconda3/envs/ljf_factory/lib/python3.10/site-packages/torch/nn/modules/module.py:1879: UserWarning: Positional args are being deprecated, use kwargs instead. Refer to https://pytorch.org/docs/master/generated/torch.nn.Module.html#torch.nn.Module.state_dict for details.\r\n warnings.warn(\r\n/root/anaconda3/envs/ljf_factory/lib/python3.10/site-packages/torch/nn/modules/module.py:1879: UserWarning: Positional args are being deprecated, use kwargs instead. Refer to https://pytorch.org/docs/master/generated/torch.nn.Module.html#torch.nn.Module.state_dict for details.\r\n warnings.warn(\r\n/root/anaconda3/envs/ljf_factory/lib/python3.10/site-packages/torch/nn/modules/module.py:1879: UserWarning: Positional args are being deprecated, use kwargs instead. Refer to https://pytorch.org/docs/master/generated/torch.nn.Module.html#torch.nn.Module.state_dict for details.\r\n warnings.warn(\r\n/root/anaconda3/envs/ljf_factory/lib/python3.10/site-packages/torch/nn/modules/module.py:1879: UserWarning: Positional args are being deprecated, use kwargs instead. Refer to https://pytorch.org/docs/master/generated/torch.nn.Module.html#torch.nn.Module.state_dict for details.\r\n warnings.warn(\r\n/root/anaconda3/envs/ljf_factory/lib/python3.10/site-packages/torch/nn/modules/module.py:1879: UserWarning: Positional args are being deprecated, use kwargs instead. 
Refer to https://pytorch.org/docs/master/generated/torch.nn.Module.html#torch.nn.Module.state_dict for details.\r\n warnings.warn(\r\n[2023-12-22 06:15:36,199] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data1/liujifan/data/sft_out/tmp-checkpoint-10/global_step10/zero_pp_rank_8_mp_rank_00_model_states.pt...\r\n[2023-12-22 06:15:39,569] [INFO] [torch_checkpoint_engine.py:23:save] [Torch] Saved /data1/liujifan/data/sft_out/tmp-checkpoint-10/global_step10/zero_pp_rank_8_mp_rank_00_model_states.pt.\r\n[2023-12-22 06:15:39,576] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /data1/liujifan/data/sft_out/tmp-checkpoint-10/global_step10/bf16_zero_pp_rank_8_mp_rank_00_optim_states.pt...\r\n[2023-12-22 06:15:39,700] [INFO] [torch_checkpoint_engine.py:23:save] [Torch] Saved /data1/liujifan/data/sft_out/tmp-checkpoint-10/global_step10/bf16_zero_pp_rank_8_mp_rank_00_optim_states.pt.\r\n[2023-12-22 06:15:39,701] [INFO] [engine.py:3428:_save_zero_checkpoint] zero checkpoint saved /data1/liujifan/data/sft_out/tmp-checkpoint-10/global_step10/bf16_zero_pp_rank_8_mp_rank_00_optim_states.pt\r\n[2023-12-22 06:15:39,764] [INFO] [torch_checkpoint_engine.py:33:commit] [Torch] Checkpoint global_step10 is ready now!\r\n[E ProcessGroupNCCL.cpp:475] [Rank 15] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=21668, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=1800000) ran for 1800100 milliseconds before timing out.\r\n[E ProcessGroupNCCL.cpp:475] [Rank 12] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=21668, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=1800000) ran for 1800116 milliseconds before timing out.\r\n[E ProcessGroupNCCL.cpp:475] [Rank 9] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=21668, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=1800000) ran for 1800135 milliseconds before timing out.\r\n[E ProcessGroupNCCL.cpp:475] [Rank 11] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=21668, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=1800000) ran for 1800135 milliseconds before timing out.\r\na6000_node2:65506:493 [4] NCCL INFO [Service thread] Connection closed by localRank 4\r\na6000_node2:65509:495 [7] NCCL INFO [Service thread] Connection closed by localRank 7\r\na6000_node2:65505:498 [3] NCCL INFO [Service thread] Connection closed by localRank 3\r\na6000_node2:65506:465 [4] NCCL INFO comm 0xd875220 rank 12 nranks 16 cudaDev 4 busId 81000 - Abort COMPLETE\r\n[E ProcessGroupNCCL.cpp:489] Some NCCL operations have failed or timed out. 
Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.\r\n[E ProcessGroupNCCL.cpp:495] To avoid data inconsistency, we are taking the entire process down.\r\n[E ProcessGroupNCCL.cpp:916] [Rank 12] NCCL watchdog thread terminated with exception: [Rank 12] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=21668, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=1800000) ran for 1800116 milliseconds before timing out.\r\na6000_node2:65503:496 [1] NCCL INFO [Service thread] Connection closed by localRank 1\r\n[E ProcessGroupNCCL.cpp:475] [Rank 14] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=21668, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=1800000) ran for 1800519 milliseconds before timing out.\r\n[E ProcessGroupNCCL.cpp:475] [Rank 13] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=21668, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=1800000) ran for 1800522 milliseconds before timing out.\r\na6000_node2:65507:494 [5] NCCL INFO [Service thread] Connection closed by localRank 5\r\na6000_node2:65508:500 [6] NCCL INFO [Service thread] Connection closed by localRank 6\r\na6000_node2:65508:447 [6] NCCL INFO comm 0xc320220 rank 14 nranks 16 cudaDev 6 busId c1000 - Abort COMPLETE\r\n[E ProcessGroupNCCL.cpp:489] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.\r\n[E ProcessGroupNCCL.cpp:495] To avoid data inconsistency, we are taking the entire process down.\r\n[E ProcessGroupNCCL.cpp:916] [Rank 14] NCCL watchdog thread terminated with exception: [Rank 14] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=21668, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=1800000) ran for 1800519 milliseconds before timing out.\r\na6000_node2:65505:452 [3] NCCL INFO comm 0xc021ee0 rank 11 nranks 16 cudaDev 3 busId 61000 - Abort COMPLETE\r\n[E ProcessGroupNCCL.cpp:489] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.\r\n[E ProcessGroupNCCL.cpp:495] To avoid data inconsistency, we are taking the entire process down.\r\n[E ProcessGroupNCCL.cpp:916] [Rank 11] NCCL watchdog thread terminated with exception: [Rank 11] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=21668, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=1800000) ran for 1800135 milliseconds before timing out.\r\na6000_node2:65509:459 [7] NCCL INFO comm 0xbc35500 rank 15 nranks 16 cudaDev 7 busId e1000 - Abort COMPLETE\r\n[E ProcessGroupNCCL.cpp:489] Some NCCL operations have failed or timed out. 
Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.\r\n[E ProcessGroupNCCL.cpp:495] To avoid data inconsistency, we are taking the entire process down.\r\n[E ProcessGroupNCCL.cpp:916] [Rank 15] NCCL watchdog thread terminated with exception: [Rank 15] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=21668, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=1800000) ran for 1800100 milliseconds before timing out.\r\n[E ProcessGroupNCCL.cpp:475] [Rank 10] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=21668, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=1800000) ran for 1800721 milliseconds before timing out.\r\na6000_node2:65503:449 [1] NCCL INFO comm 0xce5ffe0 rank 9 nranks 16 cudaDev 1 busId 25000 - Abort COMPLETE\r\n[E ProcessGroupNCCL.cpp:489] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.\r\n[E ProcessGroupNCCL.cpp:495] To avoid data inconsistency, we are taking the entire process down.\r\n[E ProcessGroupNCCL.cpp:916] [Rank 9] NCCL watchdog thread terminated with exception: [Rank 9] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=21668, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=1800000) ran for 1800135 milliseconds before timing out.\r\n[E ProcessGroupNCCL.cpp:475] [Rank 8] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=21668, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=1800000) ran for 1800781 milliseconds before timing out.\r\na6000_node2:65504:497 [2] NCCL INFO [Service thread] Connection closed by localRank 2\r\na6000_node2:65502:499 [0] NCCL INFO [Service thread] Connection closed by localRank 0\r\na6000_node2:65504:454 [2] NCCL INFO comm 0xc9b2f80 rank 10 nranks 16 cudaDev 2 busId 41000 - Abort COMPLETE\r\n[E ProcessGroupNCCL.cpp:489] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.\r\n[E ProcessGroupNCCL.cpp:495] To avoid data inconsistency, we are taking the entire process down.\r\n[E ProcessGroupNCCL.cpp:916] [Rank 10] NCCL watchdog thread terminated with exception: [Rank 10] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=21668, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=1800000) ran for 1800721 milliseconds before timing out.\r\na6000_node2:65507:461 [5] NCCL INFO comm 0xbdb3600 rank 13 nranks 16 cudaDev 5 busId a1000 - Abort COMPLETE\r\n[E ProcessGroupNCCL.cpp:489] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.\r\n[E ProcessGroupNCCL.cpp:495] To avoid data inconsistency, we are taking the entire process down.\r\n[E ProcessGroupNCCL.cpp:916] [Rank 13] NCCL watchdog thread terminated with exception: [Rank 13] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=21668, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=1800000) ran for 1800522 milliseconds before timing out.\r\na6000_node2:65502:457 [0] NCCL INFO comm 0xc2d6f80 rank 8 nranks 16 cudaDev 0 busId 1000 - Abort COMPLETE\r\n[E ProcessGroupNCCL.cpp:489] Some NCCL operations have failed or timed out. 
Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.\r\n[E ProcessGroupNCCL.cpp:495] To avoid data inconsistency, we are taking the entire process down.\r\n[E ProcessGroupNCCL.cpp:916] [Rank 8] NCCL watchdog thread terminated with exception: [Rank 8] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=21668, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=1800000) ran for 1800781 milliseconds before timing out.\r\n[2023-12-22 06:45:43,272] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 65502 closing signal SIGTERM\r\n[2023-12-22 06:45:43,273] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 65503 closing signal SIGTERM\r\n[2023-12-22 06:45:43,273] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 65504 closing signal SIGTERM\r\n[2023-12-22 06:45:43,273] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 65505 closing signal SIGTERM\r\n[2023-12-22 06:45:43,273] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 65507 closing signal SIGTERM\r\n[2023-12-22 06:45:48,361] torch.distributed.elastic.multiprocessing.api: [ERROR] failed (exitcode: -6) local_rank: 4 (pid: 65506) of binary: /root/anaconda3/envs/ljf_factory/bin/python\r\nTraceback (most recent call last):\r\n File \"/root/anaconda3/envs/ljf_factory/bin/torchrun\", line 8, in <module>\r\n sys.exit(main())\r\n File \"/root/anaconda3/envs/ljf_factory/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py\", line 346, in wrapper\r\n return f(*args, **kwargs)\r\n File \"/root/anaconda3/envs/ljf_factory/lib/python3.10/site-packages/torch/distributed/run.py\", line 806, in main\r\n run(args)\r\n File \"/root/anaconda3/envs/ljf_factory/lib/python3.10/site-packages/torch/distributed/run.py\", line 797, in run\r\n elastic_launch(\r\n File \"/root/anaconda3/envs/ljf_factory/lib/python3.10/site-packages/torch/distributed/launcher/api.py\", line 134, in __call__\r\n return launch_agent(self._config, self._entrypoint, list(args))\r\n File \"/root/anaconda3/envs/ljf_factory/lib/python3.10/site-packages/torch/distributed/launcher/api.py\", line 264, in launch_agent\r\n raise ChildFailedError(\r\ntorch.distributed.elastic.multiprocessing.errors.ChildFailedError: \r\n======================================================\r\n[E ProcessGroupNCCL.cpp:495] To avoid data inconsistency, we are taking the entire process down.\r\n[E ProcessGroupNCCL.cpp:916] [Rank 8] NCCL watchdog thread terminated with exception: [Rank 8] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=21668, OpType=ALLREDUCE, NumelIn=1, NumelOut=1, Timeout(ms)=1800000) ran for 1800781 milliseconds before timing out.\r\n[2023-12-22 06:45:43,272] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 65502 closing signal SIGTERM\r\n[2023-12-22 06:45:43,273] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 65503 closing signal SIGTERM\r\n[2023-12-22 06:45:43,273] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 65504 closing signal SIGTERM\r\n[2023-12-22 06:45:43,273] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 65505 closing signal SIGTERM\r\n[2023-12-22 06:45:43,273] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 65507 closing signal SIGTERM\r\n[2023-12-22 06:45:48,361] torch.distributed.elastic.multiprocessing.api: [ERROR] failed 
(exitcode: -6) local_rank: 4 (pid: 65506) of binary: /root/anaconda3/envs/ljf_factory/bin/python\r\nTraceback (most recent call last):\r\n File \"/root/anaconda3/envs/ljf_factory/bin/torchrun\", line 8, in <module>\r\n sys.exit(main())\r\n File \"/root/anaconda3/envs/ljf_factory/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py\", line 346, in wrapper\r\n return f(*args, **kwargs)\r\n File \"/root/anaconda3/envs/ljf_factory/lib/python3.10/site-packages/torch/distributed/run.py\", line 806, in main\r\n run(args)\r\n File \"/root/anaconda3/envs/ljf_factory/lib/python3.10/site-packages/torch/distributed/run.py\", line 797, in run\r\n elastic_launch(\r\n File \"/root/anaconda3/envs/ljf_factory/lib/python3.10/site-packages/torch/distributed/launcher/api.py\", line 134, in __call__\r\n return launch_agent(self._config, self._entrypoint, list(args))\r\n File \"/root/anaconda3/envs/ljf_factory/lib/python3.10/site-packages/torch/distributed/launcher/api.py\", line 264, in launch_agent\r\n raise ChildFailedError(\r\ntorch.distributed.elastic.multiprocessing.errors.ChildFailedError:\r\n======================================================\r\nsrc/train_bash.py FAILED\r\n------------------------------------------------------\r\nFailures:\r\n[1]:\r\n time : 2023-12-22_06:45:43\r\n host : A6000_node2\r\n rank : 14 (local_rank: 6)\r\n exitcode : -6 (pid: 65508)\r\n error_file: <N/A>\r\n traceback : Signal 6 (SIGABRT) received by PID 65508\r\n[2]:\r\n time : 2023-12-22_06:45:43\r\n host : A6000_node2\r\n rank : 15 (local_rank: 7)\r\n exitcode : -6 (pid: 65509)\r\n error_file: <N/A>\r\n traceback : Signal 6 (SIGABRT) received by PID 65509\r\n------------------------------------------------------\r\nRoot Cause (first observed failure):\r\n[0]:\r\n time : 2023-12-22_06:45:43\r\n host : A6000_node2\r\n rank : 12 (local_rank: 4)\r\n exitcode : -6 (pid: 65506)\r\n error_file: <N/A>\r\n traceback : Signal 6 (SIGABRT) received by PID 65506\r\n",
"any update on this issue pls? I think 4.36.2 has the same issue.",
"Any update now? 4.36.2 definitely have the same issue! Which is the latest version that does not have this annoying bug?",
"> Any update now? 4.36.2 definitely have the same issue! Which is the latest version that does not have this annoying bug?\r\n\r\nLatest V4.37.1 still has the same issue in my case...",
"Gentle ping @muellerzr @pacman100 ",
"I just found that setting save_on_each_node=False in TrainingArguments works. See [#28009](https://github.com/huggingface/transformers/pull/28009)",
"Also facing this issue on 4.36.2. `setting save_on_each_node=False` allowed training to continue longer but I still eventually hit an error like: \r\n\r\n`FileNotFoundError: [Errno 2] No such file or directory: './output/models/tmp-checkpoint-5970' -> './output/models/checkpoint-5970'`",
"@JohnGiorgi can you give us more information on your setup please?\r\n\r\n1. Windows/Linux/Etc\r\n2. How many GPUs?\r\n3. Is it multi-node or single node (computer)",
"@muellerzr Linux (Ubuntu 22.04.2 LTS), multi-node with 4 nodes and 8 GPUs per node for a total of 32 GPUs (shared file-system and network). I will note that training progressed long enough to successfully save 1 checkpoint to disk, but failed when trying to write a second checkpoint some training steps later.\r\n\r\n",
"@muellerzr This problem seems to be resolved on the latest version of transformers (`4.37.2`)"
] | 1,702 | 1,707 | null | NONE | null | ### System Info
platform: linux
python: 3.9
transformers: 4.36.1
running on two A10.2
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
I saw that the release notes of 4.36.1 say this error was already fixed; however, it still occurs after I install the latest version and run on a two-A10.2 machine.
```
Traceback (most recent call last):
2023-12-17 18:09:08 10.0.1.12: File "/home/datascience/conda/pytorch2_0forgpuonpython3_9_vziqun/lib/python3.9/runpy.py", line 197, in _run_module_as_main
2023-12-17 18:09:08 10.0.1.12: return _run_code(code, main_globals, None,
2023-12-17 18:09:08 10.0.1.12: File "/home/datascience/conda/pytorch2_0forgpuonpython3_9_vziqun/lib/python3.9/runpy.py", line 87, in _run_code
2023-12-17 18:09:08 10.0.1.12: exec(code, run_globals)
2023-12-17 18:09:08 10.0.1.12: File "/home/datascience/decompressed_artifact/code/src/axolotl/cli/train.py", line 38, in <module>
2023-12-17 18:09:08 10.0.1.12: fire.Fire(do_cli)
2023-12-17 18:09:08 10.0.1.12: File "/home/datascience/conda/pytorch2_0forgpuonpython3_9_vziqun/lib/python3.9/site-packages/fire/core.py", line 141, in Fire
2023-12-17 18:09:08 10.0.1.12: component_trace = _Fire(component, args, parsed_flag_args, context, name)
2023-12-17 18:09:08 10.0.1.12: File "/home/datascience/conda/pytorch2_0forgpuonpython3_9_vziqun/lib/python3.9/site-packages/fire/core.py", line 475, in _Fire
2023-12-17 18:09:08 10.0.1.12: component, remaining_args = _CallAndUpdateTrace(
2023-12-17 18:09:08 10.0.1.12: File "/home/datascience/conda/pytorch2_0forgpuonpython3_9_vziqun/lib/python3.9/site-packages/fire/core.py", line 691, in _CallAndUpdateTrace
2023-12-17 18:09:08 10.0.1.12: component = fn(*varargs, **kwargs)
2023-12-17 18:09:08 10.0.1.12: File "/home/datascience/decompressed_artifact/code/src/axolotl/cli/train.py", line 34, in do_cli
2023-12-17 18:09:08 10.0.1.12: train(cfg=parsed_cfg, cli_args=parsed_cli_args, dataset_meta=dataset_meta)
2023-12-17 18:09:08 10.0.1.12: File "/home/datascience/decompressed_artifact/code/src/axolotl/train.py", line 126, in train
2023-12-17 18:09:08 10.0.1.12: trainer.train(resume_from_checkpoint=resume_from_checkpoint)
2023-12-17 18:09:08 10.0.1.12: File "/home/datascience/conda/pytorch2_0forgpuonpython3_9_vziqun/lib/python3.9/site-packages/transformers/trainer.py", line 1537, in train
2023-12-17 18:09:08 10.0.1.12: return inner_training_loop(
2023-12-17 18:09:08 10.0.1.12: File "/home/datascience/conda/pytorch2_0forgpuonpython3_9_vziqun/lib/python3.9/site-packages/transformers/trainer.py", line 1929, in _inner_training_loop
2023-12-17 18:09:08 10.0.1.12: self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
2023-12-17 18:09:08 10.0.1.12: File "/home/datascience/conda/pytorch2_0forgpuonpython3_9_vziqun/lib/python3.9/site-packages/transformers/trainer.py", line 2274, in _maybe_log_save_evaluate
2023-12-17 18:09:08 10.0.1.12: self._save_checkpoint(model, trial, metrics=metrics)
2023-12-17 18:09:08 10.0.1.12: File "/home/datascience/conda/pytorch2_0forgpuonpython3_9_vziqun/lib/python3.9/site-packages/transformers/trainer.py", line 2376, in _save_checkpoint
2023-12-17 18:09:08 10.0.1.12: self.state.save_to_json(os.path.join(staging_output_dir, TRAINER_STATE_NAME))
2023-12-17 18:09:08 10.0.1.12: File "/home/datascience/conda/pytorch2_0forgpuonpython3_9_vziqun/lib/python3.9/site-packages/transformers/trainer_callback.py", line 114, in save_to_json
2023-12-17 18:09:08 10.0.1.12: with open(json_path, "w", encoding="utf-8") as f:
2023-12-17 18:09:08 10.0.1.12: FileNotFoundError: [Errno 2] No such file or directory: './qlora-out/tmp-checkpoint-1080/trainer_state.json'
```
### Expected behavior
expect it to work. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28119/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28119/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28118 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28118/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28118/comments | https://api.github.com/repos/huggingface/transformers/issues/28118/events | https://github.com/huggingface/transformers/pull/28118 | 2,047,169,117 | PR_kwDOCUB6oc5iSnXF | 28,118 | Fix a typo in tokenizer documentation | {
"login": "mssalvatore",
"id": 19957806,
"node_id": "MDQ6VXNlcjE5OTU3ODA2",
"avatar_url": "https://avatars.githubusercontent.com/u/19957806?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mssalvatore",
"html_url": "https://github.com/mssalvatore",
"followers_url": "https://api.github.com/users/mssalvatore/followers",
"following_url": "https://api.github.com/users/mssalvatore/following{/other_user}",
"gists_url": "https://api.github.com/users/mssalvatore/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mssalvatore/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mssalvatore/subscriptions",
"organizations_url": "https://api.github.com/users/mssalvatore/orgs",
"repos_url": "https://api.github.com/users/mssalvatore/repos",
"events_url": "https://api.github.com/users/mssalvatore/events{/privacy}",
"received_events_url": "https://api.github.com/users/mssalvatore/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28118). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,702 | 1,702 | 1,702 | CONTRIBUTOR | null | # What does this PR do?
Fixes a typo in tokenizer documentation. For some methods, such as `tokenize()`, the description currently reads "Converts a string in a sequence of tokens, using the tokenizer." I believe what is meant is "Converts a string INTO a sequence of tokens".
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? (N/A)
## Who can review?
@ArthurZucker
@sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28118/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28118/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28118",
"html_url": "https://github.com/huggingface/transformers/pull/28118",
"diff_url": "https://github.com/huggingface/transformers/pull/28118.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28118.patch",
"merged_at": 1702925075000
} |
https://api.github.com/repos/huggingface/transformers/issues/28117 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28117/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28117/comments | https://api.github.com/repos/huggingface/transformers/issues/28117/events | https://github.com/huggingface/transformers/pull/28117 | 2,047,094,751 | PR_kwDOCUB6oc5iSXIn | 28,117 | Fix indentation error - semantic_segmentation.md | {
"login": "rajveer43",
"id": 64583161,
"node_id": "MDQ6VXNlcjY0NTgzMTYx",
"avatar_url": "https://avatars.githubusercontent.com/u/64583161?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rajveer43",
"html_url": "https://github.com/rajveer43",
"followers_url": "https://api.github.com/users/rajveer43/followers",
"following_url": "https://api.github.com/users/rajveer43/following{/other_user}",
"gists_url": "https://api.github.com/users/rajveer43/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rajveer43/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rajveer43/subscriptions",
"organizations_url": "https://api.github.com/users/rajveer43/orgs",
"repos_url": "https://api.github.com/users/rajveer43/repos",
"events_url": "https://api.github.com/users/rajveer43/events{/privacy}",
"received_events_url": "https://api.github.com/users/rajveer43/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28117). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,702 | 1,702 | 1,702 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR removes the indentation error in the code segment of the semantic_segmentation.md file.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
Documentation: @stevhliu and @MKhalusova
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28117/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28117/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28117",
"html_url": "https://github.com/huggingface/transformers/pull/28117",
"diff_url": "https://github.com/huggingface/transformers/pull/28117.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28117.patch",
"merged_at": 1702921674000
} |
https://api.github.com/repos/huggingface/transformers/issues/28116 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28116/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28116/comments | https://api.github.com/repos/huggingface/transformers/issues/28116/events | https://github.com/huggingface/transformers/issues/28116 | 2,047,064,498 | I_kwDOCUB6oc56A7my | 28,116 | TypeError: TextInputSequence must be str in converting squad examples to features | {
"login": "sajastu",
"id": 10419055,
"node_id": "MDQ6VXNlcjEwNDE5MDU1",
"avatar_url": "https://avatars.githubusercontent.com/u/10419055?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sajastu",
"html_url": "https://github.com/sajastu",
"followers_url": "https://api.github.com/users/sajastu/followers",
"following_url": "https://api.github.com/users/sajastu/following{/other_user}",
"gists_url": "https://api.github.com/users/sajastu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sajastu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sajastu/subscriptions",
"organizations_url": "https://api.github.com/users/sajastu/orgs",
"repos_url": "https://api.github.com/users/sajastu/repos",
"events_url": "https://api.github.com/users/sajastu/events{/privacy}",
"received_events_url": "https://api.github.com/users/sajastu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey, not 100% this is of high priority, issue is from [here](https://github.com/huggingface/transformers/blob/cfe84ba756059e609591aa83b6de654b6e39ab4d/src/transformers/data/processors/squad.py#L179-L188):\r\n```python \r\n encoded_dict = tokenizer.encode_plus( # TODO(thom) update this logic\r\n texts,\r\n pairs,\r\n truncation=truncation,\r\n padding=padding_strategy,\r\n max_length=max_seq_length,\r\n return_overflowing_tokens=True,\r\n stride=max_seq_length - doc_stride - len(truncated_query) - sequence_pair_added_tokens,\r\n return_token_type_ids=True,\r\n )\r\n```\r\nin the data processor. The tokenizers works well but at some point it is fed with input ids which is not expected. \r\n",
"The reason why it works with a slow tokenizer is because the input is ignore (the `texts` is equal to `[3160, 2050, 1029]`. \r\nIf you want to fix the logic feel free to do so but I don't think we have bandwidth for this is it was not really requested a lot. Let's keep this issue open otherwise",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,702 | 1,706 | 1,706 | NONE | null | ### System Info
- `transformers` version: 4.36.1
- Platform: Linux-4.15.0-196-generic-x86_64-with-glibc2.17
- Python version: 3.8.13
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.1
- Accelerate version: 0.25.0
- Accelerate config: not found
- PyTorch version (GPU?): 1.10.0+cu111 (True)
- Tensorflow version (GPU?): 2.9.1 (True)
- Flax version (CPU?/GPU?/TPU?): 0.6.0 (gpu)
- Jax version: 0.3.16
- JaxLib version: 0.3.15
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@ArthurZucker and @younesbelkada
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Steps to reproduce this behaviour:
I have written a function that calls HF's `squad_convert_examples_to_features` after doing some input framing. This is mock-up code just to show the behaviour; it is in fact part of a larger model. Here's my code:
```python
from transformers import SquadExample, squad_convert_examples_to_features, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("ahotrod/electra_large_discriminator_squad2_512") #an ELECTRA-LARGE tokenizer
qa_pairs = [[['QuestionA?', "AnswerA"], ['QuestionB', 'AnswerB'], ['QuestionC', 'AnswerC'], ["QuestionD", 'AnswerD']]]
context = "Here's the context text..."
def _answer_questions(
summaries, qa_pairs_lists
) :
qa_inputs = []
context_to_input_index = {}
mapping = {}
for i, (summary, qa_pairs_list) in enumerate(zip(summaries, [[qa_pairs_lists]])):
for j, qa_pairs in enumerate(qa_pairs_list):
for k, qa in enumerate(qa_pairs):
question = qa["question"]
key = (question, summary)
if key not in context_to_input_index:
context_to_input_index[key] = len(qa_inputs)
qa_inputs.append(key)
mapping[(i, j, k)] = context_to_input_index[key]
examples = []
for i, (question, context) in enumerate(qa_inputs):
examples.append(SquadExample(
qas_id=str(i),
question_text=question,
context_text=context,
answer_text=None,
start_position_character=None,
title=None,
is_impossible=True,
answers=[]
))
features, dataset = squad_convert_examples_to_features(
examples,
tokenizer,
384,
0,
512,
False,
padding_strategy="max_length",
return_dataset=False,
threads=1,
tqdm_enabled=True,
)
# throws
"""
Traceback (most recent call last):
File "test.py", line 55, in <module>
_answer_questions(
File "test.py", line 39, in _answer_questions
features, dataset = squad_convert_examples_to_features(
File "/path/to/HF_installed/squad.py", line 376, in squad_convert_examples_to_features
features = list(
File "lib/python3.8/site-packages/tqdm/std.py", line 1195, in __iter__
for obj in iterable:
File "lib/python3.8/multiprocessing/pool.py", line 420, in <genexpr>
return (item for chunk in result for item in chunk)
File "lib/python3.8/multiprocessing/pool.py", line 868, in next
raise value
TypeError: TextInputSequence must be str
"""
# test
_answer_questions(
[context],
[{'question': v[0], 'answer': v[1] } for v in qa_pairs[0]]
)
```
Here's more debugging info about where this error is coming from:
> Traceback (most recent call last):
> File "PYTHON_PATH/multiprocessing/pool.py", line 125, in worker
> result = (True, func(*args, **kwds))
> File "PYTHON_PATH/multiprocessing/pool.py", line 48, in mapstar
> return list(map(*args))
> File "test.py", line 96, in squad_convert_example_to_features
> encoded_dict = tokenizer.encode_plus( # TODO(thom) update this logic
> File "PYTHON_PATH/site-packages/transformers/tokenization_utils_base.py", line 2981, in encode_plus
> return self._encode_plus(
> File "PYTHON_PATH/site-packages/transformers/tokenization_utils_fast.py", line 576, in _encode_plus
> batched_output = self._batch_encode_plus(
> File "PYTHON_PATH/site-packages/transformers/tokenization_utils_fast.py", line 504, in _batch_encode_plus
> encodings = self._tokenizer.encode_batch(
> TypeError: TextInputSequence must be str
### Expected behavior
I'm expecting to use the `squad_convert_examples_to_features` function smoothly, getting all the `features` and `dataset` without any bugs. I did some digging around the web for a quick fix or workaround and found out that switching the tokenizer to a regular one (by setting `use_fast=False` when initiating the tokenizer) seems to do the trick. But since this issue has been around for like 2 years now (if I remember correctly), I think it's high time to open a new issue page and flag this potential bug. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28116/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28116/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28115 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28115/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28115/comments | https://api.github.com/repos/huggingface/transformers/issues/28115/events | https://github.com/huggingface/transformers/pull/28115 | 2,046,937,134 | PR_kwDOCUB6oc5iR0Yx | 28,115 | [`Mixtral`] Fix loss + nits | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> # What does this PR do?\r\n> Properly compute the loss. Pushes for a uniform distribution.\r\n> \r\n> fixes #28021 Fixes #28093\r\n\r\nWhat were the side effects of the issue? Did it actually degrade training runs"
] | 1,702 | 1,703 | 1,703 | COLLABORATOR | null | # What does this PR do?
Properly compute the loss. Pushes for a uniform distribution.
fixes #28021
Fixes https://github.com/huggingface/transformers/issues/28093 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28115/reactions",
"total_count": 4,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 3,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28115/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28115",
"html_url": "https://github.com/huggingface/transformers/pull/28115",
"diff_url": "https://github.com/huggingface/transformers/pull/28115.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28115.patch",
"merged_at": 1703003514000
} |
https://api.github.com/repos/huggingface/transformers/issues/28114 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28114/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28114/comments | https://api.github.com/repos/huggingface/transformers/issues/28114/events | https://github.com/huggingface/transformers/pull/28114 | 2,046,732,521 | PR_kwDOCUB6oc5iRG49 | 28,114 | [Whisper] Fix word-level timestamps with bs>1 or num_beams>1 | {
"login": "ylacombe",
"id": 52246514,
"node_id": "MDQ6VXNlcjUyMjQ2NTE0",
"avatar_url": "https://avatars.githubusercontent.com/u/52246514?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ylacombe",
"html_url": "https://github.com/ylacombe",
"followers_url": "https://api.github.com/users/ylacombe/followers",
"following_url": "https://api.github.com/users/ylacombe/following{/other_user}",
"gists_url": "https://api.github.com/users/ylacombe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ylacombe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ylacombe/subscriptions",
"organizations_url": "https://api.github.com/users/ylacombe/orgs",
"repos_url": "https://api.github.com/users/ylacombe/repos",
"events_url": "https://api.github.com/users/ylacombe/events{/privacy}",
"received_events_url": "https://api.github.com/users/ylacombe/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28114). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Merging now, once the checks are green, as it is a blocking feature, all tests are passing and @gante is off for a few days!",
"Hi @ylacombe \r\nThanks for the fix! I do still face an issue I think is relevant here so I figured I'll post it.\r\n\r\nWhen I'm trying to use the whisper model with batch size > 1 (for my example: 2) and word level timestamps, I get this error in the DTW method:\r\n\r\n```\r\n...\r\nFile \"/transformers/pipelines/base.py\", line 926, in device_placement\r\n yield\r\nFile \"/transformers/pipelines/base.py\", line 1046, in forward\r\n model_outputs = self._forward(model_inputs, **forward_params)\r\nFile \"/transformers/pipelines/automatic_speech_recognition.py\", line 572, in _forward\r\n tokens = self.model.generate(\r\nFile \"/transformers/models/whisper/modeling_whisper.py\", line 2273, in generate\r\n outputs[\"token_timestamps\"] = self._extract_token_timestamps(\r\nFile \"/transformers/models/whisper/modeling_whisper.py\", line 2624, in _extract_token_timestamps\r\n text_indices, time_indices = _dynamic_time_warping(-matrix.double().cpu().numpy())\r\nFile \"/transformers/models/whisper/modeling_whisper.py\", line 258, in _dynamic_time_warping\r\n output_length, input_length = matrix.shape\r\nValueError: too many values to unpack (expected 2)\r\n```\r\n\r\nThe `matrix` object's shape is `(6, 72, 1500)` and indeed it is expecting a shape of 2 so it fails. Do you have any idea what happened or how can I overcome it?\r\n\r\nThanks!\r\n\r\n### Update:\r\nPerhaps the fix might be here:\r\n\r\nline 2611 in modeling_whiaper.py:\r\n```\r\nif num_frames is not None and isinstance(num_frames, (tuple, list)):\r\n```\r\nshould be:\r\n```\r\nif num_frames is not None and isinstance(num_frames, (tuple, list, np.ndarray)):\r\n```\r\nas from what I saw it is a numpy array.\r\n\r\n### Update 2:\r\nThat seems to have fixed it. I'll open a PR and if you can, review it :)\r\nhttps://github.com/huggingface/transformers/pull/28226"
] | 1,702 | 1,703 | 1,703 | COLLABORATOR | null | # What does this PR do?
Supersedes #26699
This PR fixes two issues related to Whisper:
1. Wrong DTW matrix computation when computing word-level timestamps with beam search (issues #27362 and #28007)
2. Bug when computing word-level timestamps with bs>1 using the pipeline (issue #27446 and PR #26699)
The first issue happens because the DTW matrix is derived from the cross attentions. The latter is of size `beam_search*num_return_sequences*batch_size`, but it should be of size `num_return_sequences*batch_size`, so we need to keep track of the beam indices.
The second issue happens because, when batching with the pipeline, `stride` is passed as a list of tuples (one per sample) instead of a single tuple.
When there are multiple strides passed to `_extract_token_timestamps`, we can't compute the DTW matrix in parallel.
This is handled in two cases:
1. If the stride is the same for each sample, compute the DTW weights in parallel
2. If the strides differ (i.e. at the end of an audio file), compute them sequentially
The loss of parallelism is not so dramatic, since in all cases the DTW algorithm is performed sequentially.
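For illustration, a minimal sketch of the kind of call this PR is meant to make work (the audio file names are placeholders):

```python
from transformers import pipeline

# Sketch: word-level timestamps combined with batching and beam search,
# which is the configuration this PR fixes.
asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")
outputs = asr(
    ["sample_1.wav", "sample_2.wav"],   # placeholder audio files
    return_timestamps="word",
    batch_size=2,
    generate_kwargs={"num_beams": 2},
)
for out in outputs:
    print(out["chunks"])  # each chunk carries a word plus its (start, end) timestamp
```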
Fixes #27362, #28007, #27446
cc @sanchit-gandhi, @amyeroberts
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28114/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28114/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28114",
"html_url": "https://github.com/huggingface/transformers/pull/28114",
"diff_url": "https://github.com/huggingface/transformers/pull/28114.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28114.patch",
"merged_at": 1703248991000
} |
https://api.github.com/repos/huggingface/transformers/issues/28113 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28113/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28113/comments | https://api.github.com/repos/huggingface/transformers/issues/28113/events | https://github.com/huggingface/transformers/pull/28113 | 2,046,650,583 | PR_kwDOCUB6oc5iQ1Z6 | 28,113 | Remove warning if `DISABLE_TELEMETRY` is used | {
"login": "Wauplin",
"id": 11801849,
"node_id": "MDQ6VXNlcjExODAxODQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/11801849?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Wauplin",
"html_url": "https://github.com/Wauplin",
"followers_url": "https://api.github.com/users/Wauplin/followers",
"following_url": "https://api.github.com/users/Wauplin/following{/other_user}",
"gists_url": "https://api.github.com/users/Wauplin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Wauplin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Wauplin/subscriptions",
"organizations_url": "https://api.github.com/users/Wauplin/orgs",
"repos_url": "https://api.github.com/users/Wauplin/repos",
"events_url": "https://api.github.com/users/Wauplin/events{/privacy}",
"received_events_url": "https://api.github.com/users/Wauplin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28113). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Thanks for the quick review @amyeroberts!"
] | 1,702 | 1,702 | 1,702 | CONTRIBUTOR | null | In https://github.com/huggingface/transformers/issues/27564 I did some cleaning of the environment variables. I added a warning if `DISABLE_TELEMETRY` was set, to encourage using `HF_HUB_DISABLE_TELEMETRY` instead. However, this warning is not necessary for at least two reasons:
- `DISABLE_TELEMETRY` is already well understood and parsed by `huggingface_hub`. No need to handle it specifically in `transformers`. If in the future we want to deprecate it and/or handle it differently, everything would have to happen in `huggingface_hub` directly.
- Also, as highlighted in https://github.com/huggingface/huggingface_hub/issues/1917, keeping `DISABLE_TELEMETRY` in addition to our custom `HF_HUB_DISABLE_TELEMETRY` is also beneficial if this variable becomes a standard with other libraries. In any case, there is no benefit to not handling it.
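For completeness, a minimal sketch of how a user opts out either way (per the points above, both variables are read by `huggingface_hub`, so no custom handling is needed in `transformers`):

```python
import os

# Either variable disables telemetry; set it before importing the libraries.
os.environ["HF_HUB_DISABLE_TELEMETRY"] = "1"   # preferred name
# os.environ["DISABLE_TELEMETRY"] = "1"        # legacy name, also understood by huggingface_hub

import transformers  # noqa: E402  (imported after the environment variable is set)
```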
Therefore this PR removes the deprecation warning and lets `huggingface_hub` handle the environment variables by itself. It removes any custom code from `transformers` on this topic. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28113/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28113/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28113",
"html_url": "https://github.com/huggingface/transformers/pull/28113",
"diff_url": "https://github.com/huggingface/transformers/pull/28113.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28113.patch",
"merged_at": 1702912681000
} |
https://api.github.com/repos/huggingface/transformers/issues/28112 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28112/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28112/comments | https://api.github.com/repos/huggingface/transformers/issues/28112/events | https://github.com/huggingface/transformers/issues/28112 | 2,046,551,463 | I_kwDOCUB6oc55--Wn | 28,112 | Error pushing Mixtral fine-tune to hub | {
"login": "RonanKMcGovern",
"id": 78278410,
"node_id": "MDQ6VXNlcjc4Mjc4NDEw",
"avatar_url": "https://avatars.githubusercontent.com/u/78278410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RonanKMcGovern",
"html_url": "https://github.com/RonanKMcGovern",
"followers_url": "https://api.github.com/users/RonanKMcGovern/followers",
"following_url": "https://api.github.com/users/RonanKMcGovern/following{/other_user}",
"gists_url": "https://api.github.com/users/RonanKMcGovern/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RonanKMcGovern/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RonanKMcGovern/subscriptions",
"organizations_url": "https://api.github.com/users/RonanKMcGovern/orgs",
"repos_url": "https://api.github.com/users/RonanKMcGovern/repos",
"events_url": "https://api.github.com/users/RonanKMcGovern/events{/privacy}",
"received_events_url": "https://api.github.com/users/RonanKMcGovern/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @Narsil ",
"As the error mentions, `... StorageFull, message: \"No space left on device\" })` you don't have enough space on the device. Make sur to read the stacktrace it usuallu helps a lot 🤗 ",
"@ArthurZucker that's true - but it doesn't seem consistent with the space that @RonanKMcGovern is reporting is left on the device. If the model is the same size as [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1/tree/main), then we'd expect ~100GB required for the safetensor shards. Let's say it's 190GB if all of the currently used memory (600 * 0.31) is just the downloaded checkpoint. I'm not sure of the additional requirements PEFT applies - is it expected to increase the memory requirements of the model by more than 2x? \r\n",
"Yeah, that's what I found @amyeroberts . Originally I tried with only 200 GB of total disk space, so it seemed plausible I was out of space if I had saved another copy of the model, but with 600 GB of space, that seemed more than enough and yet I still got the error.\r\n\r\nBTW, the adapter is merged to the model, so I would have thought the PEFT portion is no longer relevant at that point.",
"@RonanKMcGovern Could you try with a small model that will fit many times into your memory and see whether you still see a large explosion in the memory footprint? ",
"Yes. I have run this script for many models like Llama 7B , openchat 3.5, deepseek models etc.\n\nIn this case I solved the issue with using upload files to hf.",
"> @ArthurZucker that's true - but it doesn't seem consistent with the space that @RonanKMcGovern is reporting is left on the device. If the model is the same size as [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1/tree/main), then we'd expect ~100GB required for the safetensor shards. Let's say it's 190GB if all of the currently used memory (600 * 0.31) is just the downloaded checkpoint. I'm not sure of the additional requirements PEFT applies - is it expected to increase the memory requirements of the model by more than 2x?\r\n\r\nDo you mean Nvidia's graphics memory or do you mean computer memory or disk storage?",
"Hi @zysNLP I meant disk storage.\r\n\r\nBut yes, perhaps that's the issue - insufficient space on the gpu to push. Although I did previously run LoRA bf16 fine-tuning and inference.",
"Closing this out, I haven't been able to test it, but I think the error is GPU out of memory, not disk - probably what Arthur was initially implying.\r\n\r\nI was on 2x A6000s so the VRAM is definitely close to the total Mixtral model size, and trying to push a loaded model may have been too much. I assume that was the issue, if I find different later, I'll reopen this.",
"Hmm, I having the same issue with DeepSeek Coder 33B now. Seems like a safetensors issue when serializing (I've tried to do this with the model on cpu and on gpu...):\r\n```\r\nSafetensorError Traceback (most recent call last)\r\nCell In[7], line 1\r\n----> 1 model.push_to_hub(new_model, token=True, max_shard_size=\"10GB\",safe_serialization=True)\r\n\r\nFile /usr/local/lib/python3.10/dist-packages/transformers/utils/hub.py:871, in PushToHubMixin.push_to_hub(self, repo_id, use_temp_dir, commit_message, private, token, max_shard_size, create_pr, safe_serialization, revision, commit_description, **deprecated_kwargs)\r\n 868 files_timestamps = self._get_files_timestamps(work_dir)\r\n 870 # Save all files.\r\n--> 871 self.save_pretrained(work_dir, max_shard_size=max_shard_size, safe_serialization=safe_serialization)\r\n 873 return self._upload_modified_files(\r\n 874 work_dir,\r\n 875 repo_id,\r\n (...)\r\n 881 commit_description=commit_description,\r\n 882 )\r\n\r\nFile /usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py:2376, in PreTrainedModel.save_pretrained(self, save_directory, is_main_process, state_dict, save_function, push_to_hub, max_shard_size, safe_serialization, variant, token, save_peft_format, **kwargs)\r\n 2372 for shard_file, shard in shards.items():\r\n 2373 if safe_serialization:\r\n 2374 # At some point we will need to deal better with save_function (used for TPU and other distributed\r\n 2375 # joyfulness), but for now this enough.\r\n-> 2376 safe_save_file(shard, os.path.join(save_directory, shard_file), metadata={\"format\": \"pt\"})\r\n 2377 else:\r\n 2378 save_function(shard, os.path.join(save_directory, shard_file))\r\n\r\nFile /usr/local/lib/python3.10/dist-packages/safetensors/torch.py:281, in save_file(tensors, filename, metadata)\r\n 250 def save_file(\r\n 251 tensors: Dict[str, torch.Tensor],\r\n 252 filename: Union[str, os.PathLike],\r\n 253 metadata: Optional[Dict[str, str]] = None,\r\n 254 ):\r\n 255 \"\"\"\r\n 256 Saves a dictionary of tensors into raw bytes in safetensors format.\r\n 257 \r\n (...)\r\n 279 ```\r\n 280 \"\"\"\r\n--> 281 serialize_file(_flatten(tensors), filename, metadata=metadata)\r\n\r\nSafetensorError: Error while serializing: IoError(Os { code: 28, kind: StorageFull, message: \"No space left on device\" })\r\n```\r\n\r\nI don't recall this issue before and I have pushed many models of this size.",
"@RonanKMcGovern Could you test with a small model and check the memory utilization? Does this still occur if `safe_serialization=False` in the push_to_hub call? ",
"Yes, working fine with smaller models, suggesting it is indeed out of memory somehow. I'll revert the next time I run a larger model again trying to push with safetensors false.",
"Ok, I found the issue. The shard size I had set for pushing to hub was larger than my docker container size... I set my container size to larger than the shard size to push and everything works. It just happens that for bigger models I was using bigger shards, hence why I had issues for larger models.\r\n\r\nThanks for the help. I didn't appreciate how this worked."
] | 1,702 | 1,705 | 1,705 | NONE | null | ### System Info
- `transformers` version: 4.36.1
- Platform: Linux-5.4.0-155-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.1
- Accelerate version: 0.25.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes, 2x A6000
- Using distributed or parallel set-up in script?:
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
model = AutoModelForCausalLM.from_pretrained(
base_model,
torch_dtype=torch.bfloat16,
device_map="auto",
attn_implementation="flash_attention_2",
cache_dir=cache_dir
)
# Apply an adapter:
from peft import PeftModel
model = PeftModel.from_pretrained(
model,
adapter_dir,
)
model = model.merge_and_unload() # merge adapters with the base model.
model.push_to_hub(new_model, token=True, max_shard_size="10GB",safe_serialization=True)
```
Leads to:
```
SafetensorError Traceback (most recent call last)
Cell In[20], line 1
----> 1 model.push_to_hub(new_model, token=True, max_shard_size="10GB",safe_serialization=True)
File /usr/local/lib/python3.10/dist-packages/transformers/utils/hub.py:871, in PushToHubMixin.push_to_hub(self, repo_id, use_temp_dir, commit_message, private, token, max_shard_size, create_pr, safe_serialization, revision, commit_description, **deprecated_kwargs)
868 files_timestamps = self._get_files_timestamps(work_dir)
870 # Save all files.
--> 871 self.save_pretrained(work_dir, max_shard_size=max_shard_size, safe_serialization=safe_serialization)
873 return self._upload_modified_files(
874 work_dir,
875 repo_id,
(...)
881 commit_description=commit_description,
882 )
File /usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py:2376, in PreTrainedModel.save_pretrained(self, save_directory, is_main_process, state_dict, save_function, push_to_hub, max_shard_size, safe_serialization, variant, token, save_peft_format, **kwargs)
2372 for shard_file, shard in shards.items():
2373 if safe_serialization:
2374 # At some point we will need to deal better with save_function (used for TPU and other distributed
2375 # joyfulness), but for now this enough.
-> 2376 safe_save_file(shard, os.path.join(save_directory, shard_file), metadata={"format": "pt"})
2377 else:
2378 save_function(shard, os.path.join(save_directory, shard_file))
File /usr/local/lib/python3.10/dist-packages/safetensors/torch.py:281, in save_file(tensors, filename, metadata)
250 def save_file(
251 tensors: Dict[str, torch.Tensor],
252 filename: Union[str, os.PathLike],
253 metadata: Optional[Dict[str, str]] = None,
254 ):
255 """
256 Saves a dictionary of tensors into raw bytes in safetensors format.
257
(...)
279 ```
280 """
--> 281 serialize_file(_flatten(tensors), filename, metadata=metadata)
SafetensorError: Error while serializing: IoError(Os { code: 28, kind: StorageFull, message: "No space left on device" })
```
Even though I'm only using 31% of 600 GB of disk space locally.
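A quick way to confirm how much space is actually visible to the process writing the shards (for example inside a container whose writable layer is smaller than the host disk) is to check the working directory with the standard library; this is only an illustrative check:
```python
import shutil

# push_to_hub first serializes the shards into a temporary working directory,
# so the free space that matters is what this process sees for that path.
usage = shutil.disk_usage(".")
print(f"free: {usage.free / 1e9:.1f} GB of {usage.total / 1e9:.1f} GB")
```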
### Expected behavior
Typically, safetensors push successfully. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28112/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28112/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28111 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28111/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28111/comments | https://api.github.com/repos/huggingface/transformers/issues/28111/events | https://github.com/huggingface/transformers/issues/28111 | 2,046,459,318 | I_kwDOCUB6oc55-n22 | 28,111 | Facing issues when trying to fine-tune T5 | {
"login": "wolfassi123",
"id": 82727504,
"node_id": "MDQ6VXNlcjgyNzI3NTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/82727504?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wolfassi123",
"html_url": "https://github.com/wolfassi123",
"followers_url": "https://api.github.com/users/wolfassi123/followers",
"following_url": "https://api.github.com/users/wolfassi123/following{/other_user}",
"gists_url": "https://api.github.com/users/wolfassi123/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wolfassi123/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wolfassi123/subscriptions",
"organizations_url": "https://api.github.com/users/wolfassi123/orgs",
"repos_url": "https://api.github.com/users/wolfassi123/repos",
"events_url": "https://api.github.com/users/wolfassi123/events{/privacy}",
"received_events_url": "https://api.github.com/users/wolfassi123/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey, can you share the exact traceback to debug this? ",
"> Hey, can you share the exact traceback to debug this?\r\n\r\nSure thing!\r\n```\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n[<ipython-input-17-cb38b76c7066>](https://localhost:8080/#) in <cell line: 24>()\r\n 22 )\r\n 23 \r\n---> 24 trainer.train()\r\n\r\n12 frames\r\n[/usr/local/lib/python3.10/dist-packages/transformers/trainer.py](https://localhost:8080/#) in train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)\r\n 1553 hf_hub_utils.enable_progress_bars()\r\n 1554 else:\r\n-> 1555 return inner_training_loop(\r\n 1556 args=args,\r\n 1557 resume_from_checkpoint=resume_from_checkpoint,\r\n\r\n[/usr/local/lib/python3.10/dist-packages/transformers/trainer.py](https://localhost:8080/#) in _inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval)\r\n 1858 \r\n 1859 with self.accelerator.accumulate(model):\r\n-> 1860 tr_loss_step = self.training_step(model, inputs)\r\n 1861 \r\n 1862 if (\r\n\r\n[/usr/local/lib/python3.10/dist-packages/transformers/trainer.py](https://localhost:8080/#) in training_step(self, model, inputs)\r\n 2723 \r\n 2724 with self.compute_loss_context_manager():\r\n-> 2725 loss = self.compute_loss(model, inputs)\r\n 2726 \r\n 2727 if self.args.n_gpu > 1:\r\n\r\n[/usr/local/lib/python3.10/dist-packages/transformers/trainer.py](https://localhost:8080/#) in compute_loss(self, model, inputs, return_outputs)\r\n 2746 else:\r\n 2747 labels = None\r\n-> 2748 outputs = model(**inputs)\r\n 2749 # Save past state if it exists\r\n 2750 # TODO: this needs to be fixed and made cleaner later.\r\n\r\n[/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _wrapped_call_impl(self, *args, **kwargs)\r\n 1516 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]\r\n 1517 else:\r\n-> 1518 return self._call_impl(*args, **kwargs)\r\n 1519 \r\n 1520 def _call_impl(self, *args, **kwargs):\r\n\r\n[/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _call_impl(self, *args, **kwargs)\r\n 1525 or _global_backward_pre_hooks or _global_backward_hooks\r\n 1526 or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1527 return forward_call(*args, **kwargs)\r\n 1528 \r\n 1529 try:\r\n\r\n[/usr/local/lib/python3.10/dist-packages/accelerate/utils/operations.py](https://localhost:8080/#) in forward(*args, **kwargs)\r\n 678 \r\n 679 def forward(*args, **kwargs):\r\n--> 680 return model_forward(*args, **kwargs)\r\n 681 \r\n 682 # To act like a decorator so that it can be popped when doing `extract_model_from_parallel`\r\n\r\n[/usr/local/lib/python3.10/dist-packages/accelerate/utils/operations.py](https://localhost:8080/#) in __call__(self, *args, **kwargs)\r\n 666 \r\n 667 def __call__(self, *args, **kwargs):\r\n--> 668 return convert_to_fp32(self.model_forward(*args, **kwargs))\r\n 669 \r\n 670 def __getstate__(self):\r\n\r\n[/usr/local/lib/python3.10/dist-packages/torch/amp/autocast_mode.py](https://localhost:8080/#) in decorate_autocast(*args, **kwargs)\r\n 14 def decorate_autocast(*args, **kwargs):\r\n 15 with autocast_instance:\r\n---> 16 return func(*args, **kwargs)\r\n 17 \r\n 18 decorate_autocast.__script_unsupported = \"@autocast() decorator is not supported in script mode\" # type: ignore[attr-defined]\r\n\r\n[/usr/local/lib/python3.10/dist-packages/transformers/models/t5/modeling_t5.py](https://localhost:8080/#) in forward(self, input_ids, 
attention_mask, decoder_input_ids, decoder_attention_mask, head_mask, decoder_head_mask, cross_attn_head_mask, encoder_outputs, past_key_values, inputs_embeds, decoder_inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict)\r\n 1707 if encoder_outputs is None:\r\n 1708 # Convert encoder inputs in embeddings if needed\r\n-> 1709 encoder_outputs = self.encoder(\r\n 1710 input_ids=input_ids,\r\n 1711 attention_mask=attention_mask,\r\n\r\n[/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _wrapped_call_impl(self, *args, **kwargs)\r\n 1516 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]\r\n 1517 else:\r\n-> 1518 return self._call_impl(*args, **kwargs)\r\n 1519 \r\n 1520 def _call_impl(self, *args, **kwargs):\r\n\r\n[/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _call_impl(self, *args, **kwargs)\r\n 1525 or _global_backward_pre_hooks or _global_backward_hooks\r\n 1526 or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1527 return forward_call(*args, **kwargs)\r\n 1528 \r\n 1529 try:\r\n\r\n[/usr/local/lib/python3.10/dist-packages/transformers/models/t5/modeling_t5.py](https://localhost:8080/#) in forward(self, input_ids, attention_mask, encoder_hidden_states, encoder_attention_mask, inputs_embeds, head_mask, cross_attn_head_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict)\r\n 1016 inputs_embeds = self.embed_tokens(input_ids)\r\n 1017 \r\n-> 1018 batch_size, seq_length = input_shape\r\n 1019 \r\n 1020 # required mask seq length can be calculated via length of past\r\n\r\nValueError: too many values to unpack (expected 2)\r\n```",
"I placed a breakpoint in your code, there is an issue with the inputs:\r\n```python \r\ninputs[\"input_ids\"].shape\r\ntorch.Size([16, 1, 512])\r\n```\r\nthere is an extra dimension which probably comes from the way the dataset is processed / the data collator! ",
"The following code fixed it:\r\n```python \r\ndef preprocess_function(examples):\r\n combined_input = examples[\"Question\"] + \": \" + examples[\"true_contexts\"]\r\n model_inputs = tokenizer(combined_input, max_length=512, padding=\"max_length\", truncation=True)\r\n\r\n labels = tokenizer(text_target=examples[\"Rewrite\"], max_length=512, padding=\"max_length\", truncation=True)\r\n\r\n model_inputs[\"labels\"] = labels[\"input_ids\"]\r\n return model_inputs\r\n```",
"> > Hey, can you share the exact traceback to debug this?\r\n> \r\n> Sure thing!\r\n> \r\n> ```\r\n> ---------------------------------------------------------------------------\r\n> ValueError Traceback (most recent call last)\r\n> [<ipython-input-17-cb38b76c7066>](https://localhost:8080/#) in <cell line: 24>()\r\n> 22 )\r\n> 23 \r\n> ---> 24 trainer.train()\r\n> \r\n> 12 frames\r\n> [/usr/local/lib/python3.10/dist-packages/transformers/trainer.py](https://localhost:8080/#) in train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)\r\n> 1553 hf_hub_utils.enable_progress_bars()\r\n> 1554 else:\r\n> -> 1555 return inner_training_loop(\r\n> 1556 args=args,\r\n> 1557 resume_from_checkpoint=resume_from_checkpoint,\r\n> \r\n> [/usr/local/lib/python3.10/dist-packages/transformers/trainer.py](https://localhost:8080/#) in _inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval)\r\n> 1858 \r\n> 1859 with self.accelerator.accumulate(model):\r\n> -> 1860 tr_loss_step = self.training_step(model, inputs)\r\n> 1861 \r\n> 1862 if (\r\n> \r\n> [/usr/local/lib/python3.10/dist-packages/transformers/trainer.py](https://localhost:8080/#) in training_step(self, model, inputs)\r\n> 2723 \r\n> 2724 with self.compute_loss_context_manager():\r\n> -> 2725 loss = self.compute_loss(model, inputs)\r\n> 2726 \r\n> 2727 if self.args.n_gpu > 1:\r\n> \r\n> [/usr/local/lib/python3.10/dist-packages/transformers/trainer.py](https://localhost:8080/#) in compute_loss(self, model, inputs, return_outputs)\r\n> 2746 else:\r\n> 2747 labels = None\r\n> -> 2748 outputs = model(**inputs)\r\n> 2749 # Save past state if it exists\r\n> 2750 # TODO: this needs to be fixed and made cleaner later.\r\n> \r\n> [/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _wrapped_call_impl(self, *args, **kwargs)\r\n> 1516 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]\r\n> 1517 else:\r\n> -> 1518 return self._call_impl(*args, **kwargs)\r\n> 1519 \r\n> 1520 def _call_impl(self, *args, **kwargs):\r\n> \r\n> [/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _call_impl(self, *args, **kwargs)\r\n> 1525 or _global_backward_pre_hooks or _global_backward_hooks\r\n> 1526 or _global_forward_hooks or _global_forward_pre_hooks):\r\n> -> 1527 return forward_call(*args, **kwargs)\r\n> 1528 \r\n> 1529 try:\r\n> \r\n> [/usr/local/lib/python3.10/dist-packages/accelerate/utils/operations.py](https://localhost:8080/#) in forward(*args, **kwargs)\r\n> 678 \r\n> 679 def forward(*args, **kwargs):\r\n> --> 680 return model_forward(*args, **kwargs)\r\n> 681 \r\n> 682 # To act like a decorator so that it can be popped when doing `extract_model_from_parallel`\r\n> \r\n> [/usr/local/lib/python3.10/dist-packages/accelerate/utils/operations.py](https://localhost:8080/#) in __call__(self, *args, **kwargs)\r\n> 666 \r\n> 667 def __call__(self, *args, **kwargs):\r\n> --> 668 return convert_to_fp32(self.model_forward(*args, **kwargs))\r\n> 669 \r\n> 670 def __getstate__(self):\r\n> \r\n> [/usr/local/lib/python3.10/dist-packages/torch/amp/autocast_mode.py](https://localhost:8080/#) in decorate_autocast(*args, **kwargs)\r\n> 14 def decorate_autocast(*args, **kwargs):\r\n> 15 with autocast_instance:\r\n> ---> 16 return func(*args, **kwargs)\r\n> 17 \r\n> 18 decorate_autocast.__script_unsupported = \"@autocast() decorator is not supported in script mode\" # type: ignore[attr-defined]\r\n> \r\n> 
[/usr/local/lib/python3.10/dist-packages/transformers/models/t5/modeling_t5.py](https://localhost:8080/#) in forward(self, input_ids, attention_mask, decoder_input_ids, decoder_attention_mask, head_mask, decoder_head_mask, cross_attn_head_mask, encoder_outputs, past_key_values, inputs_embeds, decoder_inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict)\r\n> 1707 if encoder_outputs is None:\r\n> 1708 # Convert encoder inputs in embeddings if needed\r\n> -> 1709 encoder_outputs = self.encoder(\r\n> 1710 input_ids=input_ids,\r\n> 1711 attention_mask=attention_mask,\r\n> \r\n> [/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _wrapped_call_impl(self, *args, **kwargs)\r\n> 1516 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]\r\n> 1517 else:\r\n> -> 1518 return self._call_impl(*args, **kwargs)\r\n> 1519 \r\n> 1520 def _call_impl(self, *args, **kwargs):\r\n> \r\n> [/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _call_impl(self, *args, **kwargs)\r\n> 1525 or _global_backward_pre_hooks or _global_backward_hooks\r\n> 1526 or _global_forward_hooks or _global_forward_pre_hooks):\r\n> -> 1527 return forward_call(*args, **kwargs)\r\n> 1528 \r\n> 1529 try:\r\n> \r\n> [/usr/local/lib/python3.10/dist-packages/transformers/models/t5/modeling_t5.py](https://localhost:8080/#) in forward(self, input_ids, attention_mask, encoder_hidden_states, encoder_attention_mask, inputs_embeds, head_mask, cross_attn_head_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict)\r\n> 1016 inputs_embeds = self.embed_tokens(input_ids)\r\n> 1017 \r\n> -> 1018 batch_size, seq_length = input_shape\r\n> 1019 \r\n> 1020 # required mask seq length can be calculated via length of past\r\n> \r\n> ValueError: too many values to unpack (expected 2)\r\n> ```\r\n\r\nHi Wolfassi, actually I'm writing to talk to you about your work in Arabic OCR. I've been trying to do some Arabic OCR but not I can only get about 95% accuracy rate. Have you been able to do any better than that and if so, how?",
"> > > Hey, can you share the exact traceback to debug this?\r\n> > \r\n> > \r\n> > Sure thing!\r\n> > ```\r\n> > ---------------------------------------------------------------------------\r\n> > ValueError Traceback (most recent call last)\r\n> > [<ipython-input-17-cb38b76c7066>](https://localhost:8080/#) in <cell line: 24>()\r\n> > 22 )\r\n> > 23 \r\n> > ---> 24 trainer.train()\r\n> > \r\n> > 12 frames\r\n> > [/usr/local/lib/python3.10/dist-packages/transformers/trainer.py](https://localhost:8080/#) in train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)\r\n> > 1553 hf_hub_utils.enable_progress_bars()\r\n> > 1554 else:\r\n> > -> 1555 return inner_training_loop(\r\n> > 1556 args=args,\r\n> > 1557 resume_from_checkpoint=resume_from_checkpoint,\r\n> > \r\n> > [/usr/local/lib/python3.10/dist-packages/transformers/trainer.py](https://localhost:8080/#) in _inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval)\r\n> > 1858 \r\n> > 1859 with self.accelerator.accumulate(model):\r\n> > -> 1860 tr_loss_step = self.training_step(model, inputs)\r\n> > 1861 \r\n> > 1862 if (\r\n> > \r\n> > [/usr/local/lib/python3.10/dist-packages/transformers/trainer.py](https://localhost:8080/#) in training_step(self, model, inputs)\r\n> > 2723 \r\n> > 2724 with self.compute_loss_context_manager():\r\n> > -> 2725 loss = self.compute_loss(model, inputs)\r\n> > 2726 \r\n> > 2727 if self.args.n_gpu > 1:\r\n> > \r\n> > [/usr/local/lib/python3.10/dist-packages/transformers/trainer.py](https://localhost:8080/#) in compute_loss(self, model, inputs, return_outputs)\r\n> > 2746 else:\r\n> > 2747 labels = None\r\n> > -> 2748 outputs = model(**inputs)\r\n> > 2749 # Save past state if it exists\r\n> > 2750 # TODO: this needs to be fixed and made cleaner later.\r\n> > \r\n> > [/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _wrapped_call_impl(self, *args, **kwargs)\r\n> > 1516 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]\r\n> > 1517 else:\r\n> > -> 1518 return self._call_impl(*args, **kwargs)\r\n> > 1519 \r\n> > 1520 def _call_impl(self, *args, **kwargs):\r\n> > \r\n> > [/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _call_impl(self, *args, **kwargs)\r\n> > 1525 or _global_backward_pre_hooks or _global_backward_hooks\r\n> > 1526 or _global_forward_hooks or _global_forward_pre_hooks):\r\n> > -> 1527 return forward_call(*args, **kwargs)\r\n> > 1528 \r\n> > 1529 try:\r\n> > \r\n> > [/usr/local/lib/python3.10/dist-packages/accelerate/utils/operations.py](https://localhost:8080/#) in forward(*args, **kwargs)\r\n> > 678 \r\n> > 679 def forward(*args, **kwargs):\r\n> > --> 680 return model_forward(*args, **kwargs)\r\n> > 681 \r\n> > 682 # To act like a decorator so that it can be popped when doing `extract_model_from_parallel`\r\n> > \r\n> > [/usr/local/lib/python3.10/dist-packages/accelerate/utils/operations.py](https://localhost:8080/#) in __call__(self, *args, **kwargs)\r\n> > 666 \r\n> > 667 def __call__(self, *args, **kwargs):\r\n> > --> 668 return convert_to_fp32(self.model_forward(*args, **kwargs))\r\n> > 669 \r\n> > 670 def __getstate__(self):\r\n> > \r\n> > [/usr/local/lib/python3.10/dist-packages/torch/amp/autocast_mode.py](https://localhost:8080/#) in decorate_autocast(*args, **kwargs)\r\n> > 14 def decorate_autocast(*args, **kwargs):\r\n> > 15 with autocast_instance:\r\n> > ---> 16 return func(*args, **kwargs)\r\n> > 17 \r\n> > 18 
decorate_autocast.__script_unsupported = \"@autocast() decorator is not supported in script mode\" # type: ignore[attr-defined]\r\n> > \r\n> > [/usr/local/lib/python3.10/dist-packages/transformers/models/t5/modeling_t5.py](https://localhost:8080/#) in forward(self, input_ids, attention_mask, decoder_input_ids, decoder_attention_mask, head_mask, decoder_head_mask, cross_attn_head_mask, encoder_outputs, past_key_values, inputs_embeds, decoder_inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict)\r\n> > 1707 if encoder_outputs is None:\r\n> > 1708 # Convert encoder inputs in embeddings if needed\r\n> > -> 1709 encoder_outputs = self.encoder(\r\n> > 1710 input_ids=input_ids,\r\n> > 1711 attention_mask=attention_mask,\r\n> > \r\n> > [/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _wrapped_call_impl(self, *args, **kwargs)\r\n> > 1516 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]\r\n> > 1517 else:\r\n> > -> 1518 return self._call_impl(*args, **kwargs)\r\n> > 1519 \r\n> > 1520 def _call_impl(self, *args, **kwargs):\r\n> > \r\n> > [/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _call_impl(self, *args, **kwargs)\r\n> > 1525 or _global_backward_pre_hooks or _global_backward_hooks\r\n> > 1526 or _global_forward_hooks or _global_forward_pre_hooks):\r\n> > -> 1527 return forward_call(*args, **kwargs)\r\n> > 1528 \r\n> > 1529 try:\r\n> > \r\n> > [/usr/local/lib/python3.10/dist-packages/transformers/models/t5/modeling_t5.py](https://localhost:8080/#) in forward(self, input_ids, attention_mask, encoder_hidden_states, encoder_attention_mask, inputs_embeds, head_mask, cross_attn_head_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict)\r\n> > 1016 inputs_embeds = self.embed_tokens(input_ids)\r\n> > 1017 \r\n> > -> 1018 batch_size, seq_length = input_shape\r\n> > 1019 \r\n> > 1020 # required mask seq length can be calculated via length of past\r\n> > \r\n> > ValueError: too many values to unpack (expected 2)\r\n> > ```\r\n> \r\n> Hi Wolfassi, actually I'm writing to talk to you about your work in Arabic OCR. I've been trying to do some Arabic OCR but not I can only get about 95% accuracy rate. Have you been able to do any better than that and if so, how?\r\n\r\nHello there. Yes I have previously worked on Arabic OCR and no to be honest I did not achieve that high of an accuracy. I believe an accuracy of 95% is just too high to target. I tested using both EasyOCR and Tesseract. Tesseract seemed to perform the best after you finetune the model for the specific font you are using. I would also suggest trying out Paddle Paddle."
] | 1,702 | 1,704 | 1,704 | NONE | null | ### System Info
- `transformers` version: 4.35.2
- Platform: Linux-6.1.58+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.1
- Accelerate version: 0.25.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0+cu121 (True)
- Tensorflow version (GPU?): 2.15.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.7.5 (gpu)
- Jax version: 0.4.20
- JaxLib version: 0.4.20
- Using GPU in script?: T4
- Using distributed or parallel set-up in script?: No
### Who can help?
@ArthurZucker @youne
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I am trying to fine-tune a T5-base model but have been facing issues despite following the step-by-step guide found on the Hugging Face hub [here](https://huggingface.co/docs/transformers/tasks/translation).
So far this is my code:
`transformers.logging.set_verbosity_error()`
```python
from datasets import load_dataset
canard_train_augm = load_dataset("gaussalgo/Canard_Wiki-augmented", split="train")
canard_test_augm = load_dataset("gaussalgo/Canard_Wiki-augmented", split="test")
from transformers import AutoTokenizer
model_name = "t5-small"
tokenizer = AutoTokenizer.from_pretrained(model_name)
def preprocess_function(examples):
combined_input = examples["Question"] + ": " + examples["true_contexts"]
return tokenizer(combined_input, examples["Rewrite"],max_length=512, padding="max_length", truncation=True, return_tensors="pt")
tokenized_train = canard_train_augm.map(preprocess_function)
tokenized_test = canard_test_augm.map(preprocess_function)
from transformers import DataCollatorForSeq2Seq
data_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=model_name)
import evaluate
metric = evaluate.load("sacrebleu")
import numpy as np
def postprocess_text(preds, labels):
preds = [pred.strip() for pred in preds]
labels = [[label.strip()] for label in labels]
return preds, labels
def compute_metrics(eval_preds):
preds, labels = eval_preds
if isinstance(preds, tuple):
preds = preds[0]
decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
decoded_preds, decoded_labels = postprocess_text(decoded_preds, decoded_labels)
result = metric.compute(predictions=decoded_preds, references=decoded_labels)
result = {"bleu": result["score"]}
prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in preds]
result["gen_len"] = np.mean(prediction_lens)
result = {k: round(v, 4) for k, v in result.items()}
return result
from transformers import AutoModelForSeq2SeqLM, Seq2SeqTrainingArguments, Seq2SeqTrainer
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
training_args = Seq2SeqTrainingArguments(
output_dir="wtf",
evaluation_strategy="epoch",
learning_rate=2e-5,
per_device_train_batch_size=8,
per_device_eval_batch_size=8,
weight_decay=0.01,
save_total_limit=3,
num_train_epochs=2,
predict_with_generate=True,
fp16=True,
)
trainer = Seq2SeqTrainer(
model=model,
args=training_args,
train_dataset=tokenized_train,
eval_dataset=tokenized_test,
tokenizer=tokenizer,
data_collator=data_collator,
compute_metrics=compute_metrics,
)
trainer.train()
```
I tried several examples, including my own customized Trainer class, but always ended up with the same issue, even when I used the exact code from the step-by-step guide provided by Hugging Face.
The error happens when calling the `trainer.train()` returning the following:
`ValueError: too many values to unpack (expected 2)`
I followed the exact same format as the documentation. I believe something goes wrong when the loss is computed, but I was unable to put my finger on it; any help would be greatly appreciated.
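One way to narrow this down is to inspect a single mapped example before training; this is an illustrative check:
```python
import torch

example = tokenized_train[0]
# If this prints a 2-D shape such as [1, 512] instead of [512], the extra leading
# dimension (from calling the tokenizer with return_tensors="pt" inside a
# non-batched map) is what later surfaces as a 3-D input_ids batch in the model.
print(torch.tensor(example["input_ids"]).shape)
```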
### Expected behavior
Being able to fine-tune the T5 model on the above dataset by identifying and eliminating the cause of the error. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28111/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28111/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28110 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28110/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28110/comments | https://api.github.com/repos/huggingface/transformers/issues/28110/events | https://github.com/huggingface/transformers/pull/28110 | 2,046,371,146 | PR_kwDOCUB6oc5iP3qg | 28,110 | Spelling correction | {
"login": "saeneas",
"id": 47715864,
"node_id": "MDQ6VXNlcjQ3NzE1ODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/47715864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/saeneas",
"html_url": "https://github.com/saeneas",
"followers_url": "https://api.github.com/users/saeneas/followers",
"following_url": "https://api.github.com/users/saeneas/following{/other_user}",
"gists_url": "https://api.github.com/users/saeneas/gists{/gist_id}",
"starred_url": "https://api.github.com/users/saeneas/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/saeneas/subscriptions",
"organizations_url": "https://api.github.com/users/saeneas/orgs",
"repos_url": "https://api.github.com/users/saeneas/repos",
"events_url": "https://api.github.com/users/saeneas/events{/privacy}",
"received_events_url": "https://api.github.com/users/saeneas/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28110). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,702 | 1,702 | 1,702 | CONTRIBUTOR | null | correct minor typo in overview
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28110/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28110/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28110",
"html_url": "https://github.com/huggingface/transformers/pull/28110",
"diff_url": "https://github.com/huggingface/transformers/pull/28110.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28110.patch",
"merged_at": 1702908245000
} |
https://api.github.com/repos/huggingface/transformers/issues/28109 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28109/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28109/comments | https://api.github.com/repos/huggingface/transformers/issues/28109/events | https://github.com/huggingface/transformers/issues/28109 | 2,046,259,585 | I_kwDOCUB6oc5593GB | 28,109 | remove unnecessary backend related checks in training_args.py | {
"login": "kevint324",
"id": 8800468,
"node_id": "MDQ6VXNlcjg4MDA0Njg=",
"avatar_url": "https://avatars.githubusercontent.com/u/8800468?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kevint324",
"html_url": "https://github.com/kevint324",
"followers_url": "https://api.github.com/users/kevint324/followers",
"following_url": "https://api.github.com/users/kevint324/following{/other_user}",
"gists_url": "https://api.github.com/users/kevint324/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kevint324/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kevint324/subscriptions",
"organizations_url": "https://api.github.com/users/kevint324/orgs",
"repos_url": "https://api.github.com/users/kevint324/repos",
"events_url": "https://api.github.com/users/kevint324/events{/privacy}",
"received_events_url": "https://api.github.com/users/kevint324/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | {
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
] | [
"cc @muellerzr @pacman100 ",
"Happy new year! Any update?",
"Completely makes sense. For example M1 does not support certain dtypes, but M2 now supports some of them so it doesn't make sense to have the above assumptions.",
"any updates?\r\n"
] | 1,702 | 1,704 | null | NONE | null | ### Feature request
[Here](https://github.com/huggingface/transformers/blob/main/src/transformers/training_args.py#L1490-L1519)
IMO these checks in transformers should be removed.
```
if (
self.framework == "pt"
and is_torch_available()
and (self.device.type != "cuda")
and (self.device.type != "npu")
and (self.device.type != "xpu")
and (get_xla_device_type(self.device) != "GPU")
and (self.fp16 or self.fp16_full_eval)
):
raise ValueError(
"FP16 Mixed precision training with AMP or APEX (`--fp16`) and FP16 half precision evaluation"
" (`--fp16_full_eval`) can only be used on CUDA or NPU devices or certain XPU devices (with IPEX)."
)
if (
self.framework == "pt"
and is_torch_available()
and (self.device.type != "cuda")
and (self.device.type != "npu")
and (self.device.type != "xpu")
and (get_xla_device_type(self.device) != "GPU")
and (get_xla_device_type(self.device) != "TPU")
and (self.device.type != "cpu")
and (self.bf16 or self.bf16_full_eval)
):
raise ValueError(
"BF16 Mixed precision training with AMP (`--bf16`) and BF16 half precision evaluation"
" (`--bf16_full_eval`) can only be used on CUDA, XPU (with IPEX), NPU or CPU/TPU/NeuronCore devices."
)
```
### Motivation
To make things work, each vendor needs to extend this `if` by adding another line of ` and (self.device.type != "my_precious_chip")`.
It bloats the code in transformers.
And I don't really think it's transformers' job to determine capability for backends. Just pass the parameters through and let the backend itself determine whether it can handle the dtype; backends should have enough means to report an error.
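One possible direction is a small capability probe that asks the backend directly instead of hard-coding device types; the helper below is purely illustrative and does not exist in `transformers`:
```python
import torch

def backend_supports_dtype(device: torch.device, dtype: torch.dtype) -> bool:
    # Let the backend decide: run a tiny op in the requested dtype and report the outcome.
    try:
        _ = torch.zeros(1, dtype=dtype, device=device) + 1
        return True
    except Exception:
        return False

# e.g. backend_supports_dtype(torch.device("cuda:0"), torch.bfloat16)
```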
### Your contribution
I'm glad to delete them if approved : -p | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28109/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28109/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28108 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28108/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28108/comments | https://api.github.com/repos/huggingface/transformers/issues/28108/events | https://github.com/huggingface/transformers/pull/28108 | 2,046,253,318 | PR_kwDOCUB6oc5iPeIf | 28,108 | Avoid unnecessary warnings when loading `CLIPConfig` | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28108). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
">Hard-coding values like this is brittle\r\n\r\nThis is fine IMO for this situation\r\n\r\n> The intention of the user is ambiguous. If I deliberately set \"bos_token_id\": 0 in the config - then I still would have unexpected behaviour but the warning wouldn't show.\r\n\r\nVery much agreed for this!\r\n\r\nSince we don't have a perfect solution to this issue caused by the original design, and 2 core maintainers accept change it to `info` --> Let's do this. (At least, we have this handling block, so it's easy to figure thing out when someone get trapped by [this issue](https://github.com/huggingface/transformers/pull/19954#issuecomment-1295182328))",
"> Do we know if users ever modify the log verbosity for debugging purposes? i.e. would it be enough to set everything to info?\r\n\r\nThe ratio of this is quite low I believe.",
"Changed to `info`. There are are config classes in the clip family like `altclip`, `chineseclip` etc. Will change them too before merge unless there is other opinions."
] | 1,702 | 1,703 | 1,703 | COLLABORATOR | null | # What does this PR do?
Avoid unnecessary warnings when loading `CLIPConfig`: when a user doesn't change something inside `text_config`.
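A minimal illustration of the two cases (sketch based on the linked issue; the exact warning text is not reproduced here):
```python
from transformers import CLIPConfig

# Default sub-configs: creating/loading this should not warn about text_config values.
config = CLIPConfig()

# Explicitly overriding a text_config value is the case where a message remains useful.
config_override = CLIPConfig(text_config={"num_hidden_layers": 6})
```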
Fix #28042 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28108/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28108/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28108",
"html_url": "https://github.com/huggingface/transformers/pull/28108",
"diff_url": "https://github.com/huggingface/transformers/pull/28108.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28108.patch",
"merged_at": 1703089494000
} |
https://api.github.com/repos/huggingface/transformers/issues/28107 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28107/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28107/comments | https://api.github.com/repos/huggingface/transformers/issues/28107/events | https://github.com/huggingface/transformers/pull/28107 | 2,046,130,822 | PR_kwDOCUB6oc5iPFL0 | 28,107 | [`Llava` / `Vip-Llava`] Add SDPA into llava | {
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28107). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,702 | 1,702 | 1,702 | CONTRIBUTOR | null | # What does this PR do?
As per title, adds SDPA into Llava-family
This makes generation faster through torch sdpa for llava-like models
Also closes: https://huggingface.co/llava-hf/llava-1.5-7b-hf/discussions/9
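Usage sketch (assumes a recent `transformers` release with Llava support and a CUDA device; the checkpoint is the one referenced above):
```python
import torch
from transformers import LlavaForConditionalGeneration

model = LlavaForConditionalGeneration.from_pretrained(
    "llava-hf/llava-1.5-7b-hf",
    torch_dtype=torch.float16,
    attn_implementation="sdpa",  # use the PyTorch scaled_dot_product_attention path
)
```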
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28107/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28107/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28107",
"html_url": "https://github.com/huggingface/transformers/pull/28107",
"diff_url": "https://github.com/huggingface/transformers/pull/28107.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28107.patch",
"merged_at": 1702903590000
} |
https://api.github.com/repos/huggingface/transformers/issues/28106 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28106/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28106/comments | https://api.github.com/repos/huggingface/transformers/issues/28106/events | https://github.com/huggingface/transformers/issues/28106 | 2,046,055,139 | I_kwDOCUB6oc559FLj | 28,106 | Explicit option to disable deepspeed when loading a model | {
"login": "chiragjn",
"id": 10295418,
"node_id": "MDQ6VXNlcjEwMjk1NDE4",
"avatar_url": "https://avatars.githubusercontent.com/u/10295418?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chiragjn",
"html_url": "https://github.com/chiragjn",
"followers_url": "https://api.github.com/users/chiragjn/followers",
"following_url": "https://api.github.com/users/chiragjn/following{/other_user}",
"gists_url": "https://api.github.com/users/chiragjn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chiragjn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chiragjn/subscriptions",
"organizations_url": "https://api.github.com/users/chiragjn/orgs",
"repos_url": "https://api.github.com/users/chiragjn/repos",
"events_url": "https://api.github.com/users/chiragjn/events{/privacy}",
"received_events_url": "https://api.github.com/users/chiragjn/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @pacman100 ",
"Hello @chiragjn,\r\n\r\nCan you try to do the below and let us know if that solves this issue as we already have the context manager `zero3_init_context_manager` which controls the zero init:\r\n```\r\ndef main():\r\n trainer_args = TrainingArguments(<fill this>)\r\n with trainer_args.deepspeed_plugin.zero3_init_context_manager(enable=False):\r\n # Check if model can fit just with gpus\r\n config = AutoConfig.from_pretrained(model_id, trust_remote_code=True)\r\n with init_empty_weights():\r\n model = AutoModelForCausalLM.from_config(config, trust_remote_code=True)\r\n device_map = infer_auto_device_map(model, dtype=torch.bfloat16)\r\n logger.info(f\"Inferred device_map for auto settings: {device_map}\")\r\n if any(not isinstance(v, int) for v in device_map.values()):\r\n raise RuntimeError(...)\r\n```\r\n\r\n",
"Ah nice to know this exists, I just checked, and it seems like my problem still occurs and is not just zero init related.\r\nBecause `is_deepspeed_zero3_enabled` does not care about zero init enabled or not.\r\n\r\nI was able to work around my issue, thanks for pointing me in the right direction \r\n\r\n```python\r\nfrom transformers.integrations.deepspeed import (\r\n is_deepspeed_zero3_enabled,\r\n set_hf_deepspeed_config,\r\n unset_hf_deepspeed_config,\r\n)\r\n\r\[email protected]\r\ndef temporarily_disable_deepspeed_zero3(training_arguments: TrainingArguments):\r\n if training_arguments.deepspeed and is_deepspeed_zero3_enabled():\r\n unset_hf_deepspeed_config()\r\n yield\r\n set_hf_deepspeed_config(training_arguments.hf_deepspeed_config)\r\n else:\r\n yield\r\n```\r\n\r\nNote for readers: This ^ works only for `accelerate launch script.py --deepspeed ...`\r\nIf you use `accelerate launch --deepspeed_config_file ... script.py ...` then the handling has to be a little bit different\r\n\r\n`set_hf_deepspeed_config(training_arguments.hf_deepspeed_config)` would change to `set_hf_deepspeed_config(training_arguments.deepspeed_plugin.dschf)`\r\n\r\n---\r\n\r\nIt would be nice to have something like this in the library\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,702 | 1,706 | 1,706 | NONE | null | ### Feature request
Option to disable deepspeed explicitly on a per-model basis
### Motivation
So I have a little bit of an odd setup.
In my QLoRA/LoRA fine-tuning script, I launch with `accelerate launch --mixed_precision bf16 --use_deepspeed train.py --deepspeed deepspeed_zero3.json ...` and I am using the `TrainingArguments` class to accept this config.
In that script, before I start training, I want to load the model with empty weights, without DeepSpeed involved.
But once a DeepSpeed ZeRO-3 config is set, it gets set as a global:
https://github.com/huggingface/transformers/blob/e6dcf8abd6f65bb4b6dfc1831b20d9ba49ce00e2/src/transformers/integrations/deepspeed.py#L239
And then all models try to use DeepSpeed ZeRO init or do special handling for ZeRO-3 sharding:
https://github.com/huggingface/transformers/blob/e6dcf8abd6f65bb4b6dfc1831b20d9ba49ce00e2/src/transformers/modeling_utils.py#L1823
This results in the following meta-tensor error:
```
model = AutoModelForCausalLM.from_config(config, trust_remote_code=True)
File "/data/v/ft/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 441, in from_config
return model_class._from_config(config, **kwargs)
File "/data/v/ft/lib/python3.10/site-packages/transformers/modeling_utils.py", line 1247, in _from_config
model = cls(config, **kwargs)
File "/data/v/ft/lib/python3.10/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 459, in wrapper
f(module, *args, **kwargs)
File "/data/v/ft/lib/python3.10/site-packages/transformers/models/mixtral/modeling_mixtral.py", line 1141, in __init__
self.model = MixtralModel(config)
File "/data/v/ft/lib/python3.10/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 459, in wrapper
f(module, *args, **kwargs)
File "/data/v/ft/lib/python3.10/site-packages/transformers/models/mixtral/modeling_mixtral.py", line 964, in __init__
self.embed_tokens = nn.Embedding(config.vocab_size, config.hidden_size, self.padding_idx)
File "/data/v/ft/lib/python3.10/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 466, in wrapper
self._post_init_method(module)
File "/data/v/ft/lib/python3.10/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 995, in _post_init_method
param.data = param.data.to(self.local_device)
NotImplementedError: Cannot copy out of meta tensor; no data!
```
While I can work around my issue, I thought it might be good to have some context manager to disable DeepSpeed ZeRO in certain sections of the code.
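Something along these lines would cover my use case (purely illustrative: the helper name below is made up and does not exist in the library today):

```python
from accelerate import init_empty_weights
from transformers import AutoConfig, AutoModelForCausalLM
from transformers.integrations import deepspeed as ds_integration  # assuming the helper would live here

model_id = "some/model-id"

# hypothetical API: temporarily ignore the global ZeRO-3 config inside this block
with ds_integration.zero3_disabled():
    config = AutoConfig.from_pretrained(model_id, trust_remote_code=True)
    with init_empty_weights():
        model = AutoModelForCausalLM.from_config(config, trust_remote_code=True)
```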
---
Additional context on why I load my model separately:
Before I start training, I do a check to ensure the base model can fit entirely within the available GPUs in bf16. This is to ensure that after tuning I will be able to merge the adapters correctly, because merge-and-unload currently cannot save offloaded modules correctly (a fix for that is in progress; see: https://github.com/huggingface/peft/pull/1190).
The code for this check looks like this:
```
# Check if model can fit just with gpus
config = AutoConfig.from_pretrained(model_id, trust_remote_code=True)
with init_empty_weights():
model = AutoModelForCausalLM.from_config(config, trust_remote_code=True)
device_map = infer_auto_device_map(model, dtype=torch.bfloat16)
logger.info(f"Inferred device_map for auto settings: {device_map}")
if any(not isinstance(v, int) for v in device_map.values()):
raise RuntimeError(...)
```
### Your contribution
# | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28106/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28106/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28105 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28105/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28105/comments | https://api.github.com/repos/huggingface/transformers/issues/28105/events | https://github.com/huggingface/transformers/issues/28105 | 2,045,923,480 | I_kwDOCUB6oc558lCY | 28,105 | T5Tokenizer: Different decoding behaviour depending on the tokenizer method used | {
"login": "sorenmulli",
"id": 42035306,
"node_id": "MDQ6VXNlcjQyMDM1MzA2",
"avatar_url": "https://avatars.githubusercontent.com/u/42035306?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sorenmulli",
"html_url": "https://github.com/sorenmulli",
"followers_url": "https://api.github.com/users/sorenmulli/followers",
"following_url": "https://api.github.com/users/sorenmulli/following{/other_user}",
"gists_url": "https://api.github.com/users/sorenmulli/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sorenmulli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sorenmulli/subscriptions",
"organizations_url": "https://api.github.com/users/sorenmulli/orgs",
"repos_url": "https://api.github.com/users/sorenmulli/repos",
"events_url": "https://api.github.com/users/sorenmulli/events{/privacy}",
"received_events_url": "https://api.github.com/users/sorenmulli/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Also get same results with other `t5` models like `t5-base`; haven't tried other tokenizers though.\r\n\r\nWhen using a non-fast tokenizer, I also get the same result. Here I am using `tokenizer.sp_model`instead of `tokenizer.decoder` in the last step. ",
"There must be a punctuated-based post-processing which I fail to see (and might be expected behaviour), as this also happens with tokens `.`. `.`, `!` but not `a` ",
"Hey! You should checkout the [clean_up_tokenization_spaces](https://huggingface.co/docs/transformers/internal/tokenization_utils#transformers.PreTrainedTokenizerBase.clean_up_tokenization_spaces) attribute:\r\n```python \r\ntokenizer.decode(ids, clean_up_tokenization_spaces=False)\r\n```\r\nis what you are looking for",
"Thanks! Then this is intended default behaviour :)\r\n\r\nI think it then makes sense for me to disable this cleaning for my T5 model to get same result as sentencepiece but I cannot completely figure out if I want this cleanup in some other cases :thinking: ",
"Yep I have no clue either as to why this was always done but it's recurring issue 😅 "
] | 1,702 | 1,702 | 1,702 | NONE | null | ### System Info
- `transformers` version: 4.36.1
- Platform: Linux-6.1.55-1-lts-x86_64-with-glibc2.38
- Python version: 3.11.5
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.1
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```py
from transformers import T5TokenizerFast
tokenizer = T5TokenizerFast.from_pretrained("google/flan-t5-base")
tokens = ['▁', '?', '▁', '?']
ids = tokenizer.convert_tokens_to_ids(tokens)
# [3, 58, 3, 58]
tokenizer.decode(ids)
# '??'
tokenizer.convert_tokens_to_string(tokens)
# '? ?'
tokenizer.decoder.decode(tokens)
# '? ?'
```
### Expected behavior
I expected these two methods to yield the same result: `'? ?'`.
I do not understand the result `'??'`, and I could not find the logic where this space is removed myself; I guess it must be in `tokenizers`.
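For completeness, here is a minimal comparison of the two paths; the difference appears to come down to the cleanup step that `decode` applies by default (assuming that is the only post-processing involved):

```python
from transformers import T5TokenizerFast

tokenizer = T5TokenizerFast.from_pretrained("google/flan-t5-base")
ids = tokenizer.convert_tokens_to_ids(['▁', '?', '▁', '?'])

tokenizer.decode(ids)                                      # '??'  (cleanup applied)
tokenizer.decode(ids, clean_up_tokenization_spaces=False)  # expected: '? ?', matching convert_tokens_to_string
```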
In advance, thank you for all help :heart: :hugs: | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28105/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28105/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28104 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28104/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28104/comments | https://api.github.com/repos/huggingface/transformers/issues/28104/events | https://github.com/huggingface/transformers/issues/28104 | 2,045,869,224 | I_kwDOCUB6oc558Xyo | 28,104 | CUDA Error running the Translaton example with Accelerate or Trainer in a Multi GPU distributed setup | {
"login": "anindya-saha",
"id": 3349535,
"node_id": "MDQ6VXNlcjMzNDk1MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/3349535?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anindya-saha",
"html_url": "https://github.com/anindya-saha",
"followers_url": "https://api.github.com/users/anindya-saha/followers",
"following_url": "https://api.github.com/users/anindya-saha/following{/other_user}",
"gists_url": "https://api.github.com/users/anindya-saha/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anindya-saha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anindya-saha/subscriptions",
"organizations_url": "https://api.github.com/users/anindya-saha/orgs",
"repos_url": "https://api.github.com/users/anindya-saha/repos",
"events_url": "https://api.github.com/users/anindya-saha/events{/privacy}",
"received_events_url": "https://api.github.com/users/anindya-saha/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
] | [
"Running the same script in the CPU may reveal more details about the error",
"Hi @anindya-saha, thanks for raising this issue! \r\n\r\nThere were some recent fixes committed to `main` which resolve training in the multi-GPU setting. These will be released soon as part of a patch release. Could you try installing from source to see if this resolved the issue? \r\n\r\ncc @pacman100 @muellerzr ",
"@amyeroberts I tried installing transformers from source. It does not resolve the issue."
] | 1,702 | 1,707 | null | NONE | null | ### System Info
Hello Team,
I am trying to run the translation example in examples/pytorch/translation/run_translation.py in a distributed manner through accelerate as follows.
```bash
accelerate launch --config_file default_config.yaml run_translation.py \
--model_name_or_path Helsinki-NLP/opus-mt-en-ro \
--do_train \
--do_eval \
--source_lang en \
--target_lang ro \
--dataset_name wmt16 \
--dataset_config_name ro-en \
--output_dir /tmp/tst-translation \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--overwrite_output_dir \
--predict_with_generate \
--pad_to_max_length True \
--report_to none
```
**Accelerator Config**
```bash
compute_environment: LOCAL_MACHINE
debug: false
distributed_type: MULTI_GPU
downcast_bf16: 'no'
gpu_ids: 0,1
machine_rank: 0
main_training_function: main
mixed_precision: fp16
num_machines: 1
num_processes: 2
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```
But I see the following CUDA error. Could you please help me understand what changes I need to make? I have run other examples in the summarization and language-modeling folders in a similar manner successfully.
**Python venv**
```
transformers==4.35.2
accelerate==0.25.0
datasets==2.15.0
```
**Error Logs**
```
../aten/src/ATen/native/cuda/Indexing.cu:1292: indexSelectLargeIndex: block: [421,0,0], thread: [60,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1292: indexSelectLargeIndex: block: [421,0,0], thread: [61,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1292: indexSelectLargeIndex: block: [421,0,0], thread: [62,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1292: indexSelectLargeIndex: block: [421,0,0], thread: [63,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
Traceback (most recent call last):
File "run_translation.py", line 699, in <module>
main()
File "run_translation.py", line 614, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/home/anindya/starcoder-tune/lib/python3.8/site-packages/transformers/trainer.py", line 1555, in train
return inner_training_loop(
File "/home/anindya/starcoder-tune/lib/python3.8/site-packages/transformers/trainer.py", line 1860, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "/home/anindya/starcoder-tune/lib/python3.8/site-packages/transformers/trainer.py", line 2725, in training_step
loss = self.compute_loss(model, inputs)
File "/home/anindya/starcoder-tune/lib/python3.8/site-packages/transformers/trainer.py", line 2748, in compute_loss
outputs = model(**inputs)
File "/home/anindya/starcoder-tune/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/anindya/starcoder-tune/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/home/anindya/starcoder-tune/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 1519, in forward
else self._run_ddp_forward(*inputs, **kwargs)
File "/home/anindya/starcoder-tune/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 1355, in _run_ddp_forward
return self.module(*inputs, **kwargs) # type: ignore[index]
File "/home/anindya/starcoder-tune/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/anindya/starcoder-tune/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/home/anindya/starcoder-tune/lib/python3.8/site-packages/accelerate/utils/operations.py", line 680, in forward
return model_forward(*args, **kwargs)
File "/home/anindya/starcoder-tune/lib/python3.8/site-packages/accelerate/utils/operations.py", line 668, in __call__
return convert_to_fp32(self.model_forward(*args, **kwargs))
File "/home/anindya/starcoder-tune/lib/python3.8/site-packages/torch/amp/autocast_mode.py", line 16, in decorate_autocast
return func(*args, **kwargs)
File "/home/anindya/starcoder-tune/lib/python3.8/site-packages/transformers/models/marian/modeling_marian.py", line 1402, in forward
outputs = self.model(
File "/home/anindya/starcoder-tune/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/anindya/starcoder-tune/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/home/anindya/starcoder-tune/lib/python3.8/site-packages/transformers/models/marian/modeling_marian.py", line 1185, in forward
encoder_outputs = self.encoder(
File "/home/anindya/starcoder-tune/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/anindya/starcoder-tune/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/home/anindya/starcoder-tune/lib/python3.8/site-packages/transformers/models/marian/modeling_marian.py", line 739, in forward
hidden_states = inputs_embeds + embed_pos
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
0%| | 0/228870 [00:03<?, ?it/s]
terminate called after throwing an instance of 'c10::Error'
what(): CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
Exception raised from c10_cuda_check_implementation at ../c10/cuda/CUDAException.cpp:44 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f7f442b5617 in /home/anindya/starcoder-tune/lib/python3.8/site-packages/torch/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::string const&) + 0x64 (0x7f7f4427098d in /home/anindya/starcoder-tune/lib/python3.8/site-packages/torch/lib/libc10.so)
frame #2: c10::cuda::c10_cuda_check_implementation(int, char const*, char const*, int, bool) + 0x118 (0x7f7f44371128 in /home/anindya/starcoder-tune/lib/python3.8/site-packages/torch/lib/libc10_cuda.so)
frame #3: <unknown function> + 0x16e76 (0x7f7f44339e76 in /home/anindya/starcoder-tune/lib/python3.8/site-packages/torch/lib/libc10_cuda.so)
frame #4: <unknown function> + 0x19bad (0x7f7f4433cbad in /home/anindya/starcoder-tune/lib/python3.8/site-packages/torch/lib/libc10_cuda.so)
frame #5: <unknown function> + 0x19fcd (0x7f7f4433cfcd in /home/anindya/starcoder-tune/lib/python3.8/site-packages/torch/lib/libc10_cuda.so)
frame #6: <unknown function> + 0x510c56 (0x7f7f448dcc56 in /home/anindya/starcoder-tune/lib/python3.8/site-packages/torch/lib/libtorch_python.so)
frame #7: <unknown function> + 0x55ca7 (0x7f7f4429aca7 in /home/anindya/starcoder-tune/lib/python3.8/site-packages/torch/lib/libc10.so)
frame #8: c10::TensorImpl::~TensorImpl() + 0x1e3 (0x7f7f44292cb3 in /home/anindya/starcoder-tune/lib/python3.8/site-packages/torch/lib/libc10.so)
frame #9: c10::TensorImpl::~TensorImpl() + 0x9 (0x7f7f44292e49 in /home/anindya/starcoder-tune/lib/python3.8/site-packages/torch/lib/libc10.so)
frame #10: <unknown function> + 0x7c1718 (0x7f7f44b8d718 in /home/anindya/starcoder-tune/lib/python3.8/site-packages/torch/lib/libtorch_python.so)
frame #11: THPVariable_subclass_dealloc(_object*) + 0x325 (0x7f7f44b8dac5 in /home/anindya/starcoder-tune/lib/python3.8/site-packages/torch/lib/libtorch_python.so)
frame #12: /home/anindya/starcoder-tune/bin/python3() [0x5aced3]
frame #13: /home/anindya/starcoder-tune/bin/python3() [0x5b0174]
frame #14: /home/anindya/starcoder-tune/bin/python3() [0x5f7cdd]
frame #15: /home/anindya/starcoder-tune/bin/python3() [0x5b02f0]
frame #16: /home/anindya/starcoder-tune/bin/python3() [0x5835c2]
frame #17: /home/anindya/starcoder-tune/bin/python3() [0x4c518f]
frame #18: _PyGC_CollectNoFail + 0x2f (0x66721f in /home/anindya/starcoder-tune/bin/python3)
frame #19: PyImport_Cleanup + 0x244 (0x67a634 in /home/anindya/starcoder-tune/bin/python3)
frame #20: Py_FinalizeEx + 0x7f (0x67423f in /home/anindya/starcoder-tune/bin/python3)
frame #21: Py_RunMain + 0x32d (0x6b418d in /home/anindya/starcoder-tune/bin/python3)
frame #22: Py_BytesMain + 0x2d (0x6b43fd in /home/anindya/starcoder-tune/bin/python3)
frame #23: __libc_start_main + 0xf3 (0x7f7f59353083 in /lib/x86_64-linux-gnu/libc.so.6)
frame #24: _start + 0x2e (0x5da67e in /home/anindya/starcoder-tune/bin/python3)
[2023-12-18 06:41:41,495] torch.distributed.elastic.multiprocessing.api: [ERROR] failed (exitcode: -6) local_rank: 0 (pid: 369953) of binary: /home/anindya/starcoder-tune/bin/python3
Traceback (most recent call last):
File "/home/anindya/starcoder-tune/bin/accelerate", line 8, in <module>
sys.exit(main())
File "/home/anindya/starcoder-tune/lib/python3.8/site-packages/accelerate/commands/accelerate_cli.py", line 47, in main
args.func(args)
File "/home/anindya/starcoder-tune/lib/python3.8/site-packages/accelerate/commands/launch.py", line 1008, in launch_command
multi_gpu_launcher(args)
File "/home/anindya/starcoder-tune/lib/python3.8/site-packages/accelerate/commands/launch.py", line 666, in multi_gpu_launcher
distrib_run.run(args)
File "/home/anindya/starcoder-tune/lib/python3.8/site-packages/torch/distributed/run.py", line 797, in run
elastic_launch(
File "/home/anindya/starcoder-tune/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 134, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/home/anindya/starcoder-tune/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 264, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
run_translation.py FAILED
------------------------------------------------------------
```
### Who can help?
@patil-suraj @pacman100 @ArthurZucker
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
STEP 1: Create a basic Accelerator config `default_config.yaml` file with 2 GPUs m/c as below.
```bash
compute_environment: LOCAL_MACHINE
debug: false
distributed_type: MULTI_GPU
downcast_bf16: 'no'
gpu_ids: 0,1
machine_rank: 0
main_training_function: main
mixed_precision: fp16
num_machines: 1
num_processes: 2
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```
STEP 2: Run the translation example.
```bash
accelerate launch --config_file default_config.yaml run_translation.py \
--model_name_or_path Helsinki-NLP/opus-mt-en-ro \
--do_train \
--do_eval \
--source_lang en \
--target_lang ro \
--dataset_name wmt16 \
--dataset_config_name ro-en \
--output_dir /tmp/tst-translation \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--overwrite_output_dir \
--predict_with_generate \
--pad_to_max_length True \
--report_to none
```
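Optional: to localize the exact failing op, the same command can be re-run with synchronous kernel launches, as the traceback itself suggests (this only changes how the error is reported, not the behavior):

```bash
# same launch as STEP 2, prefixed with CUDA_LAUNCH_BLOCKING=1 for a precise stack trace
CUDA_LAUNCH_BLOCKING=1 accelerate launch --config_file default_config.yaml run_translation.py \
    --model_name_or_path Helsinki-NLP/opus-mt-en-ro \
    --do_train --do_eval \
    --source_lang en --target_lang ro \
    --dataset_name wmt16 --dataset_config_name ro-en \
    --output_dir /tmp/tst-translation \
    --per_device_train_batch_size=4 --per_device_eval_batch_size=4 \
    --overwrite_output_dir --predict_with_generate \
    --pad_to_max_length True --report_to none
```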
### Expected behavior
The example should complete without any error. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28104/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28104/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28103 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28103/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28103/comments | https://api.github.com/repos/huggingface/transformers/issues/28103/events | https://github.com/huggingface/transformers/issues/28103 | 2,045,776,155 | I_kwDOCUB6oc558BEb | 28,103 | OWL-VIT Vision Foundation Model deployment in the edge cases - Need SDPA support for OWL-ViT Model optimization for Edge Deployment | {
"login": "solomonmanuelraj",
"id": 25194971,
"node_id": "MDQ6VXNlcjI1MTk0OTcx",
"avatar_url": "https://avatars.githubusercontent.com/u/25194971?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/solomonmanuelraj",
"html_url": "https://github.com/solomonmanuelraj",
"followers_url": "https://api.github.com/users/solomonmanuelraj/followers",
"following_url": "https://api.github.com/users/solomonmanuelraj/following{/other_user}",
"gists_url": "https://api.github.com/users/solomonmanuelraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/solomonmanuelraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/solomonmanuelraj/subscriptions",
"organizations_url": "https://api.github.com/users/solomonmanuelraj/orgs",
"repos_url": "https://api.github.com/users/solomonmanuelraj/repos",
"events_url": "https://api.github.com/users/solomonmanuelraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/solomonmanuelraj/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
}
] | open | false | null | [] | [
"Hi,\r\n\r\nFor this one I'd recommend taking a look at the [Optimum](https://huggingface.co/docs/optimum/index) library which provides utilities for ONNX export and further optimization like pruning/quantization.\r\n\r\nYou can probably reduce the size of the model most by quantization.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Since various models have seen SDPA addition (see e.g. #28133), one could add it to OWL-ViT as well.",
"@NielsRogge ,\r\n\r\nTaking inspiration from [mistral](https://github.com/huggingface/transformers/blob/main/src/transformers/models/mixtral/modeling_mixtral.py#L696) and as well as from [llama](https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/modeling_llama.py#L659) can I add this on the similar lines for OWL-ViT?\r\n\r\nLet me know. "
] | 1,702 | 1,706 | null | NONE | null | ### Feature request
Hi Team,
I am working with the OWL-ViT base model, which is around 611 MB in size (https://huggingface.co/google/owlvit-base-patch16).
I want to optimize this model and deploy it on an edge device for object detection.
I came to know from the group that torch.scaled_dot_product_attention can be used for model optimization.
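What I had in mind is roughly the following (hypothetical sketch; OWL-ViT does not accept this option today, which is exactly the request):

```python
import torch
from transformers import OwlViTForObjectDetection

# hypothetical: this would require SDPA support in the OWL-ViT attention layers
model = OwlViTForObjectDetection.from_pretrained(
    "google/owlvit-base-patch16",
    attn_implementation="sdpa",
    torch_dtype=torch.float16,
)
```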
I need your feedback on how much the memory footprint can optimally be reduced so that we can deploy the model on an edge device.
Waiting for your response.
With thanks
### Motivation
It will help to deploy these models at the edge so that more applications can use them.
### Your contribution
I would like to know your feedback comments. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28103/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28103/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28102 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28102/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28102/comments | https://api.github.com/repos/huggingface/transformers/issues/28102/events | https://github.com/huggingface/transformers/pull/28102 | 2,045,744,156 | PR_kwDOCUB6oc5iNxZH | 28,102 | fix bug: avoid divide by zero in _maybe_log_save_evaluate() | {
"login": "frankenliu",
"id": 7486431,
"node_id": "MDQ6VXNlcjc0ODY0MzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/7486431?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/frankenliu",
"html_url": "https://github.com/frankenliu",
"followers_url": "https://api.github.com/users/frankenliu/followers",
"following_url": "https://api.github.com/users/frankenliu/following{/other_user}",
"gists_url": "https://api.github.com/users/frankenliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/frankenliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/frankenliu/subscriptions",
"organizations_url": "https://api.github.com/users/frankenliu/orgs",
"repos_url": "https://api.github.com/users/frankenliu/repos",
"events_url": "https://api.github.com/users/frankenliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/frankenliu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> Hello, thank you for the PR, but I think better solution would be to the following check in `_maybe_log_save_evaluate`:\r\n> \r\n> ```\r\n> if self.control.should_log and self.state.global_step>self._globalstep_last_logged:\r\n> ```\r\n\r\nYes, your solution are better. I just think to avoid in _inner_training_loop() when training.",
"@frankenliu can you either try and fix the addition of all the commits (after rebasing I don't think you did a force push) or open a new clean PR? It's hard to read just what's changed with now 30+ files modified. Thanks!",
"> @frankenliu can you either try and fix the addition of all the commits (after rebasing I don't think you did a force push) or open a new clean PR? It's hard to read just what's changed with now 30+ files modified. Thanks!\r\n\r\nOK,i will open a new clean PR."
] | 1,702 | 1,703 | 1,703 | CONTRIBUTOR | null | set logging_strategy="steps" and logging_steps=10,
and suppose one epoch has exactly 100 steps: should_log will be set to True on the last step,
and self._globalstep_last_logged will be assigned self.state.global_step inside the _maybe_log_save_evaluate() method (line 1917 in trainer.py).
On line 1933 in trainer.py, self.callback_handler.on_epoch_end() keeps should_log=True, so when line 1934 runs _maybe_log_save_evaluate() again, (self.state.global_step - self._globalstep_last_logged) is zero on line 2247.
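In other words, at the end of such an epoch the two step counters coincide (illustrative trace, assuming 100 steps per epoch and logging_steps=10):

```python
# step 100: should_log=True -> _maybe_log_save_evaluate() logs and then sets
#   self._globalstep_last_logged = self.state.global_step   # both are now 100
# end of epoch: on_epoch_end() leaves should_log=True, so _maybe_log_save_evaluate()
#   runs a second time and divides the accumulated loss by
#   self.state.global_step - self._globalstep_last_logged   # 100 - 100 == 0
```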
@muellerzr @pacman100 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28102/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28102/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28102",
"html_url": "https://github.com/huggingface/transformers/pull/28102",
"diff_url": "https://github.com/huggingface/transformers/pull/28102.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28102.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28101 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28101/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28101/comments | https://api.github.com/repos/huggingface/transformers/issues/28101/events | https://github.com/huggingface/transformers/issues/28101 | 2,045,680,594 | I_kwDOCUB6oc557pvS | 28,101 | Will deep StateSpace models add to this library? | {
"login": "ghosthamlet",
"id": 758325,
"node_id": "MDQ6VXNlcjc1ODMyNQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/758325?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghosthamlet",
"html_url": "https://github.com/ghosthamlet",
"followers_url": "https://api.github.com/users/ghosthamlet/followers",
"following_url": "https://api.github.com/users/ghosthamlet/following{/other_user}",
"gists_url": "https://api.github.com/users/ghosthamlet/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghosthamlet/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghosthamlet/subscriptions",
"organizations_url": "https://api.github.com/users/ghosthamlet/orgs",
"repos_url": "https://api.github.com/users/ghosthamlet/repos",
"events_url": "https://api.github.com/users/ghosthamlet/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghosthamlet/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | [] | 1,702 | 1,702 | null | NONE | null | ### Feature request
Will huggingface add deep StateSpace models to transformers library or create a new repo like Diffusers?
### Motivation
Deep StateSpace models may become the next big thing.
### Your contribution
No | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28101/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28101/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28100 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28100/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28100/comments | https://api.github.com/repos/huggingface/transformers/issues/28100/events | https://github.com/huggingface/transformers/issues/28100 | 2,045,679,616 | I_kwDOCUB6oc557pgA | 28,100 | QWenLMHeadModel does not support Flash Attention 2.0 yet. | {
"login": "zhangfan-algo",
"id": 47747764,
"node_id": "MDQ6VXNlcjQ3NzQ3NzY0",
"avatar_url": "https://avatars.githubusercontent.com/u/47747764?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhangfan-algo",
"html_url": "https://github.com/zhangfan-algo",
"followers_url": "https://api.github.com/users/zhangfan-algo/followers",
"following_url": "https://api.github.com/users/zhangfan-algo/following{/other_user}",
"gists_url": "https://api.github.com/users/zhangfan-algo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhangfan-algo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhangfan-algo/subscriptions",
"organizations_url": "https://api.github.com/users/zhangfan-algo/orgs",
"repos_url": "https://api.github.com/users/zhangfan-algo/repos",
"events_url": "https://api.github.com/users/zhangfan-algo/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhangfan-algo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"+1",
"Hi @zhangfan-algo, thanks for creating this feature request. \r\n\r\nIs this model code on the hub? If so, the request to add FA2 support should be done on the discussion page for that model. \r\n\r\nIn the meantime - we should update this message on our end to route users to the correct place. ",
"> Hi @zhangfan-algo, thanks for creating this feature request.\r\n> \r\n> Is this model code on the hub? If so, the request to add FA2 support should be done on the discussion page for that model.\r\n> \r\n> In the meantime - we should update this message on our end to route users to the correct place.\r\n\r\n魔塔下载的模型,今天微调的时候出现这个错误:\r\n```\r\nTraceback (most recent call last):\r\n File \"/mnt/pfs-guan-ssai/nlu/data/tianxy/LLaMA-Factory-2/src/train_bash.py\", line 15, in <module>\r\n main()\r\n File \"/mnt/pfs-guan-ssai/nlu/data/tianxy/LLaMA-Factory-2/src/train_bash.py\", line 5, in main\r\n run_exp()\r\n File \"/mnt/pfs-guan-ssai/nlu/data/tianxy/LLaMA-Factory-2/src/llmtuner/train/tuner.py\", line 26, in run_exp\r\n run_sft(model_args, data_args, training_args, finetuning_args, generating_args, callbacks)\r\n File \"/mnt/pfs-guan-ssai/nlu/data/tianxy/LLaMA-Factory-2/src/llmtuner/train/sft/workflow.py\", line 29, in run_sft\r\n model, tokenizer = load_model_and_tokenizer(model_args, finetuning_args, training_args.do_train)\r\n File \"/mnt/pfs-guan-ssai/nlu/data/tianxy/LLaMA-Factory-2/src/llmtuner/model/loader.py\", line 67, in load_model_and_tokenizer\r\n model = AutoModelForCausalLM.from_pretrained(\r\n File \"/opt/conda/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py\", line 561, in from_pretrained\r\n return model_class.from_pretrained(\r\n File \"/opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py\", line 3456, in from_pretrained\r\n config = cls._autoset_attn_implementation(\r\n File \"/opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py\", line 1302, in _autoset_attn_implementation\r\n cls._check_and_enable_flash_attn_2(\r\n File \"/opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py\", line 1382, in _check_and_enable_flash_attn_2\r\n raise ValueError(\r\nValueError: QWenLMHeadModel does not support Flash Attention 2.0 yet. Please open an issue on GitHub to request support for this architecture: https://github.com/huggingface/transformers/issues/new\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,702 | 1,706 | 1,706 | NONE | null | 
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28100/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28100/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28099 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28099/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28099/comments | https://api.github.com/repos/huggingface/transformers/issues/28099/events | https://github.com/huggingface/transformers/issues/28099 | 2,045,486,232 | I_kwDOCUB6oc5566SY | 28,099 | Dataset not loading successfully. | {
"login": "hi-sushanta",
"id": 93595990,
"node_id": "U_kgDOBZQpVg",
"avatar_url": "https://avatars.githubusercontent.com/u/93595990?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hi-sushanta",
"html_url": "https://github.com/hi-sushanta",
"followers_url": "https://api.github.com/users/hi-sushanta/followers",
"following_url": "https://api.github.com/users/hi-sushanta/following{/other_user}",
"gists_url": "https://api.github.com/users/hi-sushanta/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hi-sushanta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hi-sushanta/subscriptions",
"organizations_url": "https://api.github.com/users/hi-sushanta/orgs",
"repos_url": "https://api.github.com/users/hi-sushanta/repos",
"events_url": "https://api.github.com/users/hi-sushanta/events{/privacy}",
"received_events_url": "https://api.github.com/users/hi-sushanta/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @hi-sushanta,\r\n\r\nCan you post `datasets`-related issues on the datasets library instead? \r\nhttps://github.com/huggingface/datasets",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,702 | 1,705 | 1,705 | CONTRIBUTOR | null | ### System Info
* transformers -> 4.36.1
* datasets -> 2.15.0
* huggingface_hub -> 0.19.4
* python -> 3.8.10
* accelerate -> 0.25.0
* pytorch -> 2.0.1+cpu
* Using GPU in Script -> No
### Who can help?
@patrickvonplaten , @amyeroberts
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Hi, please check this code; when I run it, it raises an AttributeError.
```
from datasets import load_dataset
from transformers import WhisperProcessor, WhisperForConditionalGeneration
# Select an audio file and read it:
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
audio_sample = ds[0]["audio"]
waveform = audio_sample["array"]
sampling_rate = audio_sample["sampling_rate"]
# Load the Whisper model in Hugging Face format:
processor = WhisperProcessor.from_pretrained("openai/whisper-tiny.en")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny.en")
# Use the model and processor to transcribe the audio:
input_features = processor(
waveform, sampling_rate=sampling_rate, return_tensors="pt"
).input_features
# Generate token ids
predicted_ids = model.generate(input_features)
# Decode token ids to text
transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
transcription[0]
```
***AttributeError:***
```
AttributeError Traceback (most recent call last)
Cell In[9], line 6
4 # Select an audio file and read it:
5 ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
----> 6 audio_sample = ds[0]["audio"]
7 waveform = audio_sample["array"]
8 sampling_rate = audio_sample["sampling_rate"]
File /opt/pytorch/lib/python3.8/site-packages/datasets/arrow_dataset.py:2795, in Dataset.__getitem__(self, key)
2793 def __getitem__(self, key): # noqa: F811
2794 """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools)."""
-> 2795 return self._getitem(key)
File /opt/pytorch/lib/python3.8/site-packages/datasets/arrow_dataset.py:2780, in Dataset._getitem(self, key, **kwargs)
2778 formatter = get_formatter(format_type, features=self._info.features, **format_kwargs)
2779 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)
-> 2780 formatted_output = format_table(
2781 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns
2782 )
2783 return formatted_output
File /opt/pytorch/lib/python3.8/site-packages/datasets/formatting/formatting.py:629, in format_table(table, key, formatter, format_columns, output_all_columns)
627 python_formatter = PythonFormatter(features=formatter.features)
628 if format_columns is None:
--> 629 return formatter(pa_table, query_type=query_type)
630 elif query_type == "column":
631 if key in format_columns:
File /opt/pytorch/lib/python3.8/site-packages/datasets/formatting/formatting.py:396, in Formatter.__call__(self, pa_table, query_type)
394 def __call__(self, pa_table: pa.Table, query_type: str) -> Union[RowFormat, ColumnFormat, BatchFormat]:
395 if query_type == "row":
--> 396 return self.format_row(pa_table)
397 elif query_type == "column":
398 return self.format_column(pa_table)
File /opt/pytorch/lib/python3.8/site-packages/datasets/formatting/formatting.py:437, in PythonFormatter.format_row(self, pa_table)
435 return LazyRow(pa_table, self)
436 row = self.python_arrow_extractor().extract_row(pa_table)
--> 437 row = self.python_features_decoder.decode_row(row)
438 return row
File /opt/pytorch/lib/python3.8/site-packages/datasets/formatting/formatting.py:215, in PythonFeaturesDecoder.decode_row(self, row)
214 def decode_row(self, row: dict) -> dict:
--> 215 return self.features.decode_example(row) if self.features else row
File /opt/pytorch/lib/python3.8/site-packages/datasets/features/features.py:1917, in Features.decode_example(self, example, token_per_repo_id)
1903 def decode_example(self, example: dict, token_per_repo_id: Optional[Dict[str, Union[str, bool, None]]] = None):
1904 """Decode example with custom feature decoding.
1905
1906 Args:
(...)
1914 `dict[str, Any]`
1915 """
-> 1917 return {
1918 column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)
1919 if self._column_requires_decoding[column_name]
1920 else value
1921 for column_name, (feature, value) in zip_dict(
1922 {key: value for key, value in self.items() if key in example}, example
1923 )
1924 }
File /opt/pytorch/lib/python3.8/site-packages/datasets/features/features.py:1918, in <dictcomp>(.0)
1903 def decode_example(self, example: dict, token_per_repo_id: Optional[Dict[str, Union[str, bool, None]]] = None):
1904 """Decode example with custom feature decoding.
1905
1906 Args:
(...)
1914 `dict[str, Any]`
1915 """
1917 return {
-> 1918 column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)
1919 if self._column_requires_decoding[column_name]
1920 else value
1921 for column_name, (feature, value) in zip_dict(
1922 {key: value for key, value in self.items() if key in example}, example
1923 )
1924 }
File /opt/pytorch/lib/python3.8/site-packages/datasets/features/features.py:1339, in decode_nested_example(schema, obj, token_per_repo_id)
1336 elif isinstance(schema, (Audio, Image)):
1337 # we pass the token to read and decode files from private repositories in streaming mode
1338 if obj is not None and schema.decode:
-> 1339 return schema.decode_example(obj, token_per_repo_id=token_per_repo_id)
1340 return obj
File /opt/pytorch/lib/python3.8/site-packages/datasets/features/audio.py:191, in Audio.decode_example(self, value, token_per_repo_id)
189 array = array.T
190 if self.mono:
--> 191 array = librosa.to_mono(array)
192 if self.sampling_rate and self.sampling_rate != sampling_rate:
193 array = librosa.resample(array, orig_sr=sampling_rate, target_sr=self.sampling_rate)
File /opt/pytorch/lib/python3.8/site-packages/lazy_loader/__init__.py:78, in attach.<locals>.__getattr__(name)
76 submod_path = f"{package_name}.{attr_to_modules[name]}"
77 submod = importlib.import_module(submod_path)
---> 78 attr = getattr(submod, name)
80 # If the attribute lives in a file (module) with the same
81 # name as the attribute, ensure that the attribute and *not*
82 # the module is accessible on the package.
83 if name == attr_to_modules[name]:
File /opt/pytorch/lib/python3.8/site-packages/lazy_loader/__init__.py:77, in attach.<locals>.__getattr__(name)
75 elif name in attr_to_modules:
76 submod_path = f"{package_name}.{attr_to_modules[name]}"
---> 77 submod = importlib.import_module(submod_path)
78 attr = getattr(submod, name)
80 # If the attribute lives in a file (module) with the same
81 # name as the attribute, ensure that the attribute and *not*
82 # the module is accessible on the package.
File /usr/lib/python3.8/importlib/__init__.py:127, in import_module(name, package)
125 break
126 level += 1
--> 127 return _bootstrap._gcd_import(name[level:], package, level)
File <frozen importlib._bootstrap>:1014, in _gcd_import(name, package, level)
File <frozen importlib._bootstrap>:991, in _find_and_load(name, import_)
File <frozen importlib._bootstrap>:975, in _find_and_load_unlocked(name, import_)
File <frozen importlib._bootstrap>:671, in _load_unlocked(spec)
File <frozen importlib._bootstrap_external>:848, in exec_module(self, module)
File <frozen importlib._bootstrap>:219, in _call_with_frames_removed(f, *args, **kwds)
File /opt/pytorch/lib/python3.8/site-packages/librosa/core/audio.py:13
11 import audioread
12 import numpy as np
---> 13 import scipy.signal
14 import soxr
15 import lazy_loader as lazy
File /opt/pytorch/lib/python3.8/site-packages/scipy/signal/__init__.py:323
314 from ._spline import ( # noqa: F401
315 cspline2d,
316 qspline2d,
(...)
319 symiirorder2,
320 )
322 from ._bsplines import *
--> 323 from ._filter_design import *
324 from ._fir_filter_design import *
325 from ._ltisys import *
File /opt/pytorch/lib/python3.8/site-packages/scipy/signal/_filter_design.py:16
13 from numpy.polynomial.polynomial import polyval as npp_polyval
14 from numpy.polynomial.polynomial import polyvalfromroots
---> 16 from scipy import special, optimize, fft as sp_fft
17 from scipy.special import comb
18 from scipy._lib._util import float_factorial
File /opt/pytorch/lib/python3.8/site-packages/scipy/optimize/__init__.py:405
1 """
2 =====================================================
3 Optimization and root finding (:mod:`scipy.optimize`)
(...)
401
402 """
404 from ._optimize import *
--> 405 from ._minimize import *
406 from ._root import *
407 from ._root_scalar import *
File /opt/pytorch/lib/python3.8/site-packages/scipy/optimize/_minimize.py:26
24 from ._trustregion_krylov import _minimize_trust_krylov
25 from ._trustregion_exact import _minimize_trustregion_exact
---> 26 from ._trustregion_constr import _minimize_trustregion_constr
28 # constrained minimization
29 from ._lbfgsb_py import _minimize_lbfgsb
File /opt/pytorch/lib/python3.8/site-packages/scipy/optimize/_trustregion_constr/__init__.py:4
1 """This module contains the equality constrained SQP solver."""
----> 4 from .minimize_trustregion_constr import _minimize_trustregion_constr
6 __all__ = ['_minimize_trustregion_constr']
File /opt/pytorch/lib/python3.8/site-packages/scipy/optimize/_trustregion_constr/minimize_trustregion_constr.py:5
3 from scipy.sparse.linalg import LinearOperator
4 from .._differentiable_functions import VectorFunction
----> 5 from .._constraints import (
6 NonlinearConstraint, LinearConstraint, PreparedConstraint, strict_bounds)
7 from .._hessian_update_strategy import BFGS
8 from .._optimize import OptimizeResult
File /opt/pytorch/lib/python3.8/site-packages/scipy/optimize/_constraints.py:8
6 from ._optimize import OptimizeWarning
7 from warnings import warn, catch_warnings, simplefilter
----> 8 from numpy.testing import suppress_warnings
9 from scipy.sparse import issparse
12 def _arr_to_scalar(x):
13 # If x is a numpy array, return x.item(). This will
14 # fail if the array has more than one element.
File /opt/pytorch/lib/python3.8/site-packages/numpy/testing/__init__.py:11
8 from unittest import TestCase
10 from . import _private
---> 11 from ._private.utils import *
12 from ._private.utils import (_assert_valid_refcount, _gen_alignment_data)
13 from ._private import extbuild, decorators as dec
File /opt/pytorch/lib/python3.8/site-packages/numpy/testing/_private/utils.py:480
476 pprint.pprint(desired, msg)
477 raise AssertionError(msg.getvalue())
--> 480 @np._no_nep50_warning()
481 def assert_almost_equal(actual,desired,decimal=7,err_msg='',verbose=True):
482 """
483 Raises an AssertionError if two items are not equal up to desired
484 precision.
(...)
548
549 """
550 __tracebackhide__ = True # Hide traceback for py.test
File /opt/pytorch/lib/python3.8/site-packages/numpy/__init__.py:313, in __getattr__(attr)
305 raise AttributeError(__former_attrs__[attr])
307 # Importing Tester requires importing all of UnitTest which is not a
308 # cheap import Since it is mainly used in test suits, we lazy import it
309 # here to save on the order of 10 ms of import time for most users
310 #
311 # The previous way Tester was imported also had a side effect of adding
312 # the full `numpy.testing` namespace
--> 313 if attr == 'testing':
314 import numpy.testing as testing
315 return testing
AttributeError: module 'numpy' has no attribute '_no_nep50_warning'
```
### Expected behavior
```
' Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel.'
```
Also, this script is the one provided on your official website, so please update it there as well:
[script](https://huggingface.co/docs/transformers/model_doc/whisper) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28099/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28099/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28098 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28098/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28098/comments | https://api.github.com/repos/huggingface/transformers/issues/28098/events | https://github.com/huggingface/transformers/issues/28098 | 2,045,287,243 | I_kwDOCUB6oc556JtL | 28,098 | Create create_token_type_ids_from_sequences for CodeGenTokenizer | {
"login": "cridin1",
"id": 73068277,
"node_id": "MDQ6VXNlcjczMDY4Mjc3",
"avatar_url": "https://avatars.githubusercontent.com/u/73068277?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cridin1",
"html_url": "https://github.com/cridin1",
"followers_url": "https://api.github.com/users/cridin1/followers",
"following_url": "https://api.github.com/users/cridin1/following{/other_user}",
"gists_url": "https://api.github.com/users/cridin1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cridin1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cridin1/subscriptions",
"organizations_url": "https://api.github.com/users/cridin1/orgs",
"repos_url": "https://api.github.com/users/cridin1/repos",
"events_url": "https://api.github.com/users/cridin1/events{/privacy}",
"received_events_url": "https://api.github.com/users/cridin1/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
}
] | open | false | null | [] | [
"cc @ArthurZucker ",
"Hey, would you like to open a PR? 🤗 ",
"Yes, I will do it.",
"Are `token_type_ids` ever used by CodeGen? BERT uses `token_type_ids` to differentiate between the two sentences used for the next sentence prediction training task. To the best of my understanding, CodeGen does not do that.\r\n\r\nLet's explore this further by comparing how `BertTokenizer` and `CodeGenTokenizer` tokenize a pair of texts. Let's first define two short texts:\r\n\r\n```python\r\n>>> text_a = \"Text A\"\r\n>>> text_b = \"Text B\"\r\n```\r\n\r\nWhen we use `BertTokenizer` to tokenize the text pair, it generates `input_ids` that include both texts. Additionally, it produces `token_type_ids` to clearly mark the distinction between the two texts.\r\n\r\n```python\r\n>>> from transformers import BertTokenizer\r\n>>> bert_tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\r\n>>> tokenized_text_bert = bert_tokenizer(text_a, text_b)\r\n>>> tokenized_text_bert\r\n{'input_ids': [101, 3793, 1037, 102, 3793, 1038, 102], 'token_type_ids': [0, 0, 0, 0, 1, 1, 1], 'attention_mask': [1, 1, 1, 1, 1, 1, 1]}\r\n>>> bert_tokenizer.decode(tokenized_text_bert[\"input_ids\"])\r\n'[CLS] text a [SEP] text b [SEP]'\r\n```\r\n\r\nUnlike `BertTokenizer`, `CodeGenTokenizer` does not produce `token_type_ids` in its output.\r\n\r\n```python\r\n>>> from transformers import CodeGenTokenizer\r\n>>> codegen_tokenizer = CodeGenTokenizer.from_pretrained(\"Salesforce/codegen-350M-mono\")\r\n>>> tokenized_text_codegen = codegen_tokenizer(text_a, text_b)\r\n>>> tokenized_text_codegen\r\n{'input_ids': [8206, 317, 8206, 347], 'attention_mask': [1, 1, 1, 1]}\r\n>>> codegen_tokenizer.decode(tokenized_text_codegen[\"input_ids\"])\r\n'Text AText B'\r\n```\r\n\r\nInterestingly, if `token_type_ids` are given to the `CodeGenModel.forward` function, they are embedded using the same embeddings as the `input_token_ids` ([here](https://github.com/huggingface/transformers/blob/3f69f415adcbdaedec154ba8eac220ef3276975d/src/transformers/models/codegen/modeling_codegen.py#L512-L519)). This approach seems unusual to me. I observed similar code in the [GPT2](https://github.com/huggingface/transformers/blob/3f69f415adcbdaedec154ba8eac220ef3276975d/src/transformers/models/gpt2/modeling_gpt2.py#L836-L843) model and some other as well. On the other hand, [BERT](https://github.com/huggingface/transformers/blob/3f69f415adcbdaedec154ba8eac220ef3276975d/src/transformers/models/bert/modeling_bert.py#L233) uses dedicated embedding matrix for `token_type_ids`. Is using the same embeddings for `input_token_ids` and `token_type_ids` correct?\r\n",
"I can take this issue!"
] | 1,702 | 1,706 | null | NONE | null | ### Feature request
In CodeGenTokenizer [here](src/transformers/models/codegen/tokenization_codegen.py), there is no implementation for create_token_type_ids_from_sequences.
I was looking at the tutorial for token_type_ids as a reference: [here](https://huggingface.co/docs/transformers/glossary#token-type-ids).
### Motivation
The model can take token_type_ids as an input [see here](https://huggingface.co/docs/transformers/model_doc/codegen#transformers.CodeGenForCausalLM), so it would be useful to add this feature.
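In the meantime, here is a minimal sketch of building token_type_ids by hand for a text pair (my own illustration, not existing library behaviour; since CodeGen has no CLS/SEP special tokens, this simply marks which tokens came from which text):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Salesforce/codegen-350M-mono")

ids_a = tok.encode("Text A", add_special_tokens=False)
ids_b = tok.encode("Text B", add_special_tokens=False)

# Concatenate the two segments and mark the first with 0s, the second with 1s
input_ids = ids_a + ids_b
token_type_ids = [0] * len(ids_a) + [1] * len(ids_b)
```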
### Your contribution
I think the one from BertTokenizer [here](src/transformers/models/bert/tokenization_bert.py) can be reused directly:
````
def create_token_type_ids_from_sequences(
self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
) -> List[int]:
"""
Create a mask from the two sequences passed to be used in a sequence-pair classification task. A BERT sequence
pair mask has the following format:
```
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence | second sequence |
```
If `token_ids_1` is `None`, this method only returns the first portion of the mask (0s).
Args:
token_ids_0 (`List[int]`):
List of IDs.
token_ids_1 (`List[int]`, *optional*):
Optional second list of IDs for sequence pairs.
Returns:
`List[int]`: List of [token type IDs](../glossary#token-type-ids) according to the given sequence(s).
"""
sep = [self.sep_token_id]
cls = [self.cls_token_id]
if token_ids_1 is None:
return len(cls + token_ids_0 + sep) * [0]
return len(cls + token_ids_0 + sep) * [0] + len(token_ids_1 + sep) * [1]
```` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28098/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28098/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28097 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28097/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28097/comments | https://api.github.com/repos/huggingface/transformers/issues/28097/events | https://github.com/huggingface/transformers/issues/28097 | 2,045,272,598 | I_kwDOCUB6oc556GIW | 28,097 | WhisperProcessor doesn't copy output tensor to CPU for `decode(output_offsets=True)` | {
"login": "rklasen",
"id": 13201731,
"node_id": "MDQ6VXNlcjEzMjAxNzMx",
"avatar_url": "https://avatars.githubusercontent.com/u/13201731?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rklasen",
"html_url": "https://github.com/rklasen",
"followers_url": "https://api.github.com/users/rklasen/followers",
"following_url": "https://api.github.com/users/rklasen/following{/other_user}",
"gists_url": "https://api.github.com/users/rklasen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rklasen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rklasen/subscriptions",
"organizations_url": "https://api.github.com/users/rklasen/orgs",
"repos_url": "https://api.github.com/users/rklasen/repos",
"events_url": "https://api.github.com/users/rklasen/events{/privacy}",
"received_events_url": "https://api.github.com/users/rklasen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,702 | 1,705 | 1,705 | NONE | null | ### System Info
- `transformers` version: 4.36.1
- Platform: Linux-6.6.7-arch1-1-x86_64-with-glibc2.38
- Python version: 3.11.6
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.1
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.2+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: YES (this is the cause of the bug)
- Using distributed or parallel set-up in script?: no, one local GPU
### Who can help?
@sanchit-gandhi
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I'm creating a WhisperProcessor, copy input and model to GPU and run `generate()` with `return_timestamps=True`.
```python
processor = WhisperProcessor.from_pretrained("openai/whisper-large-v2")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v2")
model = model.to("cuda")
input_features = input_features.to("cuda")
with torch.backends.cuda.sdp_kernel(enable_flash=True, enable_math=True, enable_mem_efficient=True):
output_with_prompt = model.generate(input_features, return_timestamps=True)
result = processor.decode(output_with_prompt[0], skip_special_tokens=True, decode_with_timestamps=False, output_offsets=True)
print(result)
```
Causes the error:
```
File [/media/DataStore02-12TB/codeRepos/myFasterWhisperTest/.venv/lib/python3.11/site-packages/torch/_tensor.py:1030](https://file+.vscode-resource.vscode-cdn.net/media/DataStore02-12TB/codeRepos/myFasterWhisperTest/.venv/lib/python3.11/site-packages/torch/_tensor.py:1030), in Tensor.__array__(self, dtype)
[1028](https://file+.vscode-resource.vscode-cdn.net/media/DataStore02-12TB/codeRepos/myFasterWhisperTest/.venv/lib/python3.11/site-packages/torch/_tensor.py:1028) return handle_torch_function(Tensor.__array__, (self,), self, dtype=dtype)
[1029](https://file+.vscode-resource.vscode-cdn.net/media/DataStore02-12TB/codeRepos/myFasterWhisperTest/.venv/lib/python3.11/site-packages/torch/_tensor.py:1029) if dtype is None:
-> [1030](https://file+.vscode-resource.vscode-cdn.net/media/DataStore02-12TB/codeRepos/myFasterWhisperTest/.venv/lib/python3.11/site-packages/torch/_tensor.py:1030) return self.numpy()
[1031](https://file+.vscode-resource.vscode-cdn.net/media/DataStore02-12TB/codeRepos/myFasterWhisperTest/.venv/lib/python3.11/site-packages/torch/_tensor.py:1031) else:
[1032](https://file+.vscode-resource.vscode-cdn.net/media/DataStore02-12TB/codeRepos/myFasterWhisperTest/.venv/lib/python3.11/site-packages/torch/_tensor.py:1032) return self.numpy().astype(dtype, copy=False)
TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
```
Copying the output tensor to the CPU before fixes that:
```python
output_with_prompt2 = output_with_prompt.cpu()
# timestamps must be on to skip the initial prompt
result = processor.decode(output_with_prompt2[0], skip_special_tokens=True, decode_with_timestamps=False, output_offsets=True)
print(result)
```
However, that **only** happens when `output_offsets=True` is enabled. When the flag is disabled, the decode works fine on the GPU (but we don't get the timestamps).
I'm also seeing that the decode function is called about 11 times for one `processor.decode`; is that by design?
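For completeness, a slightly more defensive variant of the workaround above (my own sketch, assuming the `generate` output is a PyTorch tensor) only copies when the tensor actually lives on a CUDA device:

```python
# Move the generated sequences to the CPU only when they are on a CUDA device
sequences = output_with_prompt.cpu() if output_with_prompt.is_cuda else output_with_prompt
result = processor.decode(sequences[0], skip_special_tokens=True, output_offsets=True)
print(result)
```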
### Expected behavior
Automatically copy the tensor to CPU for the decode. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28097/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28097/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28096 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28096/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28096/comments | https://api.github.com/repos/huggingface/transformers/issues/28096/events | https://github.com/huggingface/transformers/issues/28096 | 2,045,262,269 | I_kwDOCUB6oc556Dm9 | 28,096 | [LlamaTokenizer] Inconsistent slow vs. fast tokenization when dealing with unknown tokens | {
"login": "xenova",
"id": 26504141,
"node_id": "MDQ6VXNlcjI2NTA0MTQx",
"avatar_url": "https://avatars.githubusercontent.com/u/26504141?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xenova",
"html_url": "https://github.com/xenova",
"followers_url": "https://api.github.com/users/xenova/followers",
"following_url": "https://api.github.com/users/xenova/following{/other_user}",
"gists_url": "https://api.github.com/users/xenova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xenova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xenova/subscriptions",
"organizations_url": "https://api.github.com/users/xenova/orgs",
"repos_url": "https://api.github.com/users/xenova/repos",
"events_url": "https://api.github.com/users/xenova/events{/privacy}",
"received_events_url": "https://api.github.com/users/xenova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Another test case:\r\n```py\r\ntext = \"def main():\\n\\tpass\"\r\n...\r\n# Fast tokenizer\r\n# ['▁def', '▁main', '(', ')', ':', '\\n', '<unk>', 'p', 'ass']\r\n# [1, 12849, 17375, 32, 33, 29, 5, 0, 31694, 1917]\r\n\r\n# ['▁def', '▁main', '(', ')', ':', '\\n', '▁pass']\r\n# [1, 12849, 17375, 32, 33, 29, 5, 4005]\r\n\r\n# ['▁def', '▁main', '(', ')', ':', '\\n', '▁pass']\r\n# [1, 12849, 17375, 32, 33, 29, 5, 4005]\r\n\r\n# ['▁def', '▁main', '(', ')', ':', '\\n', '▁pass']\r\n# [1, 12849, 17375, 32, 33, 29, 5, 4005]\r\n```",
"> custom-built llama tokenizer\r\n\r\nFeels to me we shouldn't care that much for a custom built tokenizer (especially if they removed the byte fallbacks). Byte fallback should definitely try to look for the bytes in the vocab, and resort to using unk if they cannot find them, this is obviously the best choice imo (in terms of logics).\r\n\r\nTraining byte_fallback without the fallback pieces is the issue I think, not this particular exhibit.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,702 | 1,707 | 1,707 | CONTRIBUTOR | null | ### System Info
- `transformers` version: 4.36.1
- Platform: Linux-6.2.0-1018-azure-x86_64-with-glibc2.31
- Python version: 3.10.13
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.1
- Accelerate version: 0.25.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.1+cu121 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@ArthurZucker (maybe @Narsil for tokenizers?)
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```py
from transformers import AutoTokenizer
text = "\n\t\n"
tokenizer_fast = AutoTokenizer.from_pretrained("RajuKandasamy/tamillama_tiny_30m", use_fast=True)
tokenizer_slow = AutoTokenizer.from_pretrained("RajuKandasamy/tamillama_tiny_30m", use_fast=False)
tokenizer_slow_non_legacy = AutoTokenizer.from_pretrained("RajuKandasamy/tamillama_tiny_30m", use_fast=False, legacy=False)
tokenizer_slow_legacy = AutoTokenizer.from_pretrained("RajuKandasamy/tamillama_tiny_30m", use_fast=False, legacy=True)
print(tokenizer_fast.tokenize(text)) # ['▁', '\n', '<unk>', '\n']
print(tokenizer_fast.encode(text)) # [1, 31654, 5, 0, 5]
print()
print(tokenizer_slow.tokenize(text)) # ['▁', '\n', '▁', '\n']
print(tokenizer_slow.encode(text)) # [1, 31654, 5, 31654, 5]
print()
print(tokenizer_slow_non_legacy.tokenize(text)) # ['▁', '\n', '▁', '\n']
print(tokenizer_slow_non_legacy.encode(text)) # [1, 31654, 5, 31654, 5]
print()
print(tokenizer_slow_legacy.tokenize(text)) # ['▁', '\n', '▁', '\n']
print(tokenizer_slow_legacy.encode(text)) # [1, 31654, 5, 31654, 5]
```
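For comparison, it may help to check what the raw sentencepiece model does with the same string (a sketch; the local `tokenizer.model` path is an assumption and refers to the file downloaded from the repo):

```python
import sentencepiece as spm

sp = spm.SentencePieceProcessor(model_file="tokenizer.model")  # local path is an assumption
print(sp.encode("\n\t\n", out_type=str))  # pieces produced by the reference implementation
print(sp.encode("\n\t\n"))                # corresponding ids
```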
### Expected behavior
I'm not quite sure which is the correct behaviour, since this is a custom-built llama tokenizer which does not include "byte fallback" tokens in the vocabulary. Intuitively, it would make sense to fall back to the unknown token if the byte fallback fails, but I assume we should follow how the sentencepiece implementation does it (which seems to be excluding it?) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28096/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28096/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28095 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28095/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28095/comments | https://api.github.com/repos/huggingface/transformers/issues/28095/events | https://github.com/huggingface/transformers/issues/28095 | 2,044,930,519 | I_kwDOCUB6oc554ynX | 28,095 | logits squeezing causes an error during the inference time (If the last epoch contains only one sample) | {
"login": "fadiabdulf",
"id": 81809527,
"node_id": "MDQ6VXNlcjgxODA5NTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/81809527?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fadiabdulf",
"html_url": "https://github.com/fadiabdulf",
"followers_url": "https://api.github.com/users/fadiabdulf/followers",
"following_url": "https://api.github.com/users/fadiabdulf/following{/other_user}",
"gists_url": "https://api.github.com/users/fadiabdulf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fadiabdulf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fadiabdulf/subscriptions",
"organizations_url": "https://api.github.com/users/fadiabdulf/orgs",
"repos_url": "https://api.github.com/users/fadiabdulf/repos",
"events_url": "https://api.github.com/users/fadiabdulf/events{/privacy}",
"received_events_url": "https://api.github.com/users/fadiabdulf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Can anyone solve this in the next release?",
"Hi @fadiabdulf, \r\n\r\nPlease follow the [issue template](https://github.com/huggingface/transformers/blob/main/.github/ISSUE_TEMPLATE/bug-report.yml) and provide information about the running environment (run `transformers-cli env` in the terminal and copy-paste the output); the error encountered including the stack trace and a minimal code snippet that can reproduce the issue.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,702 | 1,706 | 1,706 | NONE | null | https://github.com/huggingface/transformers/blob/238d2e3c44366aba9dc5c770c95475765a6725cb/src/transformers/trainer.py#L3452 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28095/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28095/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28094 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28094/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28094/comments | https://api.github.com/repos/huggingface/transformers/issues/28094/events | https://github.com/huggingface/transformers/pull/28094 | 2,044,897,781 | PR_kwDOCUB6oc5iLBlE | 28,094 | [`Add Mamba`] Adds support for the `Mamba` models | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Oups! Still planned but KVCache will come first",
"Alright I am picking this back up! ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28094). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Hey, it's great to see that mamba is being integrated in Transformers! Just wondering, is there a timeline or ETA for this PR? Thanks so much.",
"I want to merge it asap so probably max end of next week! ",
"Got side tracked, done with caching issues! \r\nWas meditating the stateful vs stateless approach we want to take to support torch compile and graphs without the extra complexity similarly to #27931. \r\nIt was advised that for mamba, cache should work in a stateless manner\r\n"
] | 1,702 | 1,708 | null | COLLABORATOR | null | # What does this PR do?
- [x] Implement cpu ops
- [x] Add integration tests
- [x] Implement fast path
- [ ] check training + peft
- [x] convert all checkpoints: just need to make sure config is correct
Feel free to try this:
```python
from transformers import MambaConfig, MambaForCausalLM, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("ArthurZ/mamba-130m")
tokenizer.pad_token = tokenizer.eos_token
model = MambaForCausalLM.from_pretrained("ArthurZ/mamba-130m", vocab_size=50280, num_hidden_layers=24, torch_dtype=torch.float32)
model.config.use_cache = True
input_ids = tokenizer("Hey how are you doing?", return_tensors= "pt")["input_ids"]
out = model.generate(input_ids, max_new_tokens=10)
print(tokenizer.batch_decode(out))
```
fixes #28086 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28094/reactions",
"total_count": 11,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 5,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28094/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28094",
"html_url": "https://github.com/huggingface/transformers/pull/28094",
"diff_url": "https://github.com/huggingface/transformers/pull/28094.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28094.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28093 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28093/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28093/comments | https://api.github.com/repos/huggingface/transformers/issues/28093/events | https://github.com/huggingface/transformers/issues/28093 | 2,044,645,421 | I_kwDOCUB6oc553tAt | 28,093 | load_balancing_loss in mixtral model | {
"login": "1773226512",
"id": 82659526,
"node_id": "MDQ6VXNlcjgyNjU5NTI2",
"avatar_url": "https://avatars.githubusercontent.com/u/82659526?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/1773226512",
"html_url": "https://github.com/1773226512",
"followers_url": "https://api.github.com/users/1773226512/followers",
"following_url": "https://api.github.com/users/1773226512/following{/other_user}",
"gists_url": "https://api.github.com/users/1773226512/gists{/gist_id}",
"starred_url": "https://api.github.com/users/1773226512/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/1773226512/subscriptions",
"organizations_url": "https://api.github.com/users/1773226512/orgs",
"repos_url": "https://api.github.com/users/1773226512/repos",
"events_url": "https://api.github.com/users/1773226512/events{/privacy}",
"received_events_url": "https://api.github.com/users/1773226512/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"same question.",
"Hi everyone, https://github.com/huggingface/transformers/pull/28115 should fix this issue",
"The aux loss is constant too(now is 8 but not 4, and no grad when BP) after using PR #28115 \r\n@younesbelkada @ArthurZucker ",
"I also meet this. And there is no `grad_fn` in the loss tensor.\r\n\r\n> The aux loss is constant too(now is 8 but not 4, and no grad when BP) after using PR #28115 @younesbelkada @ArthurZucker\r\n\r\n",
"Not sure I understand. Here is a small repro:\r\n```python\r\nfrom transformers import MixtralForCausalLM, MixtralConfig\r\nfrom transformers import AutoTokenizer\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"mistralai/Mixtral-8x7B-Instruct-v0.1\")\r\nconfig = MixtralConfig(num_hidden_layers=2, hidden_size=1024, )\r\nmodel = MixtralForCausalLM(config)\r\n\r\ninputs = tokenizer(\"This sentence should be properly\", return_tensors=\"pt\")\r\ninputs[\"labels\"] = inputs[\"input_ids\"]\r\nouts = model(**inputs)\r\nprint(outs.loss)\r\nouts.loss.backward()\r\n\r\ninputs = tokenizer(\"A prioris cela devrait changer\", return_tensors=\"pt\")\r\ninputs[\"labels\"] = inputs[\"input_ids\"]\r\nouts = model(**inputs)\r\nprint(outs.loss)\r\n```\r\n```\r\ntensor(10.5236, grad_fn=<NllLossBackward0>)\r\ntensor(10.8024, grad_fn=<NllLossBackward0>)\r\n```\r\ncould both of you open a new issue and elaborate? ",
"If I understand correctly, the loss consists of two parts, the autoregressive categorical loss on the one hand, and balancing the loss of each expert on the other.\r\nI print each of the above two losses before adding up the final one, however, only the first one has a `grad_fn` of `NllLossBackward0`, the second one is just a tensor without `grad_fn` . That's why the `grad_fn` of the final loss is `NllLossBackward0` instead of `AddBackward`.\r\nAnd the balancing expert's loss doesn't change during the training process. Maybe the changes you see are just changes due to the autoregressive categorical loss.\r\n\r\nI open a new issue https://github.com/huggingface/transformers/issues/28205."
] | 1,702 | 1,703 | 1,703 | NONE | null | ### System Info
torch '1.13.0+cu117'
### Who can help?
@ArthurZucker and @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The balancing loss function always returns a constant.
Here is the official code:
```python
def load_balancing_loss_func(gate_logits: torch.Tensor, num_experts: torch.Tensor = None, top_k=2) -> float:
r"""
Computes auxiliary load balancing loss as in Switch Transformer - implemented in Pytorch.
See Switch Transformer (https://arxiv.org/abs/2101.03961) for more details. This function implements the loss
function presented in equations (4) - (6) of the paper. It aims at penalizing cases where the routing between
experts is too unbalanced.
Args:
gate_logits (Union[`torch.Tensor`, Tuple[torch.Tensor]):
Logits from the `gate`, should be a tuple of tensors. Shape: [batch_size, seqeunce_length, num_experts].
num_experts (`int`, *optional*):
Number of experts
Returns:
The auxiliary loss.
"""
if gate_logits is None:
return 0
if isinstance(gate_logits, tuple):
# cat along the layers?
gate_logits = torch.cat(gate_logits, dim=0)
routing_weights, selected_experts = torch.topk(gate_logits, top_k, dim=-1)
routing_weights = routing_weights.softmax(dim=-1)
# cast the expert indices to int64, otherwise one-hot encoding will fail
if selected_experts.dtype != torch.int64:
selected_experts = selected_experts.to(torch.int64)
if len(selected_experts.shape) == 2:
selected_experts = selected_experts.unsqueeze(2)
expert_mask = torch.nn.functional.one_hot(selected_experts, num_experts)
# For a given token, determine if it was routed to a given expert.
expert_mask = torch.max(expert_mask, axis=-2).values
# cast to float32 otherwise mean will fail
expert_mask = expert_mask.to(torch.float32)
tokens_per_group_and_expert = torch.mean(expert_mask, axis=-2)
router_prob_per_group_and_expert = torch.mean(routing_weights, axis=-1)
return torch.mean(tokens_per_group_and_expert * router_prob_per_group_and_expert.unsqueeze(-1)) * (num_experts**2)
```
Here is my code:
```python
num_hidden_layers=30
batch_size = 16
seq_len = 32
num_experts = 8
gate_logits = tuple(torch.randn(batch_size*seq_len, num_experts) for _ in range(num_hidden_layers))
load_balancing_loss_func(gate_logits=gate_logits, num_experts=num_experts)
```
It always returns 4.
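For reference, below is a minimal sketch of the standard Switch Transformer formulation of this auxiliary loss (my own sketch based on the paper, not necessarily the exact code merged later); the important property is that the result depends on, and is differentiable with respect to, the router logits:

```python
import torch

def switch_aux_loss(gate_logits: torch.Tensor, num_experts: int, top_k: int = 2) -> torch.Tensor:
    # gate_logits: [num_tokens, num_experts] (concatenate the per-layer logits beforehand)
    routing_probs = torch.softmax(gate_logits, dim=-1)              # softmax over *all* experts
    _, selected_experts = torch.topk(routing_probs, top_k, dim=-1)  # experts chosen per token
    expert_mask = torch.nn.functional.one_hot(selected_experts, num_experts).amax(dim=-2)
    tokens_per_expert = expert_mask.float().mean(dim=0)             # fraction of tokens routed to each expert
    prob_per_expert = routing_probs.mean(dim=0)                     # mean router probability per expert
    return num_experts * torch.sum(tokens_per_expert * prob_per_expert)
```

With perfectly balanced routing this evaluates to roughly `top_k`, and it grows as the routing concentrates on a few experts, which is what makes it usable as a penalty.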
### Expected behavior
Please answer this question. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28093/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 2
} | https://api.github.com/repos/huggingface/transformers/issues/28093/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28092 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28092/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28092/comments | https://api.github.com/repos/huggingface/transformers/issues/28092/events | https://github.com/huggingface/transformers/pull/28092 | 2,044,627,356 | PR_kwDOCUB6oc5iKJ3W | 28,092 | Mixtral: Reduce and Increase Expert Models | {
"login": "minato-ellie",
"id": 82735346,
"node_id": "MDQ6VXNlcjgyNzM1MzQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/82735346?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/minato-ellie",
"html_url": "https://github.com/minato-ellie",
"followers_url": "https://api.github.com/users/minato-ellie/followers",
"following_url": "https://api.github.com/users/minato-ellie/following{/other_user}",
"gists_url": "https://api.github.com/users/minato-ellie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/minato-ellie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/minato-ellie/subscriptions",
"organizations_url": "https://api.github.com/users/minato-ellie/orgs",
"repos_url": "https://api.github.com/users/minato-ellie/repos",
"events_url": "https://api.github.com/users/minato-ellie/events{/privacy}",
"received_events_url": "https://api.github.com/users/minato-ellie/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @ArthurZucker @younesbelkada ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,702 | 1,705 | 1,705 | NONE | null | This PR adds a method to MixtralSparseMoeBlock, MixtralDecoderLayer and MixtralModel that removes one or more experts from MixtralModel. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28092/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28092/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28092",
"html_url": "https://github.com/huggingface/transformers/pull/28092",
"diff_url": "https://github.com/huggingface/transformers/pull/28092.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28092.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28091 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28091/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28091/comments | https://api.github.com/repos/huggingface/transformers/issues/28091/events | https://github.com/huggingface/transformers/pull/28091 | 2,044,572,059 | PR_kwDOCUB6oc5iJ-w6 | 28,091 | fix ConversationalPipeline docstring | {
"login": "not-lain",
"id": 70411813,
"node_id": "MDQ6VXNlcjcwNDExODEz",
"avatar_url": "https://avatars.githubusercontent.com/u/70411813?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/not-lain",
"html_url": "https://github.com/not-lain",
"followers_url": "https://api.github.com/users/not-lain/followers",
"following_url": "https://api.github.com/users/not-lain/following{/other_user}",
"gists_url": "https://api.github.com/users/not-lain/gists{/gist_id}",
"starred_url": "https://api.github.com/users/not-lain/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/not-lain/subscriptions",
"organizations_url": "https://api.github.com/users/not-lain/orgs",
"repos_url": "https://api.github.com/users/not-lain/repos",
"events_url": "https://api.github.com/users/not-lain/events{/privacy}",
"received_events_url": "https://api.github.com/users/not-lain/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"hii, @stevhliu and @MKhalusova following the issue mentioned above, i would like to ask for a review on this pull request",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28091). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,702 | 1,702 | 1,702 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. #28090
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28091/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28091/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28091",
"html_url": "https://github.com/huggingface/transformers/pull/28091",
"diff_url": "https://github.com/huggingface/transformers/pull/28091.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28091.patch",
"merged_at": 1702912117000
} |
https://api.github.com/repos/huggingface/transformers/issues/28090 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28090/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28090/comments | https://api.github.com/repos/huggingface/transformers/issues/28090/events | https://github.com/huggingface/transformers/issues/28090 | 2,044,571,444 | I_kwDOCUB6oc553a80 | 28,090 | fix documentation docstring | {
"login": "not-lain",
"id": 70411813,
"node_id": "MDQ6VXNlcjcwNDExODEz",
"avatar_url": "https://avatars.githubusercontent.com/u/70411813?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/not-lain",
"html_url": "https://github.com/not-lain",
"followers_url": "https://api.github.com/users/not-lain/followers",
"following_url": "https://api.github.com/users/not-lain/following{/other_user}",
"gists_url": "https://api.github.com/users/not-lain/gists{/gist_id}",
"starred_url": "https://api.github.com/users/not-lain/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/not-lain/subscriptions",
"organizations_url": "https://api.github.com/users/not-lain/orgs",
"repos_url": "https://api.github.com/users/not-lain/repos",
"events_url": "https://api.github.com/users/not-lain/events{/privacy}",
"received_events_url": "https://api.github.com/users/not-lain/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,702 | 1,702 | 1,702 | CONTRIBUTOR | null | ### System Info
Following the documentation on Hugging Face about [ConversationalPipeline](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.ConversationalPipeline.example), the example shown there seems to be broken.

### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Check out the link above.
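For reference, a working example of the intended usage might look like this (my own sketch; the model choice is only an illustration):

```python
from transformers import pipeline, Conversation

# Any conversational checkpoint works here; DialoGPT is only an example
chatbot = pipeline("conversational", model="microsoft/DialoGPT-small")

conversation = Conversation("Going to the movies tonight - any suggestions?")
conversation = chatbot(conversation)
print(conversation)
```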
### Expected behavior
A parsable example. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28090/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28090/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28089 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28089/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28089/comments | https://api.github.com/repos/huggingface/transformers/issues/28089/events | https://github.com/huggingface/transformers/issues/28089 | 2,044,319,489 | I_kwDOCUB6oc552dcB | 28,089 | Error in pipeline while inferencing Llama2, colab link below | {
"login": "goblinvalo",
"id": 153084421,
"node_id": "U_kgDOCR_iBQ",
"avatar_url": "https://avatars.githubusercontent.com/u/153084421?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/goblinvalo",
"html_url": "https://github.com/goblinvalo",
"followers_url": "https://api.github.com/users/goblinvalo/followers",
"following_url": "https://api.github.com/users/goblinvalo/following{/other_user}",
"gists_url": "https://api.github.com/users/goblinvalo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/goblinvalo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/goblinvalo/subscriptions",
"organizations_url": "https://api.github.com/users/goblinvalo/orgs",
"repos_url": "https://api.github.com/users/goblinvalo/repos",
"events_url": "https://api.github.com/users/goblinvalo/events{/privacy}",
"received_events_url": "https://api.github.com/users/goblinvalo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@goblinvalo you can follow the [model card](https://huggingface.co/TheBloke/Llama-2-7B-Chat-GPTQ) for the `TheBloke/Llama-2-7B-Chat-GPTQ` model \r\ni just used the following code in colab and it worked for me \r\n\r\n```bash\r\n!pip3 install transformers>=4.32.0 optimum>=1.12.0\r\n!pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7\r\n``` \r\n\r\n```py\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer, pipeline\r\n\r\nmodel_name_or_path = \"TheBloke/Llama-2-7b-Chat-GPTQ\"\r\n# To use a different branch, change revision\r\n# For example: revision=\"gptq-4bit-64g-actorder_True\"\r\nmodel = AutoModelForCausalLM.from_pretrained(model_name_or_path,\r\n device_map=\"auto\",\r\n trust_remote_code=False,\r\n revision=\"main\")\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)\r\n\r\nprompt = \"Tell me about AI\" # 👈 your prompt here\r\nprompt_template=f'''[INST] <<SYS>>\r\nYou are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.\r\n<</SYS>>\r\n{prompt}[/INST]\r\n\r\n'''\r\n\r\n# print(\"\\n\\n*** Generate:\")\r\n# input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()\r\n# output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)\r\n# print(tokenizer.decode(output[0]))\r\nprint(\"*** Pipeline:\")\r\npipe = pipeline(\r\n \"text-generation\",\r\n model=model,\r\n tokenizer=tokenizer,\r\n max_new_tokens=512,\r\n do_sample=True,\r\n temperature=0.7,\r\n top_p=0.95,\r\n top_k=40,\r\n repetition_penalty=1.1\r\n)\r\n\r\nprint(pipe(prompt_template)[0]['generated_text'])\r\n```\r\n",
"\r\n@not-lain Thank you so much for helping, it is working fine thanks a lot for the code."
] | 1,702 | 1,702 | 1,702 | NONE | null | ### System Info
Here is the Colab notebook link:
https://colab.research.google.com/drive/1rjDR7i9MWkTmOhsZEg3oU-AHntQhBFCD?usp=sharing
Until yesterday it was working fine; I got this error today.
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
https://colab.research.google.com/drive/1rjDR7i9MWkTmOhsZEg3oU-AHntQhBFCD?usp=sharing
### Expected behavior
Until yesterday it was working fine; it looks like some update was made by the maintainers today, and I got this error all of a sudden. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28089/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28089/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28088 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28088/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28088/comments | https://api.github.com/repos/huggingface/transformers/issues/28088/events | https://github.com/huggingface/transformers/pull/28088 | 2,044,314,999 | PR_kwDOCUB6oc5iJHBv | 28,088 | Support `DeepSpeed` when using auto find batch size | {
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I tried the latest main as of today which should include this commit and unfortunately it still crashes\r\n```\r\n File \"/data/v/ft/lib/python3.10/site-packages/accelerate/accelerator.py\", line 1958, in backward\r\n self.deepspeed_engine_wrapped.backward(loss, **kwargs)\r\nAttributeError: 'NoneType' object has no attribute 'backward'\r\n```\r\n\r\n---\r\n\r\n```\r\ntransformers[accelerate,deepspeed,sentencepiece,tokenizers] @ git+https://github.com/huggingface/transformers@edb170238febf7fc3e3278ed5b9ca0b2c40c70e3\r\nsafetensors==0.4.1\r\naccelerate==0.26.1\r\ndeepspeed @ git+https://github.com/microsoft/DeepSpeed@a85b6e472534d2e0b61fe234fae4f6a2332c95bf\r\n```\r\n\r\n---\r\n\r\nI think the root cause highlighted here still applies: https://github.com/huggingface/transformers/issues/24558#issuecomment-1850723229"
] | 1,702 | 1,705 | 1,704 | CONTRIBUTOR | null | # What does this PR do?
This PR addresses https://github.com/huggingface/transformers/issues/24558 by letting the `Trainer` modify the deepspeed plugin *specifically when using auto batch size finder*.
It refactors the logic for propagating the DeepSpeed arguments into its own function, so that on the fly we can modify any arguments related to the train batch size if needed.
Fixes # (issue)
Fixes https://github.com/huggingface/transformers/issues/24558
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@pacman100
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28088/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28088/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28088",
"html_url": "https://github.com/huggingface/transformers/pull/28088",
"diff_url": "https://github.com/huggingface/transformers/pull/28088.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28088.patch",
"merged_at": 1704884593000
} |
https://api.github.com/repos/huggingface/transformers/issues/28087 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28087/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28087/comments | https://api.github.com/repos/huggingface/transformers/issues/28087/events | https://github.com/huggingface/transformers/pull/28087 | 2,044,310,026 | PR_kwDOCUB6oc5iJF7s | 28,087 | [docs] General doc fixes | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @amyeroberts! The CI error is:\r\n\r\n```\r\nException: The following objects are in the public init so should be documented:\r\n - NerPipeline\r\n```\r\n\r\nWould this be fixed by removing `NerPipeline` from the following?\r\n- https://github.com/huggingface/transformers/blob/0d63d17765f954ba2b050c1d8be0001e952b7830/src/transformers/pipelines/__init__.py#L82\r\n\r\n- https://github.com/huggingface/transformers/blob/0d63d17765f954ba2b050c1d8be0001e952b7830/src/transformers/__init__.py#L957",
"@stevhliu Yes, however this would stop NerPipeline being importable from the top level of transformers, which would be a breaking change.\r\n\r\nInstead, we can make an exception for it in the doc checks by adding it to [DEPRECATED_OBJECTS](https://github.com/huggingface/transformers/blob/e6cb8e052a74313c2b2440c43df26303d379df71/utils/check_repo.py#L906) (which it effectively is if TokenClassificationPipeline supercedes it) ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28087). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,702 | 1,702 | 1,702 | MEMBER | null | Cleans up a few things:
- Fused AWQ benchmark [table](https://huggingface.co/docs/transformers/main/en/quantization#fused-awq-modules) was broken because there wasn't a blank line between the title and the table
- tidies up the new sections added in the GPU inference doc
- removes the `NerPipeline` from internal discussion [here](https://huggingface.slack.com/archives/C02GLJ5S0E9/p1702480089513399) (basically it's identical to the `TokenClassification` pipeline and it may cause confusion)
- removes `perf_train_tpu.md` because it is empty and doesn't add any value to the docs | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28087/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28087/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28087",
"html_url": "https://github.com/huggingface/transformers/pull/28087",
"diff_url": "https://github.com/huggingface/transformers/pull/28087.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28087.patch",
"merged_at": 1702925049000
} |
https://api.github.com/repos/huggingface/transformers/issues/28086 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28086/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28086/comments | https://api.github.com/repos/huggingface/transformers/issues/28086/events | https://github.com/huggingface/transformers/issues/28086 | 2,044,202,742 | I_kwDOCUB6oc552A72 | 28,086 | Add [`Mamba`] model | {
"login": "JLTastet",
"id": 8004066,
"node_id": "MDQ6VXNlcjgwMDQwNjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8004066?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JLTastet",
"html_url": "https://github.com/JLTastet",
"followers_url": "https://api.github.com/users/JLTastet/followers",
"following_url": "https://api.github.com/users/JLTastet/following{/other_user}",
"gists_url": "https://api.github.com/users/JLTastet/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JLTastet/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JLTastet/subscriptions",
"organizations_url": "https://api.github.com/users/JLTastet/orgs",
"repos_url": "https://api.github.com/users/JLTastet/repos",
"events_url": "https://api.github.com/users/JLTastet/events{/privacy}",
"received_events_url": "https://api.github.com/users/JLTastet/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | open | false | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
] | [
"Thanks for opening this issue! Given the sensitivity of this model, the HF team will take it over, we'll have a look at your fork and add you as a co-other 🤗 ",
"Thanks a lot!\r\n\r\nMy fork is largely inspired from the original Mamba repo, the differences mostly consisting in boilerplate code. So don’t hesitate to start from the upstream repo.\r\n\r\nI (and the linter) have noticed a couple of bugs or pieces of dead code in the upstream (some of which remain in my fork). So keep an eye for them!",
"I did a similar study https://github.com/LegallyCoder/mamba-hf .\r\nI'm working on this too.",
"I've seen a CPU only implementation fork mentioned somewhere in the source repo issues. The author of the fork removed Triton and CUDA dependencies.\r\n\r\nFound it: https://github.com/kroggen/mamba-cpu\r\nTraining is not working there, tho. Maybe you can get in touch with the author."
] | 1,702 | 1,705 | null | NONE | null | ### Model description
Mamba is a new architecture proposed in [arXiv:2312.00752](https://arxiv.org/abs/2312.00752) by Albert Gu (CMU) and Tri Dao (Princeton).
It is inspired by structured state space models (SSMs), but with the addition of a selection mechanism that allows it to combine the ability of transformers to perform content-based reasoning with the performance of SSMs on long sequences. Mamba can be efficiently trained in parallel while also enjoying efficient inference by running recurrently.
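For readers less familiar with SSMs, the core recurrence is roughly the following (my own sketch, paraphrasing the paper's notation; Ā and B̄ denote discretizations of the continuous parameters A and B with step size Δ, and in Mamba the parameters Δ, B and C are computed from the input, which is the "selection" mechanism):

```math
h_t = \bar{A}\,h_{t-1} + \bar{B}\,x_t, \qquad y_t = C\,h_t
```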
The paper claims SoTA performance on various modalities, with performance tested up to 2.8B parameters. Crucially, the model cannot be implemented efficiently using only PyTorch operations; instead, it relies on optimised CUDA and `triton` kernels.
The original implementation by the authors is available at https://github.com/state-spaces/mamba/tree/main under an Apache 2.0 license.
Starting from their implementation, I have started porting the model to 🤗 Transformers. This is **work in progress** 🚧, and can be found in my fork at https://github.com/JLTastet/transformers/tree/mamba.
I can open a PR, but in its current state my branch is not ready to be merged. I will also open an issue in the original repo to let the authors know about this, in case they want to chime in.
What I got working:
- Forward and backward passes.
- Loading checkpoints from the Hub using `AutoModel`.
What still needs some work:
- Even though backprop itself works, I get some CUDA errors when using `Trainer`, and I still don’t understand what causes them.
- Compiling the CUDA kernels takes ~1 hour. This does not happen with the original package, so I think they are using prebuilt binaries. I didn’t manage to port that part so far.
- I don’t think there is any non-CUDA fallback path, so this model probably cannot run without CUDA in its current form.
- When using `generate`, we should check that the optimised recurrent inference is used instead of the slower autoregressive inference.
- Tests, tests and moar tests.
- Most of the documentation needs to be written.
- Add the relevant dependencies.
- The code could certainly benefit from some cleanup (remove dead code, many TODO’s, update copyright notices, ...).
I am opening this issue to avoid duplicating work, since I saw [some mention](https://github.com/huggingface/transformers/issues/28049#issuecomment-1857574924) of Mamba today by @ArthurZucker.
My main motivation for porting this model is to learn a bit more about it (and about the internals of 🤗 Transformers) and to run more evals. Some of you probably know this library much better than me, so feel free to write your own implementation if you can do it better or quicker. Otherwise, don’t hesitate to build on top of my fork.
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
- Paper: https://arxiv.org/abs/2312.00752 by @albertfgu and @tridao.
- Original repo by the authors: https://github.com/state-spaces/mamba/tree/main
- My WIP implementation in 🤗 Transformers: https://github.com/JLTastet/transformers/tree/mamba | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28086/reactions",
"total_count": 7,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 7,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28086/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28085 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28085/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28085/comments | https://api.github.com/repos/huggingface/transformers/issues/28085/events | https://github.com/huggingface/transformers/pull/28085 | 2,044,194,088 | PR_kwDOCUB6oc5iIsbN | 28,085 | Fix Vip-llava docs | {
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28085). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,702 | 1,703 | 1,702 | CONTRIBUTOR | null | # What does this PR do?
Fixes some nits on the Vip-Llava docs, in fact users should be aware that the correct prompt format is different from the llava one, as stated on the model card: https://huggingface.co/llava-hf/vip-llava-7b-hf#how-to-use-the-model
Also updated the docs in the modeling
cc @amyeroberts
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28085/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28085/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28085",
"html_url": "https://github.com/huggingface/transformers/pull/28085",
"diff_url": "https://github.com/huggingface/transformers/pull/28085.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28085.patch",
"merged_at": 1702667807000
} |
https://api.github.com/repos/huggingface/transformers/issues/28084 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28084/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28084/comments | https://api.github.com/repos/huggingface/transformers/issues/28084/events | https://github.com/huggingface/transformers/pull/28084 | 2,044,189,740 | PR_kwDOCUB6oc5iIrf6 | 28,084 | Misc updates to CPU Dockerfiles | {
"login": "ashahba",
"id": 12436063,
"node_id": "MDQ6VXNlcjEyNDM2MDYz",
"avatar_url": "https://avatars.githubusercontent.com/u/12436063?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ashahba",
"html_url": "https://github.com/ashahba",
"followers_url": "https://api.github.com/users/ashahba/followers",
"following_url": "https://api.github.com/users/ashahba/following{/other_user}",
"gists_url": "https://api.github.com/users/ashahba/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ashahba/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ashahba/subscriptions",
"organizations_url": "https://api.github.com/users/ashahba/orgs",
"repos_url": "https://api.github.com/users/ashahba/repos",
"events_url": "https://api.github.com/users/ashahba/events{/privacy}",
"received_events_url": "https://api.github.com/users/ashahba/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@dmsuehir feel free to review too.\r\nThanks.",
"cc @ydshieh for first review ",
"Hi @ashahba \r\n\r\nThank you for the issue and this PR 🤗 .\r\n\r\nThese files are not used at all in our CI. We should probably them. So we won't merge this fix, sorry.\r\n\r\n",
"Thanks @ydshieh\r\nDo you want me to close the PR and submit one that deletes them?",
"> Thanks @ydshieh Do you want me to close the PR and submit one that deletes them?\r\n\r\nHi @ashahba I will discuss with team members and come back to you :-) Thanks.",
"Hello again @ashahba Yes, we can remove them, and if you want to open a PR for this, go ahead 🤗 Thanks a lot.",
"Thanks @ydshieh \r\nThe issue and the PR to delete those Dockerfiles are here:\r\nIssue: https://github.com/huggingface/transformers/issues/28148\r\nPR: https://github.com/huggingface/transformers/pull/28149\r\n\r\nOnce this one is merged, I'll close the original PR and both issues can be marked as done as well.\r\n\r\nThanks."
] | 1,702 | 1,703 | 1,703 | CONTRIBUTOR | null | # What does this PR do?
This PR fixes #28082
Currently the `cpu` containers, and possibly the GPU ones as well, under the `docker` folder are out of date. The reason is that the [ubuntu:18.04](https://hub.docker.com/_/ubuntu/tags?page=1&name=18.04) base image was last updated over 6 months ago and most likely won't receive any more updates, which causes the containers to fail during the build.
I also noticed that, even after updating to newer base images, the Torch CPU containers install the `GPU` distribution as well, which is not only unnecessary but also leads to large final containers.
```
$ docker images | grep transformers-pytorch-cpu
transformers-pytorch-cpu-pr latest 9e377aa5efd9 20 minutes ago 867MB
transformers-pytorch-cpu-main latest 9a88121eeb33 20 hours ago 1.61GB
```
This PR:
- Sets the default base to [ubuntu:22.04](https://hub.docker.com/_/ubuntu/tags?page=1&name=22.04) which should be supported for a couple of years
- Adds appropriate license headers to the files
- Removes unnecessary `Torch GPU` bits from `CPU` containers | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28084/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28084/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28084",
"html_url": "https://github.com/huggingface/transformers/pull/28084",
"diff_url": "https://github.com/huggingface/transformers/pull/28084.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28084.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28083 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28083/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28083/comments | https://api.github.com/repos/huggingface/transformers/issues/28083/events | https://github.com/huggingface/transformers/pull/28083 | 2,044,157,025 | PR_kwDOCUB6oc5iIkVh | 28,083 | PatchtTST and PatchTSMixer fixes | {
"login": "wgifford",
"id": 79663411,
"node_id": "MDQ6VXNlcjc5NjYzNDEx",
"avatar_url": "https://avatars.githubusercontent.com/u/79663411?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wgifford",
"html_url": "https://github.com/wgifford",
"followers_url": "https://api.github.com/users/wgifford/followers",
"following_url": "https://api.github.com/users/wgifford/following{/other_user}",
"gists_url": "https://api.github.com/users/wgifford/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wgifford/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wgifford/subscriptions",
"organizations_url": "https://api.github.com/users/wgifford/orgs",
"repos_url": "https://api.github.com/users/wgifford/repos",
"events_url": "https://api.github.com/users/wgifford/events{/privacy}",
"received_events_url": "https://api.github.com/users/wgifford/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "kashif",
"id": 8100,
"node_id": "MDQ6VXNlcjgxMDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8100?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kashif",
"html_url": "https://github.com/kashif",
"followers_url": "https://api.github.com/users/kashif/followers",
"following_url": "https://api.github.com/users/kashif/following{/other_user}",
"gists_url": "https://api.github.com/users/kashif/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kashif/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kashif/subscriptions",
"organizations_url": "https://api.github.com/users/kashif/orgs",
"repos_url": "https://api.github.com/users/kashif/repos",
"events_url": "https://api.github.com/users/kashif/events{/privacy}",
"received_events_url": "https://api.github.com/users/kashif/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "kashif",
"id": 8100,
"node_id": "MDQ6VXNlcjgxMDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8100?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kashif",
"html_url": "https://github.com/kashif",
"followers_url": "https://api.github.com/users/kashif/followers",
"following_url": "https://api.github.com/users/kashif/following{/other_user}",
"gists_url": "https://api.github.com/users/kashif/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kashif/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kashif/subscriptions",
"organizations_url": "https://api.github.com/users/kashif/orgs",
"repos_url": "https://api.github.com/users/kashif/repos",
"events_url": "https://api.github.com/users/kashif/events{/privacy}",
"received_events_url": "https://api.github.com/users/kashif/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi @wgifford, thanks for opening this PR! \r\n\r\nLet us know when it's ready for review! ",
"Hi @amyeroberts I just rebased on current main. I think it is ready for review.",
"thanks @amyeroberts I'll fix it up and push suggestions",
"> thanks @amyeroberts I'll fix it up and push suggestions\r\n\r\nHi Kashif, I think all the suggestions are good. Please reach out if any issues.",
"@kashif I see this is passing -- any other issues to address before we merge?",
"@amyeroberts Can this be merged? Some of these fixes are critical.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28083). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"> Thanks for iterating on this - LGTM!\r\n\r\nThank you @amyeroberts!"
] | 1,702 | 1,706 | 1,706 | CONTRIBUTOR | null | # What does this PR do?
Makes PatchTST and PatchTSMixer interfaces more consistent -- using similar parameter names for method arguments and returned data objects.
Fixes a few minor bugs in PatchTST implementation.
Ensures more consistent output shapes with regression when an output_distribution is chosen (in both forward and generate methods).
Fixes slow tests for PatchTST. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28083/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28083/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28083",
"html_url": "https://github.com/huggingface/transformers/pull/28083",
"diff_url": "https://github.com/huggingface/transformers/pull/28083.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28083.patch",
"merged_at": 1706522967000
} |
https://api.github.com/repos/huggingface/transformers/issues/28082 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28082/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28082/comments | https://api.github.com/repos/huggingface/transformers/issues/28082/events | https://github.com/huggingface/transformers/issues/28082 | 2,044,146,496 | I_kwDOCUB6oc551zNA | 28,082 | Dockerfiles under "docker" folder specially CPU ones fail to build. | {
"login": "ashahba",
"id": 12436063,
"node_id": "MDQ6VXNlcjEyNDM2MDYz",
"avatar_url": "https://avatars.githubusercontent.com/u/12436063?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ashahba",
"html_url": "https://github.com/ashahba",
"followers_url": "https://api.github.com/users/ashahba/followers",
"following_url": "https://api.github.com/users/ashahba/following{/other_user}",
"gists_url": "https://api.github.com/users/ashahba/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ashahba/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ashahba/subscriptions",
"organizations_url": "https://api.github.com/users/ashahba/orgs",
"repos_url": "https://api.github.com/users/ashahba/repos",
"events_url": "https://api.github.com/users/ashahba/events{/privacy}",
"received_events_url": "https://api.github.com/users/ashahba/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @ydshieh "
] | 1,702 | 1,703 | 1,703 | CONTRIBUTOR | null | ### System Info
- tip of `main` hence `v4.36.1`
- Running Docker version 24.0.7 on Linux Ubuntu 22.04
### Reproduction
Running:
```
docker build -f docker/transformers-pytorch-cpu/Dockerfile . --tag transformers-pytorch-cpu
```
from tip of `main` branch results in:
```
=> [5/6] COPY . transformers/ 1.5s
=> ERROR [6/6] RUN cd transformers/ && python3 -m pip install --no-cache-dir . 6.8s
------
> [6/6] RUN cd transformers/ && python3 -m pip install --no-cache-dir .:
0.939 Processing /workspace/transformers
0.942 Installing build dependencies: started
3.357 Installing build dependencies: finished with status 'done'
3.358 Getting requirements to build wheel: started
4.225 Getting requirements to build wheel: finished with status 'done'
4.228 Preparing metadata (pyproject.toml): started
5.367 Preparing metadata (pyproject.toml): finished with status 'done'
6.195 Collecting pyyaml>=5.1
6.237 Downloading PyYAML-6.0.1-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (677 kB)
6.365 Collecting requests
6.375 Downloading requests-2.27.1-py2.py3-none-any.whl (63 kB)
6.462 ERROR: Could not find a version that satisfies the requirement huggingface-hub<1.0,>=0.19.3 (from transformers) (from versions: 0.0.1, 0.0.2, 0.0.3rc1, 0.0.3rc2, 0.0.5, 0.0.6, 0.0.7, 0.0.8, 0.0.9, 0.0.10, 0.0.11, 0.0.12, 0.0.13, 0.0.14, 0.0.15, 0.0.16, 0.0.17, 0.0.18, 0.0.19, 0.1.0, 0.1.1, 0.1.2, 0.2.0, 0.2.1, 0.4.0)
6.462 ERROR: No matching distribution found for huggingface-hub<1.0,>=0.19.3
------
Dockerfile:22
--------------------
21 | COPY . transformers/
22 | >>> RUN cd transformers/ && \
23 | >>> python3 -m pip install --no-cache-dir .
24 |
--------------------
ERROR: failed to solve: process "/bin/sh -c cd transformers/ && python3 -m pip install --no-cache-dir ." did not complete successfully: exit code: 1
```
### Expected behavior
Container builds, especially on recent Docker versions, should be successful.
I have fixes I can supply for the build issues and I'm going to submit the first set in a subsequent PR for your review along with looking into the root cause.
Thanks. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28082/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28082/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28081 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28081/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28081/comments | https://api.github.com/repos/huggingface/transformers/issues/28081/events | https://github.com/huggingface/transformers/pull/28081 | 2,044,120,456 | PR_kwDOCUB6oc5iIcBv | 28,081 | More TF fixes | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Quick re-ping about this one @amyeroberts! (it's a very small PR, but I'd like to get it in so the CI stops upsetting Yih-Dar)"
] | 1,702 | 1,702 | 1,702 | MEMBER | null | The TF `build()` PR brought back an old issue where TF would latch onto the first concrete shape it saw, which would then become the model's save signature. We avoid it by hitting `self._set_save_spec()` with flexible shapes ASAP when models are created.
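For illustration, the flexible-shape idea looks roughly like this (a sketch only; where exactly the call happens inside the library and the exact tensor names may differ):
```python
import tensorflow as tf
from transformers import TFAutoModel

# Tiny test checkpoint used only to keep the example light; any TF model works here.
model = TFAutoModel.from_pretrained("hf-internal-testing/tiny-random-bert")

# Register a fully dynamic serving spec so the first concrete batch/sequence shape
# never gets frozen into the SavedModel signature.
# Note: _set_save_spec is a private Keras API and its signature can vary across TF versions.
model._set_save_spec(
    {
        "input_ids": tf.TensorSpec([None, None], tf.int32, name="input_ids"),
        "attention_mask": tf.TensorSpec([None, None], tf.int32, name="attention_mask"),
    }
)
```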
This PR also replaces a few more instances of `build()` with `build_in_name_scope()` in our tests. This should hopefully fix the CI issues (cc @ydshieh) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28081/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28081/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28081",
"html_url": "https://github.com/huggingface/transformers/pull/28081",
"diff_url": "https://github.com/huggingface/transformers/pull/28081.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28081.patch",
"merged_at": 1702913164000
} |
https://api.github.com/repos/huggingface/transformers/issues/28080 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28080/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28080/comments | https://api.github.com/repos/huggingface/transformers/issues/28080/events | https://github.com/huggingface/transformers/pull/28080 | 2,044,009,323 | PR_kwDOCUB6oc5iID1Q | 28,080 | Update fixtures-image-utils | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,702 | 1,702 | 1,702 | MEMBER | null | The [hf-internal-testing/fixtures_image_utils](https://huggingface.co/datasets/hf-internal-testing/fixtures_image_utils) dataset fixture will break with the next release of datasets .
This dataset has a script that writes cache image files that are used in tests.
But in the next release the dataset is loaded from the Parquet files (so there is no local cache image file anymore).
FYI the issue appears because of new security features: `datasets` now loads the datasets' Parquet exports by default, so that dataset scripts are not run when it can be avoided.
To fix this I opened a PR to remove the dataset script here: https://huggingface.co/datasets/hf-internal-testing/fixtures_image_utils/discussions/1
And in this PR I pass `revision="refs/pr/1"` in the tests to use the fixed dataset fixture and update the tests that rely on it.
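Concretely, pointing a test at the open Hub PR looks roughly like this (a sketch; the split and column names are assumptions):
```python
from datasets import load_dataset

# Load the fixture from the open Hub PR instead of the main branch.
dataset = load_dataset(
    "hf-internal-testing/fixtures_image_utils",
    split="test",  # assumed split name
    revision="refs/pr/1",
)
image = dataset[0]["image"]  # assumed column name
```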
IMO later we can merge the PR on HF and remove the `revision` argument (if we do this right now it will break tests in the other PRs on github)
cc @NielsRogge I think it's impacting tests you implemented | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28080/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28080/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28080",
"html_url": "https://github.com/huggingface/transformers/pull/28080",
"diff_url": "https://github.com/huggingface/transformers/pull/28080.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28080.patch",
"merged_at": 1702659517000
} |
https://api.github.com/repos/huggingface/transformers/issues/28079 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28079/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28079/comments | https://api.github.com/repos/huggingface/transformers/issues/28079/events | https://github.com/huggingface/transformers/issues/28079 | 2,043,961,955 | I_kwDOCUB6oc551GJj | 28,079 | Expose `gradient_as_bucket_view` as training argument for `DDP` | {
"login": "chiragjn",
"id": 10295418,
"node_id": "MDQ6VXNlcjEwMjk1NDE4",
"avatar_url": "https://avatars.githubusercontent.com/u/10295418?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chiragjn",
"html_url": "https://github.com/chiragjn",
"followers_url": "https://api.github.com/users/chiragjn/followers",
"following_url": "https://api.github.com/users/chiragjn/following{/other_user}",
"gists_url": "https://api.github.com/users/chiragjn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chiragjn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chiragjn/subscriptions",
"organizations_url": "https://api.github.com/users/chiragjn/orgs",
"repos_url": "https://api.github.com/users/chiragjn/repos",
"events_url": "https://api.github.com/users/chiragjn/events{/privacy}",
"received_events_url": "https://api.github.com/users/chiragjn/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | {
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
] | [
"cc @muellerzr @pacman100 ",
"In the start of Q1 I'll be looking at closely integrating an Accelerate configuration as part of the `TrainingArguments` that will let you configure and pass anything you want in, rather than blowing up the `TrainingArguments` with many different arguments. If you can wait a few weeks until then (first few weeks into January) I should have something workable. "
] | 1,702 | 1,702 | null | NONE | null | ### Feature request
As the title says, add `gradient_as_bucket_view` as a training argument (default `False`).
### Motivation
I have been experimenting with qlora fine-tuning LLMs on multiple A10 GPUs and I am leveraging DDP. I was going through the torch docs and https://pytorch.org/docs/2.1/generated/torch.nn.parallel.DistributedDataParallel.html and it seems the `gradient_as_bucket_view` argument can save a little bit of memory. It would be great to have it added as accelerate's DDP plugin already supports it.
I am already experimenting with it to test it out
```python
class HFTrainer(Trainer):
    def _wrap_model(self, model, training=True, dataloader=None):
        outputs = super()._wrap_model(model, training, dataloader)
        if self.args.parallel_mode == ParallelMode.DISTRIBUTED and self.accelerator.ddp_handler:
            self.accelerator.ddp_handler.gradient_as_bucket_view = True
        return outputs
```
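Outside of `Trainer`, the switch this request maps onto already exists at the Accelerate level and looks roughly like this (a sketch, not the proposed `TrainingArguments` API):
```python
from accelerate import Accelerator, DistributedDataParallelKwargs

# DistributedDataParallelKwargs already exposes gradient_as_bucket_view; the Trainer
# change would essentially forward a new training argument into this handler.
ddp_kwargs = DistributedDataParallelKwargs(gradient_as_bucket_view=True)
accelerator = Accelerator(kwargs_handlers=[ddp_kwargs])
```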
### Your contribution
Let me know, I can also work on a PR for this as the change is relatively small | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28079/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28079/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28078 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28078/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28078/comments | https://api.github.com/repos/huggingface/transformers/issues/28078/events | https://github.com/huggingface/transformers/pull/28078 | 2,043,945,589 | PR_kwDOCUB6oc5iH14z | 28,078 | Fix bug for checkpoint saving on multi node training setting | {
"login": "dumpmemory",
"id": 64742282,
"node_id": "MDQ6VXNlcjY0NzQyMjgy",
"avatar_url": "https://avatars.githubusercontent.com/u/64742282?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dumpmemory",
"html_url": "https://github.com/dumpmemory",
"followers_url": "https://api.github.com/users/dumpmemory/followers",
"following_url": "https://api.github.com/users/dumpmemory/following{/other_user}",
"gists_url": "https://api.github.com/users/dumpmemory/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dumpmemory/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dumpmemory/subscriptions",
"organizations_url": "https://api.github.com/users/dumpmemory/orgs",
"repos_url": "https://api.github.com/users/dumpmemory/repos",
"events_url": "https://api.github.com/users/dumpmemory/events{/privacy}",
"received_events_url": "https://api.github.com/users/dumpmemory/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@muellerzr Is this a regression fix like #28009 or resolving an existing bug?",
"@amyeroberts regression fix, includes a fix for the same issue but on multinode. ",
"cc @thundergolfer too since I know you're trying to keep an eye on them all :)",
"Thanks @dumpmemory. Another edge case squashed!"
] | 1,702 | 1,702 | 1,702 | CONTRIBUTOR | null | # What does this PR do?
Fixes # (issue)
Fixes a bug in checkpoint saving when training in a multi-node setting with a shared file system.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@muellerzr
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28078/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28078/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28078",
"html_url": "https://github.com/huggingface/transformers/pull/28078",
"diff_url": "https://github.com/huggingface/transformers/pull/28078.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28078.patch",
"merged_at": 1702657136000
} |
https://api.github.com/repos/huggingface/transformers/issues/28077 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28077/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28077/comments | https://api.github.com/repos/huggingface/transformers/issues/28077/events | https://github.com/huggingface/transformers/pull/28077 | 2,043,933,828 | PR_kwDOCUB6oc5iHzTT | 28,077 | Disable jitter noise during evaluation in SwitchTransformers | {
"login": "DaizeDong",
"id": 113810510,
"node_id": "U_kgDOBsicTg",
"avatar_url": "https://avatars.githubusercontent.com/u/113810510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DaizeDong",
"html_url": "https://github.com/DaizeDong",
"followers_url": "https://api.github.com/users/DaizeDong/followers",
"following_url": "https://api.github.com/users/DaizeDong/following{/other_user}",
"gists_url": "https://api.github.com/users/DaizeDong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DaizeDong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DaizeDong/subscriptions",
"organizations_url": "https://api.github.com/users/DaizeDong/orgs",
"repos_url": "https://api.github.com/users/DaizeDong/repos",
"events_url": "https://api.github.com/users/DaizeDong/events{/privacy}",
"received_events_url": "https://api.github.com/users/DaizeDong/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28077). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,702 | 1,702 | 1,702 | CONTRIBUTOR | null | # What does this PR do?
The jitter noise was mistakenly applied during evaluation in GPTSanJapanese and SwitchTransformers, which introduced nondeterminism into the evaluation results. Now the bug is fixed, and the implementation matches the [native code](https://github.com/tensorflow/mesh/blob/e6798a2610a2c2f4c4cd236d8214422cb1ecc00a/mesh_tensorflow/transformer/moe.py#L903-L905) of Switch Transformers.
The former implementation is:
```python
if self.jitter_noise > 0:
    # Multiply the token inputs by the uniform distribution - adding some noise
    hidden_states *= torch.empty_like(hidden_states).uniform_(1.0 - self.jitter_noise, 1.0 + self.jitter_noise)
```
The fixed implementation is:
```python
if self.training and self.jitter_noise > 0:
    # Multiply the token inputs by the uniform distribution - adding some noise
    hidden_states *= torch.empty_like(hidden_states).uniform_(1.0 - self.jitter_noise, 1.0 + self.jitter_noise)
```
This PR also updates the outdated docstring annotations in `configuration_switch_transformers.py`: the default values listed in the docstring now match the values in `__init__` for `SwitchTransformersConfig`.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28077/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28077/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28077",
"html_url": "https://github.com/huggingface/transformers/pull/28077",
"diff_url": "https://github.com/huggingface/transformers/pull/28077.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28077.patch",
"merged_at": 1702912136000
} |
https://api.github.com/repos/huggingface/transformers/issues/28076 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28076/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28076/comments | https://api.github.com/repos/huggingface/transformers/issues/28076/events | https://github.com/huggingface/transformers/issues/28076 | 2,043,873,647 | I_kwDOCUB6oc550wlv | 28,076 | The model's name is saved as model.safetensors while the logger reported its name as pytorch_model.bin, which is quite weird. | {
"login": "izyForever",
"id": 43177954,
"node_id": "MDQ6VXNlcjQzMTc3OTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/43177954?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/izyForever",
"html_url": "https://github.com/izyForever",
"followers_url": "https://api.github.com/users/izyForever/followers",
"following_url": "https://api.github.com/users/izyForever/following{/other_user}",
"gists_url": "https://api.github.com/users/izyForever/gists{/gist_id}",
"starred_url": "https://api.github.com/users/izyForever/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/izyForever/subscriptions",
"organizations_url": "https://api.github.com/users/izyForever/orgs",
"repos_url": "https://api.github.com/users/izyForever/repos",
"events_url": "https://api.github.com/users/izyForever/events{/privacy}",
"received_events_url": "https://api.github.com/users/izyForever/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The function parameter is safe_serialization: bool = True, but the logger file name is a constant which is used for the situation of safe_serialization: bool = False and it causes ambiguousness.",
"Hi @izyForever, thanks for raising this issue! \r\n\r\nIndeed, it seems this logging line wasn't updated to reflect the default safe serialization behaviour that now happens. Would you like to open a PR to update the logger message? This way you get the github contribution for spotting this\r\n\r\n",
"> Hi @izyForever, thanks for raising this issue!\r\n> \r\n> Indeed, it seems this logging line wasn't updated to reflect the default safe serialization behaviour that now happens. Would you like to open a PR to update the logger message? This way you get the github contribution for spotting this\r\n\r\nSure, I will fix this.",
"> 你好@izyForever,感谢您提出这个问题!\r\n> \r\n> 事实上,这条日志记录行似乎没有更新以反映现在发生的默认安全序列化行为。您想打开 PR 来更新记录器消息吗?这样您就可以发现这个问题的 github 贡献\r\n\r\n\r\n\r\n> 你好@izyForever,感谢您提出这个问题!\r\n> \r\n> 事实上,这条日志记录行似乎没有更新以反映现在发生的默认安全序列化行为。您想打开 PR 来更新记录器消息吗?这样您就可以发现这个问题的 github 贡献\r\n\r\nWow, you are so cute! Follow you!"
] | 1,702 | 1,706 | 1,703 | CONTRIBUTOR | null | ### System Info
- `transformers` version: 4.36.1
- Platform: Windows-10-10.0.19045-SP0
- Python version: 3.11.2
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.0
- Accelerate version: 0.25.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.1+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?


When I was training my model, I found that there was no pytorch_model.bin but a model.safetensors file,
which I find misleading in some ways.
The logger is below:



### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. use [the p-tuning script](https://github.com/THUDM/ChatGLM2-6B/blob/main/ptuning/main.py) to fine tuning chatglm2-6b
2. In Windows 10, I use a .bat file to run the script, as below, which works well but produces the inconsistent logger message.
```
set PRE_SEQ_LEN=128
set LR=2e-2
python main.py ^
--do_train ^
--train_file AdvertiseGen/train.json ^
--validation_file AdvertiseGen/dev.json ^
--preprocessing_num_workers 10 ^
--prompt_column content ^
--response_column summary ^
--overwrite_cache ^
--model_name_or_path E://ChatGLM2-6B ^
--output_dir output/adgen-chatglm2-6b-pt-%PRE_SEQ_LEN%-%LR% ^
--overwrite_output_dir ^
--max_source_length 64 ^
--max_target_length 128 ^
--per_device_train_batch_size 1 ^
--per_device_eval_batch_size 1 ^
--gradient_accumulation_steps 16 ^
--predict_with_generate ^
--max_steps 3 ^
--logging_steps 1 ^
--save_steps 1 ^
--learning_rate %LR% ^
--pre_seq_len %PRE_SEQ_LEN% ^
--quantization_bit 4
```
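For context, the kind of change that would make the log line match the saved file looks roughly like this (a sketch; where exactly this lives in `trainer.py`, and the surrounding variables, are assumptions):
```python
import os
from transformers.utils import SAFE_WEIGHTS_NAME, WEIGHTS_NAME  # "model.safetensors" / "pytorch_model.bin"


def saved_weights_name(safe_serialization: bool) -> str:
    # Report the file that is actually written instead of hard-coding pytorch_model.bin
    return SAFE_WEIGHTS_NAME if safe_serialization else WEIGHTS_NAME


output_dir = "output/adgen-chatglm2-6b-pt-128-2e-2/checkpoint-1"  # placeholder path
print(f"Model weights saved in {os.path.join(output_dir, saved_weights_name(True))}")
```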
### Expected behavior
It would be better if the logged file name matched the file name that is actually saved. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28076/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28076/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28075 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28075/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28075/comments | https://api.github.com/repos/huggingface/transformers/issues/28075/events | https://github.com/huggingface/transformers/issues/28075 | 2,043,781,068 | I_kwDOCUB6oc550Z_M | 28,075 | Adding support for a static shape `generate` | {
"login": "alessandropalla",
"id": 28634533,
"node_id": "MDQ6VXNlcjI4NjM0NTMz",
"avatar_url": "https://avatars.githubusercontent.com/u/28634533?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alessandropalla",
"html_url": "https://github.com/alessandropalla",
"followers_url": "https://api.github.com/users/alessandropalla/followers",
"following_url": "https://api.github.com/users/alessandropalla/following{/other_user}",
"gists_url": "https://api.github.com/users/alessandropalla/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alessandropalla/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alessandropalla/subscriptions",
"organizations_url": "https://api.github.com/users/alessandropalla/orgs",
"repos_url": "https://api.github.com/users/alessandropalla/repos",
"events_url": "https://api.github.com/users/alessandropalla/events{/privacy}",
"received_events_url": "https://api.github.com/users/alessandropalla/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @gante ",
"Any update on this ticket?",
"There is an open PR: https://github.com/huggingface/transformers/pull/27931",
"Thanks @oobabooga 🤗 and yes this is my main focus, hoping to ship by end of the week",
"Many thanks! do you need help for the PR? (Development/testing/writing examples on how to run a model with static shape on the NPU?)",
"I don't really have access to a NPU currently so feel free to test it. It's still in draft mode so when it's ready for review! "
] | 1,702 | 1,707 | 1,707 | NONE | null | ### Feature request
Many inference AI accelerators (Intel NPU, IPU, TPU, etc.) require static shapes to get maximum performance. Static shapes allow the NN graph compiler to improve memory management, scheduling, and overall network performance.
However, in transformers the `generate` function uses dynamic shapes and increases the size of the input (and kv-cache) at every successive step. I opened this issue to implement a way to still do LLM generation inference using the transformers API while maintaining static shapes:
The trick is to use left padding and shift left the kv-cached values while doing inference. By setting the `position_id` correctly we can have a correct inference. Attached a GIF that hopefully explains how it works:

For the first inference you pad left and run as usual. It is important to set the `attention_mask` and `position_ids` accordingly. In the kv-cached part you only need to pass the new token and the proper `position_ids` and `attention_mask` while the cache values are shifted left. This works because in the MHA block the cached keys and values are concatenated on the left with the new ones, so left padding makes the new token's key and value tensors adjacent to the cached values.
Here is a snippet for a function that implements this. The code is not production ready, but it is a POC that shows how it is supposed to work both with and without KV-caching:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "meta-llama/Llama-2-7b-chat-hf"
device = "cpu"  # use the accelerator that you have or use "cpu"

# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token_id = tokenizer.eos_token_id

# Load model
model = AutoModelForCausalLM.from_pretrained(model_id).to(device)


# Utility function to shift a tensor left and insert a value at the end
def lshift_insert(tensor, value):
    tensor = torch.roll(tensor, shifts=-1, dims=-1)
    tensor[0, -1] = value
    return tensor


# Generate function
@torch.no_grad()
def generate_with_static_shape(model, input_ids, attention_mask=None, max_length=None, use_past=True, pad_token_id=None, **kwargs):
    # Get sequence length
    batch, seq_length = input_ids.shape
    if pad_token_id is None:
        raise RuntimeError("pad_token_id is not set and needed for static shape generation")
    # Pad the attention mask on the left
    if attention_mask is None:
        attention_mask = torch.ones_like(input_ids, dtype=torch.int32).to(model.device)
    attention_mask_padding = torch.zeros((batch, max_length - seq_length), dtype=input_ids.dtype, device=input_ids.device)
    attention_mask = torch.cat((attention_mask_padding, attention_mask), dim=-1)
    # Pad input_ids with left padding
    padding_input_ids = pad_token_id * torch.ones((batch, max_length - seq_length), dtype=input_ids.dtype, device=input_ids.device)
    input_ids = torch.cat((padding_input_ids, input_ids), dim=-1).to(model.device)
    # Set the proper position ids
    position_ids = kwargs.get('position_ids', None)
    if position_ids is None:
        position_ids = torch.tensor([[0] * (max_length - seq_length) + list(range(seq_length))], dtype=torch.int32).to(model.device)
    else:
        raise RuntimeError("Cannot set position_ids in static shape generation")
    # past_key_values for KV-cache
    past_key_values = None
    for idx in range(seq_length, max_length):
        # Run the inference
        out = model(input_ids=input_ids, attention_mask=attention_mask, position_ids=position_ids, past_key_values=past_key_values)
        # Here I do greedy search as an example, but in general this is where you select the next token with your fancy decoding algorithm
        logits = out.logits
        new_token = torch.argmax(logits[0, -1, :])
        yield new_token
        if not use_past:
            # Shift left input and position ids and set the new token and idx to the proper values
            input_ids = lshift_insert(input_ids, new_token)
            position_ids = lshift_insert(position_ids, idx)
        else:
            # Set input_ids and position_ids to their new value
            input_ids = torch.tensor([[new_token]], dtype=input_ids.dtype).to(model.device)
            position_ids = torch.tensor([[idx]], dtype=input_ids.dtype).to(model.device)
            # Select the proper KV cached keys/values for the next inference (drop the oldest, left-most entry)
            past_key_values = [[item[:, :, 1:, :] for item in layer_past] for layer_past in out.past_key_values]
        # Shift left attention mask and set the last value to one
        attention_mask = lshift_insert(attention_mask, 1)


prompt = "List all numbers in the Fibonacci sequence: 1, 1, 2, 3, "
max_length = 512

# Tokenize
input_ids = tokenizer(prompt, return_tensors='pt')['input_ids'].to(device)
print(prompt, end="", flush=True)

results = generate_with_static_shape(model, input_ids=input_ids, max_length=max_length, use_past=True, pad_token_id=tokenizer.pad_token_id)
for new_token_id in results:
    token = tokenizer.decode([new_token_id], skip_special_tokens=True)
    # Not ideal: depending on the tokenizer it might or might not add spaces
    print(token, end="", flush=True)
```
### Motivation
Enabling AI inference accelerators to be used with the `generate` API.
### Your contribution
I'll be happy to help integrating the code into `transformers` library. Let me know how I can help | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28075/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28075/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28074 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28074/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28074/comments | https://api.github.com/repos/huggingface/transformers/issues/28074/events | https://github.com/huggingface/transformers/issues/28074 | 2,043,766,034 | I_kwDOCUB6oc550WUS | 28,074 | Inference speed becomes slower after quantization | {
"login": "xinyual",
"id": 74362153,
"node_id": "MDQ6VXNlcjc0MzYyMTUz",
"avatar_url": "https://avatars.githubusercontent.com/u/74362153?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xinyual",
"html_url": "https://github.com/xinyual",
"followers_url": "https://api.github.com/users/xinyual/followers",
"following_url": "https://api.github.com/users/xinyual/following{/other_user}",
"gists_url": "https://api.github.com/users/xinyual/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xinyual/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xinyual/subscriptions",
"organizations_url": "https://api.github.com/users/xinyual/orgs",
"repos_url": "https://api.github.com/users/xinyual/repos",
"events_url": "https://api.github.com/users/xinyual/events{/privacy}",
"received_events_url": "https://api.github.com/users/xinyual/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @younesbelkada @SunMarc ",
"hi @xinyual \r\nThanks a lot for the issue, in fact, in your script you are using `bnb_4bit_use_double_quant` which slows down inference at the price of being more memory efficient since the linear layers will be quantized twice. \r\nif you disable double quant it should be faster, but not sure it will be faster than fp16, this will depend on your problem setup (batch size, seq lenght, etc.) If you want a fast 4-bit model for deployment I would advise to \r\n\r\n- Quantize your fine-tuned model with AWQ algorithm using auto-awq: https://github.com/casper-hansen/AutoAWQ\r\n- Use AWQ fused modules for generation which is ~3x faster than native fp16: https://huggingface.co/docs/transformers/quantization#benchmarks \r\n\r\nYou can also consider quantizing your model with GPTQ algo but GPTQ do not support fused modules yet\r\n\r\nRead more about the benefits of each quantization scheme and when to use them in this blogpost: https://huggingface.co/blog/overview-quantization-transformers \r\n\r\nLet me know if you face into any issue. cc @casper-hansen just FYI ",
"It would probably be good to disable double quantization when using AutoAWQ, maybe raise an error? If you quantize with AWQ and then BNB, you will just have a worse model in the end.",
"> It would probably be good to disable double quantization when using AutoAWQ, maybe raise an error? If you quantize with AWQ and then BNB, you will just have a worse model in the end.\r\n\r\nYes this is already handled in transformers ! What I meant is the following workflow:\r\n\r\n- Fine-tune your model using QLoRA\r\n- Merge the LoRA weights into the base model and push / save the final checkpoint in float16\r\n- Quantize the merged model using AWQ",
"Sorry for late response! Could you please give me instruction about `Merge the LoRA weights into the base model and push / save the final checkpoint in float16`?\r\nAfter\r\n```\r\nmodel = PeftModel.from_pretrained(\r\n model,\r\n lora_weights,\r\n device_map =\"auto\"\r\n)\r\n```\r\nThe model.save_pretrained will only save lora.",
"Hi @xinyual \r\nSure yes, please refer to this comment: https://huggingface.co/Salesforce/codegen2-7B/discussions/1#6543f4eb2996405c23882b03 to understand what merging means and how to perform it using PEFT. Once you have called `model.merge_and_unload()` you can call save_pretrained directly",
"Thanks for your effort! I use AWQ and then it speeds up a lot. Very appreciate about that. I will close this issue\r\n.",
"Thanks @xinyual , this is great !"
] | 1,702 | 1,705 | 1,705 | NONE | null | ### System Info
- `transformers` version: 4.37.0.dev0
- Platform: Linux-5.15.0-1038-aws-x86_64-with-glibc2.10
- Python version: 3.8.17
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.1
- Accelerate version: 0.25.0.dev0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?:Yes
- Using distributed or parallel set-up in script?: Yes
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I use the script from here to fine-tune my quantized mistral-7b-instruct model with LoRA: https://colab.research.google.com/drive/1VoYNfYDKcKRQRor98Zbf2-9VQTtGJ24k?usp=sharing#scrollTo=Ybeyl20n3dYH
After training, I run inference as follows:
```
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_use_double_quant=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.bfloat16
)
model = AutoModelForCausalLM.from_pretrained(base_model,
quantization_config=bnb_config,
device_map="auto",
torch_dtype=torch.bfloat16,
trust_remote_code=True)
model = PeftModel.from_pretrained(
model,
lora_weights,
device_map ="auto"
)
model.half()
model.eval()
```
The bnb_config is the same as the one used during training.
After that, I run inference like this:
```
with torch.no_grad():
generation_output = model.generate(
input_ids=input_ids,
do_sample=True,
max_length=input_l + max_new_tokens + 100,
temperature=0.001
)
```
Compared with the model without quantization, inference is slower. To my knowledge, low-bit computation should make it faster. My hardware is an AWS g5.12xlarge. Is that normal?
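(As a side note, following the suggestion in the comments above, here is a minimal sketch of the same 4-bit config with nested/double quantization disabled; only `bnb_4bit_use_double_quant` changes:)
```
from transformers import BitsAndBytesConfig
import torch

# identical to the config above, but without double quantization
# (trades a bit of memory for faster 4-bit inference)
bnb_config_no_dq = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=False,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```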
### Expected behavior
Please tell me whether it is normal for the quantized model to be slower, or whether I made a mistake in my scripts. Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28074/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28074/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28073 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28073/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28073/comments | https://api.github.com/repos/huggingface/transformers/issues/28073/events | https://github.com/huggingface/transformers/pull/28073 | 2,043,765,991 | PR_kwDOCUB6oc5iHOei | 28,073 | Fix weights not properly initialized due to shape mismatch | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28073). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"As this touches the core file, request 2 reviews 🙏 ",
"Guess I have to add some tests for this case. Change to draft for now"
] | 1,702 | 1,702 | 1,702 | COLLABORATOR | null | Currently, if there is a weight shape mismatch between the model and the checkpoint and `ignore_mismatched_sizes=True` is set, the mismatched weight(s) won't get initialized by the model's `_init_weights` method and can end up with extreme values like `1e37`.
This can make training produce a NaN loss from the very beginning and never make any progress.
One example is by running `src/transformers/modeling_utils.py` (add `ignore_mismatched_sizes=True`).
We usually set `ignore_mismatched_sizes=True` when we want to reuse an existing classification checkpoint for a new task with a different number of target classes.
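For illustration, a minimal sketch of that use case (the checkpoint name and label count are arbitrary examples): loading a 2-label sentiment checkpoint with `num_labels=10` produces exactly this kind of shape mismatch in the classification head.
```
from transformers import AutoModelForSequenceClassification

# the checkpoint's classification head has 2 labels, so the 10-label head is
# re-created from scratch and must be properly initialized by `_init_weights`
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased-finetuned-sst-2-english",
    num_labels=10,
    ignore_mismatched_sizes=True,
)
```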
This PR aims to fix this issue. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28073/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28073/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28073",
"html_url": "https://github.com/huggingface/transformers/pull/28073",
"diff_url": "https://github.com/huggingface/transformers/pull/28073.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28073.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28072 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28072/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28072/comments | https://api.github.com/repos/huggingface/transformers/issues/28072/events | https://github.com/huggingface/transformers/issues/28072 | 2,043,757,981 | I_kwDOCUB6oc550UWd | 28,072 | Can i convert open-clip trained models (.pt) using code “src/transformers/models/clip/convert_clip_original_pytorch_to_hf.py” ? | {
"login": "jzssz",
"id": 112179055,
"node_id": "U_kgDOBq-3bw",
"avatar_url": "https://avatars.githubusercontent.com/u/112179055?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jzssz",
"html_url": "https://github.com/jzssz",
"followers_url": "https://api.github.com/users/jzssz/followers",
"following_url": "https://api.github.com/users/jzssz/following{/other_user}",
"gists_url": "https://api.github.com/users/jzssz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jzssz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jzssz/subscriptions",
"organizations_url": "https://api.github.com/users/jzssz/orgs",
"repos_url": "https://api.github.com/users/jzssz/repos",
"events_url": "https://api.github.com/users/jzssz/events{/privacy}",
"received_events_url": "https://api.github.com/users/jzssz/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | open | false | null | [] | [
"Hi @jzssz, thanks for raising an issue! \r\n\r\nYes, it should be possible to convert a CLIP checkpoint to its HF equivalent using the script. Without having access to your checkpoint, it's not possible to reproduce and debug the issue on our side. \r\n\r\nFrom the error, it looks like there's a problem in the serialization of the original checkpoint itself: the error is being raised in the pickle library when trying to load the state dict. You'll need to make sure you can load the original checkpoint in pytorch to ensure that the script can be used. ",
"Hi,\r\n\r\nThis script can be used to convert OpenCLP models to HF: https://gist.github.com/rwightman/c79fd0241ed3c860e898114931c07990. Would perhaps be great to add it to the \"clip\" folder as a general utility script"
] | 1,702 | 1,702 | null | NONE | null | ### Model description
open_clip: https://github.com/mlfoundations/open_clip
I used open_clip to train a model and obtained "epoch_400.pt".
**I want to convert this "epoch_400.pt" to HF, so I run:**
python src/transformers/models/clip/convert_clip_original_pytorch_to_hf.py --pytorch_dump_folder_path "./openclip_syf_hf" --checkpoint_path "/openclip_output/2023_12_07-15_24_24-model_ViT-B-32-lr_0.0005-b_256-j_8-p_amp/checkpoints/epoch_400.pt" --config_path "/open_clip-main/src/open_clip/model_configs/ViT-B-32.json"
**but get bug:**
Traceback (most recent call last):
File "/home/anaconda3/envs/transformer/lib/python3.8/site-packages/clip/clip.py", line 130, in load
model = torch.jit.load(opened_file, map_location=device if jit else "cpu").eval()
File "/home/anaconda3/envs/transformer/lib/python3.8/site-packages/torch/jit/_serialization.py", line 164, in load
cpp_module = torch._C.import_ir_module_from_buffer(
RuntimeError: PytorchStreamReader failed locating file constants.pkl: file not found
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "src/transformers/models/clip/convert_clip_original_pytorch_to_hf.py", line 150, in <module>
convert_clip_checkpoint(args.checkpoint_path, args.pytorch_dump_folder_path, args.config_path)
File "/home/anaconda3/envs/transformer/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "src/transformers/models/clip/convert_clip_original_pytorch_to_hf.py", line 120, in convert_clip_checkpoint
pt_model, _ = load(checkpoint_path, device="cpu", jit=False)
File "/home/anaconda3/envs/transformer/lib/python3.8/site-packages/clip/clip.py", line 137, in load
state_dict = torch.load(opened_file, map_location="cpu")
File "/home/anaconda3/envs/transformer/lib/python3.8/site-packages/torch/serialization.py", line 795, in load
return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
File "/home/anaconda3/envs/transformer/lib/python3.8/site-packages/torch/serialization.py", line 1002, in _legacy_load
magic_number = pickle_module.load(f, **pickle_load_args)
EOFError: Ran out of input
**So I am wondering: can models trained with open-clip (.pt) be converted using “src/transformers/models/clip/convert_clip_original_pytorch_to_hf.py”?**
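(As a sanity check, and this is only a sketch with a placeholder path, the conversion script can only work if the checkpoint loads in plain PyTorch first:)
```
import torch

# if this load fails, the problem is in the checkpoint itself, not in the conversion script
ckpt = torch.load("epoch_400.pt", map_location="cpu")
print(type(ckpt))
if isinstance(ckpt, dict):
    print(list(ckpt.keys())[:10])
```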
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
_No response_ | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28072/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28072/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28071 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28071/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28071/comments | https://api.github.com/repos/huggingface/transformers/issues/28071/events | https://github.com/huggingface/transformers/pull/28071 | 2,043,757,468 | PR_kwDOCUB6oc5iHMoj | 28,071 | Fix SpeechT5 `decoder_attention_mask` shape | {
"login": "ylacombe",
"id": 52246514,
"node_id": "MDQ6VXNlcjUyMjQ2NTE0",
"avatar_url": "https://avatars.githubusercontent.com/u/52246514?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ylacombe",
"html_url": "https://github.com/ylacombe",
"followers_url": "https://api.github.com/users/ylacombe/followers",
"following_url": "https://api.github.com/users/ylacombe/following{/other_user}",
"gists_url": "https://api.github.com/users/ylacombe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ylacombe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ylacombe/subscriptions",
"organizations_url": "https://api.github.com/users/ylacombe/orgs",
"repos_url": "https://api.github.com/users/ylacombe/repos",
"events_url": "https://api.github.com/users/ylacombe/events{/privacy}",
"received_events_url": "https://api.github.com/users/ylacombe/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"After running the slow tests, we have these two failing tests:\r\n```\r\nFAILED tests/models/speecht5/test_modeling_speecht5.py::SpeechT5ForTextToSpeechIntegrationTests::test_batch_generation - AssertionError: torch.Size([3, 264, 80]) != (3, 262, 80)\r\nFAILED tests/models/speecht5/test_modeling_speecht5.py::SpeechT5ForTextToSpeechIntegrationTests::test_generation - AssertionError: torch.Size([226, 80]) != (230, 80)\r\n```\r\nThis is most likely related to #25943, as the current PR only impacts output when `labels` are passed.\r\n\r\nI'll deep dive into it, let's wait before merging\r\n",
"@ylacombe Thanks for checking and reporting back! "
] | 1,702 | 1,707 | null | COLLABORATOR | null | # What does this PR do?
#26598 rightfully raised a warning when passing labels to `SpeechT5`. When labels are passed, a reduction factor is applied to the `labels` but not to the `decoder_attention_mask` that accompanies them, resulting in a shape mismatch.
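A minimal sketch of the idea (hypothetical helper, not the exact diff of this PR): the mask needs the same frame reduction that is applied to the labels.
```
import torch

def reduce_decoder_attention_mask(mask: torch.Tensor, reduction_factor: int) -> torch.Tensor:
    # keep every `reduction_factor`-th frame so the mask length matches the reduced labels
    if reduction_factor > 1:
        return mask[:, reduction_factor - 1 :: reduction_factor]
    return mask
```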
Fixes #26598
cc @amyeroberts @sanchit-gandhi | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28071/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28071/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28071",
"html_url": "https://github.com/huggingface/transformers/pull/28071",
"diff_url": "https://github.com/huggingface/transformers/pull/28071.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28071.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28070 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28070/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28070/comments | https://api.github.com/repos/huggingface/transformers/issues/28070/events | https://github.com/huggingface/transformers/issues/28070 | 2,043,706,277 | I_kwDOCUB6oc550Hul | 28,070 | TypeError: 'ModelMetaclass' object is not iterable | {
"login": "andysingal",
"id": 20493493,
"node_id": "MDQ6VXNlcjIwNDkzNDkz",
"avatar_url": "https://avatars.githubusercontent.com/u/20493493?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andysingal",
"html_url": "https://github.com/andysingal",
"followers_url": "https://api.github.com/users/andysingal/followers",
"following_url": "https://api.github.com/users/andysingal/following{/other_user}",
"gists_url": "https://api.github.com/users/andysingal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/andysingal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andysingal/subscriptions",
"organizations_url": "https://api.github.com/users/andysingal/orgs",
"repos_url": "https://api.github.com/users/andysingal/repos",
"events_url": "https://api.github.com/users/andysingal/events{/privacy}",
"received_events_url": "https://api.github.com/users/andysingal/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @andysingal \r\nThanks for the issue! \r\nI think `HfArgumentParser` requires a `dataclass` - can you try to un-comment `# @dataclass` ?",
"> Hi @andysingal Thanks for the issue! I think `HfArgumentParser` requires a `dataclass` - can you try to un-comment `# @dataclass` ?\r\n\r\ni tried, now getting error:\r\n```\r\nusage: ipykernel_launcher.py [-h] --local_rank LOCAL_RANK\r\n --per_device_train_batch_size\r\n PER_DEVICE_TRAIN_BATCH_SIZE\r\n --per_device_eval_batch_size\r\n PER_DEVICE_EVAL_BATCH_SIZE\r\n --gradient_accumulation_steps\r\n GRADIENT_ACCUMULATION_STEPS --learning_rate\r\n LEARNING_RATE --max_grad_norm MAX_GRAD_NORM\r\n --weight_decay WEIGHT_DECAY --lora_alpha\r\n LORA_ALPHA --lora_dropout LORA_DROPOUT --lora_r\r\n LORA_R --max_seq_length MAX_SEQ_LENGTH\r\n --model_name MODEL_NAME --dataset_name\r\n DATASET_NAME [--use_4bit USE_4BIT]\r\n [--use_nested_quant USE_NESTED_QUANT]\r\n --bnb_4bit_compute_dtype BNB_4BIT_COMPUTE_DTYPE\r\n --bnb_4bit_quant_type BNB_4BIT_QUANT_TYPE\r\n --num_train_epochs NUM_TRAIN_EPOCHS [--fp16 FP16]\r\n [--bf16 BF16] [--packing PACKING]\r\n [--gradient_checkpointing GRADIENT_CHECKPOINTING]\r\n --optim OPTIM --lr_scheduler_type\r\n LR_SCHEDULER_TYPE --max_steps MAX_STEPS\r\n --warmup_ratio WARMUP_RATIO\r\n [--group_by_length [GROUP_BY_LENGTH]]\r\n --save_steps SAVE_STEPS --logging_steps\r\n LOGGING_STEPS [--merge_and_push MERGE_AND_PUSH]\r\n --output_dir OUTPUT_DIR\r\nipykernel_launcher.py: error: the following arguments are required: --local_rank, --per_device_train_batch_size, --per_device_eval_batch_size, --gradient_accumulation_steps, --learning_rate, --max_grad_norm, --weight_decay, --lora_alpha, --lora_dropout, --lora_r, --max_seq_length, --model_name, --dataset_name, --bnb_4bit_compute_dtype, --bnb_4bit_quant_type, --num_train_epochs, --optim, --lr_scheduler_type, --max_steps, --warmup_ratio, --save_steps, --logging_steps, --output_dir\r\nAn exception has occurred, use %tb to see the full traceback.\r\n\r\nSystemExit: 2\r\n/usr/local/lib/python3.10/dist-packages/IPython/core/interactiveshell.py:3556: UserWarning: To exit: use 'exit', 'quit', or Ctrl-D.\r\n warn(\"To exit: use 'exit', 'quit', or Ctrl-D.\", stacklevel=1)\r\n```",
"Much better! Now this simply means that to run your script you need to provide the required arguments such as --local-rank, etc.",
"> Much better! Now this simply means that to run your script you need to provide the required arguments such as --local-rank, etc.\r\n\r\nThanks alot for instant reply, love to see how i can run the code in an efficient way... i was following: https://python.plainenglish.io/intruct-fine-tuning-mistral-7b-model-with-your-custom-data-7eb22921a483 ",
"Thanks ! I guess the issue is solved now! Feel free to close it!",
"> > Much better! Now this simply means that to run your script you need to provide the required arguments such as --local-rank, etc.\r\n> \r\n> Thanks alot for instant reply, love to see how i can run the code in an efficient way... i was following: https://python.plainenglish.io/intruct-fine-tuning-mistral-7b-model-with-your-custom-data-7eb22921a483\r\n\r\n@younesbelkada if you find a workaround to resolve the issue, please keep me posted",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,702 | 1,705 | 1,705 | NONE | null | ### System Info
RTX 3090
### Who can help?
@younesbelkada while working on:
```
import os
from dataclasses import dataclass, field
from typing import Optional
from datasets.arrow_dataset import Dataset
import torch
from datasets import load_dataset
from peft import LoraConfig
from peft import AutoPeftModelForCausalLM
from transformers import (
AutoModelForCausalLM,
AutoTokenizer,
BitsAndBytesConfig,
HfArgumentParser,
AutoTokenizer,
TrainingArguments,
)
from pydantic_settings import BaseSettings
from trl import SFTTrainer
torch.manual_seed(42)
# @dataclass
class ScriptArguments(BaseSettings):
"""
These arguments vary depending on how many GPUs you have, what their capacity and features are, and what size model you want to train.
"""
local_rank: Optional[int] = field(default=-1, metadata={"help": "Used for multi-gpu"})
per_device_train_batch_size: Optional[int] = field(default=4)
per_device_eval_batch_size: Optional[int] = field(default=4)
gradient_accumulation_steps: Optional[int] = field(default=4)
learning_rate: Optional[float] = field(default=2e-5)
max_grad_norm: Optional[float] = field(default=0.3)
weight_decay: Optional[int] = field(default=0.01)
lora_alpha: Optional[int] = field(default=16)
lora_dropout: Optional[float] = field(default=0.1)
lora_r: Optional[int] = field(default=32)
max_seq_length: Optional[int] = field(default=512)
model_name: Optional[str] = field(
default="mistralai/Mistral-7B-Instruct-v0.1",
metadata={
"help": "The model that you want to train from the Hugging Face hub. E.g. gpt2, gpt2-xl, bert, etc."
}
)
dataset_name: Optional[str] = field(
default="iamtarun/python_code_instructions_18k_alpaca",
metadata={"help": "The preference dataset to use."},
)
use_4bit: Optional[bool] = field(
default=True,
metadata={"help": "Activate 4bit precision base model loading"},
)
use_nested_quant: Optional[bool] = field(
default=False,
metadata={"help": "Activate nested quantization for 4bit base models"},
)
bnb_4bit_compute_dtype: Optional[str] = field(
default="float16",
metadata={"help": "Compute dtype for 4bit base models"},
)
bnb_4bit_quant_type: Optional[str] = field(
default="nf4",
metadata={"help": "Quantization type fp4 or nf4"},
)
num_train_epochs: Optional[int] = field(
default=100,
metadata={"help": "The number of training epochs for the reward model."},
)
fp16: Optional[bool] = field(
default=False,
metadata={"help": "Enables fp16 training."},
)
bf16: Optional[bool] = field(
default=True,
metadata={"help": "Enables bf16 training."},
)
packing: Optional[bool] = field(
default=False,
metadata={"help": "Use packing dataset creating."},
)
gradient_checkpointing: Optional[bool] = field(
default=True,
metadata={"help": "Enables gradient checkpointing."},
)
optim: Optional[str] = field(
default="paged_adamw_32bit",
metadata={"help": "The optimizer to use."},
)
lr_scheduler_type: str = field(
default="constant",
metadata={"help": "Learning rate schedule. Constant a bit better than cosine, and has advantage for analysis"},
)
max_steps: int = field(default=1000000, metadata={"help": "How many optimizer update steps to take"})
warmup_ratio: float = field(default=0.03, metadata={"help": "Fraction of steps to do a warmup for"})
group_by_length: bool = field(
default=True,
metadata={
"help": "Group sequences into batches with same length. Saves memory and speeds up training considerably."
},
)
save_steps: int = field(default=50, metadata={"help": "Save checkpoint every X updates steps."})
logging_steps: int = field(default=50, metadata={"help": "Log every X updates steps."})
merge_and_push: Optional[bool] = field(
default=False,
metadata={"help": "Merge and push weights after training"},
)
output_dir: str = field(
default="./results_packing",
metadata={"help": "The output directory where the model predictions and checkpoints will be written."},
)
parser = HfArgumentParser(ScriptArguments)
script_args = parser.parse_args_into_dataclasses()[0]
```
ERROR:
```
/usr/local/lib/python3.10/dist-packages/trl/trainer/ppo_config.py:141: UserWarning: The `optimize_cuda_cache` arguement will be deprecated soon, please use `optimize_device_cache` instead.
warnings.warn(
/usr/local/lib/python3.10/dist-packages/pydantic/_internal/_fields.py:149: UserWarning: Field "model_name" has conflict with protected namespace "model_".
You may be able to resolve this warning by setting `model_config['protected_namespaces'] = ('settings_',)`.
warnings.warn(
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[2], line 117
107 merge_and_push: Optional[bool] = field(
108 default=False,
109 metadata={"help": "Merge and push weights after training"},
110 )
111 output_dir: str = field(
112 default="./results_packing",
113 metadata={"help": "The output directory where the model predictions and checkpoints will be written."},
114 )
--> 117 parser = HfArgumentParser(ScriptArguments)
118 script_args = parser.parse_args_into_dataclasses()[0]
File /usr/local/lib/python3.10/dist-packages/transformers/hf_argparser.py:134, in HfArgumentParser.__init__(self, dataclass_types, **kwargs)
132 if dataclasses.is_dataclass(dataclass_types):
133 dataclass_types = [dataclass_types]
--> 134 self.dataclass_types = list(dataclass_types)
135 for dtype in self.dataclass_types:
136 self._add_dataclass_arguments(dtype)
TypeError: 'ModelMetaclass' object is not iterable
```
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Shared above.
Additionally:
```
%pip install transformers peft bitsandbytes accelerate trl pydantic-settings --quiet
```
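For reference, a trimmed sketch of the dataclass-based pattern that `HfArgumentParser` expects (as noted in the comments above; only two fields from the full script are kept):
```
from dataclasses import dataclass, field
from typing import Optional

from transformers import HfArgumentParser


@dataclass
class ScriptArguments:
    model_name: Optional[str] = field(default="mistralai/Mistral-7B-Instruct-v0.1")
    output_dir: str = field(default="./results_packing")


parser = HfArgumentParser(ScriptArguments)
script_args = parser.parse_args_into_dataclasses()[0]
```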
### Expected behavior
The script needs to run without the error above.
For reference, check:
https://python.plainenglish.io/intruct-fine-tuning-mistral-7b-model-with-your-custom-data-7eb22921a483 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28070/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28070/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28069 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28069/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28069/comments | https://api.github.com/repos/huggingface/transformers/issues/28069/events | https://github.com/huggingface/transformers/issues/28069 | 2,043,705,426 | I_kwDOCUB6oc550HhS | 28,069 | Add time progress bar to track the group_by_length computation for bigger datasets on Trainer | {
"login": "T-Almeida",
"id": 19167453,
"node_id": "MDQ6VXNlcjE5MTY3NDUz",
"avatar_url": "https://avatars.githubusercontent.com/u/19167453?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/T-Almeida",
"html_url": "https://github.com/T-Almeida",
"followers_url": "https://api.github.com/users/T-Almeida/followers",
"following_url": "https://api.github.com/users/T-Almeida/following{/other_user}",
"gists_url": "https://api.github.com/users/T-Almeida/gists{/gist_id}",
"starred_url": "https://api.github.com/users/T-Almeida/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/T-Almeida/subscriptions",
"organizations_url": "https://api.github.com/users/T-Almeida/orgs",
"repos_url": "https://api.github.com/users/T-Almeida/repos",
"events_url": "https://api.github.com/users/T-Almeida/events{/privacy}",
"received_events_url": "https://api.github.com/users/T-Almeida/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | [
"cc @muellerzr @pacman100 "
] | 1,702 | 1,702 | null | NONE | null | ### Feature request
When setting the flag `group_by_length=True` on the TrainingArguments, there is no user feedback about the operations running in the background, namely collecting the lengths of all samples and running the grouping algorithm. This can be a frustrating problem when dealing with large datasets (millions of samples) on slow I/O devices, since it appears that the Trainer is hanging and does not start!
More precisely, in my current setup I found out that the following line takes almost 2h to finish (due to my slow I/O, reading from an NFS share on an old machine):
https://github.com/huggingface/transformers/blob/c817c17dbe264329b9f9d227b48ce70edd9e3204/src/transformers/trainer_pt_utils.py#L585
NOTE 1: wouldn't it be faster to use `.select_columns(model_input_name)` and then iterate, assuming the dataset has additional features like "attention_mask"?
I believe more feedback could be given to the user, such as an estimate of how long the computation will take. (The computed lengths could also be stored under `.cache`.)
NOTE 2: after realising this issue, I also noticed the `length_column_name` flag. Maybe a warning could be raised to let users know that, on larger datasets, they should precompute the lengths. By doing so, the time went from 2h to 15-20 min.
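For reference, a minimal sketch of that workaround (assuming an already-tokenized `datasets.Dataset` called `dataset` with an `input_ids` column):
```
from transformers import TrainingArguments

# compute the lengths once (and let `datasets` cache them) instead of at Trainer start-up
dataset = dataset.map(lambda example: {"length": len(example["input_ids"])})

training_args = TrainingArguments(
    output_dir="out",
    group_by_length=True,
    length_column_name="length",  # points the sampler at the precomputed column
)
```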
### Motivation
I was training a model on an LM task. My dataset has 22M samples with an average length of roughly 512. When I ran the Trainer with `group_by_length=True`, I thought something was wrong because training was not starting (I was actually writing a bug report, because I thought it was an issue with the Trainer). After further inspection, I noticed that the main culprit was the length computation, which is really slow on my current setup.
### Your contribution
If you feel this is an issue worth addressing, I am willing to open a PR under your guidance. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28069/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28069/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28068 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28068/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28068/comments | https://api.github.com/repos/huggingface/transformers/issues/28068/events | https://github.com/huggingface/transformers/pull/28068 | 2,043,661,913 | PR_kwDOCUB6oc5iG3yT | 28,068 | [`Mixtral`] update conversion script to reflect new changes | {
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> Re removal from Mixtral modeling files, I'd say yes if it's never used. Depends on whether you think it'll ever be added again in the future.\r\n\r\nLet me check that with Mistral team and if it will not be added again, I also think it should be removed",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28068). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,702 | 1,702 | 1,702 | CONTRIBUTOR | null | # What does this PR do?
Fixes: https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1/discussions/41
The sliding window has recently been removed from the Mixtral config, so we need to reflect this change in the conversion script.
I also wonder if we should ignore `sliding_window` in MixtralAttention & MixtralFlashAttention, as it will never be used.
cc @amyeroberts
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28068/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28068/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28068",
"html_url": "https://github.com/huggingface/transformers/pull/28068",
"diff_url": "https://github.com/huggingface/transformers/pull/28068.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28068.patch",
"merged_at": 1702645520000
} |
https://api.github.com/repos/huggingface/transformers/issues/28067 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28067/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28067/comments | https://api.github.com/repos/huggingface/transformers/issues/28067/events | https://github.com/huggingface/transformers/issues/28067 | 2,043,646,283 | I_kwDOCUB6oc55z5FL | 28,067 | No module named 'clip' | {
"login": "jzssz",
"id": 112179055,
"node_id": "U_kgDOBq-3bw",
"avatar_url": "https://avatars.githubusercontent.com/u/112179055?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jzssz",
"html_url": "https://github.com/jzssz",
"followers_url": "https://api.github.com/users/jzssz/followers",
"following_url": "https://api.github.com/users/jzssz/following{/other_user}",
"gists_url": "https://api.github.com/users/jzssz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jzssz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jzssz/subscriptions",
"organizations_url": "https://api.github.com/users/jzssz/orgs",
"repos_url": "https://api.github.com/users/jzssz/repos",
"events_url": "https://api.github.com/users/jzssz/events{/privacy}",
"received_events_url": "https://api.github.com/users/jzssz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The instructions for installing the package `clip` can be found on the original implementation's github README: https://github.com/openai/CLIP"
] | 1,702 | 1,702 | 1,702 | NONE | null | ### System Info
- `transformers` version: 4.35.2
- Platform: Linux-5.4.0-166-generic-x86_64-with-glibc2.17
- Python version: 3.8.18
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.1
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 1.13.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
"src/transformers/models/clip/convert_clip_original_pytorch_to_hf.py"
"from clip import load"
Bug: **No module named 'clip'.**
So I use “pip install clip” to install clip, but hit another bug:
**cannot import name 'load' from 'clip'**
So I'm wondering whether the way I installed the clip package is wrong. I'm hoping to find the correct way to install this package so I can run "src/transformers/models/clip/convert_clip_original_pytorch_to_hf.py".
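(For what it's worth, the package installed by `pip install clip` does not appear to be OpenAI's CLIP; the original repo's README installs it from GitHub, roughly like this:)
```
pip install ftfy regex tqdm
pip install git+https://github.com/openai/CLIP.git
```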
### Expected behavior
I'm hoping to find the correct way to install the “clip” package so I can run "src/transformers/models/clip/convert_clip_original_pytorch_to_hf.py". | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28067/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28067/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28066 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28066/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28066/comments | https://api.github.com/repos/huggingface/transformers/issues/28066/events | https://github.com/huggingface/transformers/issues/28066 | 2,043,567,873 | I_kwDOCUB6oc55zl8B | 28,066 | Tokenizer padding does not work when return_tensor="pt" | {
"login": "simeneide",
"id": 7136076,
"node_id": "MDQ6VXNlcjcxMzYwNzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/7136076?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/simeneide",
"html_url": "https://github.com/simeneide",
"followers_url": "https://api.github.com/users/simeneide/followers",
"following_url": "https://api.github.com/users/simeneide/following{/other_user}",
"gists_url": "https://api.github.com/users/simeneide/gists{/gist_id}",
"starred_url": "https://api.github.com/users/simeneide/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/simeneide/subscriptions",
"organizations_url": "https://api.github.com/users/simeneide/orgs",
"repos_url": "https://api.github.com/users/simeneide/repos",
"events_url": "https://api.github.com/users/simeneide/events{/privacy}",
"received_events_url": "https://api.github.com/users/simeneide/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Could you share a traceback of the error? \r\n",
"Sorry, it is updated now :)",
"Also related but not sure if it is the same. When using DataCollator on batches with keys that are not inputIds it fails:\r\nhttps://huggingface.co/docs/transformers/main_classes/data_collator#transformers.DataCollatorWithPadding\r\n\r\nit calls tokenizer.pad inside which does seem to break e.g.\r\n\r\n```\r\ntokenizer = transformers.AutoTokenizer.from_pretrained(\"meta-llama/Llama-2-7b-hf\")\r\ntokenizer.pad_token = tokenizer.eos_token\r\ntokenizer.pad([{'input_ids' : [1,2], 'label' : [1,2]}, {'input_ids' : [1,2,3], 'label' : [1,2,3]},], return_tensors=\"pt\" )\r\n# ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length. Perhaps your features (`label` in this case) have excessive nesting (inputs type `list` where type `int` is expected).\r\n```",
"Few things here. \r\n- you need to pad the data. For that you need to add a padding token \r\n- you need to set `padding=True` the following worked for me:\r\n```python\r\n def tokenize_function(example):\r\n return tokenizer(example[\"sentence1\"], example[\"sentence2\"], padding = True, truncation=True, return_tensors=\"pt\")\r\ntokenized_datasets = raw_datasets.map(tokenize_function, batched=True)\r\n```\r\n- why you [need to pad ](https://huggingface.co/docs/transformers/pad_truncation)as well? truncation only truncates the longest sequence, the others have to be padded. \r\n\r\n[This](https://discuss.huggingface.co/t/the-datasets-map-method-doesnt-keep-tensor-format-from-tokenizer/25482) as well for the output format. ",
"Hey, thanks for looking into this. \r\nI am (as stated) following the dynamic padding tutorial (https://huggingface.co/learn/nlp-course/chapter3/2?fw=pt#dynamic-padding) and the code is taken from there, just that I added `return_tensors=\"pt\"`.\r\n\r\nThe reason I do not want to pad here is because I want to pad in a collate_fn function later (to get dynamic padding). But I would prefer to use tensors instead of lists when tokenizing. The error comes from the tokenizer.pad (where i added an example in the comments).\r\n\r\nIs this function supposed to work when the tokenizer returns lists but not with tensors?",
"Of course, you cannot create a tensor with samples of different sizes. That is why we have to pad in that context",
"I dont think you understood what I meant (i want to use dynamic padding in a collate_fn function), but Ive found another solution :) Thanks",
"Alright ! Feel free to share it here for the community 🤗 ",
"Yes, I basically had to write my own DataCollator. The problem is that `tokenizer.pad` will not work in the data collator if there are features other than input_ids & attention_mask in the dataset.\r\n\r\nThis issue summarizes it fairly well:\r\nhttps://github.com/huggingface/transformers/issues/20182\r\n\r\nMy solution is the following modification to the DataCollatorWithPadding:\r\n\r\n```\r\nfrom dataclasses import dataclass\r\nfrom random import randint\r\nfrom typing import Any, Callable, Dict, List, NewType, Optional, Tuple, Union\r\nfrom transformers.utils import PaddingStrategy\r\nfrom transformers import PreTrainedTokenizerBase\r\n\r\n@dataclass\r\nclass DataCollatorWithPadding:\r\n \"\"\"\r\n Data collator that will dynamically pad the inputs received.\r\n\r\n Args:\r\n tokenizer ([`PreTrainedTokenizer`] or [`PreTrainedTokenizerFast`]):\r\n The tokenizer used for encoding the data.\r\n padding (`bool`, `str` or [`~utils.PaddingStrategy`], *optional*, defaults to `True`):\r\n Select a strategy to pad the returned sequences (according to the model's padding side and padding index)\r\n among:\r\n\r\n - `True` or `'longest'` (default): Pad to the longest sequence in the batch (or no padding if only a single\r\n sequence is provided).\r\n - `'max_length'`: Pad to a maximum length specified with the argument `max_length` or to the maximum\r\n acceptable input length for the model if that argument is not provided.\r\n - `False` or `'do_not_pad'`: No padding (i.e., can output a batch with sequences of different lengths).\r\n max_length (`int`, *optional*):\r\n Maximum length of the returned list and optionally padding length (see above).\r\n pad_to_multiple_of (`int`, *optional*):\r\n If set will pad the sequence to a multiple of the provided value.\r\n\r\n This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >=\r\n 7.5 (Volta).\r\n return_tensors (`str`, *optional*, defaults to `\"pt\"`):\r\n The type of Tensor to return. Allowable values are \"np\", \"pt\" and \"tf\".\r\n \"\"\"\r\n\r\n tokenizer : PreTrainedTokenizerBase\r\n padding: Union[bool, str, PaddingStrategy] = True\r\n max_length: Optional[int] = None\r\n pad_to_multiple_of: Optional[int] = None\r\n return_tensors: str = \"pt\"\r\n\r\n def __call__(self, features: List[Dict[str, Any]]) -> Dict[str, Any]:\r\n padding_features = {key : val for key, val in features.items() if key in ['input_ids','attention_mask']}\r\n batch = self.tokenizer.pad(\r\n padding_features,\r\n padding=self.padding,\r\n max_length=self.max_length,\r\n pad_to_multiple_of=self.pad_to_multiple_of,\r\n return_tensors=self.return_tensors,\r\n )\r\n\r\n batch['labels'] = self.tokenizer.pad(\r\n {'input_ids' : features['labels']},\r\n padding=self.padding,\r\n max_length=self.max_length,\r\n pad_to_multiple_of=self.pad_to_multiple_of,\r\n return_tensors=self.return_tensors,\r\n )['input_ids']\r\n \r\n for key, val in features.items():\r\n if key not in ['input_ids','attention_mask','labels']:\r\n batch[key] = val\r\n\r\n return batch\r\n```",
"@simeneide your method is good. And if DataCollatorWithPadding in transformers only pad input_ids and attention_masks. It's not very useful. @ArthurZucker ",
"@simeneide I am using transformers 4.62.3. The features is of type List[Dict[str, Any]]. So I modified a little bit.\r\n\r\n```\r\nfrom dataclasses import dataclass\r\nfrom random import randint\r\nfrom typing import Any, Callable, Dict, List, NewType, Optional, Tuple, Union\r\nfrom transformers.utils import PaddingStrategy\r\nfrom transformers import PreTrainedTokenizerBase, BatchEncoding\r\n\r\n@dataclass\r\nclass MyDataCollatorWithPadding:\r\n \"\"\"\r\n Data collator that will dynamically pad the inputs received.\r\n\r\n Args:\r\n tokenizer ([`PreTrainedTokenizer`] or [`PreTrainedTokenizerFast`]):\r\n The tokenizer used for encoding the data.\r\n padding (`bool`, `str` or [`~utils.PaddingStrategy`], *optional*, defaults to `True`):\r\n Select a strategy to pad the returned sequences (according to the model's padding side and padding index)\r\n among:\r\n\r\n - `True` or `'longest'` (default): Pad to the longest sequence in the batch (or no padding if only a single\r\n sequence is provided).\r\n - `'max_length'`: Pad to a maximum length specified with the argument `max_length` or to the maximum\r\n acceptable input length for the model if that argument is not provided.\r\n - `False` or `'do_not_pad'`: No padding (i.e., can output a batch with sequences of different lengths).\r\n max_length (`int`, *optional*):\r\n Maximum length of the returned list and optionally padding length (see above).\r\n pad_to_multiple_of (`int`, *optional*):\r\n If set will pad the sequence to a multiple of the provided value.\r\n\r\n This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >=\r\n 7.5 (Volta).\r\n return_tensors (`str`, *optional*, defaults to `\"pt\"`):\r\n The type of Tensor to return. Allowable values are \"np\", \"pt\" and \"tf\".\r\n \"\"\"\r\n\r\n tokenizer : PreTrainedTokenizerBase\r\n padding: Union[bool, str, PaddingStrategy] = True\r\n max_length: Optional[int] = None\r\n pad_to_multiple_of: Optional[int] = None\r\n return_tensors: str = \"pt\"\r\n\r\n def __call__(self, features: List[Dict[str, Any]]) -> Dict[str, Any]:\r\n padding_features = [{key : val for key, val in row.items() if key in ['input_ids','attention_mask']} for row in features]\r\n \r\n batch = self.tokenizer.pad(\r\n padding_features,\r\n padding=self.padding,\r\n max_length=self.max_length,\r\n pad_to_multiple_of=self.pad_to_multiple_of,\r\n return_tensors=None,\r\n )\r\n\r\n batch['labels'] = self.tokenizer.pad(\r\n [{'input_ids' : row['labels']} for row in features],\r\n padding=self.padding,\r\n max_length=self.max_length,\r\n pad_to_multiple_of=self.pad_to_multiple_of,\r\n return_tensors=None,\r\n )['input_ids']\r\n \r\n \r\n for row in features:\r\n for key, value in row.items():\r\n if key in ['input_ids','attention_mask','labels']:\r\n continue\r\n if key not in batch:\r\n batch[key] = []\r\n batch[key].append(value)\r\n\r\n return BatchEncoding(batch, tensor_type=self.return_tensors)\r\n\r\n```",
"@fancyerii sorry for the inconvenience, an error raising proposing a fix on how to pad if there are more than 2 inputs might be nice indeed! ",
"I ran into the same problem in my scenario of using a custom data set to fine-tune the LLAMA2 model, and I used the following function to process each piece of data:\r\n```\r\ndef tokenize_add_label(example):\r\n prompt = tokenizer.encode(\r\n tokenizer.bos_token + example[\"prompt\"], add_special_tokens=False\r\n )\r\n action = tokenizer.encode(\r\n example[\"action\"] + tokenizer.eos_token, add_special_tokens=False\r\n )\r\n\r\n example = {\r\n \"input_ids\": prompt + action,\r\n \"attention_mask\": [1] * (len(prompt) + len(action)),\r\n \"labels\": [-100] * len(prompt) + action,\r\n }\r\n```\r\nI want to train the model using 'DataCollatorWithPadding' to fill in the data for each batch during training and get the same error:\r\n```\r\ndata_collator = DataCollatorWithPadding(tokenizer=tokenizer)\r\ntrainer = Trainer(\r\n tokenizer=tokenizer,\r\n model=model,\r\n args=training_args,\r\n train_dataset=train_dataset,\r\n eval_dataset=eval_dataset,\r\n data_collator=data_collator,\r\n callbacks=[profiler_callback] if enable_profiler else [],\r\n )\r\ntrainer.train()\r\nlib/python3.9 / site - packages/transformers/tokenization_utils_base. Py \", line 764, in convert_to_tensors\r\nraise ValueError(\r\nValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length. Perhaps your features (`labels` in this case) have excessive nesting (inputs type `list` where type `int` is expected).\r\n```"
] | 1,702 | 1,706 | 1,702 | NONE | null | ### System Info
- `transformers` version: 4.36.1
- Platform: Linux-5.15.0-71-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.1
- Accelerate version: 0.25.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.2+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from datasets import load_dataset
import transformers
tokenizer = transformers.AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
raw_datasets = load_dataset("glue", "mrpc")
def tokenize_function(example):
return tokenizer(example["sentence1"], example["sentence2"], truncation=True, return_tensors="pt")
tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)
```
Resulting traceback
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
File [~/.local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:748](https://vscode-remote+ssh-002dremote-002bdatacrunch-002dplayground.vscode-resource.vscode-cdn.net/home/simen.eide%40schibsted.com/~/.local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:748), in BatchEncoding.convert_to_tensors(self, tensor_type, prepend_batch_axis)
[747](file:///home/simen.eide%40schibsted.com/.local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py?line=746) if not is_tensor(value):
--> [748](file:///home/simen.eide%40schibsted.com/.local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py?line=747) tensor = as_tensor(value)
[750](file:///home/simen.eide%40schibsted.com/.local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py?line=749) # Removing this for now in favor of controlling the shape with `prepend_batch_axis`
[751](file:///home/simen.eide%40schibsted.com/.local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py?line=750) # # at-least2d
[752](file:///home/simen.eide%40schibsted.com/.local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py?line=751) # if tensor.ndim > 2:
[753](file:///home/simen.eide%40schibsted.com/.local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py?line=752) # tensor = tensor.squeeze(0)
[754](file:///home/simen.eide%40schibsted.com/.local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py?line=753) # elif tensor.ndim < 2:
[755](file:///home/simen.eide%40schibsted.com/.local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py?line=754) # tensor = tensor[None, :]
File [~/.local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:720](https://vscode-remote+ssh-002dremote-002bdatacrunch-002dplayground.vscode-resource.vscode-cdn.net/home/simen.eide%40schibsted.com/~/.local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:720), in BatchEncoding.convert_to_tensors..as_tensor(value, dtype)
[719](file:///home/simen.eide%40schibsted.com/.local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py?line=718) return torch.tensor(np.array(value))
--> [720](file:///home/simen.eide%40schibsted.com/.local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py?line=719) return torch.tensor(value)
ValueError: expected sequence of length 52 at dim 1 (got 77)
The above exception was the direct cause of the following exception:
ValueError Traceback (most recent call last)
[/home/simen.eide](https://vscode-remote+ssh-002dremote-002bdatacrunch-002dplayground.vscode-resource.vscode-cdn.net/home/simen.eide)@schibsted.com/.local/lib/python3.10/site-packages/transformers/data/Untitled-1.py in line 8
[6](file:///home/simen.eide%40schibsted.com/.local/lib/python3.10/site-packages/transformers/data/Untitled-1.py?line=5) def tokenize_function(example):
[7](file:///home/simen.eide%40schibsted.com/.local/lib/python3.10/site-packages/transformers/data/Untitled-1.py?line=6) return tokenizer(example["sentence1"], example["sentence2"], truncation=True, return_tensors="pt")
----> [8](file:///home/simen.eide%40schibsted.com/.local/lib/python3.10/site-packages/transformers/data/Untitled-1.py?line=7) tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)
File ~/.local/lib/python3.10/site-packages/datasets/dataset_dict.py:855, in DatasetDict.map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_names, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, desc)
    852 if cache_file_names is None:
    853     cache_file_names = {k: None for k in self}
    854 return DatasetDict(
--> 855     {
    856         k: dataset.map(
    857             function=function,
    858             with_indices=with_indices,
    859             with_rank=with_rank,
    860             input_columns=input_columns,
    861             batched=batched,
    862             batch_size=batch_size,
    863             drop_last_batch=drop_last_batch,
    864             remove_columns=remove_columns,
    865             keep_in_memory=keep_in_memory,
    866             load_from_cache_file=load_from_cache_file,
    867             cache_file_name=cache_file_names[k],
    868             writer_batch_size=writer_batch_size,
    869             features=features,
    870             disable_nullable=disable_nullable,
    871             fn_kwargs=fn_kwargs,
    872             num_proc=num_proc,
    873             desc=desc,
    874         )
    875         for k, dataset in self.items()
    876     }
    877 )

File ~/.local/lib/python3.10/site-packages/datasets/dataset_dict.py:856, in <dictcomp>(.0)
    852 if cache_file_names is None:
    853     cache_file_names = {k: None for k in self}
    854 return DatasetDict(
    855     {
--> 856         k: dataset.map(
    857             function=function,
    858             with_indices=with_indices,
    859             with_rank=with_rank,
    860             input_columns=input_columns,
    861             batched=batched,
    862             batch_size=batch_size,
    863             drop_last_batch=drop_last_batch,
    864             remove_columns=remove_columns,
    865             keep_in_memory=keep_in_memory,
    866             load_from_cache_file=load_from_cache_file,
    867             cache_file_name=cache_file_names[k],
    868             writer_batch_size=writer_batch_size,
    869             features=features,
    870             disable_nullable=disable_nullable,
    871             fn_kwargs=fn_kwargs,
    872             num_proc=num_proc,
    873             desc=desc,
    874         )
    875         for k, dataset in self.items()
    876     }
    877 )

File ~/.local/lib/python3.10/site-packages/datasets/arrow_dataset.py:591, in transmit_tasks.<locals>.wrapper(*args, **kwargs)
    589 self: "Dataset" = kwargs.pop("self")
    590 # apply actual function
--> 591 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
    592 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
    593 for dataset in datasets:
    594     # Remove task templates if a column mapping of the template is no longer valid

File ~/.local/lib/python3.10/site-packages/datasets/arrow_dataset.py:556, in transmit_format.<locals>.wrapper(*args, **kwargs)
    549 self_format = {
    550     "type": self._format_type,
    551     "format_kwargs": self._format_kwargs,
    552     "columns": self._format_columns,
    553     "output_all_columns": self._output_all_columns,
    554 }
    555 # apply actual function
--> 556 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
    557 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
    558 # re-apply format to the output

File ~/.local/lib/python3.10/site-packages/datasets/arrow_dataset.py:3089, in Dataset.map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
   3082 if transformed_dataset is None:
   3083     with logging.tqdm(
   3084         disable=not logging.is_progress_bar_enabled(),
   3085         unit=" examples",
   3086         total=pbar_total,
   3087         desc=desc or "Map",
   3088     ) as pbar:
--> 3089         for rank, done, content in Dataset._map_single(**dataset_kwargs):
   3090             if done:
   3091                 shards_done += 1

File ~/.local/lib/python3.10/site-packages/datasets/arrow_dataset.py:3466, in Dataset._map_single(shard, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset)
   3462 indices = list(
   3463     range(*(slice(i, i + batch_size).indices(shard.num_rows)))
   3464 )  # Something simpler?
   3465 try:
--> 3466     batch = apply_function_on_filtered_inputs(
   3467         batch,
   3468         indices,
   3469         check_same_num_examples=len(shard.list_indexes()) > 0,
   3470         offset=offset,
   3471     )
   3472 except NumExamplesMismatchError:
   3473     raise DatasetTransformationNotAllowedError(
   3474         "Using `.map` in batched mode on a dataset with attached indexes is allowed only if it doesn't create or remove existing examples. You can first run `.drop_index() to remove your index and then re-add it."
   3475     ) from None

File ~/.local/lib/python3.10/site-packages/datasets/arrow_dataset.py:3345, in Dataset._map_single.<locals>.apply_function_on_filtered_inputs(pa_inputs, indices, check_same_num_examples, offset)
   3343 if with_rank:
   3344     additional_args += (rank,)
--> 3345 processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
   3346 if isinstance(processed_inputs, LazyDict):
   3347     processed_inputs = {
   3348         k: v for k, v in processed_inputs.data.items() if k not in processed_inputs.keys_to_format
   3349     }

File /home/simen.eide@schibsted.com/.local/lib/python3.10/site-packages/transformers/data/Untitled-1.py:7, in tokenize_function(example)
      6 def tokenize_function(example):
----> 7     return tokenizer(example["sentence1"], example["sentence2"], truncation=True, return_tensors="pt")

File ~/.local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:2802, in PreTrainedTokenizerBase.__call__(self, text, text_pair, text_target, text_pair_target, add_special_tokens, padding, truncation, max_length, stride, is_split_into_words, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs)
   2800 if not self._in_target_context_manager:
   2801     self._switch_to_input_mode()
--> 2802 encodings = self._call_one(text=text, text_pair=text_pair, **all_kwargs)
   2803 if text_target is not None:
   2804     self._switch_to_target_mode()

File ~/.local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:2888, in PreTrainedTokenizerBase._call_one(self, text, text_pair, add_special_tokens, padding, truncation, max_length, stride, is_split_into_words, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs)
   2883     raise ValueError(
   2884         f"batch length of `text`: {len(text)} does not match batch length of `text_pair`:"
   2885         f" {len(text_pair)}."
   2886     )
   2887 batch_text_or_text_pairs = list(zip(text, text_pair)) if text_pair is not None else text
--> 2888 return self.batch_encode_plus(
   2889     batch_text_or_text_pairs=batch_text_or_text_pairs,
   2890     add_special_tokens=add_special_tokens,
   2891     padding=padding,
   2892     truncation=truncation,
   2893     max_length=max_length,
   2894     stride=stride,
   2895     is_split_into_words=is_split_into_words,
   2896     pad_to_multiple_of=pad_to_multiple_of,
   2897     return_tensors=return_tensors,
   2898     return_token_type_ids=return_token_type_ids,
   2899     return_attention_mask=return_attention_mask,
   2900     return_overflowing_tokens=return_overflowing_tokens,
   2901     return_special_tokens_mask=return_special_tokens_mask,
   2902     return_offsets_mapping=return_offsets_mapping,
   2903     return_length=return_length,
   2904     verbose=verbose,
   2905     **kwargs,
   2906 )
   2907 else:
   2908     return self.encode_plus(
   2909         text=text,
   2910         text_pair=text_pair,
   (...)
   2926         **kwargs,
   2927     )

File ~/.local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:3079, in PreTrainedTokenizerBase.batch_encode_plus(self, batch_text_or_text_pairs, add_special_tokens, padding, truncation, max_length, stride, is_split_into_words, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs)
   3069 # Backward compatibility for 'truncation_strategy', 'pad_to_max_length'
   3070 padding_strategy, truncation_strategy, max_length, kwargs = self._get_padding_truncation_strategies(
   3071     padding=padding,
   3072     truncation=truncation,
   (...)
   3076     **kwargs,
   3077 )
--> 3079 return self._batch_encode_plus(
   3080     batch_text_or_text_pairs=batch_text_or_text_pairs,
   3081     add_special_tokens=add_special_tokens,
   3082     padding_strategy=padding_strategy,
   3083     truncation_strategy=truncation_strategy,
   3084     max_length=max_length,
   3085     stride=stride,
   3086     is_split_into_words=is_split_into_words,
   3087     pad_to_multiple_of=pad_to_multiple_of,
   3088     return_tensors=return_tensors,
   3089     return_token_type_ids=return_token_type_ids,
   3090     return_attention_mask=return_attention_mask,
   3091     return_overflowing_tokens=return_overflowing_tokens,
   3092     return_special_tokens_mask=return_special_tokens_mask,
   3093     return_offsets_mapping=return_offsets_mapping,
   3094     return_length=return_length,
   3095     verbose=verbose,
   3096     **kwargs,
   3097 )

File ~/.local/lib/python3.10/site-packages/transformers/tokenization_utils_fast.py:552, in PreTrainedTokenizerFast._batch_encode_plus(self, batch_text_or_text_pairs, add_special_tokens, padding_strategy, truncation_strategy, max_length, stride, is_split_into_words, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose)
    550 for input_ids in sanitized_tokens["input_ids"]:
    551     self._eventual_warn_about_too_long_sequence(input_ids, max_length, verbose)
--> 552 return BatchEncoding(sanitized_tokens, sanitized_encodings, tensor_type=return_tensors)

File ~/.local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:223, in BatchEncoding.__init__(self, data, encoding, tensor_type, prepend_batch_axis, n_sequences)
    219     n_sequences = encoding[0].n_sequences
    221 self._n_sequences = n_sequences
--> 223 self.convert_to_tensors(tensor_type=tensor_type, prepend_batch_axis=prepend_batch_axis)

File ~/.local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:764, in BatchEncoding.convert_to_tensors(self, tensor_type, prepend_batch_axis)
    759 if key == "overflowing_tokens":
    760     raise ValueError(
    761         "Unable to create tensor returning overflowing tokens of different lengths. "
    762         "Please see if a fast version of this tokenizer is available to have this feature available."
    763     ) from e
--> 764 raise ValueError(
    765     "Unable to create tensor, you should probably activate truncation and/or padding with"
    766     " 'padding=True' 'truncation=True' to have batched tensors with the same length. Perhaps your"
    767     f" features (`{key}` in this case) have excessive nesting (inputs type `list` where type `int` is"
    768     " expected)."
    769 ) from e
    771 return self

ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length. Perhaps your features (`input_ids` in this case) have excessive nesting (inputs type `list` where type `int` is expected).
```
### Expected behavior
I am following the manual on dynamic padding here: https://huggingface.co/learn/nlp-course/chapter3/2?fw=pt#dynamic-padding
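For reference, the pattern that chapter describes (tokenizing to plain Python lists inside `map` and letting `DataCollatorWithPadding` pad each batch to tensors) looks roughly like the sketch below; the checkpoint name is an assumption, and the column names are taken from the traceback above:

```python
# Minimal sketch of the dynamic-padding recipe (checkpoint name is an assumption).
from datasets import load_dataset
from transformers import AutoTokenizer, DataCollatorWithPadding

raw_datasets = load_dataset("glue", "mrpc")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize_function(example):
    # No return_tensors here: returning lists lets examples keep different lengths.
    return tokenizer(example["sentence1"], example["sentence2"], truncation=True)

tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)

# Padding to the longest sequence in each batch happens at collation time.
data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="pt")
```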
When the tokenizer returns lists everything is fine, but the padding fails when I ask it to return "pt". | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28066/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28066/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28065 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28065/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28065/comments | https://api.github.com/repos/huggingface/transformers/issues/28065/events | https://github.com/huggingface/transformers/pull/28065 | 2,043,507,125 | PR_kwDOCUB6oc5iGVb6 | 28,065 | Cache: `Bart` and related architectures support `Cache` objects | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | open | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28065). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"@amyeroberts this PR is not finalized, but I'd love to get an early review -- the failing tests are fixed by propagating the changes to models with the `#Copied from` statement. However, it's not a copy/paste job, so if you were to request changes, they could be painful to propagate to the remaining models 😬 \r\n\r\nThe key parts to review now are labeled as `1` and `2` in the PR header 🤗 ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Mr bot, this is not stale (on hold while the static cache is being worked on, as they will likely have overlapping changes and the static cache is more important)"
] | 1,702 | 1,705 | null | MEMBER | null | # What does this PR do?
This PR applies the changes to `Bart` so it supports the new `Cache` objects. In other words, it is akin to #26681 but for encoder-decoder models.
⚠️ This is a giant PR that can't be separated due to our copy mechanism (🙃), but the review process doesn't need to be daunting. Here's my suggested review order and high-level rationale:
1. Changes in `cache_utils.py`. I've introduced `DynamicCacheWithCrossAttention`, which expands `DynamicCache` [the cache object equivalent to the previous `past_key_values` input/output] with the ability to hold a cross-attention cache (a rough sketch of the idea is shown after this list). This design was intentional: most LLMs (and now even multimodal models) tend to be decoder-only, so this separation will keep the cache class for decoder-only models simpler. It also enables us to be more strict -- I've caught an unintended cache deletion in Whisper thanks to the increased specificity!
2. Changes in `modeling_bart.py`. These changes are the equivalent of the modeling changes in #26681, but for encoder-decoder models.
3. Other changes, which can be reviewed more lightly. They are either related documentation fixes, minor corrections, propagation of bart's changes through `make fix-copies` (plus a few manual changes like adding imports or updating docstrings), or test upgrades for the new `DynamicCacheWithCrossAttention`.
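For readers skimming this record, a rough sketch of what a cross-attention-aware dynamic cache could look like is given below. The method name and internals are illustrative assumptions, not the PR's actual implementation:

```python
# Hypothetical sketch only -- the real class in this PR may differ.
from typing import List, Tuple

import torch
from transformers.cache_utils import DynamicCache


class DynamicCacheWithCrossAttention(DynamicCache):
    """A DynamicCache that additionally stores per-layer cross-attention key/value states."""

    def __init__(self) -> None:
        super().__init__()
        self.cross_key_cache: List[torch.Tensor] = []
        self.cross_value_cache: List[torch.Tensor] = []

    def update_cross_attention(
        self, key_states: torch.Tensor, value_states: torch.Tensor, layer_idx: int
    ) -> Tuple[torch.Tensor, torch.Tensor]:
        # Cross-attention keys/values come from the encoder output, so they are computed
        # once per layer and then reused on every decoding step.
        if len(self.cross_key_cache) <= layer_idx:
            self.cross_key_cache.append(key_states)
            self.cross_value_cache.append(value_states)
        return self.cross_key_cache[layer_idx], self.cross_value_cache[layer_idx]
```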
___________________________________________________________________________
The following tests were run locally - includes FA2 and some pretty challenging tests to ensure nothing was broken in the process:
- [x] `RUN_SLOW=1 py.test tests/models/bart/test_modeling_bart.py -vv`
- [x] `RUN_SLOW=1 py.test tests/models/mbart/test_modeling_mbart.py -vv`
- [x] `RUN_SLOW=1 py.test tests/models/whisper/test_modeling_whisper.py -vv`
👉 In any case, we should run the slow CI before merging!
<details>
<summary>Note on Whisper: same failures as in `main`, i.e. (open me)</summary>

</details>
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28065/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28065/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28065",
"html_url": "https://github.com/huggingface/transformers/pull/28065",
"diff_url": "https://github.com/huggingface/transformers/pull/28065.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28065.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28064 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28064/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28064/comments | https://api.github.com/repos/huggingface/transformers/issues/28064/events | https://github.com/huggingface/transformers/pull/28064 | 2,043,489,693 | PR_kwDOCUB6oc5iGReQ | 28,064 | doc: Correct spelling mistake | {
"login": "caiyili",
"id": 4177513,
"node_id": "MDQ6VXNlcjQxNzc1MTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4177513?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/caiyili",
"html_url": "https://github.com/caiyili",
"followers_url": "https://api.github.com/users/caiyili/followers",
"following_url": "https://api.github.com/users/caiyili/following{/other_user}",
"gists_url": "https://api.github.com/users/caiyili/gists{/gist_id}",
"starred_url": "https://api.github.com/users/caiyili/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/caiyili/subscriptions",
"organizations_url": "https://api.github.com/users/caiyili/orgs",
"repos_url": "https://api.github.com/users/caiyili/repos",
"events_url": "https://api.github.com/users/caiyili/events{/privacy}",
"received_events_url": "https://api.github.com/users/caiyili/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28064). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,702 | 1,702 | 1,702 | CONTRIBUTOR | null | # What does this PR do?
Correct the word "toekn" to "token" in a document. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28064/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28064/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28064",
"html_url": "https://github.com/huggingface/transformers/pull/28064",
"diff_url": "https://github.com/huggingface/transformers/pull/28064.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28064.patch",
"merged_at": 1702645299000
} |
https://api.github.com/repos/huggingface/transformers/issues/28063 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28063/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28063/comments | https://api.github.com/repos/huggingface/transformers/issues/28063/events | https://github.com/huggingface/transformers/issues/28063 | 2,043,339,236 | I_kwDOCUB6oc55yuHk | 28,063 | can not find dataset_name transformersbook/codeparrot | {
"login": "sxsxsx",
"id": 16790259,
"node_id": "MDQ6VXNlcjE2NzkwMjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/16790259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sxsxsx",
"html_url": "https://github.com/sxsxsx",
"followers_url": "https://api.github.com/users/sxsxsx/followers",
"following_url": "https://api.github.com/users/sxsxsx/following{/other_user}",
"gists_url": "https://api.github.com/users/sxsxsx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sxsxsx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sxsxsx/subscriptions",
"organizations_url": "https://api.github.com/users/sxsxsx/orgs",
"repos_url": "https://api.github.com/users/sxsxsx/repos",
"events_url": "https://api.github.com/users/sxsxsx/events{/privacy}",
"received_events_url": "https://api.github.com/users/sxsxsx/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @sxsxsx, thanks for raising this issue! \r\n\r\nCould you please provide information about the running environment: run `transformers-cli env` in the terminal and copy-paste the output? \r\n\r\nIs the script being run `examples/research_projects/codeparrot/scripts/preprocessing.py ? Please note that the research examples are not actively maintained and so may no longer be compatible with the most recent libraries. \r\n\r\nI'm not able to reproduce the issue with loading the dataset. It's possible there was a transient issue with connecting to the hub. Could you try running the code again?\r\n\r\nI'm able to run: \r\n```\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"transformersbook/codeparrot\", split=\"train\", streaming=True)\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,702 | 1,705 | 1,705 | NONE | null | ### System Info
python scripts/preprocessing.py \
--dataset_name transformersbook/codeparrot \
--output_dir codeparrot-clean
cannot find dataset_name transformersbook/codeparrot
the error is:
Traceback (most recent call last):
File "scripts/preprocessing.py", line 171, in <module>
ds = load_dataset(args.dataset_name, split="train")
File "/home/admin/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/load.py", line 1627, in load_dataset
builder_instance = load_dataset_builder(
File "/home/admin/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/load.py", line 1464, in load_dataset_builder
dataset_module = dataset_module_factory(
File "/home/admin/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/load.py", line 1174, in dataset_module_factory
raise e1 from None
File "/home/admin/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/load.py", line 1156, in dataset_module_factory
return CommunityDatasetModuleFactoryWithoutScript(
File "/home/admin/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/load.py", line 801, in get_module
else get_patterns_in_dataset_repository(dataset_info)
File "/home/admin/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/data_files.py", line 473, in get_patterns_in_dataset_repository
return _get_data_files_patterns(resolver)
File "/home/admin/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/data_files.py", line 101, in _get_data_files_patterns
data_files = pattern_resolver(pattern)
File "/home/admin/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/data_files.py", line 305, in _resolve_single_pattern_in_dataset_repository
glob_iter = [PurePath(filepath) for filepath in fs.glob(pattern) if fs.isfile(filepath)]
File "/home/admin/miniconda3/envs/huggingface/lib/python3.8/site-packages/fsspec/spec.py", line 606, in glob
pattern = glob_translate(path + ("/" if ends_with_sep else ""))
File "/home/admin/miniconda3/envs/huggingface/lib/python3.8/site-packages/fsspec/utils.py", line 734, in glob_translate
raise ValueError(
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
scripts/preprocessing.py
### Expected behavior
run success | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28063/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28063/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28062 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28062/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28062/comments | https://api.github.com/repos/huggingface/transformers/issues/28062/events | https://github.com/huggingface/transformers/pull/28062 | 2,043,336,160 | PR_kwDOCUB6oc5iFuY8 | 28,062 | Remove SpeechT5 deprecated argument | {
"login": "ylacombe",
"id": 52246514,
"node_id": "MDQ6VXNlcjUyMjQ2NTE0",
"avatar_url": "https://avatars.githubusercontent.com/u/52246514?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ylacombe",
"html_url": "https://github.com/ylacombe",
"followers_url": "https://api.github.com/users/ylacombe/followers",
"following_url": "https://api.github.com/users/ylacombe/following{/other_user}",
"gists_url": "https://api.github.com/users/ylacombe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ylacombe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ylacombe/subscriptions",
"organizations_url": "https://api.github.com/users/ylacombe/orgs",
"repos_url": "https://api.github.com/users/ylacombe/repos",
"events_url": "https://api.github.com/users/ylacombe/events{/privacy}",
"received_events_url": "https://api.github.com/users/ylacombe/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28062). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Merging !"
] | 1,702 | 1,702 | 1,702 | COLLABORATOR | null | # What does this PR do?
`stop_labels` is an unused argument that was supposed to be removed in `4.30.0`; here I remove it!
cc @amyeroberts | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28062/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28062/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28062",
"html_url": "https://github.com/huggingface/transformers/pull/28062",
"diff_url": "https://github.com/huggingface/transformers/pull/28062.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28062.patch",
"merged_at": 1702642506000
} |
https://api.github.com/repos/huggingface/transformers/issues/28061 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28061/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28061/comments | https://api.github.com/repos/huggingface/transformers/issues/28061/events | https://github.com/huggingface/transformers/pull/28061 | 2,043,330,623 | PR_kwDOCUB6oc5iFtJJ | 28,061 | [`Modeling` / `Mixtral`] Fix GC + PEFT issues with Mixtral | {
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28061). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Thanks ! As far as I know all models that uses the new cache refactor have been fixed in #28031 + this PR"
] | 1,702 | 1,702 | 1,702 | CONTRIBUTOR | null | # What does this PR do?
Applies the same fix presented in #28031 for Mixtral; specifically, it addresses: https://github.com/huggingface/transformers/issues/28023#issuecomment-1856556941
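The record itself does not spell out the change; for context, gradient-checkpointing / `use_cache` conflicts in transformers decoder models are commonly handled with a guard like the one sketched below. This is a generic illustration, not necessarily this PR's actual diff:

```python
# Illustrative pattern only -- not necessarily the diff introduced by this PR.
import logging

logger = logging.getLogger(__name__)


def resolve_use_cache(gradient_checkpointing: bool, training: bool, use_cache: bool) -> bool:
    """Gradient checkpointing recomputes activations, so KV caching is disabled during training."""
    if gradient_checkpointing and training and use_cache:
        logger.warning("`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`.")
        return False
    return use_cache
```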
cc @amyeroberts
Fixes: https://github.com/huggingface/trl/issues/1088 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28061/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28061/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28061",
"html_url": "https://github.com/huggingface/transformers/pull/28061",
"diff_url": "https://github.com/huggingface/transformers/pull/28061.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28061.patch",
"merged_at": 1702636482000
} |
https://api.github.com/repos/huggingface/transformers/issues/28060 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28060/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28060/comments | https://api.github.com/repos/huggingface/transformers/issues/28060/events | https://github.com/huggingface/transformers/pull/28060 | 2,043,320,287 | PR_kwDOCUB6oc5iFq0H | 28,060 | Skip M4T `test_retain_grad_hidden_states_attentions` | {
"login": "ylacombe",
"id": 52246514,
"node_id": "MDQ6VXNlcjUyMjQ2NTE0",
"avatar_url": "https://avatars.githubusercontent.com/u/52246514?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ylacombe",
"html_url": "https://github.com/ylacombe",
"followers_url": "https://api.github.com/users/ylacombe/followers",
"following_url": "https://api.github.com/users/ylacombe/following{/other_user}",
"gists_url": "https://api.github.com/users/ylacombe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ylacombe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ylacombe/subscriptions",
"organizations_url": "https://api.github.com/users/ylacombe/orgs",
"repos_url": "https://api.github.com/users/ylacombe/repos",
"events_url": "https://api.github.com/users/ylacombe/events{/privacy}",
"received_events_url": "https://api.github.com/users/ylacombe/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28060). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Thanks for fixing! \r\n\r\nIf training is allowed to happen on the model but it can fail e.g. with attentions being None, could you open an issue to track this? Training should either be prevented with an exception or made possible (probably 1 then the other)",
"Hey @amyeroberts, in theory, training is supported for the tasks that translate inputs (text or audio) into texts, since it's a classic LLM with classic objective.\r\nTo improve training, the model randomly skip layers in the speech encoder block (thus having `None` as attention weights), but it doesn't break training when it happens."
] | 1,702 | 1,702 | 1,702 | COLLABORATOR | null | # What does this PR do?
After investigating the reasons for the `test_retain_grad_hidden_states_attentions` flaky failure, I realized the speech encoder attentions can be `None` with a non-zero probability when `training=True`. Skipping the test is the fastest fix.
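For context on why the attentions can be `None`, here is a small self-contained illustration of the LayerDrop-style behaviour described in the comments above (an assumption-level sketch, not the actual SeamlessM4T code):

```python
# Sketch: with LayerDrop, a training step can skip a layer entirely, so that layer's
# attention entry is None and there is nothing to call retain_grad() on.
import torch
from torch import nn


class TinyEncoder(nn.Module):
    def __init__(self, num_layers: int = 4, hidden: int = 8, layerdrop: float = 0.5):
        super().__init__()
        self.layers = nn.ModuleList(nn.Linear(hidden, hidden) for _ in range(num_layers))
        self.layerdrop = layerdrop

    def forward(self, x: torch.Tensor):
        all_attentions = ()
        for layer in self.layers:
            if self.training and torch.rand(()) < self.layerdrop:
                all_attentions += (None,)  # skipped layer: no attention weights produced
                continue
            x = layer(x)
            all_attentions += (torch.ones(1, 1),)  # stand-in for real attention weights
        return x, all_attentions


model = TinyEncoder().train()
_, attentions = model(torch.randn(2, 8))
print(attentions)  # some entries can be None when training=True
```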
Fixes #28036
cc @gante @amyeroberts @ydshieh | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28060/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28060/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28060",
"html_url": "https://github.com/huggingface/transformers/pull/28060",
"diff_url": "https://github.com/huggingface/transformers/pull/28060.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28060.patch",
"merged_at": 1702647556000
} |
https://api.github.com/repos/huggingface/transformers/issues/28059 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28059/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28059/comments | https://api.github.com/repos/huggingface/transformers/issues/28059/events | https://github.com/huggingface/transformers/pull/28059 | 2,043,291,530 | PR_kwDOCUB6oc5iFkT_ | 28,059 | [Flax LLaMA] Fix attn dropout | {
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,702 | 1,702 | 1,702 | CONTRIBUTOR | null | # What does this PR do?
Attention dropout was not activated in Flax LLaMA, despite it being so in PyTorch LLaMA: https://github.com/huggingface/transformers/blob/1a585c1222a56bcaecc070966d558d4a9d862e83/src/transformers/models/llama/modeling_llama.py#L430
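As an illustration of the kind of change implied (not the exact diff in this PR), wiring an attention-dropout rate into Flax's `dot_product_attention_weights` looks roughly like this; the shapes and the `attention_dropout` value are stand-in assumptions for the corresponding config field:

```python
# Illustrative sketch only -- shapes and the dropout value are assumptions.
import jax
import jax.numpy as jnp
from flax.linen.attention import dot_product_attention_weights

attention_dropout = 0.1  # stand-in for e.g. config.attention_dropout

query = jnp.ones((1, 4, 2, 8))  # (batch, q_len, num_heads, head_dim)
key = jnp.ones((1, 4, 2, 8))

attn_weights = dot_product_attention_weights(
    query,
    key,
    dropout_rng=jax.random.PRNGKey(0),
    dropout_rate=attention_dropout,
    deterministic=False,  # dropout active, e.g. during training
    dtype=jnp.float32,
)
print(attn_weights.shape)  # (batch, num_heads, q_len, kv_len)
```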
=> this PR unifies the implementations across frameworks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28059/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28059/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28059",
"html_url": "https://github.com/huggingface/transformers/pull/28059",
"diff_url": "https://github.com/huggingface/transformers/pull/28059.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28059.patch",
"merged_at": 1702637857000
} |
https://api.github.com/repos/huggingface/transformers/issues/28058 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28058/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28058/comments | https://api.github.com/repos/huggingface/transformers/issues/28058/events | https://github.com/huggingface/transformers/issues/28058 | 2,043,256,422 | I_kwDOCUB6oc55yZ5m | 28,058 | Mixtral: Reduce and Increase Expert Models | {
"login": "minato-ellie",
"id": 82735346,
"node_id": "MDQ6VXNlcjgyNzM1MzQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/82735346?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/minato-ellie",
"html_url": "https://github.com/minato-ellie",
"followers_url": "https://api.github.com/users/minato-ellie/followers",
"following_url": "https://api.github.com/users/minato-ellie/following{/other_user}",
"gists_url": "https://api.github.com/users/minato-ellie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/minato-ellie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/minato-ellie/subscriptions",
"organizations_url": "https://api.github.com/users/minato-ellie/orgs",
"repos_url": "https://api.github.com/users/minato-ellie/repos",
"events_url": "https://api.github.com/users/minato-ellie/events{/privacy}",
"received_events_url": "https://api.github.com/users/minato-ellie/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | [
"cc @younesbelkada @ArthurZucker ",
"Hi @minato-ellie \r\nThis could makes sense yes, do you know if it has been empirically proven that decreasing the number of experts do not lead in performance degradation? I also wonder if this does not make the HF mixtral implementation \"too modulable\" which goes against transformers philosophy",
"@younesbelkada \r\n\r\nI created #28092 .\r\n\r\nI tried it and it seems that I can pass it forward using the new model, but I can't output the results. \r\nI didn't try to finetune it because I couldn't use GPU over the weekend.\r\n\r\nAlso, someone have created mixtral by merging 4 mistral models as experts and it seems to work well.\r\nhttps://huggingface.co/chargoddard/mixtralnt-4x7b-test",
"I'll try to finetune it when I can use gpu, and let you know the results.",
"Ok perfect, let us know how it goes!"
] | 1,702 | 1,702 | null | NONE | null | ### Feature request
Add methods to MixtralSparseMoeBlock, MixtralDecoderLayer, and MixtralModel for reducing (and enlarging) the number of expert models.
Implement a mechanism to decrease the number of expert models by removing the corresponding rows of the gate weights. This should enable the removal of expert models by id.
https://github.com/huggingface/transformers/blob/1a585c1222a56bcaecc070966d558d4a9d862e83/src/transformers/models/mixtral/modeling_mixtral.py#L688-L710
### Motivation
This will allow scaling down or up the model size from a pre-trained model while preserving existing weights, eliminating the need to retrain from scratch.
### Your contribution
I am willing to contribute a PR, but I need some time.
I would like to know whether such a PR is likely to be accepted before I start working on it. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28058/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28058/timeline | null | null | null |
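A hedged sketch of the idea in the feature request above: drop experts from a pretrained Mixtral-style sparse MoE block by removing the matching expert sub-modules and the matching rows of the router/gate weight. The attribute names (`gate`, `experts`, `num_experts`, `top_k`) follow `modeling_mixtral.py`, but this is an illustration of the proposal, not an existing transformers API; a real implementation would also need to update `config.num_local_experts`.

```python
import torch
from torch import nn


@torch.no_grad()
def prune_experts(moe_block: nn.Module, expert_ids_to_remove):
    """Remove experts by id from a Mixtral-style sparse MoE block (sketch only)."""
    keep = [i for i in range(moe_block.num_experts) if i not in set(expert_ids_to_remove)]

    # Keep only the surviving expert sub-modules.
    moe_block.experts = nn.ModuleList([moe_block.experts[i] for i in keep])

    # The gate is a Linear(hidden_size, num_experts) without bias: each output
    # row scores one expert, so removing an expert means removing its row.
    old_gate = moe_block.gate
    new_gate = nn.Linear(
        old_gate.in_features,
        len(keep),
        bias=False,
        device=old_gate.weight.device,
        dtype=old_gate.weight.dtype,
    )
    new_gate.weight.copy_(old_gate.weight[keep])
    moe_block.gate = new_gate

    # Keep the bookkeeping attributes consistent with the new expert count.
    moe_block.num_experts = len(keep)
    moe_block.top_k = min(moe_block.top_k, len(keep))
```

Enlarging would work the other way around: append newly initialised expert modules and extra gate rows, then fine-tune so the router learns to use them.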
https://api.github.com/repos/huggingface/transformers/issues/28057 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28057/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28057/comments | https://api.github.com/repos/huggingface/transformers/issues/28057/events | https://github.com/huggingface/transformers/pull/28057 | 2,042,915,548 | PR_kwDOCUB6oc5iEScS | 28,057 | fix ffmpeg_microphone under WSL2 (use pulseaudio) | {
"login": "jamon",
"id": 272949,
"node_id": "MDQ6VXNlcjI3Mjk0OQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/272949?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jamon",
"html_url": "https://github.com/jamon",
"followers_url": "https://api.github.com/users/jamon/followers",
"following_url": "https://api.github.com/users/jamon/following{/other_user}",
"gists_url": "https://api.github.com/users/jamon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jamon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jamon/subscriptions",
"organizations_url": "https://api.github.com/users/jamon/orgs",
"repos_url": "https://api.github.com/users/jamon/repos",
"events_url": "https://api.github.com/users/jamon/events{/privacy}",
"received_events_url": "https://api.github.com/users/jamon/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @Narsil @sanchit-gandhi ",
"Pulse or Also has nothing to do with WSL2. You could use either on regular linux too.\r\n\r\nI think this is the point where this demo should stop trying to support everything that might exist. Giving a good error message when this triggers might be better imo.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,702 | 1,708 | 1,708 | NONE | null | # Fix ffmpeg_microphone under WSL2
This attempts to detect whether it is running under WSL2 and, if so, defaults to using pulseaudio with an input of "RDPSource" so that microphone capture works.
@Narsil - tagging you since you contributed most of this file. I'm also happy to update this to just accept the format and input as parameters if you prefer (and either keep or roll back the defaults for WSL2). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28057/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28057/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28057",
"html_url": "https://github.com/huggingface/transformers/pull/28057",
"diff_url": "https://github.com/huggingface/transformers/pull/28057.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28057.patch",
"merged_at": null
} |
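A rough sketch of the approach described in the PR above: detect WSL2 from the kernel release string and point ffmpeg at PulseAudio's "RDPSource" device instead of the usual ALSA default. The detection heuristic and the exact ffmpeg arguments are assumptions for illustration, not the PR's actual diff.

```python
import platform


def _is_wsl2() -> bool:
    # WSL2 kernels usually advertise themselves in the release string,
    # e.g. "5.15.90.1-microsoft-standard-WSL2".
    return "microsoft" in platform.uname().release.lower()


def microphone_ffmpeg_command(sampling_rate: int = 16000):
    if _is_wsl2():
        input_format, input_device = "pulse", "RDPSource"
    else:
        input_format, input_device = "alsa", "default"
    return [
        "ffmpeg",
        "-f", input_format,
        "-i", input_device,
        "-ac", "1",
        "-ar", str(sampling_rate),
        "-f", "s16le",
        "-hide_banner",
        "-loglevel", "quiet",
        "pipe:1",
    ]


# Usage (streams raw 16-bit PCM from the microphone):
#   import subprocess
#   proc = subprocess.Popen(microphone_ffmpeg_command(), stdout=subprocess.PIPE)
```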
https://api.github.com/repos/huggingface/transformers/issues/28056 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28056/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28056/comments | https://api.github.com/repos/huggingface/transformers/issues/28056/events | https://github.com/huggingface/transformers/issues/28056 | 2,042,880,695 | I_kwDOCUB6oc55w-K3 | 28,056 | Transformers 4.36 use_cache issue | {
"login": "dakinggg",
"id": 43149077,
"node_id": "MDQ6VXNlcjQzMTQ5MDc3",
"avatar_url": "https://avatars.githubusercontent.com/u/43149077?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dakinggg",
"html_url": "https://github.com/dakinggg",
"followers_url": "https://api.github.com/users/dakinggg/followers",
"following_url": "https://api.github.com/users/dakinggg/following{/other_user}",
"gists_url": "https://api.github.com/users/dakinggg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dakinggg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dakinggg/subscriptions",
"organizations_url": "https://api.github.com/users/dakinggg/orgs",
"repos_url": "https://api.github.com/users/dakinggg/repos",
"events_url": "https://api.github.com/users/dakinggg/events{/privacy}",
"received_events_url": "https://api.github.com/users/dakinggg/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Thanks, pinging @gante as well as he worked on the cache refactoring, let’s keep this in mind ",
"Hi @dakinggg \r\nThanks very much for reporting, I believe one of your issue (or maybe both of them!) would be solved with https://github.com/huggingface/transformers/pull/28031\r\nCan you try with transformers main?",
"Hey @younesbelkada unfortunately I don't think that fix will work for me, as I use a different training framework to handle activation checkpointing.\r\n\r\nIt'd be great to understand and fix the root cause so that transformers models are fully usable with raw pytorch. Thanks as always for the quick responses!",
"Thanks @dakinggg ok sounds great, I'll spend some time to understand to rootcause of it and why it used to fail on transformers main and provide an update here!",
"Hi @dakinggg \r\n\r\nI had a deeper look, consider the snippet below:\r\n\r\n```python\r\nimport torch\r\nfrom torch.optim import Adam\r\nfrom transformers import BitsAndBytesConfig\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\r\nfrom peft import get_peft_config, get_peft_model, LoraConfig, TaskType\r\n\r\nMODEL_ID = \"meta-llama/Llama-2-7b-hf\"\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(MODEL_ID)\r\ninputs = tokenizer(\"hello world what's up\", return_tensors=\"pt\")\r\ninputs = {k: v.to(\"cuda\") for k, v in inputs.items()}\r\nprint(inputs)\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map=\"auto\", attn_implementation=\"eager\", torch_dtype=torch.float16)\r\npeft_config = LoraConfig(task_type=TaskType.CAUSAL_LM, target_modules=['q_proj', 'v_proj'], inference_mode=False, r=8, lora_alpha=32, lora_dropout=0.1)\r\nmodel = get_peft_model(model, peft_config)\r\nmodel.print_trainable_parameters()\r\nmodel.gradient_checkpointing_enable()\r\nmodel.enable_input_require_grads()\r\n\r\noptimizer = Adam(model.parameters(), lr=1e-5)\r\nmodel.train()\r\n\r\nfor i in range(10):\r\n outputs = model(labels=inputs['input_ids'], **inputs)\r\n loss = outputs.loss\r\n print(loss)\r\n loss.backward()\r\n optimizer.step()\r\n optimizer.zero_grad()\r\n```\r\n\r\nin case the fix in #28031 is not applied what will happen (step by step):\r\n\r\n1- `outputs = model(labels=inputs['input_ids'], **inputs)` will work perfectly fine because:\r\n1.1- `use_cache` is most of the case set to `True` on all model configs, therefore it will pass this logic: https://github.com/huggingface/transformers/blob/2788f8d8d5f9cee2fe33a9292b0f3570bd566a6d/src/transformers/models/llama/modeling_llama.py#L1004 and the model will create a non-`None` `past_key_values`.\r\n1.2- `use_cache` will be force-set to `False` here: https://github.com/huggingface/transformers/blob/2788f8d8d5f9cee2fe33a9292b0f3570bd566a6d/src/transformers/models/llama/modeling_llama.py#L1046 but it is too late because `past_key_values` have been already created above.\r\n1.3- Since `past_key_values` is set to a non-`None` value, it will pass this line as well, https://github.com/huggingface/transformers/blob/2788f8d8d5f9cee2fe33a9292b0f3570bd566a6d/src/transformers/models/llama/modeling_llama.py#L708 therefore populating `past_key_value` for each layer. Note at that point the past_key_values will have a shape of `batch_size, 1, seq_len, seq_len`\r\n\r\nOnce that all past key values are populated, the script will call `loss.backward()` and somehow it fails because:\r\n2- all module's forward are called again\r\n2.1- it ends up with the attention layers being called, since in the previous state the `past_key_values` were non-None, this line is called: https://github.com/huggingface/transformers/blob/2788f8d8d5f9cee2fe33a9292b0f3570bd566a6d/src/transformers/models/llama/modeling_llama.py#L701 leading to `kv_seq_len` being set to `2*seq_len`\r\n2.2- ValueError raised here: https://github.com/huggingface/transformers/blob/2788f8d8d5f9cee2fe33a9292b0f3570bd566a6d/src/transformers/models/llama/modeling_llama.py#L714C13-L714C69 since the shapes do not match anymore\r\n\r\nI don't 100% master what is going on under the hood when one uses torch's GC but the fix that I proposed in #28031 circumvents this issue, by making sure there are no dummy past_key_values are created in case we are under gradient checkpointing and training regime. 
Hence, force-setting `use_cache` to False above the line here: https://github.com/huggingface/transformers/blob/2788f8d8d5f9cee2fe33a9292b0f3570bd566a6d/src/transformers/models/llama/modeling_llama.py#L1004 fixes the issue, as we have been always doing it before the cache refactor. \r\n\r\nThe fix proposed worked for peft but should be universal to all training frameworks, except if you patch LLama/Mistral modeling classes with other classes, which in that case you should apply the same patch there as well. \r\n\r\nLet me know if anything is unclear!\r\n\r\n",
"That mostly makes sense...I'm didn't quite understand why it wasn't an issue in previous versions though. Shouldn't we just never compute past kv during training? regardless of gradient checkpointing or not. Even if it worked, its not good to be creating past kv when we're not generating, as it uses significant extra memory.\r\n\r\nAs to why the model's forward gets called again, that is because when you activation checkpointing, you don't save all of the activations for the backward pass, only some of them, and then you recompute the rest.",
"Thanks @dakinggg for your reply! \r\nThe reason why it was not failing before is that here: https://github.com/huggingface/transformers/blob/4d806dba8ca6ba714fd7b95c112dd8514136c9af/src/transformers/models/llama/modeling_llama.py#L893 `past_key_values` was always `None` during training leading to that block never being called, whereas now, past_key_values are always created during training since the model will fallback to config's `use_cache` to create `past_key_values`\r\nThanks also for explaining about GC, it makes sense. ",
"Ahh I see. So it seems to me that the proper fix is to go back to the old behavior where `past_key_values` is always null during training. We don't ever want to create them unless we are doing generation. I can certainly explicitly set `use_cache=False` in my code, but this will be a huge pitfall for others if that old behavior is not maintained.",
"Related, IMO the proper place to default `use_cache` to `True` is in `prepare_inputs_for_generation`, not in the model config.",
"Yep, we'll add a fix",
"Thanks!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"not sure if this has been fixed",
"Not yet, I'll be doing some more refactoring to past key values, in hope to fix these issues as well",
"Do we know why it produces higher loss? Should we use 4.35.2 before the refactoring is done?",
"Using flash attention, yes we know where the issue comes from: #28142 and more [details](https://github.com/huggingface/transformers/pull/28142#issuecomment-1869513914) was fixed on main and should be released this week",
"I meet the same peoblem, I install the transformers with the main branch, but it doesn't work. Has this problem been solved? thanks! @ArthurZucker "
] | 1,702 | 1,707 | null | CONTRIBUTOR | null | ### System Info
- `transformers` version: 4.36.0
- Platform: Linux-5.15.0-91-generic-x86_64-with-glibc2.31
- Python version: 3.10.12
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.3.1
- Accelerate version: 0.25.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0+cu121 (True)
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Sorry that I don't really have a minimal reproducer here as I'm in another training framework, but I still think this might be useful for you.
Running training on llama2 7b with activation checkpointing has some issues in 4.36. Compared to training with 4.35.2:
- if using flash attention, training produces higher loss, is slower, and uses more memory
- if not using flash attention, crashes with `ValueError: Attention mask should be of size (2, 1, 4096, 8192), but is torch.Size([2, 1, 4096, 4096])`
If I explicitly set `use_cache=False` (shouldn't have any impact during training because there is no cache), results with 4.36 are similar to 4.35.2.
### Expected behavior
No regression from 4.35.2 -> 4.36. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28056/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28056/timeline | null | null | null |
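A small sketch of the workaround discussed in the issue above: when training with activation checkpointing outside of `Trainer`/PEFT on transformers 4.36, explicitly disable the KV cache so the modeling code never materialises `past_key_values`. The model id is only an example; any causal LM would work the same way.

```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",   # example checkpoint, requires access
    torch_dtype=torch.bfloat16,
)
model.config.use_cache = False        # avoid falling back to the config default of True
model.gradient_checkpointing_enable()
model.train()

input_ids = torch.randint(0, model.config.vocab_size, (2, 128))
outputs = model(input_ids=input_ids, labels=input_ids, use_cache=False)
outputs.loss.backward()
```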