url
stringlengths
62
66
repository_url
stringclasses
1 value
labels_url
stringlengths
76
80
comments_url
stringlengths
71
75
events_url
stringlengths
69
73
html_url
stringlengths
50
56
id
int64
377M
2.15B
node_id
stringlengths
18
32
number
int64
1
29.2k
title
stringlengths
1
487
user
dict
labels
list
state
stringclasses
2 values
locked
bool
2 classes
assignee
dict
assignees
list
comments
sequence
created_at
int64
1.54k
1.71k
updated_at
int64
1.54k
1.71k
closed_at
int64
1.54k
1.71k
author_association
stringclasses
4 values
active_lock_reason
stringclasses
2 values
body
stringlengths
0
234k
reactions
dict
timeline_url
stringlengths
71
75
state_reason
stringclasses
3 values
draft
bool
2 classes
pull_request
dict
https://api.github.com/repos/huggingface/transformers/issues/27244
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27244/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27244/comments
https://api.github.com/repos/huggingface/transformers/issues/27244/events
https://github.com/huggingface/transformers/pull/27244
1,974,399,421
PR_kwDOCUB6oc5ecO8H
27,244
Add VITS training script example
{ "login": "ylacombe", "id": 52246514, "node_id": "MDQ6VXNlcjUyMjQ2NTE0", "avatar_url": "https://avatars.githubusercontent.com/u/52246514?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ylacombe", "html_url": "https://github.com/ylacombe", "followers_url": "https://api.github.com/users/ylacombe/followers", "following_url": "https://api.github.com/users/ylacombe/following{/other_user}", "gists_url": "https://api.github.com/users/ylacombe/gists{/gist_id}", "starred_url": "https://api.github.com/users/ylacombe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ylacombe/subscriptions", "organizations_url": "https://api.github.com/users/ylacombe/orgs", "repos_url": "https://api.github.com/users/ylacombe/repos", "events_url": "https://api.github.com/users/ylacombe/events{/privacy}", "received_events_url": "https://api.github.com/users/ylacombe/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27244). All of your documentation changes will be reflected on that endpoint.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,698
1,704
1,704
COLLABORATOR
null
# What does this PR do? Vits is a TTS model that have been supported in HF for a few months now. MMS-tts, Meta's models on a thousand languages, also uses VITS architecture. This PR aims to add Vits training support, following [community interest on the hub](https://huggingface.co/facebook/mms-tts/discussions/1#653c1b47c0d69fe83ded4161). Vits is a peculiar model, a GAN, a VAE a flow-based, all at the same time. To train it, one needs a discriminator, which this PR adds to Transformers. I'll comment on the architecture and on some modeling choices on the comments below. I still need to write the description of the training code, but I believe the implementation choices I've made should have our core maintainers approval first! cc @amyeroberts and @sanchit-gandhi !
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27244/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27244/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27244", "html_url": "https://github.com/huggingface/transformers/pull/27244", "diff_url": "https://github.com/huggingface/transformers/pull/27244.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27244.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/27243
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27243/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27243/comments
https://api.github.com/repos/huggingface/transformers/issues/27243/events
https://github.com/huggingface/transformers/pull/27243
1,974,377,929
PR_kwDOCUB6oc5ecKRL
27,243
fix-deprecated-exllama-arg
{ "login": "SunMarc", "id": 57196510, "node_id": "MDQ6VXNlcjU3MTk2NTEw", "avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SunMarc", "html_url": "https://github.com/SunMarc", "followers_url": "https://api.github.com/users/SunMarc/followers", "following_url": "https://api.github.com/users/SunMarc/following{/other_user}", "gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}", "starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions", "organizations_url": "https://api.github.com/users/SunMarc/orgs", "repos_url": "https://api.github.com/users/SunMarc/repos", "events_url": "https://api.github.com/users/SunMarc/events{/privacy}", "received_events_url": "https://api.github.com/users/SunMarc/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,698
1,698
1,698
MEMBER
null
# What does this PR do ? This PR fixes the logic with disable_exlllama so that it is BC compatible. It fixes the case were the user overwrite the `GPTQConfig`. ```py checkpoint = "marcsun13/opt-350m-gptq-4bit" quantization_config = GPTQConfig(bits=4, disable_exllama=True ) # True print(quantization_config.use_exllama) # False print(quantization_config.disable_exllama) -> this will lead to an error as both are defined. model = AutoModelForCausalLM.from_pretrained(checkpoint, quantization_config=quantization_config) ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27243/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27243/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27243", "html_url": "https://github.com/huggingface/transformers/pull/27243", "diff_url": "https://github.com/huggingface/transformers/pull/27243.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27243.patch", "merged_at": 1698938612000 }
https://api.github.com/repos/huggingface/transformers/issues/27242
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27242/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27242/comments
https://api.github.com/repos/huggingface/transformers/issues/27242/events
https://github.com/huggingface/transformers/pull/27242
1,974,372,710
PR_kwDOCUB6oc5ecJIn
27,242
Bin format tests
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,698
1,704
1,704
MEMBER
null
As requested in https://github.com/huggingface/transformers/pull/27231#issuecomment-1790681029 and https://github.com/huggingface/transformers/pull/27231#pullrequestreview-1710357492
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27242/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27242/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27242", "html_url": "https://github.com/huggingface/transformers/pull/27242", "diff_url": "https://github.com/huggingface/transformers/pull/27242.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27242.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/27241
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27241/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27241/comments
https://api.github.com/repos/huggingface/transformers/issues/27241/events
https://github.com/huggingface/transformers/issues/27241
1,974,242,301
I_kwDOCUB6oc51rIv9
27,241
Typo of layernorm naming.
{ "login": "RunpeiDong", "id": 37994246, "node_id": "MDQ6VXNlcjM3OTk0MjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/37994246?v=4", "gravatar_id": "", "url": "https://api.github.com/users/RunpeiDong", "html_url": "https://github.com/RunpeiDong", "followers_url": "https://api.github.com/users/RunpeiDong/followers", "following_url": "https://api.github.com/users/RunpeiDong/following{/other_user}", "gists_url": "https://api.github.com/users/RunpeiDong/gists{/gist_id}", "starred_url": "https://api.github.com/users/RunpeiDong/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RunpeiDong/subscriptions", "organizations_url": "https://api.github.com/users/RunpeiDong/orgs", "repos_url": "https://api.github.com/users/RunpeiDong/repos", "events_url": "https://api.github.com/users/RunpeiDong/events{/privacy}", "received_events_url": "https://api.github.com/users/RunpeiDong/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @RunpeiDong, please see this related issue: #27190 \r\n", "> Hi @RunpeiDong, please see this related issue: #27190\r\n\r\nThanks @vanpelt. I understand the tedious work to do if we want to address typos in previous models. I guess we can only try to avoid it in future models." ]
1,698
1,698
1,698
NONE
null
the `pre_layrnorm` should be `pre_layernorm`. Many of them are written wrongly, but the `post_layernorm` is all written correctly. Why is it? Should it be fixed? Some examples of this typo: https://github.com/huggingface/transformers/blob/4557a0dede92ce985576fac478b754d76bba3c18/src/transformers/models/clip/modeling_clip.py#L844C57-L844C57
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27241/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27241/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27240
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27240/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27240/comments
https://api.github.com/repos/huggingface/transformers/issues/27240/events
https://github.com/huggingface/transformers/pull/27240
1,974,025,067
PR_kwDOCUB6oc5ea8YJ
27,240
Fixing m4t.
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks @Narsil!" ]
1,698
1,698
1,698
CONTRIBUTOR
null
# What does this PR do? Fixed m4t which had a disparity in configuration for SpeechToText. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27240/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27240/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27240", "html_url": "https://github.com/huggingface/transformers/pull/27240", "diff_url": "https://github.com/huggingface/transformers/pull/27240.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27240.patch", "merged_at": 1698935538000 }
https://api.github.com/repos/huggingface/transformers/issues/27239
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27239/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27239/comments
https://api.github.com/repos/huggingface/transformers/issues/27239/events
https://github.com/huggingface/transformers/issues/27239
1,974,007,921
I_kwDOCUB6oc51qPhx
27,239
Inconsistencies in Evaluation: Manual Inference vs. Huggingface Pipeline
{ "login": "monk1337", "id": 17107749, "node_id": "MDQ6VXNlcjE3MTA3NzQ5", "avatar_url": "https://avatars.githubusercontent.com/u/17107749?v=4", "gravatar_id": "", "url": "https://api.github.com/users/monk1337", "html_url": "https://github.com/monk1337", "followers_url": "https://api.github.com/users/monk1337/followers", "following_url": "https://api.github.com/users/monk1337/following{/other_user}", "gists_url": "https://api.github.com/users/monk1337/gists{/gist_id}", "starred_url": "https://api.github.com/users/monk1337/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/monk1337/subscriptions", "organizations_url": "https://api.github.com/users/monk1337/orgs", "repos_url": "https://api.github.com/users/monk1337/repos", "events_url": "https://api.github.com/users/monk1337/events{/privacy}", "received_events_url": "https://api.github.com/users/monk1337/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The issue has been solved. I had to use `AutomaticSpeechRecognitionPipeline`", "Thanks for sharing the solution! " ]
1,698
1,698
1,698
NONE
null
Hi, I noticed a significant difference between the Word Error Rate (WER) scores when evaluating with manual inference and when using a custom pipeline. I would appreciate any guidance on whether I might be making a mistake in my approach. 1. Manual Inference: ``` model.eval() for step, batch in enumerate(tqdm(eval_dataloader)): with torch.cuda.amp.autocast(): with torch.no_grad(): generated_tokens = ( model.generate( input_features=batch["input_features"].to("cuda"), forced_decoder_ids=forced_decoder_ids, max_new_tokens=255, ) .cpu() .numpy() ) labels = batch["labels"].cpu().numpy() labels = np.where(labels != -100, labels, processor.tokenizer.pad_token_id) decoded_preds = processor.tokenizer.batch_decode(generated_tokens, skip_special_tokens=True) decoded_labels = processor.tokenizer.batch_decode(labels, skip_special_tokens=True) predictions.extend(decoded_preds) references.extend(decoded_labels) normalized_predictions.extend([normalizer(pred).strip() for pred in decoded_preds]) normalized_references.extend([normalizer(label).strip() for label in decoded_labels]) del generated_tokens, labels, batch gc.collect() wer = 100 * metric.compute(predictions=predictions, references=references) normalized_wer = 100 * metric.compute(predictions=normalized_predictions, references=normalized_references) eval_metrics = {"eval/wer": wer, "eval/normalized_wer": normalized_wer} print(f"{wer=} and {normalized_wer=}") print(eval_metrics) ``` The score is `{'eval/wer': 167.3913043478261, 'eval/normalized_wer': 75.28089887640449}` 2. Pipeline Evaluation: ``` class TranscriptionPipeline: def __init__(self, model_name): self.pipe = self._inference_pipeline(model_name) def _inference_pipeline(self, model_name): return pipeline(model=model_name) def transcribe(self, audio_path): result = self.pipe(audio_path) return result def wer_score(self, audio_path, ground_truth): result = self.transcribe(audio_path)["text"] wer = 100 * WER_score.compute(predictions=[result], references=[ground_truth]) return {"wer": wer} def wer_score_bulk(self, audio_paths, ground_truths): results = [result_.get("text", "") for result_ in self.transcribe(audio_paths)] wer = 100 * WER_score.compute(predictions=results, references=ground_truths) return {"wer": wer} def evaluate_dataset(self, dataset): audio_paths = [] sentences = [] # Populate the lists for item in dataset: audio_paths.append(item['audio']['path']) sentences.append(item['sentence']) # Get WER score using the bulk method result = self.wer_score_bulk(audio_paths, sentences) return result ty = TranscriptionPipeline("aaditya/Whisper_a948aa64-4c75-4246-ab98-0a48a9f2cd14") ty.evaluate_dataset(common_voice_["test"]) ``` While Pipeline evaluation score is `{'wer': 106.5217391304348}` Concern: The discrepancy between the two WER scores is quite large. Could someone help identify if I've made an error in my approach or clarify why such a difference might occur? ### Who can help? pipelines: @Narsil @Vaibhavs10 @sanchit-gandhi ### Information - [x] The official example scripts - [x] My own modified scripts ### Tasks - [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction ### Expected behavior WER should be same.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27239/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27239/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27238
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27238/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27238/comments
https://api.github.com/repos/huggingface/transformers/issues/27238/events
https://github.com/huggingface/transformers/issues/27238
1,974,001,073
I_kwDOCUB6oc51qN2x
27,238
Infomer with different output_size
{ "login": "antonio-navarra", "id": 65104103, "node_id": "MDQ6VXNlcjY1MTA0MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/65104103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/antonio-navarra", "html_url": "https://github.com/antonio-navarra", "followers_url": "https://api.github.com/users/antonio-navarra/followers", "following_url": "https://api.github.com/users/antonio-navarra/following{/other_user}", "gists_url": "https://api.github.com/users/antonio-navarra/gists{/gist_id}", "starred_url": "https://api.github.com/users/antonio-navarra/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/antonio-navarra/subscriptions", "organizations_url": "https://api.github.com/users/antonio-navarra/orgs", "repos_url": "https://api.github.com/users/antonio-navarra/repos", "events_url": "https://api.github.com/users/antonio-navarra/events{/privacy}", "received_events_url": "https://api.github.com/users/antonio-navarra/received_events", "type": "User", "site_admin": false }
[ { "id": 2648621985, "node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request", "name": "Feature request", "color": "FBCA04", "default": false, "description": "Request for a new feature" } ]
open
false
null
[]
[ "cc @kashif ", "@antonio-navarra yes so my intial setup was to have any extra dynamic covariates (which will be part of the input vector) as dynamic real features (with the caviate that they are known at inference time). For covariates that are only know in the context window, then yes one would need some extra features.... \r\n\r\nwhat was your setup and can it be done via dynamic real features?", "Kashif,\r\n\r\nmy setup is that I have a number of physical fields, e.g Field1, Field2, etc.., each one is described by a different number of feature and relative time series. Say, to fix the ideas that I have Field1 (6 features), Field2 (11) and so on.. The length of the time series is the same for all of them.\r\n\r\nThe target is to predict the Fields. In the present formulation , the target is the same number of features over some prediction time\r\n\r\nTrain (Field1, Field2) —> Predict (Field1, Field2)\r\n\r\nI had some successes, but I thought that it might be interesting to try\r\n\r\nTrain (Field1, Field2) —> Predict (Field1)\r\n\r\nThe extra feature cannot be used as dynamical real features because they are not known at prediction time…\r\n\r\nThat’s the situation, I am glad for any help…\r\n\r\nBest\r\n\r\nAntonio\r\n\r\n> On 2 Nov 2023, at 17:08, Kashif Rasul ***@***.***> wrote:\r\n> \r\n> \r\n> @antonio-navarra <https://github.com/antonio-navarra> yes so my intial setup was to have any extra dynamic covariates (which will be part of the input vector) as dynamic real features (with the caviate that they are known at inference time). For covariates that are only know in the context window, then yes one would need some extra features....\r\n> \r\n> what was your setup and can it be done via dynamic real features?\r\n> \r\n> —\r\n> Reply to this email directly, view it on GitHub <https://github.com/huggingface/transformers/issues/27238#issuecomment-1791031332>, or unsubscribe <https://github.com/notifications/unsubscribe-auth/APQWRZ74DOFOWM77XPQE7O3YCPAOJAVCNFSM6AAAAAA62V6TVGVHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMYTOOJRGAZTCMZTGI>.\r\n> You are receiving this because you were mentioned.\r\n> \r\n\r\n", "The time series dataset processing has to be improved. Currently it asks for target, what if what we care about it multi-target (i.e., multivariate series), even the new PatchTSMixer is entirely unusable, no idea how to get the data formatted. Went through all your helper notebooks, not helpful at all. For example for PatchTSMixer it would be extremely helful to have an example that goes from: ``numpy_data # Reshaped to (num_samples, lookback_window, num_features)``\r\n\r\nto ``trainer = Trainer(model=model, train_dataset=train_scaled, eval_dataset=valid_scaled)``", "This is the current notebook, I think a real explanatory notebook would very quickly increase the uptake for TS hugging. https://github.com/huggingface/notebooks/blob/main/examples/time_series_datasets.ipynb", "@firmai so the informer example is a multivariate setup, that takes as input the multivariate vector and on the output side the emission head is an independent diagonal. The patchtstmixer is multivariate and a blog post https://github.com/huggingface/blog/pull/1740 with a notebook is in the review stages. Could you kindly have a look?", "Perfect, I knew there was something missing, I guess I just jumped on the train to early. Thanks. " ]
1,698
1,706
null
NONE
null
### Feature request The Infomer transformers seems to perform well on time series multivariate forecasting also in my application, however, it would be nice to have the option to have an `output_size` for the features different from the `input_size`. ### Motivation The reasons is that for the multivariate forecasting problem i am considering it would make sense to focus the predictions only on the variables/features of interest. ### Your contribution I will be happy to work and contribute what I can. I have inspected the code -- I am using InfomerforPrediction -- but it i snot immediately obvious to me where to insert the last fully connected layer for the reduction. If somebody can give some guidance, i will be happy to try it...
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27238/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27238/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/27237
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27237/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27237/comments
https://api.github.com/repos/huggingface/transformers/issues/27237/events
https://github.com/huggingface/transformers/issues/27237
1,973,983,206
I_kwDOCUB6oc51qJfm
27,237
Infomer fails with LAG=0
{ "login": "antonio-navarra", "id": 65104103, "node_id": "MDQ6VXNlcjY1MTA0MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/65104103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/antonio-navarra", "html_url": "https://github.com/antonio-navarra", "followers_url": "https://api.github.com/users/antonio-navarra/followers", "following_url": "https://api.github.com/users/antonio-navarra/following{/other_user}", "gists_url": "https://api.github.com/users/antonio-navarra/gists{/gist_id}", "starred_url": "https://api.github.com/users/antonio-navarra/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/antonio-navarra/subscriptions", "organizations_url": "https://api.github.com/users/antonio-navarra/orgs", "repos_url": "https://api.github.com/users/antonio-navarra/repos", "events_url": "https://api.github.com/users/antonio-navarra/events{/privacy}", "received_events_url": "https://api.github.com/users/antonio-navarra/received_events", "type": "User", "site_admin": false }
[]
open
false
{ "login": "kashif", "id": 8100, "node_id": "MDQ6VXNlcjgxMDA=", "avatar_url": "https://avatars.githubusercontent.com/u/8100?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kashif", "html_url": "https://github.com/kashif", "followers_url": "https://api.github.com/users/kashif/followers", "following_url": "https://api.github.com/users/kashif/following{/other_user}", "gists_url": "https://api.github.com/users/kashif/gists{/gist_id}", "starred_url": "https://api.github.com/users/kashif/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kashif/subscriptions", "organizations_url": "https://api.github.com/users/kashif/orgs", "repos_url": "https://api.github.com/users/kashif/repos", "events_url": "https://api.github.com/users/kashif/events{/privacy}", "received_events_url": "https://api.github.com/users/kashif/received_events", "type": "User", "site_admin": false }
[ { "login": "kashif", "id": 8100, "node_id": "MDQ6VXNlcjgxMDA=", "avatar_url": "https://avatars.githubusercontent.com/u/8100?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kashif", "html_url": "https://github.com/kashif", "followers_url": "https://api.github.com/users/kashif/followers", "following_url": "https://api.github.com/users/kashif/following{/other_user}", "gists_url": "https://api.github.com/users/kashif/gists{/gist_id}", "starred_url": "https://api.github.com/users/kashif/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kashif/subscriptions", "organizations_url": "https://api.github.com/users/kashif/orgs", "repos_url": "https://api.github.com/users/kashif/repos", "events_url": "https://api.github.com/users/kashif/events{/privacy}", "received_events_url": "https://api.github.com/users/kashif/received_events", "type": "User", "site_admin": false } ]
[ "Hi @antonio-navarra, thanks for raising this issue! \r\n\r\nCould you provide a minimal code example we can run to reproduce the issue? Without that we'll be unable to help. \r\n\r\ncc @gante ", "@amyeroberts FYI time-series models often have their own stand-alone `generate` methods (e.g. [this one for Informer](https://github.com/huggingface/transformers/blob/main/src/transformers/models/informer/modeling_informer.py#L1879)) :)\r\n\r\nI'm passing the cc along to our time-series expert, @kashif ", "yes currently there is a need for 1 time step in the enc-dec setting to kick-start the generation process... and thus edge cases happen when this is not the case etc. Thus i think it might be useful to have at least a lag=[1] and thus at inference time a context windows of 2 time steps... \r\n\r\ncan you confirm it works with `[1]` @antonio-navarra ?", "@Kashif,\r\nYes with any lags >0 works. In my case I use it to make forecasts of atmospheric data sets LAGS=[1] with an input sequence of 30 time steps gives good results.\r\n\r\nProbably, LAGS=0 is a little bit contrived, but then it would be useful to have a check and mention it in the docs.\r\n\r\n—Antonio\r\n\r\n> On 2 Nov 2023, at 17:46, Kashif Rasul ***@***.***> wrote:\r\n> \r\n> \r\n> yes currently there is a need for 1 time step in the enc-dec setting to kick-start the generation process... and thus edge cases happen when this is not the case etc. Thus i think it might be useful to have at least a lag=[1] and thus at inference time a context windows of 2 time steps...\r\n> \r\n> can you confirm it works with [1] @antonio-navarra <https://github.com/antonio-navarra> ?\r\n> \r\n> —\r\n> Reply to this email directly, view it on GitHub <https://github.com/huggingface/transformers/issues/27237#issuecomment-1791100530>, or unsubscribe <https://github.com/notifications/unsubscribe-auth/APQWRZ2UQB4CO4AH46EC6SLYCPE6PAVCNFSM6AAAAAA62VRJMGVHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMYTOOJRGEYDANJTGA>.\r\n> You are receiving this because you were mentioned.\r\n> \r\n\r\n", "ok good to know... yes i'll see how to make a check for zero or even negative lags... thanks for the heads up!", "#27237 Solved ✅ (https://github.com/huggingface/transformers/pull/28766)\r\n\r\nI suggest a resolution for situations where sequence_lags=[0]. In such instances, the length of the context past sequence aligns with the overall sequence length. Let me provide some clarification on this matter.\r\n\r\nThe issue stems from line `1230` in `modeling_time_series_transformer.py.` In cases where the lag parameter is set to 0, the index is assigned a value of -1. This results in only one data point being lagged, creating a discrepancy when using model.generate(). For example, if the size is 48, selecting a lag of [0] produces 49, rendering it unsuitable for model.generate().\r\n\r\n```\r\nsequence_length = sequence.shape[1]\r\nindices = [lag - shift for lag in self.config.lags_sequence]\r\nif max(indices) + subsequences_length > sequence_length:\r\n raise ValueError(\r\n f\"lags cannot go further than history length, found lag {max(indices)} \"\r\n f\"while history length is only {sequence_length}\"\r\n )\r\n```\r\nWe can modify the code as shown below to rectify index 0 when negative values are encountered ✅:\r\n```\r\nsequence_length = sequence.shape[1]\r\n# (Khalid Oublal) -> addressed the issue regarding the scenario where lag equals 0.\r\n# The previous implementation was: indices = [lag - shift for lag in self.config.lags_sequence]\r\nindices = [lag - shift if lag > 0 else 0 for lag in self.config.lags_sequence]\r\nif max(indices) + subsequences_length > sequence_length:\r\n raise ValueError(\r\n f\"lags cannot go further than history length, found lag {max(indices)} \"\r\n f\"while history length is only {sequence_length}\"\r\n )\r\n```\r\n### Check DataLoader\r\n\r\nIn the analysis below, it's evident that there are no lags indicated by `sequence_lags=[0]`. The length of the context in this batch matches the provided context length.\r\n\r\n![Screenshot 2024-01-29 at 21 58 42](https://github.com/huggingface/transformers/assets/76509145/0079c3a8-eed6-4d82-bb41-fc6dcaf04a5a)\r\n\r\n### Confirming Training Status\r\n\r\nBelow, it's apparent that the training is progressing smoothly. Some additional print statements were added to verify that the lags are 0, implying the indices should be `[0]`.\r\n\r\n![Screenshot 2024-01-29 at 21 54 03](https://github.com/huggingface/transformers/assets/76509145/f159c725-ff81-4a11-89e0-5ef59ac0763e)\r\n\r\n### Generating with `model.generate()`\r\n\r\nNow, with `sequence_lags=[0]`, we observe that predictions can be made without any issues.\r\n\r\n![Screenshot 2024-01-29 at 21 55 31](https://github.com/huggingface/transformers/assets/76509145/dfe7915b-8859-41ad-9664-f72f6c4754c7)\r\n\r\nBest,\r\nkhalid oublal" ]
1,698
1,706
null
NONE
null
### System Info Transformer 4.32.1, Anaconda, MacOS ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction The Infomer for prediction fails if LAGS are set to [0]. The model converge nicely, but fails in the generating phase: --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) Cell In[18], line 5 1 #Creates Plots 3 model.load_state_dict(torch.load(file)) ----> 5 out_test = predict(model, test_dataloader,Tpredict, device=torch.device('cpu')).mean(axis=1) 6 out_val = predict(model, val_dataloader,Tpredict,device=torch.device('cpu')).mean(axis=1) 7 out_train = predict(model, train_dataloader,Tpredict,device=torch.device('cpu')).mean(axis=1) Cell In[17], line 18, in predict(model, val_loader, Tpredict, device) 12 pasobs = torch.ones([batch_size,TIN,MIN],dtype=torch.float32, device=device) 14 # during inference, one only provides past values 15 # as well as possible additional features 16 # the model autoregressively generates future values ---> 18 output = model.generate( 19 past_values=src, 20 past_time_features=pasft, 21 past_observed_mask=pasobs, 22 future_time_features=futft) 25 if i == 0 : 26 temp = output['sequences'] File /opt/anaconda3/envs/AI/lib/python3.9/site-packages/torch/utils/_contextlib.py:115, in context_decorator.<locals>.decorate_context(*args, **kwargs) 112 @functools.wraps(func) 113 def decorate_context(*args, **kwargs): 114 with ctx_factory(): --> 115 return func(*args, **kwargs) File /opt/anaconda3/envs/AI/lib/python3.9/site-packages/transformers/models/informer/modeling_informer.py:2089, in InformerForPrediction.generate(self, past_values, past_time_features, future_time_features, past_observed_mask, static_categorical_features, static_real_features, output_attentions, output_hidden_states) 2086 lags_shape = lagged_sequence.shape 2087 reshaped_lagged_sequence = lagged_sequence.reshape(lags_shape[0], lags_shape[1], -1) -> 2089 decoder_input = torch.cat((reshaped_lagged_sequence, repeated_features[:, : k + 1]), dim=-1) 2091 dec_output = decoder(inputs_embeds=decoder_input, encoder_hidden_states=repeated_enc_last_hidden) 2092 dec_last_hidden = dec_output.last_hidden_state RuntimeError: Sizes of tensors must match except in dimension 2. Expected size 30 but got size 1 for tensor number 1 in the list. I think I traced the problem to the hardwired 'shift=1' into `generate` ### Expected behavior I expect the generation to go forward also for LAGS=0
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27237/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27237/timeline
reopened
null
null
https://api.github.com/repos/huggingface/transformers/issues/27236
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27236/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27236/comments
https://api.github.com/repos/huggingface/transformers/issues/27236/events
https://github.com/huggingface/transformers/pull/27236
1,973,972,102
PR_kwDOCUB6oc5eaw4G
27,236
Wrap `_prepare_4d_causal_attention_mask` as a leaf function
{ "login": "michaelbenayoun", "id": 25418079, "node_id": "MDQ6VXNlcjI1NDE4MDc5", "avatar_url": "https://avatars.githubusercontent.com/u/25418079?v=4", "gravatar_id": "", "url": "https://api.github.com/users/michaelbenayoun", "html_url": "https://github.com/michaelbenayoun", "followers_url": "https://api.github.com/users/michaelbenayoun/followers", "following_url": "https://api.github.com/users/michaelbenayoun/following{/other_user}", "gists_url": "https://api.github.com/users/michaelbenayoun/gists{/gist_id}", "starred_url": "https://api.github.com/users/michaelbenayoun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/michaelbenayoun/subscriptions", "organizations_url": "https://api.github.com/users/michaelbenayoun/orgs", "repos_url": "https://api.github.com/users/michaelbenayoun/repos", "events_url": "https://api.github.com/users/michaelbenayoun/events{/privacy}", "received_events_url": "https://api.github.com/users/michaelbenayoun/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "No it needs to happen at the top-module level.", "It is, the only difference it implies is that it will not be possible to edit this function via `torch.fx`. ", "_The documentation is not available anymore as the PR was closed or merged._", "Actually I have a workaround that works for my purposes in `optimum-neuron`, I can reverse the changes or keep them like that, as you wish. It's not a big deal in any case.", "I think it's fine to leave as-is - people might want to use torch tracing outside of `optimum-neuron`." ]
1,698
1,698
1,698
MEMBER
null
# What does this PR do? This wraps the `_prepare_4d_causal_attention_mask` as a FX leaf function for similar reasons than [here](https://github.com/huggingface/transformers/pull/26414#issuecomment-1737443846). The only consequence it has is that it will not be possible to edit this function by using `torch.fx`. It is not a big deal at all, but I will remove this constraint as soon as possible.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27236/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27236/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27236", "html_url": "https://github.com/huggingface/transformers/pull/27236", "diff_url": "https://github.com/huggingface/transformers/pull/27236.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27236.patch", "merged_at": 1698926610000 }
https://api.github.com/repos/huggingface/transformers/issues/27235
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27235/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27235/comments
https://api.github.com/repos/huggingface/transformers/issues/27235/events
https://github.com/huggingface/transformers/pull/27235
1,973,961,384
PR_kwDOCUB6oc5eauiz
27,235
[`tests`] Extend training tests to include half-precision (bf16) and automatic mixed precision training
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27235). All of your documentation changes will be reflected on that endpoint.", "I did not had time to properly finish this PR, right now the blocker is that it appears for some architectures that some gradients are set to `None` when using GC even if I make sure that the forward pass has been correctly called. Except skipping the test for these archs, I am not sure how we should proceed here", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,698
1,704
1,704
CONTRIBUTOR
null
# What does this PR do? As per title and to specifically address: https://github.com/huggingface/transformers/pull/27220#pullrequestreview-1709756019 This PR aims to extend the current testing suite to include pure bfloat16 training (I think it is not possible to perform pure float16 training) + automatic mixed precision training through the `torch.amp` API: https://pytorch.org/tutorials/recipes/recipes/amp_recipe.html That way we can easily check which model supports mixed precision training and support that from day 0 of the model integration cc @amyeroberts @ydshieh Draft for now as many tests are failing
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27235/reactions", "total_count": 2, "+1": 1, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27235/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27235", "html_url": "https://github.com/huggingface/transformers/pull/27235", "diff_url": "https://github.com/huggingface/transformers/pull/27235.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27235.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/27234
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27234/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27234/comments
https://api.github.com/repos/huggingface/transformers/issues/27234/events
https://github.com/huggingface/transformers/pull/27234
1,973,940,301
PR_kwDOCUB6oc5eaqAB
27,234
[`core` / `Quantization`] Fix for 8bit serialization tests
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,698
1,698
1,698
CONTRIBUTOR
null
# What does this PR do? Currently on main, 8 bit serialization is broken due to the switching of safetensors serialization by default. Link to failing job: https://github.com/huggingface/transformers/actions/runs/6727555396/job/18285709702 For 8bit models, some other objects (such as strings) are stored in the state dict. The fix is to simply deal with non-tensor case by appending the pointer address of the non-tensor object Added also regression tests to make sure previous behaviour is preserved. With this fix, all failing tests with respect to 8bit serialization now pass. cc @amyeroberts @LysandreJik
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27234/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27234/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27234", "html_url": "https://github.com/huggingface/transformers/pull/27234", "diff_url": "https://github.com/huggingface/transformers/pull/27234.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27234.patch", "merged_at": 1698923031000 }
https://api.github.com/repos/huggingface/transformers/issues/27233
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27233/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27233/comments
https://api.github.com/repos/huggingface/transformers/issues/27233/events
https://github.com/huggingface/transformers/issues/27233
1,973,885,544
I_kwDOCUB6oc51pxpo
27,233
wandb watch frequency
{ "login": "yuanenming", "id": 26831266, "node_id": "MDQ6VXNlcjI2ODMxMjY2", "avatar_url": "https://avatars.githubusercontent.com/u/26831266?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yuanenming", "html_url": "https://github.com/yuanenming", "followers_url": "https://api.github.com/users/yuanenming/followers", "following_url": "https://api.github.com/users/yuanenming/following{/other_user}", "gists_url": "https://api.github.com/users/yuanenming/gists{/gist_id}", "starred_url": "https://api.github.com/users/yuanenming/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yuanenming/subscriptions", "organizations_url": "https://api.github.com/users/yuanenming/orgs", "repos_url": "https://api.github.com/users/yuanenming/repos", "events_url": "https://api.github.com/users/yuanenming/events{/privacy}", "received_events_url": "https://api.github.com/users/yuanenming/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @yuanenming, thanks for raising this issue! \r\n\r\nThe integrations are maintained by the contributors, however in this case I can see @muellerzr modified the line. I suspect it's a pragmatic number which is reasonable and suits most cases. \r\n\r\nThe great thing about open source code is that you can modify it as you like! If you want to log things more frequently the number can be changed in a fork or your local copy. If this is a feature you think would be useful to the wider community, then please feel welcome to open a PR to make this configurable and we'll be happy to review. ", "Hi, Thank you for your prompt reply.\r\nYes I have modified it locally. BTW, I find it is reasonable because it takes ~20 mins to log all the gradients and weights distributions for LLaMA2-70B. " ]
1,698
1,698
1,698
NONE
null
https://github.com/huggingface/transformers/blob/af3de8d87c717c4bb090f037d0d89413c195a42f/src/transformers/integrations/integration_utils.py#L751 The minimal watch frequency is 100 by default. I think people may want to monitor the gradients more frequently. So why it has to be larger than 100?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27233/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27233/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27232
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27232/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27232/comments
https://api.github.com/repos/huggingface/transformers/issues/27232/events
https://github.com/huggingface/transformers/pull/27232
1,973,861,651
PR_kwDOCUB6oc5eaY6a
27,232
Revert "Fix conflicts in fuyu_follow_up_image_processing"
{ "login": "molbap", "id": 39954772, "node_id": "MDQ6VXNlcjM5OTU0Nzcy", "avatar_url": "https://avatars.githubusercontent.com/u/39954772?v=4", "gravatar_id": "", "url": "https://api.github.com/users/molbap", "html_url": "https://github.com/molbap", "followers_url": "https://api.github.com/users/molbap/followers", "following_url": "https://api.github.com/users/molbap/following{/other_user}", "gists_url": "https://api.github.com/users/molbap/gists{/gist_id}", "starred_url": "https://api.github.com/users/molbap/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/molbap/subscriptions", "organizations_url": "https://api.github.com/users/molbap/orgs", "repos_url": "https://api.github.com/users/molbap/repos", "events_url": "https://api.github.com/users/molbap/events{/privacy}", "received_events_url": "https://api.github.com/users/molbap/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27232). All of your documentation changes will be reflected on that endpoint." ]
1,698
1,698
1,698
CONTRIBUTOR
null
Reverts huggingface/transformers#27228
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27232/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27232/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27232", "html_url": "https://github.com/huggingface/transformers/pull/27232", "diff_url": "https://github.com/huggingface/transformers/pull/27232.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27232.patch", "merged_at": 1698918040000 }
https://api.github.com/repos/huggingface/transformers/issues/27231
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27231/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27231/comments
https://api.github.com/repos/huggingface/transformers/issues/27231/events
https://github.com/huggingface/transformers/pull/27231
1,973,858,740
PR_kwDOCUB6oc5eaYSM
27,231
Fix safetensors failing tests
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "The tests all passed, and overall looks good despite I am not familiar with this part.\r\n\r\nI am a bit worried tests like `test_save_load_keys_to_ignore_on_save` will now only test with `safetensors` and never torch bin format anymore. If this is the case, I believe we need to test both cases, but that could be done in a follow up PR.", "Thanks for the review!", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27231). All of your documentation changes will be reflected on that endpoint." ]
1,698
1,698
1,698
MEMBER
null
cc @ydshieh
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27231/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27231/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27231", "html_url": "https://github.com/huggingface/transformers/pull/27231", "diff_url": "https://github.com/huggingface/transformers/pull/27231.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27231.patch", "merged_at": 1698933790000 }
https://api.github.com/repos/huggingface/transformers/issues/27230
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27230/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27230/comments
https://api.github.com/repos/huggingface/transformers/issues/27230/events
https://github.com/huggingface/transformers/issues/27230
1,973,845,038
I_kwDOCUB6oc51pnwu
27,230
Inconsistent results between LlamaTokenizer and LlamaTokenizerFast
{ "login": "zzzzzzrc", "id": 33709455, "node_id": "MDQ6VXNlcjMzNzA5NDU1", "avatar_url": "https://avatars.githubusercontent.com/u/33709455?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zzzzzzrc", "html_url": "https://github.com/zzzzzzrc", "followers_url": "https://api.github.com/users/zzzzzzrc/followers", "following_url": "https://api.github.com/users/zzzzzzrc/following{/other_user}", "gists_url": "https://api.github.com/users/zzzzzzrc/gists{/gist_id}", "starred_url": "https://api.github.com/users/zzzzzzrc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zzzzzzrc/subscriptions", "organizations_url": "https://api.github.com/users/zzzzzzrc/orgs", "repos_url": "https://api.github.com/users/zzzzzzrc/repos", "events_url": "https://api.github.com/users/zzzzzzrc/events{/privacy}", "received_events_url": "https://api.github.com/users/zzzzzzrc/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey! This is a duplicate of #26318, #26455, #25881, and will be fixed in #26678 ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,698
1,702
1,702
NONE
null
### System Info transformers==4.34.1 tokenizers==0.14.1 ### Who can help? @ArthurZucker @younesbelkada ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ``` import transformers fast = transformers.AutoTokenizer.from_pretrained('meta-llama/Llama-2-7b-hf', use_fast=True) slow = transformers.AutoTokenizer.from_pretrained('meta-llama/Llama-2-7b-hf', use_fast=False) >>> fast.encode("abs</s>USER") [1, 6425, 2, 3148, 1001] >>> slow.encode("abs</s>USER") [1, 6425, 2, 11889] >>> fast.decode([1, 6425, 2, 3148, 1001]) '<s> abs</s> USER' >>> slow.decode([1, 6425, 2, 11889]) '<s> abs</s>USER' ``` fast tokenizer would add a whitespace between eos and USER ### Expected behavior fast and slow tokenizer get the same results
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27230/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27230/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27229
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27229/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27229/comments
https://api.github.com/repos/huggingface/transformers/issues/27229/events
https://github.com/huggingface/transformers/pull/27229
1,973,780,668
PR_kwDOCUB6oc5eaHoL
27,229
[merge preview] Fuyu follow up image processing
{ "login": "pcuenca", "id": 1177582, "node_id": "MDQ6VXNlcjExNzc1ODI=", "avatar_url": "https://avatars.githubusercontent.com/u/1177582?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pcuenca", "html_url": "https://github.com/pcuenca", "followers_url": "https://api.github.com/users/pcuenca/followers", "following_url": "https://api.github.com/users/pcuenca/following{/other_user}", "gists_url": "https://api.github.com/users/pcuenca/gists{/gist_id}", "starred_url": "https://api.github.com/users/pcuenca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pcuenca/subscriptions", "organizations_url": "https://api.github.com/users/pcuenca/orgs", "repos_url": "https://api.github.com/users/pcuenca/repos", "events_url": "https://api.github.com/users/pcuenca/events{/privacy}", "received_events_url": "https://api.github.com/users/pcuenca/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,698
1,698
1,698
MEMBER
null
@molbap this is what the branch looks like after merge
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27229/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/transformers/issues/27229/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27229", "html_url": "https://github.com/huggingface/transformers/pull/27229", "diff_url": "https://github.com/huggingface/transformers/pull/27229.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27229.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/27228
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27228/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27228/comments
https://api.github.com/repos/huggingface/transformers/issues/27228/events
https://github.com/huggingface/transformers/pull/27228
1,973,771,374
PR_kwDOCUB6oc5eaFpW
27,228
Fix conflicts in fuyu_follow_up_image_processing
{ "login": "pcuenca", "id": 1177582, "node_id": "MDQ6VXNlcjExNzc1ODI=", "avatar_url": "https://avatars.githubusercontent.com/u/1177582?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pcuenca", "html_url": "https://github.com/pcuenca", "followers_url": "https://api.github.com/users/pcuenca/followers", "following_url": "https://api.github.com/users/pcuenca/following{/other_user}", "gists_url": "https://api.github.com/users/pcuenca/gists{/gist_id}", "starred_url": "https://api.github.com/users/pcuenca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pcuenca/subscriptions", "organizations_url": "https://api.github.com/users/pcuenca/orgs", "repos_url": "https://api.github.com/users/pcuenca/repos", "events_url": "https://api.github.com/users/pcuenca/events{/privacy}", "received_events_url": "https://api.github.com/users/pcuenca/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,698
1,698
1,698
MEMBER
null
Merged main and fixed the conflicts
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27228/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27228/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27228", "html_url": "https://github.com/huggingface/transformers/pull/27228", "diff_url": "https://github.com/huggingface/transformers/pull/27228.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27228.patch", "merged_at": 1698917135000 }
https://api.github.com/repos/huggingface/transformers/issues/27227
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27227/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27227/comments
https://api.github.com/repos/huggingface/transformers/issues/27227/events
https://github.com/huggingface/transformers/pull/27227
1,973,600,918
PR_kwDOCUB6oc5eZhVH
27,227
make `LlamaDynamicNTKScalingRotaryEmbedding` work correctly
{ "login": "bzantium", "id": 19511788, "node_id": "MDQ6VXNlcjE5NTExNzg4", "avatar_url": "https://avatars.githubusercontent.com/u/19511788?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bzantium", "html_url": "https://github.com/bzantium", "followers_url": "https://api.github.com/users/bzantium/followers", "following_url": "https://api.github.com/users/bzantium/following{/other_user}", "gists_url": "https://api.github.com/users/bzantium/gists{/gist_id}", "starred_url": "https://api.github.com/users/bzantium/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bzantium/subscriptions", "organizations_url": "https://api.github.com/users/bzantium/orgs", "repos_url": "https://api.github.com/users/bzantium/repos", "events_url": "https://api.github.com/users/bzantium/events{/privacy}", "received_events_url": "https://api.github.com/users/bzantium/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Great PR @bzantium!\r\nI also tested RoPE scaling and realized the cache related code of it was wrong before.\r\nthis version. looks much more better.", "cc @gante ", "Hi @bzantium @hyunwoongko 👋 \r\n\r\nThere has been more than one issue raised regarding DynamicNTK as implemented by the authors and here in `transformers`:\r\n1 - When `use_cache=True`, `past_key_values` is not using the right rotation for sequence lengths larger than the original model length -- https://github.com/huggingface/transformers/issues/25104\r\n2 - [The one this PR addresses] When increasing then decreasing the sequence length, the base is not scaled back -- https://github.com/huggingface/transformers/pull/27033\r\n\r\nThere are solutions to both problems, but in both cases, they massively decrease the inference speed, which makes it a net negative change. I suspect this PR suffers from the same slowdowns as in https://github.com/huggingface/transformers/pull/27033 -- feel free to use the same test script as in there, taking [this comment](https://github.com/huggingface/transformers/pull/27033#issuecomment-1782738869) into consideration.\r\n\r\nIf the results are positive, I'd be happy to merge.", "Thanks for the comment @gante!\r\nEven though this PR would be slower than current code, the results are very different. I tested with `scaling_type=dynamic, scaling_factor=1` without truncation on the [longchat](https://github.com/DachengLi1/LongChat). The results are following:\r\n\r\nlines | 50 | 100 | 200 | 300 | 400 | 500 | 600 | 680\r\n-- | -- | -- | -- | -- | -- | -- | -- | --\r\noriginal | 42 | 6 | 14 | 10 | 4 | 0 | 0 | 0\r\nThis PR | 98 | 100 | 38 | 12 | 6 | 4 | 2 | 0\r\n\r\nSince this PR code resulted in far better performance, I think it should be fixed.", "Hey @bzantium \r\n\r\nAs I've written above, I will only consider merging after showing conclusive results that the change is a net positive -- with perplexity plots and execution time measurements. So far, you've only shown results in line with https://github.com/huggingface/transformers/pull/27033" ]
1,698
1,700
1,700
CONTRIBUTOR
null
Fixes #27226 to: @ArthurZucker, @younesbelkada
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27227/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27227/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27227", "html_url": "https://github.com/huggingface/transformers/pull/27227", "diff_url": "https://github.com/huggingface/transformers/pull/27227.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27227.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/27226
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27226/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27226/comments
https://api.github.com/repos/huggingface/transformers/issues/27226/events
https://github.com/huggingface/transformers/issues/27226
1,973,589,065
I_kwDOCUB6oc51opRJ
27,226
Current implementation for `DynamicNTKScalingRotaryEmbedding` in modeling_llama.py does not update cos, sin correctly.
{ "login": "bzantium", "id": 19511788, "node_id": "MDQ6VXNlcjE5NTExNzg4", "avatar_url": "https://avatars.githubusercontent.com/u/19511788?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bzantium", "html_url": "https://github.com/bzantium", "followers_url": "https://api.github.com/users/bzantium/followers", "following_url": "https://api.github.com/users/bzantium/following{/other_user}", "gists_url": "https://api.github.com/users/bzantium/gists{/gist_id}", "starred_url": "https://api.github.com/users/bzantium/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bzantium/subscriptions", "organizations_url": "https://api.github.com/users/bzantium/orgs", "repos_url": "https://api.github.com/users/bzantium/repos", "events_url": "https://api.github.com/users/bzantium/events{/privacy}", "received_events_url": "https://api.github.com/users/bzantium/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @gante ", "Seems to be a duplicate of #27003 and was mentioned in #25306. " ]
1,698
1,700
1,700
CONTRIBUTOR
null
`DynamicNTKScalingRotaryEmbedding` is originally designed to update `base` and `inv_freq` with dynamic_length (seq_len / self.max_position_embeddings) for every input sequence. However, current implementation only updates `base` and `inv_freq` only when `seq_len > self.max_seq_len_cached`. ```python class LlamaRotaryEmbedding(nn.Module): def __init__(self, dim, max_position_embeddings=2048, base=10000, device=None): super().__init__() self.dim = dim self.max_position_embeddings = max_position_embeddings self.base = base inv_freq = 1.0 / (self.base ** (torch.arange(0, self.dim, 2).float().to(device) / self.dim)) self.register_buffer("inv_freq", inv_freq, persistent=False) # Build here to make `torch.jit.trace` work. self._set_cos_sin_cache( seq_len=max_position_embeddings, device=self.inv_freq.device, dtype=torch.get_default_dtype() ) def _set_cos_sin_cache(self, seq_len, device, dtype): self.max_seq_len_cached = seq_len t = torch.arange(self.max_seq_len_cached, device=device, dtype=self.inv_freq.dtype) freqs = torch.einsum("i,j->ij", t, self.inv_freq) # Different from paper, but it uses a different permutation in order to obtain the same calculation emb = torch.cat((freqs, freqs), dim=-1) self.register_buffer("cos_cached", emb.cos().to(dtype), persistent=False) self.register_buffer("sin_cached", emb.sin().to(dtype), persistent=False) def forward(self, x, seq_len=None): # x: [bs, num_attention_heads, seq_len, head_size] if seq_len > self.max_seq_len_cached: self._set_cos_sin_cache(seq_len=seq_len, device=x.device, dtype=x.dtype) return ( self.cos_cached[:seq_len].to(dtype=x.dtype), self.sin_cached[:seq_len].to(dtype=x.dtype), ) class LlamaDynamicNTKScalingRotaryEmbedding(LlamaRotaryEmbedding): """LlamaRotaryEmbedding extended with Dynamic NTK scaling. Credits to the Reddit users /u/bloc97 and /u/emozilla""" def __init__(self, dim, max_position_embeddings=2048, base=10000, device=None, scaling_factor=1.0): self.scaling_factor = scaling_factor super().__init__(dim, max_position_embeddings, base, device) def _set_cos_sin_cache(self, seq_len, device, dtype): self.max_seq_len_cached = seq_len if seq_len > self.max_position_embeddings: base = self.base * ( (self.scaling_factor * seq_len / self.max_position_embeddings) - (self.scaling_factor - 1) ) ** (self.dim / (self.dim - 2)) inv_freq = 1.0 / (base ** (torch.arange(0, self.dim, 2).float().to(device) / self.dim)) self.register_buffer("inv_freq", inv_freq, persistent=False) t = torch.arange(self.max_seq_len_cached, device=device, dtype=self.inv_freq.dtype) freqs = torch.einsum("i,j->ij", t, self.inv_freq) # Different from paper, but it uses a different permutation in order to obtain the same calculation emb = torch.cat((freqs, freqs), dim=-1) self.register_buffer("cos_cached", emb.cos().to(dtype), persistent=False) self.register_buffer("sin_cached", emb.sin().to(dtype), persistent=False) ``` Thus, it needs to be fixed like the following: ```python class LlamaDynamicNTKScalingRotaryEmbedding(LlamaRotaryEmbedding): """LlamaRotaryEmbedding extended with Dynamic NTK scaling. Credits to the Reddit users /u/bloc97 and /u/emozilla""" def __init__(self, dim, max_position_embeddings=2048, base=10000, device=None, scaling_factor=1.0): self.scaling_factor = scaling_factor super().__init__(dim, max_position_embeddings, base, device) def forward(self, x, seq_len=None): if seq_len <= self.max_position_embeddings: dynamic = 1.0 else: dynamic = seq_len / self.max_position_embeddings base = self.base * ( (self.scaling_factor * dynamic) - (self.scaling_factor - 1) ) ** (self.dim / (self.dim - 2)) inv_freq = 1.0 / (base ** (torch.arange(0, self.dim, 2).float().to(x.device) / self.dim)) t = torch.arange(seq_len, device=x.device, dtype=inv_freq.dtype) freqs = torch.einsum("i,j->ij", t, inv_freq) emb = torch.cat((freqs, freqs), dim=-1) cos, sin = emb.cos(), emb.sin() return cos[:seq_len].to(dtype=x.dtype), sin[:seq_len].to(dtype=x.dtype) ``` to: @ArthurZucker @younesbelkada ### Expected behavior When using DynamicNTK for llama, cos and sin would be updated correctly.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27226/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27226/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27225
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27225/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27225/comments
https://api.github.com/repos/huggingface/transformers/issues/27225/events
https://github.com/huggingface/transformers/issues/27225
1,973,493,126
I_kwDOCUB6oc51oR2G
27,225
Transformers: Streamers: Output for yield generator or similar, not just stdoutput
{ "login": "gidzr", "id": 83053994, "node_id": "MDQ6VXNlcjgzMDUzOTk0", "avatar_url": "https://avatars.githubusercontent.com/u/83053994?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gidzr", "html_url": "https://github.com/gidzr", "followers_url": "https://api.github.com/users/gidzr/followers", "following_url": "https://api.github.com/users/gidzr/following{/other_user}", "gists_url": "https://api.github.com/users/gidzr/gists{/gist_id}", "starred_url": "https://api.github.com/users/gidzr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gidzr/subscriptions", "organizations_url": "https://api.github.com/users/gidzr/orgs", "repos_url": "https://api.github.com/users/gidzr/repos", "events_url": "https://api.github.com/users/gidzr/events{/privacy}", "received_events_url": "https://api.github.com/users/gidzr/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,698
1,698
1,698
NONE
null
### Feature request Currently streaming goes to stdoutput, which can't be captured and pushed as streaming function to return to another function I've been trying this type of approach to capture what's streamed to stdout and then stream the capture with a yield, but it's not working. Even though this approach works with a timer clock (1,2,3,4...) which does stream back to the parent function, the transformer textStreamer doesn't let me do this. > f = io.StringIO() > with contextlib.redirect_stdout(f): (https://github.com/huggingface/transformers/blob/main/src/transformers/generation/streamers.py, https://huggingface.co/docs/transformers/v4.34.1/en/internal/generation_utils#transformers.TextStreamer) Would it be possible to return the streamed output as a generated element, which could be yielded or printed to stdout? Much neater. ### Motivation Help with calling streaming transformers as a callback, and more easily deal with the output for other types of functions and manipulation. ### Your contribution I think I've found it!! https://huggingface.co/docs/transformers/internal/generation_utils#transformers.TextIteratorStreamer.example ``` from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer from threading import Thread tok = AutoTokenizer.from_pretrained("gpt2") model = AutoModelForCausalLM.from_pretrained("gpt2") inputs = tok(["An increasing sequence: one,"], return_tensors="pt") streamer = TextIteratorStreamer(tok) # Run the generation in a separate thread, so that we can fetch the generated text in a non-blocking way. generation_kwargs = dict(inputs, streamer=streamer, max_new_tokens=20) thread = Thread(target=model.generate, kwargs=generation_kwargs) thread.start() generated_text = "" for new_text in streamer: generated_text += new_text generated_text ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27225/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 2 }
https://api.github.com/repos/huggingface/transformers/issues/27225/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27224
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27224/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27224/comments
https://api.github.com/repos/huggingface/transformers/issues/27224/events
https://github.com/huggingface/transformers/issues/27224
1,973,313,468
I_kwDOCUB6oc51nl-8
27,224
Handle ground truth pixel values equal to 255 in MaskFormer
{ "login": "surajbijjahalli", "id": 9429149, "node_id": "MDQ6VXNlcjk0MjkxNDk=", "avatar_url": "https://avatars.githubusercontent.com/u/9429149?v=4", "gravatar_id": "", "url": "https://api.github.com/users/surajbijjahalli", "html_url": "https://github.com/surajbijjahalli", "followers_url": "https://api.github.com/users/surajbijjahalli/followers", "following_url": "https://api.github.com/users/surajbijjahalli/following{/other_user}", "gists_url": "https://api.github.com/users/surajbijjahalli/gists{/gist_id}", "starred_url": "https://api.github.com/users/surajbijjahalli/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/surajbijjahalli/subscriptions", "organizations_url": "https://api.github.com/users/surajbijjahalli/orgs", "repos_url": "https://api.github.com/users/surajbijjahalli/repos", "events_url": "https://api.github.com/users/surajbijjahalli/events{/privacy}", "received_events_url": "https://api.github.com/users/surajbijjahalli/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi,\r\n\r\nYes MaskFormer's image processor supports the [ignore_index](https://huggingface.co/docs/transformers/main/model_doc/maskformer#transformers.MaskFormerImageProcessor.ignore_index) argument, which indicates which value in the ground truth segmentation maps needs to be ignored. MaskFormer has a \"binary mask classification\" paradigm, which means that it will convert semantic segmentation maps (i.e. maps where every pixel is labeled with a certain class) to a set of binary masks (one for each class present in the ground truth segmentation map).\r\n\r\nSo in your case, if the ground truth images indicate with 0 = background, and 255 = class to be segmented, then I'd advise to instantiate `MaskFormerImageProcessor` with `ignore_index=0`. There's no need to set `reduce_labels` to `True`.\r\n\r\nBtw it would be cool to push the ISIC2018 dataset to the 🤗 hub, see here for more info on how to do that: https://github.com/huggingface/transformers/tree/main/examples/pytorch/semantic-segmentation#note-on-custom-data", "Hi Niels. Thanks so much for the advice. I have already pushed the ISIC dataset to the hub at `surajbijjahalli/ISIC2018`. I have been using your excellent tutorial on maskformer for semantic segmentation. I have successfully created dataloaders without any issues. \r\nHowever, I am now experiencing the following error when I try and do a forward pass of the batch through the model. I get the following error:\r\n```\r\nIndexError Traceback (most recent call last)\r\n\r\n[<ipython-input-23-281a4b0e28fc>](https://localhost:8080/#) in <cell line: 10>()\r\n 8 \r\n 9 # Sanity check the output of the untrained model\r\n---> 10 outputs = model(batch[\"pixel_values\"],class_labels=batch[\"class_labels\"],mask_labels=batch[\"mask_labels\"])\r\n 11 \r\n 12 \r\n\r\n10 frames\r\n\r\n[/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _wrapped_call_impl(self, *args, **kwargs)\r\n 1516 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]\r\n 1517 else:\r\n-> 1518 return self._call_impl(*args, **kwargs)\r\n 1519 \r\n 1520 def _call_impl(self, *args, **kwargs):\r\n\r\n[/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _call_impl(self, *args, **kwargs)\r\n 1525 or _global_backward_pre_hooks or _global_backward_hooks\r\n 1526 or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1527 return forward_call(*args, **kwargs)\r\n 1528 \r\n 1529 try:\r\n\r\n[/usr/local/lib/python3.10/dist-packages/transformers/models/maskformer/modeling_maskformer.py](https://localhost:8080/#) in forward(self, pixel_values, mask_labels, class_labels, pixel_mask, output_auxiliary_logits, output_hidden_states, output_attentions, return_dict)\r\n 1912 \r\n 1913 if mask_labels is not None and class_labels is not None:\r\n-> 1914 loss_dict: Dict[str, Tensor] = self.get_loss_dict(\r\n 1915 masks_queries_logits, class_queries_logits, mask_labels, class_labels, auxiliary_logits\r\n 1916 )\r\n\r\n[/usr/local/lib/python3.10/dist-packages/transformers/models/maskformer/modeling_maskformer.py](https://localhost:8080/#) in get_loss_dict(self, masks_queries_logits, class_queries_logits, mask_labels, class_labels, auxiliary_logits)\r\n 1734 auxiliary_logits: Dict[str, Tensor],\r\n 1735 ) -> Dict[str, Tensor]:\r\n-> 1736 loss_dict: Dict[str, Tensor] = self.criterion(\r\n 1737 masks_queries_logits, class_queries_logits, mask_labels, class_labels, auxiliary_logits\r\n 1738 )\r\n\r\n[/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _wrapped_call_impl(self, *args, **kwargs)\r\n 1516 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]\r\n 1517 else:\r\n-> 1518 return self._call_impl(*args, **kwargs)\r\n 1519 \r\n 1520 def _call_impl(self, *args, **kwargs):\r\n\r\n[/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _call_impl(self, *args, **kwargs)\r\n 1525 or _global_backward_pre_hooks or _global_backward_hooks\r\n 1526 or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1527 return forward_call(*args, **kwargs)\r\n 1528 \r\n 1529 try:\r\n\r\n[/usr/local/lib/python3.10/dist-packages/transformers/models/maskformer/modeling_maskformer.py](https://localhost:8080/#) in forward(self, masks_queries_logits, class_queries_logits, mask_labels, class_labels, auxiliary_predictions)\r\n 1170 \r\n 1171 # retrieve the matching between the outputs of the last layer and the labels\r\n-> 1172 indices = self.matcher(masks_queries_logits, class_queries_logits, mask_labels, class_labels)\r\n 1173 # compute the average number of target masks for normalization purposes\r\n 1174 num_masks: Number = self.get_num_masks(class_labels, device=class_labels[0].device)\r\n\r\n[/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _wrapped_call_impl(self, *args, **kwargs)\r\n 1516 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]\r\n 1517 else:\r\n-> 1518 return self._call_impl(*args, **kwargs)\r\n 1519 \r\n 1520 def _call_impl(self, *args, **kwargs):\r\n\r\n[/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _call_impl(self, *args, **kwargs)\r\n 1525 or _global_backward_pre_hooks or _global_backward_hooks\r\n 1526 or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1527 return forward_call(*args, **kwargs)\r\n 1528 \r\n 1529 try:\r\n\r\n[/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py](https://localhost:8080/#) in decorate_context(*args, **kwargs)\r\n 113 def decorate_context(*args, **kwargs):\r\n 114 with ctx_factory():\r\n--> 115 return func(*args, **kwargs)\r\n 116 \r\n 117 return decorate_context\r\n\r\n[/usr/local/lib/python3.10/dist-packages/transformers/models/maskformer/modeling_maskformer.py](https://localhost:8080/#) in forward(self, masks_queries_logits, class_queries_logits, mask_labels, class_labels)\r\n 952 # but approximate it in 1 - proba[target class].\r\n 953 # The 1 is a constant that doesn't change the matching, it can be ommitted.\r\n--> 954 cost_class = -pred_probs[:, labels]\r\n 955 # flatten spatial dimension \"q h w -> q (h w)\"\r\n 956 pred_mask_flat = pred_mask.flatten(1) # [num_queries, height*width]\r\n\r\nIndexError: index 255 is out of bounds for dimension 0 with size 3\r\n```\r\nThe same code works without this error when I try it on a different dataset with more classes e.g. `surajbijjahalli/semantic_seg_ATL` is a dataset of different coral species and I have successfully used your same tutorial to train a model on that dataset. However it does not have a pixel value of 255 in the ground truth segmentation maps, which is why i wondered whether a value of 255 is being handled differently, leading to my error. \r\nWould be grateful if you could point me in the right direction. ", "This seems to have to do with the class labels rather than the binary masks. Could you check what `model.config.num_labels` is? The class labels which you provide to the model must be between 0 and `model.config.num_labels - 1`.", "`model.config.num_labels` = 2 in my case. The class labels in the ground truth masks are 0 for the background and 255 for the lesion (the class to be segmented). The id2label file is {\"0\":\"BACKGROUND\",\"255\": \"LESION\"}.\r\n\r\nIf I understand correctly, does this mean that I would have to change the pixel values in the ground truth masks from 255 to 1 ? \r\n\r\nAs far as I can see, there would be two ways to do this:\r\n1. Set the mask pixel values to 1 locally i.e. mask[mask==255]=1 before pushing the dataset to hugging face. Change id2label to reflect this as well.\r\nOR\r\n2. Work with the existing dataset and change the mask pixel values after downloading the dataset from hugging face. This will probably mean iterating through each sample in the dataset, changing 255 to 1. \r\n\r\nThe first option will not require me to make changes to my code, but I will have to process the dataset differently before pushing to hub. The second option requires me to write routines for handling class label=255. Probably more work but more robust in the long term.\r\nIs there any simpler option I am missing ? \r\n\r\nThanks again!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,698
1,702
1,702
NONE
null
I am trying to use Maskformer for segmenting lesions on the ISIC2018 dataset (https://challenge.isic-archive.com/data/#2018). The training images are RGB and the training ground truth annotations are binary .png images where the pixel value of 0 is the background, and the pixel value of 255 denotes the lesion (i.e. the class to be segmented). I remember reading somewhere that pixel values of 255 are automatically ignored in MaskFormer. I am confused as to how to set up the id2label file and the parameters for the preprocessor. i.e. should I set reduce_labels to True and should ignore_index be set to 0 ?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27224/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27224/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27223
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27223/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27223/comments
https://api.github.com/repos/huggingface/transformers/issues/27223/events
https://github.com/huggingface/transformers/pull/27223
1,973,244,264
PR_kwDOCUB6oc5eYWK3
27,223
Updated albert.md doc for ALBERT model
{ "login": "ENate", "id": 6941995, "node_id": "MDQ6VXNlcjY5NDE5OTU=", "avatar_url": "https://avatars.githubusercontent.com/u/6941995?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ENate", "html_url": "https://github.com/ENate", "followers_url": "https://api.github.com/users/ENate/followers", "following_url": "https://api.github.com/users/ENate/following{/other_user}", "gists_url": "https://api.github.com/users/ENate/gists{/gist_id}", "starred_url": "https://api.github.com/users/ENate/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ENate/subscriptions", "organizations_url": "https://api.github.com/users/ENate/orgs", "repos_url": "https://api.github.com/users/ENate/repos", "events_url": "https://api.github.com/users/ENate/events{/privacy}", "received_events_url": "https://api.github.com/users/ENate/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27223). All of your documentation changes will be reflected on that endpoint.", "Hi @stevhliu. Can I close this PR?", "Thanks again for your contribution!", "No worries :) Looking forward to more.", "Hi @stevhliu. Do you think it may be great to modify the docs in such a way that we can have multiple tabs for examples in Jax, Tensorflow or Pytorch or this is not necessary? For example, each tab will consist of examples written in a particular framework?", "Hi @ENate, we currently already do that where appropriate with framework-specific code blocks such as the ones seen [here](https://huggingface.co/docs/transformers/quicktour#use-another-model-and-tokenizer-in-the-pipeline) :)", "Okay right :) I was thinking about tabs for each framework. Like on clicking on the tab, someone using the docs can easily see the examples related to either TensorFlow or PyTorch. But thanks " ]
1,698
1,704
1,700
CONTRIBUTOR
null
# What does this PR do? I updated the ```albert.md``` doc assigned to me by @stevhliu by adding the vital links and notes on how to use the ALBERT model. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. It was discussed on Github on the following link: ``` https://github.com/huggingface/transformers/issues/20055 ``` - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). Yes - [ ] Did you write any new necessary tests? No ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27223/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27223/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27223", "html_url": "https://github.com/huggingface/transformers/pull/27223", "diff_url": "https://github.com/huggingface/transformers/pull/27223.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27223.patch", "merged_at": 1700163877000 }
https://api.github.com/repos/huggingface/transformers/issues/27222
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27222/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27222/comments
https://api.github.com/repos/huggingface/transformers/issues/27222/events
https://github.com/huggingface/transformers/pull/27222
1,973,224,952
PR_kwDOCUB6oc5eYSEx
27,222
Fix tokenizer export for LLamaTokenizerFast
{ "login": "mayank31398", "id": 32954280, "node_id": "MDQ6VXNlcjMyOTU0Mjgw", "avatar_url": "https://avatars.githubusercontent.com/u/32954280?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mayank31398", "html_url": "https://github.com/mayank31398", "followers_url": "https://api.github.com/users/mayank31398/followers", "following_url": "https://api.github.com/users/mayank31398/following{/other_user}", "gists_url": "https://api.github.com/users/mayank31398/gists{/gist_id}", "starred_url": "https://api.github.com/users/mayank31398/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mayank31398/subscriptions", "organizations_url": "https://api.github.com/users/mayank31398/orgs", "repos_url": "https://api.github.com/users/mayank31398/repos", "events_url": "https://api.github.com/users/mayank31398/events{/privacy}", "received_events_url": "https://api.github.com/users/mayank31398/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@mayank31398 Could you add the same fix to [code llama's fast tokenizer](https://github.com/huggingface/transformers/blob/main/src/transformers/models/code_llama/tokenization_code_llama_fast.py)? ", "done @amyeroberts ", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27222). All of your documentation changes will be reflected on that endpoint.", "@amyeroberts can you merge this?\r\nthis is blocking us :)" ]
1,698
1,699
1,699
CONTRIBUTOR
null
# What does this PR do? Currently, LLamaTokenizerFast is not exporting `add_bos_token` and `add_eos_token`. This will fix this issue. Fixes # (issue) https://github.com/huggingface/transformers/issues/23833 https://github.com/huggingface/transformers/pull/23855#issuecomment-1789796762 this is not saving add_eos_token and add_bos_token. @sgugger @ArthurZucker Can you guys take a look at this? ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? - @ArthurZucker @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27222/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27222/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27222", "html_url": "https://github.com/huggingface/transformers/pull/27222", "diff_url": "https://github.com/huggingface/transformers/pull/27222.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27222.patch", "merged_at": 1699262778000 }
https://api.github.com/repos/huggingface/transformers/issues/27221
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27221/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27221/comments
https://api.github.com/repos/huggingface/transformers/issues/27221/events
https://github.com/huggingface/transformers/pull/27221
1,973,188,411
PR_kwDOCUB6oc5eYKL6
27,221
Add LLaVA model to transformers
{ "login": "mattmazzola", "id": 2856501, "node_id": "MDQ6VXNlcjI4NTY1MDE=", "avatar_url": "https://avatars.githubusercontent.com/u/2856501?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mattmazzola", "html_url": "https://github.com/mattmazzola", "followers_url": "https://api.github.com/users/mattmazzola/followers", "following_url": "https://api.github.com/users/mattmazzola/following{/other_user}", "gists_url": "https://api.github.com/users/mattmazzola/gists{/gist_id}", "starred_url": "https://api.github.com/users/mattmazzola/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mattmazzola/subscriptions", "organizations_url": "https://api.github.com/users/mattmazzola/orgs", "repos_url": "https://api.github.com/users/mattmazzola/repos", "events_url": "https://api.github.com/users/mattmazzola/events{/privacy}", "received_events_url": "https://api.github.com/users/mattmazzola/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @mattmazzola \r\nThank you very much for your great work! I think that there is already a contributor working on adding Llava in transformers: https://github.com/huggingface/transformers/pull/25789 - perhaps you can follow up with them and check the current status on that PR and see how you can collaborate? 🙏 " ]
1,698
1,701
1,701
NONE
null
# What does this PR do? Add LLaVA model to transformers There are 4 pretrained models - [liuhaotian/llava-v1.5-7b](https://huggingface.co/liuhaotian/llava-v1.5-7b) - [liuhaotian/llava-v1.5-13b](https://huggingface.co/liuhaotian/llava-v1.5-13b) - [liuhaotian/llava-v1.5-7b-lora](https://huggingface.co/liuhaotian/llava-v1.5-7b-lora) - [liuhaotian/llava-v1.5-13b-lora](https://huggingface.co/liuhaotian/llava-v1.5-13b-lora) Fixes # (issue) #22848 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27221/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27221/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27221", "html_url": "https://github.com/huggingface/transformers/pull/27221", "diff_url": "https://github.com/huggingface/transformers/pull/27221.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27221.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/27220
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27220/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27220/comments
https://api.github.com/repos/huggingface/transformers/issues/27220/events
https://github.com/huggingface/transformers/pull/27220
1,973,177,650
PR_kwDOCUB6oc5eYH8e
27,220
Fix switch transformer mixed precision issue
{ "login": "timlee0212", "id": 22514478, "node_id": "MDQ6VXNlcjIyNTE0NDc4", "avatar_url": "https://avatars.githubusercontent.com/u/22514478?v=4", "gravatar_id": "", "url": "https://api.github.com/users/timlee0212", "html_url": "https://github.com/timlee0212", "followers_url": "https://api.github.com/users/timlee0212/followers", "following_url": "https://api.github.com/users/timlee0212/following{/other_user}", "gists_url": "https://api.github.com/users/timlee0212/gists{/gist_id}", "starred_url": "https://api.github.com/users/timlee0212/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/timlee0212/subscriptions", "organizations_url": "https://api.github.com/users/timlee0212/orgs", "repos_url": "https://api.github.com/users/timlee0212/repos", "events_url": "https://api.github.com/users/timlee0212/events{/privacy}", "received_events_url": "https://api.github.com/users/timlee0212/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27220). All of your documentation changes will be reflected on that endpoint.", "Hmm per my understanding it seems we do not have CI tests for mixed precision training. \r\nI think that we could easily adapt the current existing `test_training` : https://github.com/huggingface/transformers/blob/af3de8d87c717c4bb090f037d0d89413c195a42f/tests/test_modeling_common.py#L581 test to also cover the mixed precision case (bf16 should work on our CI runners) - happy to work on that on a separate PR!", "@younesbelkada Yes please! ", "@younesbelkada I used Accelerate for mixed precision training. I think it should use torch.cuda.amp as the default backend." ]
1,698
1,699
1,699
CONTRIBUTOR
null
# What does this PR do? Fix the issue of dtype mismatch when using mixed precision training for switch transformer model. Fixes #27219 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] **[No Update Required]** Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] **[No New Test Required]** Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @younesbelkada
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27220/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27220/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27220", "html_url": "https://github.com/huggingface/transformers/pull/27220", "diff_url": "https://github.com/huggingface/transformers/pull/27220.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27220.patch", "merged_at": 1699020033000 }
https://api.github.com/repos/huggingface/transformers/issues/27219
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27219/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27219/comments
https://api.github.com/repos/huggingface/transformers/issues/27219/events
https://github.com/huggingface/transformers/issues/27219
1,973,137,970
I_kwDOCUB6oc51m7Iy
27,219
Wrong Data Type for Switch Transformer in Mixed Precision Training
{ "login": "timlee0212", "id": 22514478, "node_id": "MDQ6VXNlcjIyNTE0NDc4", "avatar_url": "https://avatars.githubusercontent.com/u/22514478?v=4", "gravatar_id": "", "url": "https://api.github.com/users/timlee0212", "html_url": "https://github.com/timlee0212", "followers_url": "https://api.github.com/users/timlee0212/followers", "following_url": "https://api.github.com/users/timlee0212/following{/other_user}", "gists_url": "https://api.github.com/users/timlee0212/gists{/gist_id}", "starred_url": "https://api.github.com/users/timlee0212/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/timlee0212/subscriptions", "organizations_url": "https://api.github.com/users/timlee0212/orgs", "repos_url": "https://api.github.com/users/timlee0212/repos", "events_url": "https://api.github.com/users/timlee0212/events{/privacy}", "received_events_url": "https://api.github.com/users/timlee0212/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @timlee0212 \r\nThanks for the issue and the fix proposal, I think the fix you suggested makes sense, would you like to submit a PR for the fix? Otherwise, happy to do it!", "Sure. I'm happy to submit a PR. \r\n\r\nThanks for your in-time reply!" ]
1,698
1,699
1,699
CONTRIBUTOR
null
### System Info - `transformers` version: 4.32.1 - Platform: Linux-5.15.0-75-generic-x86_64-with-glibc2.35 - Python version: 3.10.13 - Huggingface_hub version: 0.17.3 - Safetensors version: 0.4.0 - Accelerate version: 0.24.1 - Accelerate config: - compute_environment: LOCAL_MACHINE - distributed_type: MULTI_GPU - mixed_precision: bf16 - use_cpu: False - debug: False - num_processes: 2 - machine_rank: 0 - num_machines: 1 - gpu_ids: 0,1 - rdzv_backend: static - same_network: True - main_training_function: main - downcast_bf16: no - tpu_use_cluster: False - tpu_use_sudo: False - tpu_env: [] - PyTorch version (GPU?): 2.1.0 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Use DDP through accelerate and Trainer ### Who can help? @ArthurZucker @younesbelkada ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. Run any training script with mixed-precision enabled (I enabled BF16 through `accelerate config` here) ``` accelerate launch examples/pytorch/translation/run_translation.py --model_name_or_path google/switch-base-8 --do_train --do_eval --source_lang en --target_lang de --dataset_name wmt16 --dataset_config_name de-en --output_dir /tmp/tst-translation --per_device_train_batch_size=4 --per_device_eval_batch_size=4 --overwrite_output_dir --predict_with_generate ``` 2. Exception will happen, ``` ...(Call Stack omitted) File "***/miniconda3/envs/moe/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) File "***/miniconda3/envs/moe/lib/python3.10/site-packages/transformers/models/switch_transformers/modeling_switch_transformers.py", line 322, in forward next_states[token_indices] = expert(hidden_states[token_indices]) RuntimeError: Index put requires the source and destination dtypes match, got Float for the destination and BFloat16 for the source. ``` ### Expected behavior The training script should run without the dtype mismatch issue. A workaround I found is: In [src/transformers/models/switch_transformers/modeling_switch_transformers.py#L321](https://github.com/huggingface/transformers/blob/af3de8d87c717c4bb090f037d0d89413c195a42f/src/transformers/models/switch_transformers/modeling_switch_transformers.py#L321) Change from: ``` next_states[token_indices] = expert(hidden_states[token_indices]) ``` to: ``` next_states[token_indices] = expert(hidden_states[token_indices]).to(next_states.dtype) ``` The output of each expert is cast to BF16 (or FP16), but the hidden states are maintained in FP32. I think there should be one more step here to complete the conversion.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27219/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27219/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27218
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27218/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27218/comments
https://api.github.com/repos/huggingface/transformers/issues/27218/events
https://github.com/huggingface/transformers/issues/27218
1,973,049,451
I_kwDOCUB6oc51mlhr
27,218
Sentence_transformers tracing inference issue
{ "login": "yzGao22", "id": 104778707, "node_id": "U_kgDOBj7L0w", "avatar_url": "https://avatars.githubusercontent.com/u/104778707?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yzGao22", "html_url": "https://github.com/yzGao22", "followers_url": "https://api.github.com/users/yzGao22/followers", "following_url": "https://api.github.com/users/yzGao22/following{/other_user}", "gists_url": "https://api.github.com/users/yzGao22/gists{/gist_id}", "starred_url": "https://api.github.com/users/yzGao22/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yzGao22/subscriptions", "organizations_url": "https://api.github.com/users/yzGao22/orgs", "repos_url": "https://api.github.com/users/yzGao22/repos", "events_url": "https://api.github.com/users/yzGao22/events{/privacy}", "received_events_url": "https://api.github.com/users/yzGao22/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @yzGao22, thanks for raising this issue! \r\n\r\nFor the code examples, the only difference I see is how the inputs are passed to the model. In the first case, using `**` to unpack the dictionary means the inputs are passed in as keywords and is equivalent to: \r\n\r\n```\r\nori_out = embed_model(\r\n input_ids=input_ids, \r\n token_type_ids=token_type_ids, \r\n attention_mask=attention_mask\r\n)\r\n```\r\n\r\nIn the second case, the values of the dictionary are taken, cast as a tuple and passed using `*` to unpack, which is equivalent to:\r\n\r\n```\r\nori_out = embed_model(input_ids, token_type_ids, attention_mask)\r\n```\r\n\r\nIf you look at the forward signature for [BertModel](https://github.com/huggingface/transformers/blob/4557a0dede92ce985576fac478b754d76bba3c18/src/transformers/models/bert/modeling_bert.py#L910), you'll notice that the order of the input arguments is input_ids, attention_mask, token_type_ids, and so passing in as positional arguments using `* neuron_inputs` results in the wrong arrays being assigned to the forward arguments. \r\n\r\nYou can create the input tuple by simply doing:\r\n\r\n```py\r\nneuron_inputs =(embeddings['input_ids'], embeddings['attention_mask'], embeddings['token_type_ids'])\r\n```\r\n\r\ncc @philschmid as it seems the current code in transformers is out-of-sync with the linked blog :) ", "The problem solved. Thank you @amyeroberts " ]
1,698
1,698
1,698
NONE
null
### System Info - `transformers` version: 4.34.0 - Platform: Linux-5.15.0-1033-aws-x86_64-with-glibc2.31 - Python version: 3.9.18 - Huggingface_hub version: 0.17.3 - Safetensors version: 0.4.0 - Accelerate version: not installed - Accelerate config: not found - PyTorch version (GPU?): 1.13.1+cu117 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? @ArthurZucker and @younesbelkada ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction The issue occurs when I tried to neuron trace a sentence_transformers model. The model page: https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 The tracing guide: https://github.com/philschmid/sentence-transformers-huggingface-inferentia/blob/main/sagemaker-notebook.ipynb The inference code from these 2 pages are different which makes the traced sentence_transformers model output different sentence embedding, compared to the original model. Inference code from model page: ``` model_id = "sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2" tokenizer = AutoTokenizer.from_pretrained(model_id) embed_model = AutoModel.from_pretrained(model_id, torchscript=True) dummy_input = "dummy input which will be padded later" max_length = 512 embeddings = tokenizer(dummy_input, max_length=max_length, padding="max_length",return_tensors="pt") ori_out = embed_model(**embeddings) sentence_embeddings = mean_pooling(ori_out, embeddings['attention_mask']) print(sentence_embeddings[0].tolist()) ``` Inference code from the tracing guide: ``` model_id = "sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2" tokenizer = AutoTokenizer.from_pretrained(model_id) embed_model = AutoModel.from_pretrained(model_id, torchscript=True) dummy_input = "dummy input which will be padded later" max_length = 512 embeddings = tokenizer(dummy_input, max_length=max_length, padding="max_length",return_tensors="pt") neuron_inputs = tuple(embeddings.values()) ori_out = embed_model(*neuron_inputs)#for tuple sentence_embeddings = mean_pooling(ori_out, embeddings['attention_mask']) print(sentence_embeddings[0].tolist()) ``` These 2 codes above output 2 different sentence embedding for a same input sentence. ### Expected behavior The inference output for traced model should be same as the original model.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27218/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27218/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27217
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27217/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27217/comments
https://api.github.com/repos/huggingface/transformers/issues/27217/events
https://github.com/huggingface/transformers/issues/27217
1,972,893,074
I_kwDOCUB6oc51l_WS
27,217
T5Tokenizer from_pretrained Error
{ "login": "TristynAlxander", "id": 4155314, "node_id": "MDQ6VXNlcjQxNTUzMTQ=", "avatar_url": "https://avatars.githubusercontent.com/u/4155314?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TristynAlxander", "html_url": "https://github.com/TristynAlxander", "followers_url": "https://api.github.com/users/TristynAlxander/followers", "following_url": "https://api.github.com/users/TristynAlxander/following{/other_user}", "gists_url": "https://api.github.com/users/TristynAlxander/gists{/gist_id}", "starred_url": "https://api.github.com/users/TristynAlxander/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TristynAlxander/subscriptions", "organizations_url": "https://api.github.com/users/TristynAlxander/orgs", "repos_url": "https://api.github.com/users/TristynAlxander/repos", "events_url": "https://api.github.com/users/TristynAlxander/events{/privacy}", "received_events_url": "https://api.github.com/users/TristynAlxander/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @TristynAlxander, thanks for raising this issue! \r\n\r\nCould you provide us with a model path or tokenizer files we can use to replicate this error on our end? ", "cc @ArthurZucker ", "Greetings, \r\n\r\nThe above is the simplified version that reproduces the error. \r\n\r\nThe `/home/username/downloads/large_language_models/flan-t5-large` file contains the following:\r\n\r\n -rw-rw-r-- 1 user user 797 Nov 1 13:48 config.json\r\n -rw-rw-r-- 1 user user 3001197502 Nov 1 13:48 pytorch_model.bin\r\n -rw-rw-r-- 1 user user 2201 Nov 1 13:48 special_tokens_map.json\r\n -rw-rw-r-- 1 user user 20771 Nov 1 13:48 tokenizer_config.json\r\n -rw-rw-r-- 1 user user 2422192 Nov 1 13:48 tokenizer.json\r\n\r\nThose files were downloaded with the following code: \r\n\r\n model_path = \"/home/username/downloads/large_language_models/flan-t5-large\"\r\n huggingface_path = \"google/flan-t5-large\"\r\n device = \"cuda\" # for GPU usage or \"cpu\" for CPU usage\r\n is_dir = os.path.exists(model_path) and os.path.isdir(model_path)\r\n if( not is_dir ): \r\n from transformers import AutoTokenizer,AutoModel\r\n # Fetch\r\n tokenizer = AutoTokenizer.from_pretrained(huggingface_path)\r\n model = AutoModel.from_pretrained(huggingface_path).to(device)\r\n # Save \r\n tokenizer.save_pretrained(model_path)\r\n model.save_pretrained(model_path)\r\n\r\nIt didn't occur to me that this portion might have caused the problem, so I apologize if I've messed something up here.", "Hey! Seems like you did not download the full content of the repo. The [spiece.model](https://huggingface.co/google/flan-t5-large/blob/main/spiece.model) is missing, which you absolutely need to use the `T5Tokenizer` class. If you are using `AutoTokenizer` the fast files will be fetched, not the slow ones. ", "Unless you add `from_slow = True` in the call to from_pretrained 😉 ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,698
1,702
1,702
NONE
null
### System Info Operating System: Linux Mint 21.1 Kernel: Linux 6.2.0-36-generic pip list: Package Version ------------------ ------------ accelerate 0.24.1 certifi 2023.7.22 charset-normalizer 3.3.1 filelock 3.13.1 fsspec 2023.10.0 huggingface-hub 0.17.3 idna 3.4 Jinja2 3.1.2 MarkupSafe 2.1.2 mpmath 1.3.0 networkx 3.0 numpy 1.26.1 packaging 23.2 Pillow 9.3.0 pip 22.3.1 psutil 5.9.6 PyYAML 6.0.1 regex 2023.10.3 requests 2.31.0 safetensors 0.4.0 sentencepiece 0.1.99 setuptools 65.5.0 sympy 1.12 tokenizers 0.14.1 torch 2.1.0+cu118 torchaudio 2.1.0+cu118 torchvision 0.16.0+cu118 tqdm 4.66.1 transformers 4.34.1 triton 2.1.0 typing_extensions 4.8.0 urllib3 2.0.7 Note: Computer was restarted after sentencepiece was installed. No change. ### Who can help? _No response_ ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Code: model_path = "/home/username/downloads/large_language_models/flan-t5-large" from transformers import T5Tokenizer, T5ForConditionalGeneration tokenizer = T5Tokenizer.from_pretrained(model_path) ### Expected behavior Actual Behavior: Traceback (most recent call last): File "/home/username/downloads/large_language_models/temp.py", line 3, in <module> tokenizer = T5Tokenizer.from_pretrained(model_path) File "/opt/.pyenv_root/versions/LLM/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2017, in from_pretrained return cls._from_pretrained( File "/opt/.pyenv_root/versions/LLM/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2249, in _from_pretrained tokenizer = cls(*init_inputs, **init_kwargs) File "/opt/.pyenv_root/versions/LLM/lib/python3.10/site-packages/transformers/models/t5/tokenization_t5.py", line 166, in __init__ self.sp_model.Load(vocab_file) File "/opt/.pyenv_root/versions/LLM/lib/python3.10/site-packages/sentencepiece/__init__.py", line 905, in Load return self.LoadFromFile(model_file) File "/opt/.pyenv_root/versions/LLM/lib/python3.10/site-packages/sentencepiece/__init__.py", line 310, in LoadFromFile return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg) TypeError: not a string Expected Behavior: Return of T5Tokenizer. No Error.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27217/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27217/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27216
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27216/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27216/comments
https://api.github.com/repos/huggingface/transformers/issues/27216/events
https://github.com/huggingface/transformers/pull/27216
1,972,891,203
PR_kwDOCUB6oc5eXJOv
27,216
Remove redundant code from T5 encoder mask creation
{ "login": "pietrolesci", "id": 61748653, "node_id": "MDQ6VXNlcjYxNzQ4NjUz", "avatar_url": "https://avatars.githubusercontent.com/u/61748653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pietrolesci", "html_url": "https://github.com/pietrolesci", "followers_url": "https://api.github.com/users/pietrolesci/followers", "following_url": "https://api.github.com/users/pietrolesci/following{/other_user}", "gists_url": "https://api.github.com/users/pietrolesci/gists{/gist_id}", "starred_url": "https://api.github.com/users/pietrolesci/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pietrolesci/subscriptions", "organizations_url": "https://api.github.com/users/pietrolesci/orgs", "repos_url": "https://api.github.com/users/pietrolesci/repos", "events_url": "https://api.github.com/users/pietrolesci/events{/privacy}", "received_events_url": "https://api.github.com/users/pietrolesci/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Done :)", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27216). All of your documentation changes will be reflected on that endpoint." ]
1,698
1,698
1,698
CONTRIBUTOR
null
# What does this PR do? Removes redundant code in the creation of the encoder attention mask for T5 as discussed in https://github.com/huggingface/transformers/issues/27211. @amyeroberts
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27216/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27216/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27216", "html_url": "https://github.com/huggingface/transformers/pull/27216", "diff_url": "https://github.com/huggingface/transformers/pull/27216.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27216.patch", "merged_at": 1698940901000 }
https://api.github.com/repos/huggingface/transformers/issues/27215
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27215/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27215/comments
https://api.github.com/repos/huggingface/transformers/issues/27215/events
https://github.com/huggingface/transformers/pull/27215
1,972,757,566
PR_kwDOCUB6oc5eWsE8
27,215
translate peft.md to chinese
{ "login": "jiaqiw09", "id": 60021713, "node_id": "MDQ6VXNlcjYwMDIxNzEz", "avatar_url": "https://avatars.githubusercontent.com/u/60021713?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jiaqiw09", "html_url": "https://github.com/jiaqiw09", "followers_url": "https://api.github.com/users/jiaqiw09/followers", "following_url": "https://api.github.com/users/jiaqiw09/following{/other_user}", "gists_url": "https://api.github.com/users/jiaqiw09/gists{/gist_id}", "starred_url": "https://api.github.com/users/jiaqiw09/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jiaqiw09/subscriptions", "organizations_url": "https://api.github.com/users/jiaqiw09/orgs", "repos_url": "https://api.github.com/users/jiaqiw09/repos", "events_url": "https://api.github.com/users/jiaqiw09/events{/privacy}", "received_events_url": "https://api.github.com/users/jiaqiw09/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@stevhliu \r\n\r\nhi, here is another pr for peft.md translation.\r\n\r\nAnd I just check the _toctree.yml file, and I think I will first finish section `Tutorials` and `Developer guides`. I think it may be helpful for people who want to try `transformers`. Then I will continue to work on anther sections. When we finsihed all translation work with other contributors, I think it's better to look back and update some link info, small typos or wrong translation words.\r\n\r\nJust try to do some work by the end of this year. Many people is using transformers as it's the trend for DL work, it's better to give more detailed docs with variety language. And I think more people will do contribution when they can get enough help from amounts of translated docs.\r\n\r\nBest", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27215). All of your documentation changes will be reflected on that endpoint.", "@stevhliu \r\n\r\nHi, I have fixed problems.\r\n\r\nAnd for `<br>`, I just think it is too compact to read easily. But I still delete this line.\r\n\r\nBest" ]
1,698
1,699
1,698
CONTRIBUTOR
null
# What does this PR do? Part of #26803 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? _not necessary_ ## Who can review? @stevhliu
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27215/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27215/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27215", "html_url": "https://github.com/huggingface/transformers/pull/27215", "diff_url": "https://github.com/huggingface/transformers/pull/27215.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27215.patch", "merged_at": 1698946949000 }
https://api.github.com/repos/huggingface/transformers/issues/27214
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27214/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27214/comments
https://api.github.com/repos/huggingface/transformers/issues/27214/events
https://github.com/huggingface/transformers/issues/27214
1,972,692,136
I_kwDOCUB6oc51lOSo
27,214
None of PyTorch, TensorFlow >= 2.0, or Flax have been found
{ "login": "Yoloex", "id": 104444196, "node_id": "U_kgDOBjmxJA", "avatar_url": "https://avatars.githubusercontent.com/u/104444196?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Yoloex", "html_url": "https://github.com/Yoloex", "followers_url": "https://api.github.com/users/Yoloex/followers", "following_url": "https://api.github.com/users/Yoloex/following{/other_user}", "gists_url": "https://api.github.com/users/Yoloex/gists{/gist_id}", "starred_url": "https://api.github.com/users/Yoloex/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Yoloex/subscriptions", "organizations_url": "https://api.github.com/users/Yoloex/orgs", "repos_url": "https://api.github.com/users/Yoloex/repos", "events_url": "https://api.github.com/users/Yoloex/events{/privacy}", "received_events_url": "https://api.github.com/users/Yoloex/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @Yoloex, thanks for raising this issue. \r\n\r\nIt looks that pytorch can't be found in your environment, so this likely isn't a transformers issue. \r\n\r\nIf you try to import torch in your python session does it work? e.g.:\r\n```py\r\nimport torch\r\nprint(torch.__version__)\r\n```\r\nor `python -c \"import torch; print(torch.__version__)\"`\r\n\r\n?\r\n", "Hi @amyeroberts,\r\nThanks for taking a look at my issue.\r\n`torch` installed correctly using `pip` and its version is 2.1.0.", "@Yoloex Interesting - from the installation steps in the issue, it seems that the installed version of torch should be 1.13.1. \r\n\r\nAnd what happens if you do this: \r\n\r\n```\r\nimport torch\r\nfrom transformers import is_torch_available\r\n\r\nprint(torch.__version__)\r\nprint(is_torch_available())\r\n```\r\n\r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Hey @amyeroberts, is there a way to just disable this warning? I am using `transformers` solely for the tokenisers and have no need for Tensorflow or Torch. I get several questions by others on this warning on a few of my services, and repeating myself is starting to get old.", "Hi @winstxnhdw, \r\n\r\nOne option is to update the warning to just be `logger.warning_once` which would mean you would only see it once per python session. Would you like to open a PR to make this change? This way you get the github contribution", "> Hi @winstxnhdw, \n> \n> \n> \n> One option is to update the warning to just be `logger.warning_once` which would mean you would only see it once per python session. Would you like to open a PR to make this change? This way you get the github contribution\n\nThanks for taking the time to reply but rather just logging it once, I'd like the option to disable it entirely. Ideally, this warning should only be printed for modules that rely on these dependencies." ]
1,698
1,708
1,702
NONE
null
### System Info - `transformers` version: 4.31.0 - Platform: Windows-10-10.0.19045-SP0 - Python version: 3.10.0 - Huggingface_hub version: 0.18.0 - Safetensors version: 0.4.0 - Accelerate version: not installed - Accelerate config: not found - PyTorch version (GPU?): not installed (NA) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? @Narsil ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction On Win 10 1. conda create -n NAME python==3.10 2. conda activate NAME 3. pip install transformers[torch] 4. pip install torch-1.13.1+cu117-cp310-cp310-win_amd64.whl 5. Run following script ``` from transformers import pipeline captioner = pipeline("image-to-text",model="‎Microsoft/trocr-large-printed") ``` This will show ``` At least one of TensorFlow 2.0 or PyTorch should be installed. To install TensorFlow 2.0, read the instructions at https://www.tensorflow.org/install/ To install PyTorch, read the instructions at https://pytorch.org/. ``` ### Expected behavior As I installed pytorch, this shouldn't show any errors.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27214/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27214/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27213
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27213/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27213/comments
https://api.github.com/repos/huggingface/transformers/issues/27213/events
https://github.com/huggingface/transformers/pull/27213
1,972,671,638
PR_kwDOCUB6oc5eWZZ9
27,213
[docs] Custom model doc update
{ "login": "MKhalusova", "id": 1065417, "node_id": "MDQ6VXNlcjEwNjU0MTc=", "avatar_url": "https://avatars.githubusercontent.com/u/1065417?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MKhalusova", "html_url": "https://github.com/MKhalusova", "followers_url": "https://api.github.com/users/MKhalusova/followers", "following_url": "https://api.github.com/users/MKhalusova/following{/other_user}", "gists_url": "https://api.github.com/users/MKhalusova/gists{/gist_id}", "starred_url": "https://api.github.com/users/MKhalusova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MKhalusova/subscriptions", "organizations_url": "https://api.github.com/users/MKhalusova/orgs", "repos_url": "https://api.github.com/users/MKhalusova/repos", "events_url": "https://api.github.com/users/MKhalusova/events{/privacy}", "received_events_url": "https://api.github.com/users/MKhalusova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,698
1,699
1,699
CONTRIBUTOR
null
This PR adds a documentation update to address the feedback left [here](https://huggingface.co/jinaai/jina-embeddings-v2-base-en/discussions/5#65380596ffe3e0513108cd3a), quote: ``` The clarity of the transformers documentation on custom models could be improved by describing the syntax of auto_map in config.json. We never used the register functions and would just directly modify the config.json. The behaviour that allowed one to use code on the Hub from another repo using "--" doesn't seem to be documented anywhere but we figured out that it was possible because save_pretrained saves in this format when using a custom model. The feature does seem to be pretty new though (I believe ~6 months ago?) so maybe that is why it hasn't been too well documented yet. But we think that if it was better communicated to users that it was possible to do this, more people would develop on the Hub as we did. ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27213/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27213/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27213", "html_url": "https://github.com/huggingface/transformers/pull/27213", "diff_url": "https://github.com/huggingface/transformers/pull/27213.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27213.patch", "merged_at": 1699012994000 }
https://api.github.com/repos/huggingface/transformers/issues/27212
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27212/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27212/comments
https://api.github.com/repos/huggingface/transformers/issues/27212/events
https://github.com/huggingface/transformers/issues/27212
1,972,592,972
I_kwDOCUB6oc51k2FM
27,212
LlamaForCausalLM at fp16 w/ FlashAttention gives NAN loss
{ "login": "as3eem", "id": 25168245, "node_id": "MDQ6VXNlcjI1MTY4MjQ1", "avatar_url": "https://avatars.githubusercontent.com/u/25168245?v=4", "gravatar_id": "", "url": "https://api.github.com/users/as3eem", "html_url": "https://github.com/as3eem", "followers_url": "https://api.github.com/users/as3eem/followers", "following_url": "https://api.github.com/users/as3eem/following{/other_user}", "gists_url": "https://api.github.com/users/as3eem/gists{/gist_id}", "starred_url": "https://api.github.com/users/as3eem/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/as3eem/subscriptions", "organizations_url": "https://api.github.com/users/as3eem/orgs", "repos_url": "https://api.github.com/users/as3eem/repos", "events_url": "https://api.github.com/users/as3eem/events{/privacy}", "received_events_url": "https://api.github.com/users/as3eem/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @as3eem \r\nI think that pure fp16 training should be avoided, for training in half-precision you should either perform pure bf16 fine-tuning or use automatic mixed precision.", "Also if you are using padding, some of the nan could be fixed by #27114", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "I am pretty sure this is now fixed by https://github.com/huggingface/transformers/pull/28142 as @pacman100 managed to make it work !\r\nYou need to load the model in full-precision and train the model with `fp16=True` (i.e. with autocast), make sure to use transformers main!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,698
1,706
1,706
NONE
null
### System Info - `transformers` version: 4.34.1 - Platform: Linux-5.4.0-153-generic-x86_64-with-glibc2.31 - Python version: 3.11.5 - Huggingface_hub version: 0.17.3 - Safetensors version: 0.3.2 - Accelerate version: 0.24.1 - Accelerate config: not found - PyTorch version (GPU?): 2.1.0+cu121 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? cc: @SunMarc @ArthurZucker ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` tokenizerCheckpoint = LlamaTokenizer.from_pretrained(MODEL) modelCheckpoint = LlamaForCausalLM.from_pretrained(MODEL, use_flash_attention_2=True, torch_dtype=torch.float16) optimizer = optim.AdamW(model.parameters(), lr=config.LEARNING_RATE) device = config.DEVICE model.to(device) clip_value = 1 # tried other values as well # print(model.parameters()) model.train() for epoch in range(config.NUM_EPOCHS): total_loss = 0 for batch_idx, batch in enumerate(train_dataloader): optimizer.zero_grad() input_ids, attention_mask, labels = batch input_ids, attention_mask, labels = input_ids.to(device), attention_mask.to(device), labels.to(device) outputs = model(input_ids=input_ids, attention_mask=attention_mask, labels=labels) loss = outputs.loss loss.backward() torch.nn.utils.clip_grad_norm_(model.parameters(), clip_value) optimizer.step() total_loss += loss.item() average_loss = total_loss / len(train_dataloader) print(f"Epoch {epoch + 1}/{config.NUM_EPOCHS}, Average Training Loss: {average_loss}") ``` **Constraint:** The task was supposed to be executed in a very vanilla way without using a PEFT wrapper or trainer class so as to customize some parts in the future. **After many surveys:** [Due to fp16 data type, gradients receive value equivalent to -/+ inf and hence nan logits as well as loss] **What didn't work?** torch.cuda.amp.GradScaler() gradient clamping reduced learning rate ### Expected behavior receive non-nan values in logits.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27212/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27212/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27211
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27211/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27211/comments
https://api.github.com/repos/huggingface/transformers/issues/27211/events
https://github.com/huggingface/transformers/issues/27211
1,972,521,332
I_kwDOCUB6oc51kkl0
27,211
T5 automatic creation of the `encoder_attention_mask`
{ "login": "pietrolesci", "id": 61748653, "node_id": "MDQ6VXNlcjYxNzQ4NjUz", "avatar_url": "https://avatars.githubusercontent.com/u/61748653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pietrolesci", "html_url": "https://github.com/pietrolesci", "followers_url": "https://api.github.com/users/pietrolesci/followers", "following_url": "https://api.github.com/users/pietrolesci/following{/other_user}", "gists_url": "https://api.github.com/users/pietrolesci/gists{/gist_id}", "starred_url": "https://api.github.com/users/pietrolesci/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pietrolesci/subscriptions", "organizations_url": "https://api.github.com/users/pietrolesci/orgs", "repos_url": "https://api.github.com/users/pietrolesci/repos", "events_url": "https://api.github.com/users/pietrolesci/events{/privacy}", "received_events_url": "https://api.github.com/users/pietrolesci/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @pietrolesci, thanks for raising this issue.\r\n\r\nIndeed, it seems the second mask creation is redundant. It was probably just a copy-pasta that got added along the way. Would you like to open a PR to fix it? This way you get the github contribution for having eagle eyes 🦅 👀 ", "Hi @amyeroberts, thanks for your swift reply and for confirming. Yes, I will open a PR later today :)", "Hi @amyeroberts, PR here: https://github.com/huggingface/transformers/pull/27216 :)", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,698
1,702
1,702
CONTRIBUTOR
null
Hi there, I am going through the T5 implementation to try to replicate it and noticed that the two code blocks reported below do the same thing when creating the `encoder_attention_mask`. Is this wanted? https://github.com/huggingface/transformers/blob/037fb7d0e1086146612a716ef914305160134e9c/src/transformers/models/t5/modeling_t5.py#L1029-L1033 https://github.com/huggingface/transformers/blob/037fb7d0e1086146612a716ef914305160134e9c/src/transformers/models/t5/modeling_t5.py#L1045-L1049
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27211/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27211/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27210
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27210/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27210/comments
https://api.github.com/repos/huggingface/transformers/issues/27210/events
https://github.com/huggingface/transformers/issues/27210
1,972,490,508
I_kwDOCUB6oc51kdEM
27,210
RWKV : support attention_mask
{ "login": "pfeatherstone", "id": 45853521, "node_id": "MDQ6VXNlcjQ1ODUzNTIx", "avatar_url": "https://avatars.githubusercontent.com/u/45853521?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pfeatherstone", "html_url": "https://github.com/pfeatherstone", "followers_url": "https://api.github.com/users/pfeatherstone/followers", "following_url": "https://api.github.com/users/pfeatherstone/following{/other_user}", "gists_url": "https://api.github.com/users/pfeatherstone/gists{/gist_id}", "starred_url": "https://api.github.com/users/pfeatherstone/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pfeatherstone/subscriptions", "organizations_url": "https://api.github.com/users/pfeatherstone/orgs", "repos_url": "https://api.github.com/users/pfeatherstone/repos", "events_url": "https://api.github.com/users/pfeatherstone/events{/privacy}", "received_events_url": "https://api.github.com/users/pfeatherstone/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Or is it that you don't need `attention_mask` because RWKV is causal by definition? So you can just ignore padded tokens?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "cc @younesbelkada if you remember about this! 🤗 ", "RWKV does not use attention so no attention mask 😉 " ]
1,698
1,702
1,702
NONE
null
### Feature request At the moment, `attention_mask` in `RwkvModel` is ignored. It would be great if this were supported. ### Motivation I want to be able to train on a padded batch. ### Your contribution RWKV is a bit alien to me
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27210/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27210/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27209
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27209/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27209/comments
https://api.github.com/repos/huggingface/transformers/issues/27209/events
https://github.com/huggingface/transformers/pull/27209
1,972,484,369
PR_kwDOCUB6oc5eVwXZ
27,209
chore: add py.typed in compliance with PEP 561
{ "login": "niqodea", "id": 112023843, "node_id": "U_kgDOBq1ZIw", "avatar_url": "https://avatars.githubusercontent.com/u/112023843?v=4", "gravatar_id": "", "url": "https://api.github.com/users/niqodea", "html_url": "https://github.com/niqodea", "followers_url": "https://api.github.com/users/niqodea/followers", "following_url": "https://api.github.com/users/niqodea/following{/other_user}", "gists_url": "https://api.github.com/users/niqodea/gists{/gist_id}", "starred_url": "https://api.github.com/users/niqodea/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/niqodea/subscriptions", "organizations_url": "https://api.github.com/users/niqodea/orgs", "repos_url": "https://api.github.com/users/niqodea/repos", "events_url": "https://api.github.com/users/niqodea/events{/privacy}", "received_events_url": "https://api.github.com/users/niqodea/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @niqodea, thanks for opening this PR! \r\n\r\nTransformers doesn't officially support type checking in the library. Type annotations are used as documentation but we don't guarantee that they are consistent and checked across the repo. See: #18485 ", "Ah, just noticed that there has already been an attempt to integrate py.typed previously ([link](https://github.com/huggingface/transformers/pull/18485)). However, I currently don't see much complaining in my project as of now. Maybe we can try integrating it again? Feel free to close this otherwise.", "@amyeroberts yep, makes sense. Thanks for your review!" ]
1,698
1,698
1,698
NONE
null
# What does this PR do? Analogously to what has been done in [this PR](https://github.com/huggingface/diffusers/pull/5326) for diffusers, do the same for transformers. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27209/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27209/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27209", "html_url": "https://github.com/huggingface/transformers/pull/27209", "diff_url": "https://github.com/huggingface/transformers/pull/27209.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27209.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/27208
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27208/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27208/comments
https://api.github.com/repos/huggingface/transformers/issues/27208/events
https://github.com/huggingface/transformers/pull/27208
1,972,437,843
PR_kwDOCUB6oc5eVmO7
27,208
Reproducible checkpoint for npu
{ "login": "statelesshz", "id": 28150734, "node_id": "MDQ6VXNlcjI4MTUwNzM0", "avatar_url": "https://avatars.githubusercontent.com/u/28150734?v=4", "gravatar_id": "", "url": "https://api.github.com/users/statelesshz", "html_url": "https://github.com/statelesshz", "followers_url": "https://api.github.com/users/statelesshz/followers", "following_url": "https://api.github.com/users/statelesshz/following{/other_user}", "gists_url": "https://api.github.com/users/statelesshz/gists{/gist_id}", "starred_url": "https://api.github.com/users/statelesshz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/statelesshz/subscriptions", "organizations_url": "https://api.github.com/users/statelesshz/orgs", "repos_url": "https://api.github.com/users/statelesshz/repos", "events_url": "https://api.github.com/users/statelesshz/events{/privacy}", "received_events_url": "https://api.github.com/users/statelesshz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27208). All of your documentation changes will be reflected on that endpoint.", "@amyeroberts Good day, this PR is ready to be merged :-)" ]
1,698
1,698
1,698
CONTRIBUTOR
null
# What does this PR do? Save npu's RNG states when saving a checkpoint and set after all the data skip phase when resuming training. Links to https://github.com/huggingface/transformers/pull/11582 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 --> cc @muellerzr
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27208/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27208/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27208", "html_url": "https://github.com/huggingface/transformers/pull/27208", "diff_url": "https://github.com/huggingface/transformers/pull/27208.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27208.patch", "merged_at": 1698920833000 }
https://api.github.com/repos/huggingface/transformers/issues/27207
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27207/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27207/comments
https://api.github.com/repos/huggingface/transformers/issues/27207/events
https://github.com/huggingface/transformers/pull/27207
1,972,383,165
PR_kwDOCUB6oc5eVaUV
27,207
fix docstring in get_oneformer_resize_output_image_size func
{ "login": "wesleylp", "id": 33898112, "node_id": "MDQ6VXNlcjMzODk4MTEy", "avatar_url": "https://avatars.githubusercontent.com/u/33898112?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wesleylp", "html_url": "https://github.com/wesleylp", "followers_url": "https://api.github.com/users/wesleylp/followers", "following_url": "https://api.github.com/users/wesleylp/following{/other_user}", "gists_url": "https://api.github.com/users/wesleylp/gists{/gist_id}", "starred_url": "https://api.github.com/users/wesleylp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wesleylp/subscriptions", "organizations_url": "https://api.github.com/users/wesleylp/orgs", "repos_url": "https://api.github.com/users/wesleylp/repos", "events_url": "https://api.github.com/users/wesleylp/events{/privacy}", "received_events_url": "https://api.github.com/users/wesleylp/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27207). All of your documentation changes will be reflected on that endpoint." ]
1,698
1,698
1,698
CONTRIBUTOR
null
# What does this PR do? fix docstring in get_oneformer_resize_output_image_size func ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @amyeroberts
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27207/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27207/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27207", "html_url": "https://github.com/huggingface/transformers/pull/27207", "diff_url": "https://github.com/huggingface/transformers/pull/27207.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27207.patch", "merged_at": 1698852674000 }
https://api.github.com/repos/huggingface/transformers/issues/27206
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27206/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27206/comments
https://api.github.com/repos/huggingface/transformers/issues/27206/events
https://github.com/huggingface/transformers/issues/27206
1,972,257,582
I_kwDOCUB6oc51jkMu
27,206
get_peft_model with PEFT Finetuning for Tensorflow fails
{ "login": "SrikanthChellappa", "id": 37934673, "node_id": "MDQ6VXNlcjM3OTM0Njcz", "avatar_url": "https://avatars.githubusercontent.com/u/37934673?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SrikanthChellappa", "html_url": "https://github.com/SrikanthChellappa", "followers_url": "https://api.github.com/users/SrikanthChellappa/followers", "following_url": "https://api.github.com/users/SrikanthChellappa/following{/other_user}", "gists_url": "https://api.github.com/users/SrikanthChellappa/gists{/gist_id}", "starred_url": "https://api.github.com/users/SrikanthChellappa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SrikanthChellappa/subscriptions", "organizations_url": "https://api.github.com/users/SrikanthChellappa/orgs", "repos_url": "https://api.github.com/users/SrikanthChellappa/repos", "events_url": "https://api.github.com/users/SrikanthChellappa/events{/privacy}", "received_events_url": "https://api.github.com/users/SrikanthChellappa/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @SrikanthChellappa, thanks for raising this issue! \r\n\r\nI believe that PEFT is currently only supported for PyTorch models, but @younesbelkada can confirm", "Yes I second what @amyeroberts said, you can't use PEFT + Tensorfllow, PEFT only supports PyTorch", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "> Yes I second what @amyeroberts said, you can't use PEFT + Tensorfllow, PEFT only supports PyTorch\n\ncan we expect tf support sooner?" ]
1,698
1,706
1,701
NONE
null
### System Info Transformers Version: 4.34.1 Python: 3.11.3 ### Who can help? @Rocketknight1 @Gante ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction My code model_name='google/flan-t5-base' original_model = TFAutoModelForSeq2SeqLM.from_pretrained(model_name, dtype=tf.bfloat16) tokenizer = AutoTokenizer.from_pretrained(model_name) from peft import LoraConfig, get_peft_model, TaskType lora_config = LoraConfig( r=32, # Rank lora_alpha=32, target_modules=["q", "v"], lora_dropout=0.05, bias="none", task_type=TaskType.SEQ_2_SEQ_LM # FLAN-T5 ) peft_model = get_peft_model(original_model, lora_config) print(print_number_of_trainable_model_parameters(peft_model)) Error Stack --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) Cell In[8], line 1 ----> 1 peft_model = get_peft_model(original_model, lora_config) 2 #print(print_number_of_trainable_model_parameters(peft_model)) File C:\ProgramData\anaconda3\envs\tfenv\Lib\site-packages\peft\mapping.py:106, in get_peft_model(model, peft_config, adapter_name) 104 if peft_config.is_prompt_learning: 105 peft_config = _prepare_prompt_learning_config(peft_config, model_config) --> 106 return MODEL_TYPE_TO_PEFT_MODEL_MAPPING[peft_config.task_type](model, peft_config, adapter_name=adapter_name) File C:\ProgramData\anaconda3\envs\tfenv\Lib\site-packages\peft\peft_model.py:1056, in PeftModelForSeq2SeqLM.__init__(self, model, peft_config, adapter_name) 1055 def __init__(self, model, peft_config: PeftConfig, adapter_name="default"): -> 1056 super().__init__(model, peft_config, adapter_name) 1057 self.base_model_prepare_inputs_for_generation = self.base_model.prepare_inputs_for_generation 1058 self.base_model_prepare_encoder_decoder_kwargs_for_generation = ( 1059 self.base_model._prepare_encoder_decoder_kwargs_for_generation 1060 ) File C:\ProgramData\anaconda3\envs\tfenv\Lib\site-packages\peft\peft_model.py:111, in PeftModel.__init__(self, model, peft_config, adapter_name) 109 if not peft_config.is_prompt_learning: 110 self.peft_config[adapter_name] = peft_config --> 111 self.base_model = PEFT_TYPE_TO_MODEL_MAPPING[peft_config.peft_type]( 112 self.base_model, self.peft_config, adapter_name 113 ) 114 self.set_additional_trainable_modules(peft_config, adapter_name) 115 else: File C:\ProgramData\anaconda3\envs\tfenv\Lib\site-packages\peft\tuners\lora.py:274, in LoraModel.__init__(self, model, config, adapter_name) 273 def __init__(self, model, config, adapter_name) -> None: --> 274 super().__init__(model, config, adapter_name) File C:\ProgramData\anaconda3\envs\tfenv\Lib\site-packages\peft\tuners\tuners_utils.py:88, in BaseTuner.__init__(self, model, peft_config, adapter_name) 85 if not hasattr(self, "config"): 86 self.config = {"model_type": "custom"} ---> 88 self.inject_adapter(self.model, adapter_name) 90 # Copy the peft_config in the injected model. 91 self.model.peft_config = self.peft_config File C:\ProgramData\anaconda3\envs\tfenv\Lib\site-packages\peft\tuners\tuners_utils.py:199, in BaseTuner.inject_adapter(self, model, adapter_name) 196 self._check_new_adapter_config(peft_config) 198 is_target_modules_in_base_model = False --> 199 key_list = [key for key, _ in model.named_modules()] 201 model_config = getattr(model, "config", {"model_type": "custom"}) 202 if hasattr(model_config, "to_dict"): AttributeError: 'TFT5ForConditionalGeneration' object has no attribute 'named_modules' ### Expected behavior A peft model should be created
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27206/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27206/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27205
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27205/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27205/comments
https://api.github.com/repos/huggingface/transformers/issues/27205/events
https://github.com/huggingface/transformers/issues/27205
1,972,238,631
I_kwDOCUB6oc51jfkn
27,205
processor = Owlv2Processor.from_pretrained("google/owlv2-base-patch16-ensemble") may have some problem
{ "login": "syyxtl", "id": 66545456, "node_id": "MDQ6VXNlcjY2NTQ1NDU2", "avatar_url": "https://avatars.githubusercontent.com/u/66545456?v=4", "gravatar_id": "", "url": "https://api.github.com/users/syyxtl", "html_url": "https://github.com/syyxtl", "followers_url": "https://api.github.com/users/syyxtl/followers", "following_url": "https://api.github.com/users/syyxtl/following{/other_user}", "gists_url": "https://api.github.com/users/syyxtl/gists{/gist_id}", "starred_url": "https://api.github.com/users/syyxtl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/syyxtl/subscriptions", "organizations_url": "https://api.github.com/users/syyxtl/orgs", "repos_url": "https://api.github.com/users/syyxtl/repos", "events_url": "https://api.github.com/users/syyxtl/events{/privacy}", "received_events_url": "https://api.github.com/users/syyxtl/received_events", "type": "User", "site_admin": false }
[ { "id": 1990918270, "node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw", "url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue", "name": "Good First Issue", "color": "bbf794", "default": false, "description": "" }, { "id": 3551105283, "node_id": "LA_kwDOCUB6oc7TqZED", "url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Documentation%20Issue", "name": "Good First Documentation Issue", "color": "AB0BA8", "default": false, "description": "" } ]
closed
false
null
[]
[ "cc @NielsRogge ", "my ori image size=[1080, 1920]\r\nafter preprocess size=[960, 960]\r\n\r\nexp1:\r\ntarget_sizes = torch.Tensor([[1080, 1920]])\r\nresults = processor.post_process_object_detection(outputs=outputs, target_sizes=target_sizes, threshold=0.2)\r\n![image](https://github.com/huggingface/transformers/assets/66545456/ace8fb36-e49a-4a7f-b3cc-cf86a1d74f6c)\r\n\r\nexp2:\r\ntarget_sizes = torch.Tensor([[1920, 1920]])\r\nresults = processor.post_process_object_detection(outputs=outputs, target_sizes=target_sizes, threshold=0.2)\r\n![image](https://github.com/huggingface/transformers/assets/66545456/71d5b6dd-ba54-42e0-aaf4-e089d49e65a0)\r\n", "Hi,\r\n\r\nDo you have a fully reproducible script? Note that the target sizes should be (height, width) rather than (width, height).", "> Hi,\r\n> \r\n> Do you have a fully reproducible script? Note that the target sizes should be (height, width) rather than (width, height).\r\n\r\n@NielsRogge \r\n I use the official script:\r\nhttps://huggingface.co/docs/transformers/main/model_doc/owlv2\r\n![image](https://github.com/huggingface/transformers/assets/66545456/aa89d62b-e461-4560-93f1-dcc337922dc3)\r\n\r\nIn the above example, I may have written the order backwards", "> Hi,\r\n> \r\n> Do you have a fully reproducible script? Note that the target sizes should be (height, width) rather than (width, height).\r\n\r\n@NielsRogge \r\ni may change the script below\r\n![image](https://github.com/huggingface/transformers/assets/66545456/7c9a6b3b-0f0a-4247-bfb5-0f9453fb71d9)\r\n", "Hi, noticed the same issue and looked into the cause:\r\n\r\nThe problem is coming from padding the image in the preprocessor. After the padding the model returns the coordinates within padded square. Thus, always when padding was used in preprocessor, the target sizes for post_process functions should be [(max(W, H), max(W,H))] as noted above. Please add this note to the documentation and/or the example as even the sample image bounding boxes are off due to this. ", "Hi, is this change already done? If not, can I take this up?", "No not yet, you can take it up!", "@itsadarshms @NielsRogge As this is a recently added model, the fix for this should be to update the processor to accept target sizes in `(h, w)` rather than updating the documentation. `(h, w)` is the standard format for our image processing. ", "It already does accept `target_sizes` in (height, width) format, see [here](https://github.com/huggingface/transformers/blob/35551f9a0f66a22de4971b4a51b3c172d3b87f95/src/transformers/models/owlv2/image_processing_owlv2.py#L505). There's nothing wrong with the `post_process_object_detection` method, see my [notebook](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/OWLv2/Zero_and_one_shot_object_detection_with_OWLv2.ipynb) for an illustration.\r\n\r\nThe only reason, as pointed out by @michal-stary, is that the preprocessor internally pads the image. Hence visualizations need to be shown on the padded image, rather than the original image. See also my notebook for an illustration of that.\r\n\r\nHence to fix this, one needs to update the code snippet to set the `target_sizes` based on the padded image rather than the original one. I've opened a PR to address this." ]
1,698
1,701
1,701
NONE
null
![image](https://github.com/huggingface/transformers/assets/66545456/c979a108-4e14-44da-9767-f27b2faf325b) if the Owlv2Processor have the "padding" operator, the post_process_object_detection may can not directly the following: boxes = boxes * scale_fct[:, None, :] I found the y-axis have some shift
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27205/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27205/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27204
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27204/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27204/comments
https://api.github.com/repos/huggingface/transformers/issues/27204/events
https://github.com/huggingface/transformers/pull/27204
1,972,044,023
PR_kwDOCUB6oc5eUPug
27,204
Fix CPU offload + disk offload tests
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@amyeroberts @patrickvonplaten if you feel uneasy with merging this right before the release, I'm fine with reverting the safetensors serialization by default to let it sit on `main` for a while longer. The release is going to be very packed already so it's fine for me.", "@LysandreJik The change LGTM and seems to address some underlying issues. Re default safetensors serialization, I'm happy for it to be part of this release as long as some of the slow tests on the most popular models (bert, llama, wav2vec2, whisper, clip etc.) are good. ", "Thanks both for your reviews! I'll go ahead and merge this, sorry but you'll have the conflict Patrick :grin: " ]
1,698
1,698
1,698
MEMBER
null
Passing to safetensors serialization by default highlighted a few issues that we have with safetensors. This PR fixes the issue, which is principally linked to weight sharing.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27204/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27204/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27204", "html_url": "https://github.com/huggingface/transformers/pull/27204", "diff_url": "https://github.com/huggingface/transformers/pull/27204.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27204.patch", "merged_at": 1698863123000 }
https://api.github.com/repos/huggingface/transformers/issues/27203
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27203/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27203/comments
https://api.github.com/repos/huggingface/transformers/issues/27203/events
https://github.com/huggingface/transformers/pull/27203
1,972,042,484
PR_kwDOCUB6oc5eUPYl
27,203
[Whisper, Bart, MBart] Add Flash Attention 2
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Ok ran some more tests and it should be good now. I'm getting some flaky behavior with the flash attention tests on my RTX 4090 (especially extreme for Whisper). We should maybe think about how we can make them more robust now that we've added some more models (cc @younesbelkada) " ]
1,698
1,698
1,698
MEMBER
null
# What does this PR do? This PR adds Flash Attention for Whisper, Bart & MBart. Whisper depends on Bart and MBart quite a bit for Flash Attention like 20+ other model architectures. As this is the first PR that adds Flash Attention 2 to a encoder-decoder model, I wanted to make sure it's done for the two template models (Bart and MBart) as well so that Whisper (and all other encoder-decoder models that follow) don't loose their "# Copied from" statements. Note that while this PR changes 27 files, only 4 files are really relevant to review because all other files are just consequences of the "# Copied from mechanism": The following there files fully implement Flash Attention 2: - src/transformers/models/bart/modeling_bart.py - src/transformers/models/mbart/modeling_mbart.py - src/transformers/models/whisper/modeling_whisper.py The test files is restructured so that Flash Attention 2 tests can nicely run for different kinds of models (audio & nlp as well as decoder-only and encoder-decoder). - tests/test_modeling_common.py I ran the following tests to make sure everything works as expected: ``` CUDA_VISIBLE_DEVICES="0" RUN_SLOW=1 pytest tests/models/whisper/test_modeling_whisper.py CUDA_VISIBLE_DEVICES="0" RUN_SLOW=1 pytest tests/models/mbart/test_modeling_mbart.py CUDA_VISIBLE_DEVICES="0" RUN_SLOW=1 pytest tests/models/bart/test_modeling_bart.py ``` as well as: ``` RUN_SLOW=1 pytest -m flash_attn_test tests ``` All tests pass that also pass on "main". The only failures are related to disk offloading which should be fixed in: https://github.com/huggingface/transformers/pull/27204 There are some "error not raised" failures for flash attn and mistral, but they are also present in "main" and seem to be related to this PR: https://github.com/huggingface/transformers/pull/27125 (cc @younesbelkada), I'd suggest to also fix those in another PR. Other CI test failures are unrelated.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27203/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27203/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27203", "html_url": "https://github.com/huggingface/transformers/pull/27203", "diff_url": "https://github.com/huggingface/transformers/pull/27203.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27203.patch", "merged_at": 1698868981000 }
https://api.github.com/repos/huggingface/transformers/issues/27202
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27202/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27202/comments
https://api.github.com/repos/huggingface/transformers/issues/27202/events
https://github.com/huggingface/transformers/pull/27202
1,972,001,425
PR_kwDOCUB6oc5eUGYX
27,202
fix:fix Deployment of web stream interface application, issue with me…
{ "login": "bjleah", "id": 90740329, "node_id": "MDQ6VXNlcjkwNzQwMzI5", "avatar_url": "https://avatars.githubusercontent.com/u/90740329?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bjleah", "html_url": "https://github.com/bjleah", "followers_url": "https://api.github.com/users/bjleah/followers", "following_url": "https://api.github.com/users/bjleah/following{/other_user}", "gists_url": "https://api.github.com/users/bjleah/gists{/gist_id}", "starred_url": "https://api.github.com/users/bjleah/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bjleah/subscriptions", "organizations_url": "https://api.github.com/users/bjleah/orgs", "repos_url": "https://api.github.com/users/bjleah/repos", "events_url": "https://api.github.com/users/bjleah/events{/privacy}", "received_events_url": "https://api.github.com/users/bjleah/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @gante ", "Hi @bjleah 👋 \r\n\r\nThe current version of streaming in `generate` is a contraption that we want to properly replace in the near future. As such, we don't want to add more code in `generate` to handle a few special cases in streaming -- my suggestion would be to keep the changes you added in your local version of `transformers` (and point your project to your repository instead) :)\r\n\r\nFYI, our next version of streaming will be a simple iterator returned from `generate`, from which the user can easily manipulate the tokens as they wish 🤗 ", "@gante Thank you, We'll look forward to the new streaming iterator in the next release. ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,698
1,702
1,702
NONE
null
"When I deploy the Baichuan model using the streaming interface, I noticed that after each call, the GPU memory usage increases by more than ten gigabytes. Due to the inherent characteristics of the GPU, it's necessary to release the GPU memory during the token return process to ensure stable GPU memory usage."
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27202/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27202/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27202", "html_url": "https://github.com/huggingface/transformers/pull/27202", "diff_url": "https://github.com/huggingface/transformers/pull/27202.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27202.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/27201
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27201/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27201/comments
https://api.github.com/repos/huggingface/transformers/issues/27201/events
https://github.com/huggingface/transformers/pull/27201
1,971,906,826
PR_kwDOCUB6oc5eTx7S
27,201
🌐 [i18n-ZH] Translate troubleshooting.md into Chinese
{ "login": "yyLeaves", "id": 76979429, "node_id": "MDQ6VXNlcjc2OTc5NDI5", "avatar_url": "https://avatars.githubusercontent.com/u/76979429?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yyLeaves", "html_url": "https://github.com/yyLeaves", "followers_url": "https://api.github.com/users/yyLeaves/followers", "following_url": "https://api.github.com/users/yyLeaves/following{/other_user}", "gists_url": "https://api.github.com/users/yyLeaves/gists{/gist_id}", "starred_url": "https://api.github.com/users/yyLeaves/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yyLeaves/subscriptions", "organizations_url": "https://api.github.com/users/yyLeaves/orgs", "repos_url": "https://api.github.com/users/yyLeaves/repos", "events_url": "https://api.github.com/users/yyLeaves/events{/privacy}", "received_events_url": "https://api.github.com/users/yyLeaves/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,698
1,702
1,702
CONTRIBUTOR
null
# What does this PR do? Translate troubleshooting.md into Chinese part of #20095 ## Who can review? Documentation: @stevhliu Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27201/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27201/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27201", "html_url": "https://github.com/huggingface/transformers/pull/27201", "diff_url": "https://github.com/huggingface/transformers/pull/27201.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27201.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/27200
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27200/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27200/comments
https://api.github.com/repos/huggingface/transformers/issues/27200/events
https://github.com/huggingface/transformers/issues/27200
1,971,790,212
I_kwDOCUB6oc51hyGE
27,200
dataset download error in speech recognition examples
{ "login": "oshindow", "id": 49552492, "node_id": "MDQ6VXNlcjQ5NTUyNDky", "avatar_url": "https://avatars.githubusercontent.com/u/49552492?v=4", "gravatar_id": "", "url": "https://api.github.com/users/oshindow", "html_url": "https://github.com/oshindow", "followers_url": "https://api.github.com/users/oshindow/followers", "following_url": "https://api.github.com/users/oshindow/following{/other_user}", "gists_url": "https://api.github.com/users/oshindow/gists{/gist_id}", "starred_url": "https://api.github.com/users/oshindow/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/oshindow/subscriptions", "organizations_url": "https://api.github.com/users/oshindow/orgs", "repos_url": "https://api.github.com/users/oshindow/repos", "events_url": "https://api.github.com/users/oshindow/events{/privacy}", "received_events_url": "https://api.github.com/users/oshindow/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @oshindow, thanks for raising this issue! \r\n\r\nIndeed, the example scripts are pointing to the deprecated dataset. Would you like to open a PR to fix this? This way you get the github contribution for spotting the fix. ", "> Hi @oshindow, thanks for raising this issue!\r\n> \r\n> Indeed, the example scripts are pointing to the deprecated dataset. Would you like to open a PR to fix this? This way you get the github contribution for spotting the fix.\r\n\r\nI would like to! Can you let me know how to do this?", "@oshindow Great! You'll need to update all the references to `\"common_voice\"` to `\"mozilla-foundation/common_voice_11_0\"` in example README: https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,698
1,702
1,702
NONE
null
### System Info - `transformers` version: 4.35.0.dev0 - Platform: Linux-5.15.0-43-generic-x86_64-with-glibc2.17 - Python version: 3.8.18 - Huggingface_hub version: 0.17.3 - Safetensors version: 0.4.0 - Accelerate version: 0.24.1 - Accelerate config: not found - PyTorch version (GPU?): 1.10.0+cu111 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @stevhliu and @MKhalusova ### Information - [x] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_ctc.py \ --dataset_name="common_voice" \ --model_name_or_path="facebook/wav2vec2-large-xlsr-53" \ --dataset_config_name="tr" \ --output_dir="./wav2vec2-common_voice-tr-demo" \ --overwrite_output_dir \ --num_train_epochs="15" \ --per_device_train_batch_size="16" \ --gradient_accumulation_steps="2" \ --learning_rate="3e-4" \ --warmup_steps="500" \ --evaluation_strategy="steps" \ --text_column_name="sentence" \ --length_column_name="input_length" \ --save_steps="400" \ --eval_steps="100" \ --layerdrop="0.0" \ --save_total_limit="3" \ --freeze_feature_encoder \ --gradient_checkpointing \ --chars_to_ignore , ? . ! - \; \: \" “ % ‘ ” � \ --fp16 \ --group_by_length \ --push_to_hub \ --do_train --do_eval ### Expected behavior When I run the default command, which set `dataset_name` as "common_voice", and I got a warning: ``` /home/xintong/.cache/huggingface/modules/datasets_modules/datasets/common_voice/220833898d6a60c50f621126e51fb22eb2dfe5244392c70dccd8e6e2f055f4bf/common_voice.py:634: FutureWarning: This version of the Common Voice dataset is deprecated. You can download the latest one with >>> load_dataset("mozilla-foundation/common_voice_11_0", "en") warnings.warn( Generating train split: 0%| | 0/1831 [00:00<?, ? examples/s] Traceback (most recent call last): File "/home/xintong/miniconda3/envs/test/lib/python3.8/tarfile.py", line 2578, in next tarinfo = self.tarinfo.fromtarfile(self) File "/home/xintong/miniconda3/envs/test/lib/python3.8/tarfile.py", line 1283, in fromtarfile obj = cls.frombuf(buf, tarfile.encoding, tarfile.errors) File "/home/xintong/miniconda3/envs/test/lib/python3.8/tarfile.py", line 1221, in frombuf raise TruncatedHeaderError("truncated header") tarfile.TruncatedHeaderError: truncated header ``` I modified this into `mozilla-foundation/common_voice_11_0`, it passed. ``` Downloading builder script: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 8.13k/8.13k [00:00<00:00, 30.3MB/s] Downloading readme: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 14.4k/14.4k [00:00<00:00, 19.2MB/s] Downloading extra modules: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3.44k/3.44k [00:00<00:00, 19.9MB/s] Downloading extra modules: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 60.9k/60.9k [00:00<00:00, 304kB/s] Downloading data: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 12.2k/12.2k [00:00<00:00, 25.6MB/s] Downloading data: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 568M/568M [00:07<00:00, 71.7MB/s] Downloading data: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 233M/233M [00:02<00:00, 78.6MB/s] Downloading data: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 285M/285M [00:04<00:00, 67.7MB/s] Downloading data: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4.86M/4.86M [00:00<00:00, 73.3MB/s] Downloading data: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 109M/109M [00:01<00:00, 80.4MB/s] Downloading data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:21<00:00, 4.24s/it] Extracting data files: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:07<00:00, 1.54s/it] Downloading data: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5.76M/5.76M [00:00<00:00, 56.0MB/s] Downloading data: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2.17M/2.17M [00:00<00:00, 54.1MB/s] Downloading data: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2.18M/2.18M [00:00<00:00, 64.3MB/s] Downloading data: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 32.8k/32.8k [00:00<00:00, 53.1MB/s] Downloading data: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 800k/800k [00:00<00:00, 59.8MB/s] Downloading data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:05<00:00, 1.01s/it] Extracting data files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:00<00:00, 2954.98it/s] ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27200/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27200/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27199
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27199/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27199/comments
https://api.github.com/repos/huggingface/transformers/issues/27199/events
https://github.com/huggingface/transformers/issues/27199
1,971,700,940
I_kwDOCUB6oc51hcTM
27,199
Failed to load Llama2 with customized device_map
{ "login": "polarispw", "id": 78252964, "node_id": "MDQ6VXNlcjc4MjUyOTY0", "avatar_url": "https://avatars.githubusercontent.com/u/78252964?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polarispw", "html_url": "https://github.com/polarispw", "followers_url": "https://api.github.com/users/polarispw/followers", "following_url": "https://api.github.com/users/polarispw/following{/other_user}", "gists_url": "https://api.github.com/users/polarispw/gists{/gist_id}", "starred_url": "https://api.github.com/users/polarispw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polarispw/subscriptions", "organizations_url": "https://api.github.com/users/polarispw/orgs", "repos_url": "https://api.github.com/users/polarispw/repos", "events_url": "https://api.github.com/users/polarispw/events{/privacy}", "received_events_url": "https://api.github.com/users/polarispw/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @polarispw, thanks for raising this issue! \r\n\r\nI think this bug - specifically the logic in `expand_device_map` - should be resolved with the merging of #27204\r\n\r\nIf you modify `expand_device_map` to:\r\n\r\n```\r\ndef expand_device_map(device_map, param_names):\r\n \"\"\"\r\n Expand a device map to return the correspondance parameter name to device.\r\n \"\"\"\r\n new_device_map = {}\r\n for module, device in device_map.items():\r\n new_device_map.update(\r\n {p: device for p in param_names if p == module or p.startswith(f\"{module}.\") or module == \"\"}\r\n )\r\n return new_device_map\r\n```\r\n\r\nare you able to successfully run your code? ", "Thanks for your answers, but I don't think it works because those module names in `device_map` are still mismatched with those in `param_names`.\r\nBy the way, I created my own `device_map` by using the following func:\r\n```\r\ndef generate_device_map(model, model_config, start_layer=0, no_split_module_classes=[\"LlamaDecoderLayer\"]):\r\n device_map = infer_auto_device_map(model, no_split_module_classes=no_split_module_classes)\r\n my_device_map = device_map\r\n my_device_map[\"embed_tokens\"] = \"cpu\"\r\n my_device_map[\"norm\"] = \"cpu\"\r\n start = start_layer # from 0 to 7\r\n gpu_layers = range(start * 4, start * 4 + 4)\r\n for i in range(model_config.num_hidden_layers):\r\n if i in gpu_layers:\r\n my_device_map[f\"layers.{i}\"] = 0\r\n elif my_device_map[f\"layers.{i}\"] == 0:\r\n my_device_map[f\"layers.{i}\"] = \"cpu\"\r\n check_device_map(model, my_device_map)\r\n return my_device_map\r\n```\r\nIt means the names in `device_map` comes from the `accelerate.utils.infer_auto_device_map`. I can avoid this issue by:\r\n```\r\ndef expand_device_map(device_map, param_names):\r\n \"\"\"\r\n Expand a device map to return the correspondance parameter name to device.\r\n \"\"\"\r\n new_device_map = {}\r\n for module, device in device_map.items():\r\n new_device_map.update({p: device for p in param_names if p == module or p.startswith(f\"{module}.\") or \\\r\n p.startswith(f\"model.{module}.\")}) # adding here to adopt prefix\r\n return new_device_map\r\n\r\n\r\ndef get_disk_only_shard_files(device_map, sharded_metadata):\r\n \"\"\"\r\n Returns the list of shard files containing only weights offloaded to disk.\r\n \"\"\"\r\n files_content = collections.defaultdict(list)\r\n for weight_name, filename in sharded_metadata[\"weight_map\"].items():\r\n while len(weight_name) > 0 and weight_name not in device_map:\r\n weight_name = weight_name.split(\"model.\")[-1] # adding here to remove prefix\r\n weight_name = \".\".join(weight_name.split(\".\")[:-1])\r\n files_content[filename].append(device_map[weight_name])\r\n\r\n return [fname for fname, devices in files_content.items() if set(devices) == {\"disk\"}]\r\n```\r\nBut it will definitely affect other models' loading. So maybe you can connect with the Accelerate team and ask for aligned names after `infer_auto_device_map`?", "@polarispw Thanks for such a detailed explanation - I was hoping there might be an easy fix, but alas it's never that easy! \r\n\r\nAs it relates to accelerate, passing this to @muellerzr and @pacman100 ", "cc @SunMarc ", "Hi @polarispw, this happens because you are using `AutoModel` and not `AutoModelForCausalLM`. The weights that are saved correspond to `AutoModelForCausalLM` model, hence the `lm_head` and the prefix `model.` for the parameters. I don't know if this is what you had in mind but you won't be able to fine-tune your model when offloaded weights to disk. If your use case is just for inference, I suggest you to switch to `AutoModelForCausalLM`. \r\n\r\nRight now, there is indeed an issue with offloading params to disk using `safetensors` + loading weights from a derived model into the base model. Does it makes sense to fix this issue considering that we can only do inference when offloading to disk ? @amyeroberts If yes, I know how to fix this and I can submit a PR. ", "@SunMarc that is the point and I am just loading the model to check some param matrices inside. I focus on the base model.\r\n\r\nI'd be happy if the auto conversion from derived to base model could be supported, as it might be more in line with the user's intuitive thoughts. It seems that when mapping only includes GPU and CPU, the logic does not go this way and causes an error", "@SunMarc Yes - I think it makes sense to fix this, in particular for loading weights into the base model. " ]
1,698
1,699
1,699
NONE
null
### System Info - Huggingface_hub version: 0.17.3 - Safetensors version: 0.4.0 - Accelerate version: 0.24.0 - Accelerate config: not found - PyTorch version (GPU?): 2.1.0+cu121 (True) - Tensorflow version (GPU?): 2.13.0 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @ArthurZucker; @younesbelkada ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Load Llama2 on a PC with 4060(8GB) and 32GB RAM 2. Passing device_map to AutoModel.from_pretrained() 3. Using "auto" and GPU memory + RAM is not enough, or just manually locating some layers on the disk 4. Receive the following info: ``` "E:\anaconda\envs\transformers\lib\site-packages\transformers\modeling_utils.py", line 3633, in _load_pretrained_model offload_index = { File "E:\anaconda\envs\transformers\lib\site-packages\transformers\modeling_utils.py", line 3636, in <dictcomp> if param_device_map[p] == "disk" KeyError: 'lm_head.weight' ``` I have checked it and found there were no items in para_device_map, which is created by `expand_device_map()` in line 3621. ``` def expand_device_map(device_map, param_names): """ Expand a device map to return the correspondance parameter name to device. """ new_device_map = {} for module, device in device_map.items(): new_device_map.update({p: device for p in param_names if p == module or p.startswith(f"{module}.")}) return new_device_map ``` **The key is that the update func failed to extract p and device:** All the param_names have "model" as prefix and the lm_head was not included. ``` device_map: {'embed_tokens': 'cpu', 'layers.0': 0, 'layers.1': 0, 'layers.2': 0, 'layers.3': 0, 'layers.4': 'cpu', 'layers.5': 'cpu', 'layers.6': 'cpu', 'layers.7': 'cpu', 'layers.8': 'cpu', 'layers.9': 'cpu', 'layers.10': 'cpu', 'layers.11': 'cpu', 'layers.12': 'cpu', 'layers.13': 'cpu', 'layers.14': 'cpu', 'layers.15': 'cpu', 'layers.16': 'cpu', 'layers.17': 'cpu', 'layers.18': 'cpu', 'layers.19': 'cpu', 'layers.20': 'cpu', 'layers.21': 'cpu', 'layers.22': 'cpu', 'layers.23': 'cpu', 'layers.24': 'cpu', 'layers.25': 'disk', 'layers.26': 'disk', 'layers.27': 'disk', 'layers.28': 'disk', 'layers.29': 'disk', 'layers.30': 'disk', 'layers.31': 'disk', 'norm': 'cpu'} para_names: ['lm_head.weight', 'model.embed_tokens.weight', 'model.layers.0.input_layernorm.weight', 'model.layers.0.mlp.down_proj.weight', 'model.layers.0.mlp.gate_proj.weight', 'model.layers.0.mlp.up_proj.weight', 'model.layers.0.post_attention_layernorm.weight', 'model.layers.0.self_attn.k_proj.weight', 'model.layers.0.self_attn.o_proj.weight', 'model.layers.0.self_attn.q_proj.weight', 'model.layers.0.self_attn.rotary_emb.inv_freq', 'model.layers.0.self_attn.v_proj.weight', ...} ``` ### Expected behavior I wonder if there are some methods to avoid it
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27199/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27199/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27198
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27198/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27198/comments
https://api.github.com/repos/huggingface/transformers/issues/27198/events
https://github.com/huggingface/transformers/issues/27198
1,971,635,395
I_kwDOCUB6oc51hMTD
27,198
Saving and loading the model from local throws error
{ "login": "SrikanthChellappa", "id": 37934673, "node_id": "MDQ6VXNlcjM3OTM0Njcz", "avatar_url": "https://avatars.githubusercontent.com/u/37934673?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SrikanthChellappa", "html_url": "https://github.com/SrikanthChellappa", "followers_url": "https://api.github.com/users/SrikanthChellappa/followers", "following_url": "https://api.github.com/users/SrikanthChellappa/following{/other_user}", "gists_url": "https://api.github.com/users/SrikanthChellappa/gists{/gist_id}", "starred_url": "https://api.github.com/users/SrikanthChellappa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SrikanthChellappa/subscriptions", "organizations_url": "https://api.github.com/users/SrikanthChellappa/orgs", "repos_url": "https://api.github.com/users/SrikanthChellappa/repos", "events_url": "https://api.github.com/users/SrikanthChellappa/events{/privacy}", "received_events_url": "https://api.github.com/users/SrikanthChellappa/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Sorry Pls ignore the above error stack and consider the below one.\r\n\r\nI am getting ValueError and the updated error stack is given below\r\n\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\nCell In[2], line 2\r\n 1 from tensorflow.keras.models import load_model\r\n----> 2 model=load_model('FlanT5-Chatbot_model.keras')\r\n\r\nFile ~\\AppData\\Roaming\\Python\\Python311\\site-packages\\keras\\src\\saving\\saving_api.py:254, in load_model(filepath, custom_objects, compile, safe_mode, **kwargs)\r\n 249 if kwargs:\r\n 250 raise ValueError(\r\n 251 \"The following argument(s) are not supported \"\r\n 252 f\"with the native Keras format: {list(kwargs.keys())}\"\r\n 253 )\r\n--> 254 return saving_lib.load_model(\r\n 255 filepath,\r\n 256 custom_objects=custom_objects,\r\n 257 compile=compile,\r\n 258 safe_mode=safe_mode,\r\n 259 )\r\n 261 # Legacy case.\r\n 262 return legacy_sm_saving_lib.load_model(\r\n 263 filepath, custom_objects=custom_objects, compile=compile, **kwargs\r\n 264 )\r\n\r\nFile ~\\AppData\\Roaming\\Python\\Python311\\site-packages\\keras\\src\\saving\\saving_lib.py:281, in load_model(filepath, custom_objects, compile, safe_mode)\r\n 278 asset_store.close()\r\n 280 except Exception as e:\r\n--> 281 raise e\r\n 282 else:\r\n 283 return model\r\n\r\nFile ~\\AppData\\Roaming\\Python\\Python311\\site-packages\\keras\\src\\saving\\saving_lib.py:269, in load_model(filepath, custom_objects, compile, safe_mode)\r\n 266 else:\r\n 267 asset_store = None\r\n--> 269 _load_state(\r\n 270 model,\r\n 271 weights_store=weights_store,\r\n 272 assets_store=asset_store,\r\n 273 inner_path=\"\",\r\n 274 visited_trackables=set(),\r\n 275 )\r\n 276 weights_store.close()\r\n 277 if asset_store:\r\n\r\nFile ~\\AppData\\Roaming\\Python\\Python311\\site-packages\\keras\\src\\saving\\saving_lib.py:457, in _load_state(trackable, weights_store, assets_store, inner_path, skip_mismatch, visited_trackables)\r\n 455 for child_attr, child_obj in _walk_trackable(trackable):\r\n 456 if _is_keras_trackable(child_obj):\r\n--> 457 _load_state(\r\n 458 child_obj,\r\n 459 weights_store,\r\n 460 assets_store,\r\n 461 inner_path=tf.io.gfile.join(inner_path, child_attr),\r\n 462 skip_mismatch=skip_mismatch,\r\n 463 visited_trackables=visited_trackables,\r\n 464 )\r\n 465 elif isinstance(child_obj, (list, dict, tuple, set)):\r\n 466 _load_container_state(\r\n 467 child_obj,\r\n 468 weights_store,\r\n (...)\r\n 472 visited_trackables=visited_trackables,\r\n 473 )\r\n\r\nFile ~\\AppData\\Roaming\\Python\\Python311\\site-packages\\keras\\src\\saving\\saving_lib.py:466, in _load_state(trackable, weights_store, assets_store, inner_path, skip_mismatch, visited_trackables)\r\n 457 _load_state(\r\n 458 child_obj,\r\n 459 weights_store,\r\n (...)\r\n 463 visited_trackables=visited_trackables,\r\n 464 )\r\n 465 elif isinstance(child_obj, (list, dict, tuple, set)):\r\n--> 466 _load_container_state(\r\n 467 child_obj,\r\n 468 weights_store,\r\n 469 assets_store,\r\n 470 inner_path=tf.io.gfile.join(inner_path, child_attr),\r\n 471 skip_mismatch=skip_mismatch,\r\n 472 visited_trackables=visited_trackables,\r\n 473 )\r\n\r\nFile ~\\AppData\\Roaming\\Python\\Python311\\site-packages\\keras\\src\\saving\\saving_lib.py:534, in _load_container_state(container, weights_store, assets_store, inner_path, skip_mismatch, visited_trackables)\r\n 532 else:\r\n 533 used_names[name] = 0\r\n--> 534 _load_state(\r\n 535 trackable,\r\n 536 weights_store,\r\n 537 assets_store,\r\n 538 inner_path=tf.io.gfile.join(inner_path, name),\r\n 539 skip_mismatch=skip_mismatch,\r\n 540 visited_trackables=visited_trackables,\r\n 541 )\r\n\r\nFile ~\\AppData\\Roaming\\Python\\Python311\\site-packages\\keras\\src\\saving\\saving_lib.py:466, in _load_state(trackable, weights_store, assets_store, inner_path, skip_mismatch, visited_trackables)\r\n 457 _load_state(\r\n 458 child_obj,\r\n 459 weights_store,\r\n (...)\r\n 463 visited_trackables=visited_trackables,\r\n 464 )\r\n 465 elif isinstance(child_obj, (list, dict, tuple, set)):\r\n--> 466 _load_container_state(\r\n 467 child_obj,\r\n 468 weights_store,\r\n 469 assets_store,\r\n 470 inner_path=tf.io.gfile.join(inner_path, child_attr),\r\n 471 skip_mismatch=skip_mismatch,\r\n 472 visited_trackables=visited_trackables,\r\n 473 )\r\n\r\nFile ~\\AppData\\Roaming\\Python\\Python311\\site-packages\\keras\\src\\saving\\saving_lib.py:534, in _load_container_state(container, weights_store, assets_store, inner_path, skip_mismatch, visited_trackables)\r\n 532 else:\r\n 533 used_names[name] = 0\r\n--> 534 _load_state(\r\n 535 trackable,\r\n 536 weights_store,\r\n 537 assets_store,\r\n 538 inner_path=tf.io.gfile.join(inner_path, name),\r\n 539 skip_mismatch=skip_mismatch,\r\n 540 visited_trackables=visited_trackables,\r\n 541 )\r\n\r\nFile ~\\AppData\\Roaming\\Python\\Python311\\site-packages\\keras\\src\\saving\\saving_lib.py:457, in _load_state(trackable, weights_store, assets_store, inner_path, skip_mismatch, visited_trackables)\r\n 455 for child_attr, child_obj in _walk_trackable(trackable):\r\n 456 if _is_keras_trackable(child_obj):\r\n--> 457 _load_state(\r\n 458 child_obj,\r\n 459 weights_store,\r\n 460 assets_store,\r\n 461 inner_path=tf.io.gfile.join(inner_path, child_attr),\r\n 462 skip_mismatch=skip_mismatch,\r\n 463 visited_trackables=visited_trackables,\r\n 464 )\r\n 465 elif isinstance(child_obj, (list, dict, tuple, set)):\r\n 466 _load_container_state(\r\n 467 child_obj,\r\n 468 weights_store,\r\n (...)\r\n 472 visited_trackables=visited_trackables,\r\n 473 )\r\n\r\nFile ~\\AppData\\Roaming\\Python\\Python311\\site-packages\\keras\\src\\saving\\saving_lib.py:435, in _load_state(trackable, weights_store, assets_store, inner_path, skip_mismatch, visited_trackables)\r\n 428 warnings.warn(\r\n 429 f\"Could not load weights in object {trackable}. \"\r\n 430 \"Skipping object. \"\r\n 431 f\"Exception encountered: {e}\",\r\n 432 stacklevel=2,\r\n 433 )\r\n 434 else:\r\n--> 435 trackable.load_own_variables(weights_store.get(inner_path))\r\n 437 if hasattr(trackable, \"load_assets\") and assets_store:\r\n 438 if skip_mismatch:\r\n\r\nFile ~\\AppData\\Roaming\\Python\\Python311\\site-packages\\keras\\src\\engine\\base_layer.py:3531, in Layer.load_own_variables(self, store)\r\n 3529 all_vars = self._trainable_weights + self._non_trainable_weights\r\n 3530 if len(store.keys()) != len(all_vars):\r\n-> 3531 raise ValueError(\r\n 3532 f\"Layer '{self.name}' expected {len(all_vars)} variables, \"\r\n 3533 \"but received \"\r\n 3534 f\"{len(store.keys())} variables during loading. \"\r\n 3535 f\"Expected: {[v.name for v in all_vars]}\"\r\n 3536 )\r\n 3537 for i, v in enumerate(all_vars):\r\n 3538 # TODO(rchao): check shapes and raise errors.\r\n 3539 v.assign(store[f\"{i}\"])\r\n\r\nValueError: Layer 'SelfAttention' expected 0 variables, but received 1 variables during loading. Expected: []\r\n ", "Hi @SrikanthChellappa, thanks for raising this issue! \r\n\r\nThe errors your receiving are all from within the keras library, not `transformers` and the code example provided uses just tensorflow/keras functionality. As such, this isn't an issue for this repo. \r\n\r\nIf you do wish to use a model from the transformers library, the recommended way to load and save models in transformers is by using the `from_pretrained` and `save_pretrained` methods. This will save out all the necessary files, including the weights and model config e.g.: \r\n\r\n```py\r\nimport BertConfig, BertModel\r\n\r\n# Create a model\r\nconfig = BertConfig(num_attention_heads=4)\r\nmodel = BertModel(config)\r\n\r\n# Save out the model\r\nmodel.save_pretrained(\"my_model_name\")\r\n\r\n# Load a saved model\r\nmodel = BertModel.from_pretrained(\"my_model_name\")\r\n```\r\n\r\nIf you want to know more, I suggest looking through the [quicktour in the docs](https://huggingface.co/docs/transformers/quicktour).", "Thanks @amyeroberts ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,698
1,702
1,702
NONE
null
### System Info Python version 3.11.3 Transformer version 4.34.1 ### Who can help? @Rocketknight1 @gan ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Can you pls assist me on how to save the model with .keras or .h5 extensions to load it again. I am able to save the model as .keras buty it throws error when i load it again. Pls see the code used as below. Kindly assist I was able to save my model earlier using model.save('FlanT5-Chatbot_model.keras') When i tried loading the model again as below from tensorflow.keras.models import load_model model=load_model('FlanT5-Chatbot_model.keras') I am getting "ModuleNotFoundError" error. Error stack is given below Error Stack --------------------------------------------------------------------------- ModuleNotFoundError Traceback (most recent call last) File ~\AppData\Roaming\Python\Python311\site-packages\keras\src\saving\serialization_lib.py:800, in _retrieve_class_or_fn(name, registered_name, module, obj_type, full_config, custom_objects) 799 try: --> 800 mod = importlib.import_module(module) 801 except ModuleNotFoundError: File C:\ProgramData\anaconda3\Lib\importlib\__init__.py:126, in import_module(name, package) 125 level += 1 --> 126 return _bootstrap._gcd_import(name[level:], package, level) File <frozen importlib._bootstrap>:1206, in _gcd_import(name, package, level) File <frozen importlib._bootstrap>:1178, in _find_and_load(name, import_) File <frozen importlib._bootstrap>:1128, in _find_and_load_unlocked(name, import_) File <frozen importlib._bootstrap>:241, in _call_with_frames_removed(f, *args, **kwds) File <frozen importlib._bootstrap>:1206, in _gcd_import(name, package, level) File <frozen importlib._bootstrap>:1178, in _find_and_load(name, import_) File <frozen importlib._bootstrap>:1128, in _find_and_load_unlocked(name, import_) File <frozen importlib._bootstrap>:241, in _call_with_frames_removed(f, *args, **kwds) File <frozen importlib._bootstrap>:1206, in _gcd_import(name, package, level) File <frozen importlib._bootstrap>:1178, in _find_and_load(name, import_) File <frozen importlib._bootstrap>:1142, in _find_and_load_unlocked(name, import_) ModuleNotFoundError: No module named 'transformers.models' During handling of the above exception, another exception occurred: TypeError Traceback (most recent call last) Cell In[4], line 2 1 from tensorflow.keras.models import load_model ----> 2 model=load_model('FlanT5-Chatbot_model.keras') File ~\AppData\Roaming\Python\Python311\site-packages\keras\src\saving\saving_api.py:254, in load_model(filepath, custom_objects, compile, safe_mode, **kwargs) 249 if kwargs: 250 raise ValueError( 251 "The following argument(s) are not supported " 252 f"with the native Keras format: {list(kwargs.keys())}" 253 ) --> 254 return saving_lib.load_model( 255 filepath, 256 custom_objects=custom_objects, 257 compile=compile, 258 safe_mode=safe_mode, 259 ) 261 # Legacy case. 262 return legacy_sm_saving_lib.load_model( 263 filepath, custom_objects=custom_objects, compile=compile, **kwargs 264 ) File ~\AppData\Roaming\Python\Python311\site-packages\keras\src\saving\saving_lib.py:281, in load_model(filepath, custom_objects, compile, safe_mode) 278 asset_store.close() 280 except Exception as e: --> 281 raise e 282 else: 283 return model File ~\AppData\Roaming\Python\Python311\site-packages\keras\src\saving\saving_lib.py:246, in load_model(filepath, custom_objects, compile, safe_mode) 244 # Construct the model from the configuration file in the archive. 245 with ObjectSharingScope(): --> 246 model = deserialize_keras_object( 247 config_dict, custom_objects, safe_mode=safe_mode 248 ) 250 all_filenames = zf.namelist() 251 if _VARS_FNAME + ".h5" in all_filenames: File ~\AppData\Roaming\Python\Python311\site-packages\keras\src\saving\serialization_lib.py:705, in deserialize_keras_object(config, custom_objects, safe_mode, **kwargs) 702 if obj is not None: 703 return obj --> 705 cls = _retrieve_class_or_fn( 706 class_name, 707 registered_name, 708 module, 709 obj_type="class", 710 full_config=config, 711 custom_objects=custom_objects, 712 ) 714 if isinstance(cls, types.FunctionType): 715 return cls File ~\AppData\Roaming\Python\Python311\site-packages\keras\src\saving\serialization_lib.py:802, in _retrieve_class_or_fn(name, registered_name, module, obj_type, full_config, custom_objects) 800 mod = importlib.import_module(module) 801 except ModuleNotFoundError: --> 802 raise TypeError( 803 f"Could not deserialize {obj_type} '{name}' because " 804 f"its parent module {module} cannot be imported. " 805 f"Full object config: {full_config}" 806 ) 807 obj = vars(mod).get(name, None) 809 if obj is None: 810 # Special case for keras.metrics.metrics TypeError: Could not deserialize class 'TFT5ForConditionalGeneration' because its parent module transformers.models.t5.modeling_tf_t5 cannot be imported. Full object config: {'module': 'transformers.models.t5.modeling_tf_t5', 'class_name': 'TFT5ForConditionalGeneration', 'config': {'vocab_size': 32128, 'd_model': 768, 'd_kv': 64, 'd_ff': 2048, 'num_layers': 12, 'num_decoder_layers': 12, 'num_heads': 12, 'relative_attention_num_buckets': 32, 'relative_attention_max_distance': 128, 'dropout_rate': 0.1, 'classifier_dropout': 0.0, 'layer_norm_epsilon': 1e-06, 'initializer_factor': 1.0, 'feed_forward_proj': 'gated-gelu', 'use_cache': True, 'dense_act_fn': 'gelu_new', 'is_gated_act': True, 'return_dict': True, 'output_hidden_states': False, 'output_attentions': False, 'torchscript': False, 'torch_dtype': None, 'use_bfloat16': False, 'tf_legacy_loss': False, 'pruned_heads': {}, 'tie_word_embeddings': False, 'is_encoder_decoder': True, 'is_decoder': False, 'cross_attention_hidden_size': None, 'add_cross_attention': False, 'tie_encoder_decoder': False, 'max_length': 20, 'min_length': 0, 'do_sample': False, 'early_stopping': False, 'num_beams': 1, 'num_beam_groups': 1, 'diversity_penalty': 0.0, 'temperature': 1.0, 'top_k': 50, 'top_p': 1.0, 'typical_p': 1.0, 'repetition_penalty': 1.0, 'length_penalty': 1.0, 'no_repeat_ngram_size': 0, 'encoder_no_repeat_ngram_size': 0, 'bad_words_ids': None, 'num_return_sequences': 1, 'chunk_size_feed_forward': 0, 'output_scores': False, 'return_dict_in_generate': False, 'forced_bos_token_id': None, 'forced_eos_token_id': None, 'remove_invalid_values': False, 'exponential_decay_length_penalty': None, 'suppress_tokens': None, 'begin_suppress_tokens': None, 'architectures': ['T5ForConditionalGeneration'], 'finetuning_task': None, 'id2label': {'0': 'LABEL_0', '1': 'LABEL_1'}, 'label2id': {'LABEL_0': 0, 'LABEL_1': 1}, 'tokenizer_class': None, 'prefix': None, 'bos_token_id': None, 'pad_token_id': 0, 'eos_token_id': 1, 'sep_token_id': None, 'decoder_start_token_id': 0, 'task_specific_params': {'summarization': {'early_stopping': True, 'length_penalty': 2.0, 'max_length': 200, 'min_length': 30, 'no_repeat_ngram_size': 3, 'num_beams': 4, 'prefix': 'summarize: '}, 'translation_en_to_de': {'early_stopping': True, 'max_length': 300, 'num_beams': 4, 'prefix': 'translate English to German: '}, 'translation_en_to_fr': {'early_stopping': True, 'max_length': 300, 'num_beams': 4, 'prefix': 'translate English to French: '}, 'translation_en_to_ro': {'early_stopping': True, 'max_length': 300, 'num_beams': 4, 'prefix': 'translate English to Romanian: '}}, 'problem_type': None, '_name_or_path': 'google/flan-t5-base', 'transformers_version': '4.34.1', 'model_type': 't5', 'n_positions': 512, 'output_past': True}, 'registered_name': 'TFT5ForConditionalGeneration', 'compile_config': {'optimizer': {'module': 'transformers.optimization_tf', 'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': 1.9999999494757503e-05, 'decay': 0.0, 'beta_1': 0.8999999761581421, 'beta_2': 0.9990000128746033, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}, 'registered_name': 'AdamWeightDecay'}, 'loss': {'module': 'builtins', 'class_name': 'function', 'config': 'dummy_loss', 'registered_name': 'function'}, 'metrics': None, 'loss_weights': None, 'weighted_metrics': None, 'run_eagerly': None, 'steps_per_execution': None, 'jit_compile': None}} ​ ### Expected behavior Should be able to load the model and continue with predictions
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27198/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27198/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27197
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27197/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27197/comments
https://api.github.com/repos/huggingface/transformers/issues/27197/events
https://github.com/huggingface/transformers/issues/27197
1,971,614,038
I_kwDOCUB6oc51hHFW
27,197
4.34 not compatible with accelerate
{ "login": "SingL3", "id": 20473466, "node_id": "MDQ6VXNlcjIwNDczNDY2", "avatar_url": "https://avatars.githubusercontent.com/u/20473466?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SingL3", "html_url": "https://github.com/SingL3", "followers_url": "https://api.github.com/users/SingL3/followers", "following_url": "https://api.github.com/users/SingL3/following{/other_user}", "gists_url": "https://api.github.com/users/SingL3/gists{/gist_id}", "starred_url": "https://api.github.com/users/SingL3/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SingL3/subscriptions", "organizations_url": "https://api.github.com/users/SingL3/orgs", "repos_url": "https://api.github.com/users/SingL3/repos", "events_url": "https://api.github.com/users/SingL3/events{/privacy}", "received_events_url": "https://api.github.com/users/SingL3/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Setting `fsdp_cpu_ram_efficient_loading: false` in `fsdp_config` didnt work.", "@SingL3 I confronted similar problem, see [here](https://github.com/huggingface/transformers/issues/27166). is version 4.33.0 ok? how many a100 gpus do you use to train 13b llama? Do you know what's minimal gpus to run 70b?", "@fancyerii Yep, 4.33 works. \r\nIt depends on the `max_seq_len` to train 70b. My experience is, with 2048 length, 2 nodes (16 A100 80G) is needed.\r\nI check your issue and the blog you followed, I have done the same and I can run 70b successfully on 2 nodes. Didnt check if one nodes works.", "thanks. by the way, could you tell me the speed of 13b model when using a 8 gpus node? how much time it take to do a single batch(if per_device_train_batch_size is 1)?", "Didnt run under that config. I have trained using 4 gpus with `per_device_train_batch_size=2`(I think 4 also work) and deepspeed Zero 3. Took 5.5 hours for 5 epochs on dolly 15k.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,698
1,702
1,702
NONE
null
### System Info old env: ``` - `transformers` version: 4.33.0 - Platform: Linux-4.18.0-425.3.1.el8.x86_64-x86_64-with-glibc2.31 - Python version: 3.9.18 - Huggingface_hub version: 0.17.3 - Safetensors version: 0.4.0 - Accelerate version: 0.23.0 - Accelerate config: not found - PyTorch version (GPU?): 2.1.0+cu118 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ``` new env(all latest version now): ``` - `transformers` version: 4.34.1 - Platform: Linux-4.18.0-425.3.1.el8.x86_64-x86_64-with-glibc2.31 - Python version: 3.9.18 - Huggingface_hub version: 0.17.3 - Safetensors version: 0.4.0 - Accelerate version: 0.24.1 - Accelerate config: not found - PyTorch version (GPU?): 2.1.0+cu118 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ``` ### Who can help? @pacman100 ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I am running a training mission of 34b model using `accelerate` on 2 nodes of A100 80G. My accelerate config: ```yaml compute_environment: LOCAL_MACHINE debug: false distributed_type: FSDP downcast_bf16: 'no' fsdp_config: fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP fsdp_backward_prefetch_policy: BACKWARD_PRE fsdp_forward_prefetch: true fsdp_offload_params: false fsdp_sharding_strategy: 1 fsdp_state_dict_type: SHARDED_STATE_DICT fsdp_sync_module_states: true fsdp_use_orig_params: true machine_rank: 0 main_training_function: main mixed_precision: 'bf16' num_machines: 1 num_processes: 8 rdzv_backend: static same_network: true tpu_env: [] tpu_use_cluster: false tpu_use_sudo: false use_cpu: false ``` launch command: ``` LAUNCHER="accelerate launch \ --config_file configs/fsdp_config.yaml \ --main_process_ip $MASTER_ADDR \ --main_process_port $MASTER_PORT \ --machine_rank $RANK \ --num_processes $NUM_PROCESSES \ --num_machines $NNODES \ ``` I can run successfully with the old env listed above, but when I run with the new env(`accelerate` 0.23.0 and 0.24.1), I got this error: ``` pt-mc7v1e5q-worker-0 logs: Traceback (most recent call last): pt-mc7v1e5q-worker-0 logs: File "/mnt/home/xxxxxxxxx/rlhf/LLMFinetune/llmft/commands/run.py", line 17, in <module> pt-mc7v1e5q-worker-0 logs: APIS.build(**config.api) pt-mc7v1e5q-worker-0 logs: File "/mnt/home/xxxxxxxxx/rlhf/LLMFinetune/llmft/utils/registry.py", line 64, in build pt-mc7v1e5q-worker-0 logs: return obj(**kwargs) pt-mc7v1e5q-worker-0 logs: File "/mnt/home/xxxxxxxxx/rlhf/LLMFinetune/llmft/apis/train_sft.py", line 36, in train_sft pt-mc7v1e5q-worker-0 logs: tokenizer, model = build_model(tokenizer, model, model_wrapper) pt-mc7v1e5q-worker-0 logs: File "/mnt/home/xxxxxxxxx/rlhf/LLMFinetune/llmft/models/utils.py", line 69, in build_model pt-mc7v1e5q-worker-0 logs: tokenizer, model = MODELS.build(**model, tokenizer=tokenizer) pt-mc7v1e5q-worker-0 logs: File "/mnt/home/xxxxxxxxx/rlhf/LLMFinetune/llmft/utils/registry.py", line 64, in build pt-mc7v1e5q-worker-0 logs: return obj(**kwargs) pt-mc7v1e5q-worker-0 logs: File "/mnt/home/xxxxxxxxx/rlhf/LLMFinetune/llmft/models/auto_causal_lm.py", line 16, in auto_causal_lm pt-mc7v1e5q-worker-0 logs: model = AutoModelForCausalLM.from_pretrained(model_path, **kwargs) pt-mc7v1e5q-worker-0 logs: File "/mnt/data/conda/envs/megatron_transformers434/lib/python3.9/site-packages/transformers/models/auto/auto_factory.py", line 560, in from_pretrained pt-mc7v1e5q-worker-0 logs: return model_class.from_pretrained( pt-mc7v1e5q-worker-0 logs: File "/mnt/data/conda/envs/megatron_transformers434/lib/python3.9/site-packages/transformers/modeling_utils.py", line 3307, in from_pretrained pt-mc7v1e5q-worker-0 logs: ) = cls._load_pretrained_model( pt-mc7v1e5q-worker-0 logs: File "/mnt/data/conda/envs/megatron_transformers434/lib/python3.9/site-packages/transformers/modeling_utils.py", line 3695, in _load_pretrained_model pt-mc7v1e5q-worker-0 logs: new_error_msgs, offload_index, state_dict_index = _load_state_dict_into_meta_model( pt-mc7v1e5q-worker-0 logs: File "/mnt/data/conda/envs/megatron_transformers434/lib/python3.9/site-packages/transformers/modeling_utils.py", line 741, in _load_state_dict_into_meta_model pt-mc7v1e5q-worker-0 logs: set_module_tensor_to_device(model, param_name, param_device, **set_module_kwargs) pt-mc7v1e5q-worker-0 logs: File "/mnt/data/conda/envs/megatron_transformers434/lib/python3.9/site-packages/accelerate/utils/modeling.py", line 317, in set_module_tensor_to_device pt-mc7v1e5q-worker-0 logs: new_value = value.to(device) pt-mc7v1e5q-worker-0 logs: NotImplementedError: Cannot copy out of meta tensor; no data! ``` ### Expected behavior Run successfully with either old or new env.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27197/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27197/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27196
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27196/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27196/comments
https://api.github.com/repos/huggingface/transformers/issues/27196/events
https://github.com/huggingface/transformers/pull/27196
1,971,423,259
PR_kwDOCUB6oc5eSLdD
27,196
Fix docstring get maskformer resize output image size
{ "login": "wesleylp", "id": 33898112, "node_id": "MDQ6VXNlcjMzODk4MTEy", "avatar_url": "https://avatars.githubusercontent.com/u/33898112?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wesleylp", "html_url": "https://github.com/wesleylp", "followers_url": "https://api.github.com/users/wesleylp/followers", "following_url": "https://api.github.com/users/wesleylp/following{/other_user}", "gists_url": "https://api.github.com/users/wesleylp/gists{/gist_id}", "starred_url": "https://api.github.com/users/wesleylp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wesleylp/subscriptions", "organizations_url": "https://api.github.com/users/wesleylp/orgs", "repos_url": "https://api.github.com/users/wesleylp/repos", "events_url": "https://api.github.com/users/wesleylp/events{/privacy}", "received_events_url": "https://api.github.com/users/wesleylp/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27196). All of your documentation changes will be reflected on that endpoint." ]
1,698
1,698
1,698
CONTRIBUTOR
null
# What does this PR do? Fix `get_maskformer_resize_output_image_size docstring` ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @amyeroberts
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27196/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27196/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27196", "html_url": "https://github.com/huggingface/transformers/pull/27196", "diff_url": "https://github.com/huggingface/transformers/pull/27196.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27196.patch", "merged_at": 1698841575000 }
https://api.github.com/repos/huggingface/transformers/issues/27195
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27195/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27195/comments
https://api.github.com/repos/huggingface/transformers/issues/27195/events
https://github.com/huggingface/transformers/pull/27195
1,971,367,006
PR_kwDOCUB6oc5eR_Zm
27,195
[WhisperForCausalLM] Add WhisperForCausalLM for speculative decoding
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "The failing tests seem to be unrelated:\r\n\r\n```\r\nFAILED tests/trainer/test_trainer.py::TrainerIntegrationWithHubTester::test_push_to_hub_with_saves_each_n_steps - Failed: Timeout >120.0s\r\nUNEXPECTED EXCEPTION: ChunkedEncodingError(ProtocolError('Connection broken: IncompleteRead(3430997229 bytes read, 2742372923 more expected)', IncompleteRead(3430997229 bytes read, 2742372923 more expected)))\r\nFAILED tests/models/marian/test_modeling_marian.py::MarianModelTest::test_save_load_keys_to_ignore_on_save - FileNotFoundError: [Errno 2] No such file or directory: '/tmp/tmpvv597zd7/pytorch_model.bin'\r\nFAILED tests/models/prophetnet/test_modeling_prophetnet.py::ProphetNetModelTest::test_causal_lm_from_pretrained - AssertionError: False is not true\r\nFAILED tests/models/seamless_m4t/test_modeling_seamless_m4t.py::SeamlessM4TGenerationTest::test_speech_generation - Failed: Timeout >120.0s\r\nFAILED tests/models/seamless_m4t/test_modeling_seamless_m4t.py::SeamlessM4TGenerationTest::test_text_generation - AssertionError: Lists differ: [3, 4, 8, 3, 3, 4, 3, 0] != [3, 4, 8, 7, 1, 11, 7, 18, 18, 3, 0, 0, 0, 0, 0, 0,[74 chars]8, 6]\r\n```", "Not exactly sure what's going on with the docs. They appear just fine with the doc-builder for me:\r\n\r\n![Screenshot from 2023-11-01 11-36-05](https://github.com/huggingface/transformers/assets/23423619/eae31080-fa4f-4a4e-bec3-0744c0722206)\r\n", "Merging as I think Joao is off today and changes in assisted generation are quite minimal IMO. @gante would be great if you could nevertheless take a look once back :-) " ]
1,698
1,698
1,698
MEMBER
null
# What does this PR do? This PR enables speculative decoding for all cases where the assistant model is stripped of its encoder weights as they are shared with the teacher model. For now, Distil-Whisper is the main use case here. In addition a `WhisperForCausalLM` is loaded as it didn't exist yet for Distil-Whisper. The following code should therefore be enabled: ```py from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq, AutoModelForCausalLM from datasets import load_dataset import torch import time # load models and processor processor = AutoProcessor.from_pretrained("openai/whisper-large-v2") model = AutoModelForSpeechSeq2Seq.from_pretrained("openai/whisper-large-v2", torch_dtype=torch.float16, low_cpu_mem_usage=True) model.cuda() assistant_model = AutoModelForCausalLM.from_pretrained("patrickvonplaten/whisper-large-v2-32-2", torch_dtype=torch.float16, low_cpu_mem_usage=True) assistant_model.cuda() print(f"Assistant num params compared to teachear {100 * assistant_model.num_parameters() / model.num_parameters()} %.") # load audio file ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") sample = ds[0]["audio"] input_features = processor(sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt").input_features input_features = input_features.to(dtype=torch.float16, device="cuda") # warm-up _ = model.generate(input_features) # generate token ids with teacher start_time = time.time() predicted_ids = model.generate(input_features) print("Time normal", time.time() - start_time) transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True) print(transcription) print(20 * "-") start_time = time.time() predicted_ids = model.generate(input_features, assistant_model=assistant_model) print("Time speculative decoding", time.time() - start_time) transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True) print(transcription) ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27195/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27195/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27195", "html_url": "https://github.com/huggingface/transformers/pull/27195", "diff_url": "https://github.com/huggingface/transformers/pull/27195.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27195.patch", "merged_at": 1698850914000 }
https://api.github.com/repos/huggingface/transformers/issues/27194
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27194/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27194/comments
https://api.github.com/repos/huggingface/transformers/issues/27194/events
https://github.com/huggingface/transformers/pull/27194
1,971,272,366
PR_kwDOCUB6oc5eRqrM
27,194
Fixed clip typo
{ "login": "Asymtode712", "id": 115717746, "node_id": "U_kgDOBuW2cg", "avatar_url": "https://avatars.githubusercontent.com/u/115717746?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Asymtode712", "html_url": "https://github.com/Asymtode712", "followers_url": "https://api.github.com/users/Asymtode712/followers", "following_url": "https://api.github.com/users/Asymtode712/following{/other_user}", "gists_url": "https://api.github.com/users/Asymtode712/gists{/gist_id}", "starred_url": "https://api.github.com/users/Asymtode712/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Asymtode712/subscriptions", "organizations_url": "https://api.github.com/users/Asymtode712/orgs", "repos_url": "https://api.github.com/users/Asymtode712/repos", "events_url": "https://api.github.com/users/Asymtode712/events{/privacy}", "received_events_url": "https://api.github.com/users/Asymtode712/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @Asymtode712, thanks for opening this PR!\r\n\r\nUnfortunately, we can't update the names of layers and modules in a model once its weights have been released, as this would prevent them from being properly loaded into the model after the name change." ]
1,698
1,698
1,698
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #27190 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27194/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27194/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27194", "html_url": "https://github.com/huggingface/transformers/pull/27194", "diff_url": "https://github.com/huggingface/transformers/pull/27194.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27194.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/27193
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27193/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27193/comments
https://api.github.com/repos/huggingface/transformers/issues/27193/events
https://github.com/huggingface/transformers/pull/27193
1,971,179,494
PR_kwDOCUB6oc5eRWsh
27,193
Fix the typos and grammar mistakes in CONTRIBUTING.md.
{ "login": "THEFZNKHAN", "id": 124388165, "node_id": "U_kgDOB2oDRQ", "avatar_url": "https://avatars.githubusercontent.com/u/124388165?v=4", "gravatar_id": "", "url": "https://api.github.com/users/THEFZNKHAN", "html_url": "https://github.com/THEFZNKHAN", "followers_url": "https://api.github.com/users/THEFZNKHAN/followers", "following_url": "https://api.github.com/users/THEFZNKHAN/following{/other_user}", "gists_url": "https://api.github.com/users/THEFZNKHAN/gists{/gist_id}", "starred_url": "https://api.github.com/users/THEFZNKHAN/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/THEFZNKHAN/subscriptions", "organizations_url": "https://api.github.com/users/THEFZNKHAN/orgs", "repos_url": "https://api.github.com/users/THEFZNKHAN/repos", "events_url": "https://api.github.com/users/THEFZNKHAN/events{/privacy}", "received_events_url": "https://api.github.com/users/THEFZNKHAN/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@amyeroberts \r\nPlease review my PR\r\nand if any things needs to update or fix then tell me 😊.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27193). All of your documentation changes will be reflected on that endpoint." ]
1,698
1,698
1,698
CONTRIBUTOR
null
# What does this PR do? Fix the typos and grammar mistakes in CONTRIBUTING.md. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27193/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27193/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27193", "html_url": "https://github.com/huggingface/transformers/pull/27193", "diff_url": "https://github.com/huggingface/transformers/pull/27193.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27193.patch", "merged_at": 1698842543000 }
https://api.github.com/repos/huggingface/transformers/issues/27192
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27192/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27192/comments
https://api.github.com/repos/huggingface/transformers/issues/27192/events
https://github.com/huggingface/transformers/pull/27192
1,971,117,327
PR_kwDOCUB6oc5eRJQh
27,192
fix docstring in get_maskformer_resize_output_image_size
{ "login": "wesleylp", "id": 33898112, "node_id": "MDQ6VXNlcjMzODk4MTEy", "avatar_url": "https://avatars.githubusercontent.com/u/33898112?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wesleylp", "html_url": "https://github.com/wesleylp", "followers_url": "https://api.github.com/users/wesleylp/followers", "following_url": "https://api.github.com/users/wesleylp/following{/other_user}", "gists_url": "https://api.github.com/users/wesleylp/gists{/gist_id}", "starred_url": "https://api.github.com/users/wesleylp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wesleylp/subscriptions", "organizations_url": "https://api.github.com/users/wesleylp/orgs", "repos_url": "https://api.github.com/users/wesleylp/repos", "events_url": "https://api.github.com/users/wesleylp/events{/privacy}", "received_events_url": "https://api.github.com/users/wesleylp/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,698
1,698
1,698
CONTRIBUTOR
null
# What does this PR do? Fixes docstring in `get_maskformer_resize_output_image_size`. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @amyeroberts @rafaelpadilla
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27192/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27192/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27192", "html_url": "https://github.com/huggingface/transformers/pull/27192", "diff_url": "https://github.com/huggingface/transformers/pull/27192.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27192.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/27191
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27191/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27191/comments
https://api.github.com/repos/huggingface/transformers/issues/27191/events
https://github.com/huggingface/transformers/pull/27191
1,971,042,749
PR_kwDOCUB6oc5eQ42Y
27,191
fixing docstring in get_resize_output_image_size function
{ "login": "wesleylp", "id": 33898112, "node_id": "MDQ6VXNlcjMzODk4MTEy", "avatar_url": "https://avatars.githubusercontent.com/u/33898112?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wesleylp", "html_url": "https://github.com/wesleylp", "followers_url": "https://api.github.com/users/wesleylp/followers", "following_url": "https://api.github.com/users/wesleylp/following{/other_user}", "gists_url": "https://api.github.com/users/wesleylp/gists{/gist_id}", "starred_url": "https://api.github.com/users/wesleylp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wesleylp/subscriptions", "organizations_url": "https://api.github.com/users/wesleylp/orgs", "repos_url": "https://api.github.com/users/wesleylp/repos", "events_url": "https://api.github.com/users/wesleylp/events{/privacy}", "received_events_url": "https://api.github.com/users/wesleylp/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27191). All of your documentation changes will be reflected on that endpoint." ]
1,698
1,698
1,698
CONTRIBUTOR
null
# What does this PR do? Fixes #27185 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @amyeroberts @rafaelpadilla
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27191/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27191/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27191", "html_url": "https://github.com/huggingface/transformers/pull/27191", "diff_url": "https://github.com/huggingface/transformers/pull/27191.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27191.patch", "merged_at": 1698842561000 }
https://api.github.com/repos/huggingface/transformers/issues/27190
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27190/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27190/comments
https://api.github.com/repos/huggingface/transformers/issues/27190/events
https://github.com/huggingface/transformers/issues/27190
1,971,042,371
I_kwDOCUB6oc51e7hD
27,190
CLIP Typo
{ "login": "fabfish", "id": 43961456, "node_id": "MDQ6VXNlcjQzOTYxNDU2", "avatar_url": "https://avatars.githubusercontent.com/u/43961456?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fabfish", "html_url": "https://github.com/fabfish", "followers_url": "https://api.github.com/users/fabfish/followers", "following_url": "https://api.github.com/users/fabfish/following{/other_user}", "gists_url": "https://api.github.com/users/fabfish/gists{/gist_id}", "starred_url": "https://api.github.com/users/fabfish/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fabfish/subscriptions", "organizations_url": "https://api.github.com/users/fabfish/orgs", "repos_url": "https://api.github.com/users/fabfish/repos", "events_url": "https://api.github.com/users/fabfish/events{/privacy}", "received_events_url": "https://api.github.com/users/fabfish/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I would like to fix it. Please assign me", "Hi @fabfish, thanks for reporting! \r\n\r\nUnfortunately, we can't update the names of layers and modules in a model once its weights have been released, as this would prevent them from being properly loaded into the model after the name change. ", "> Hi @fabfish, thanks for reporting!\r\n> \r\n> Unfortunately, we can't update the names of layers and modules in a model once its weights have been released, as this would prevent them from being properly loaded into the model after the name change.\r\n\r\nThat's unfortunate, but I get it. Thank you for your response!" ]
1,698
1,698
1,698
NONE
null
Hi, I just found a typo in the CLIP model at https://github.com/huggingface/transformers/blob/50378cbf6c1fd8717a74b36c352f57f9a73e7282/src/transformers/models/clip/modeling_clip.py#L817C26-L817C26 where the "layernorm" is misspelled as "layrnorm" in "pre_layrnorm" but correct in "post_layernorm", guess it would be better to fix it in the code and pretrained weights to keep consistency.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27190/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27190/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27189
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27189/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27189/comments
https://api.github.com/repos/huggingface/transformers/issues/27189/events
https://github.com/huggingface/transformers/pull/27189
1,971,035,399
PR_kwDOCUB6oc5eQ3Mr
27,189
Add `persistent_workers` parameter to `TrainingArguments`
{ "login": "Sorrow321", "id": 20703486, "node_id": "MDQ6VXNlcjIwNzAzNDg2", "avatar_url": "https://avatars.githubusercontent.com/u/20703486?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Sorrow321", "html_url": "https://github.com/Sorrow321", "followers_url": "https://api.github.com/users/Sorrow321/followers", "following_url": "https://api.github.com/users/Sorrow321/following{/other_user}", "gists_url": "https://api.github.com/users/Sorrow321/gists{/gist_id}", "starred_url": "https://api.github.com/users/Sorrow321/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Sorrow321/subscriptions", "organizations_url": "https://api.github.com/users/Sorrow321/orgs", "repos_url": "https://api.github.com/users/Sorrow321/repos", "events_url": "https://api.github.com/users/Sorrow321/events{/privacy}", "received_events_url": "https://api.github.com/users/Sorrow321/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks @Sorrow321 🤗 " ]
1,698
1,701
1,701
CONTRIBUTOR
null
# What does this PR do? This PR gives the users the ability to pass `persistent_workers` parameters as `TrainingArguments` parameter to `Trainer`. Without this PR, you have to inherit from `Trainer` class to alter the behavior of `get_train_dataloader` function. This parameters does the [following thing](https://pytorch.org/docs/stable/data.html): > If True, the data loader will not shut down the worker processes after a dataset has been consumed once. This allows to maintain the workers Dataset instances alive. [Explanation](https://discuss.pytorch.org/t/what-are-the-dis-advantages-of-persistent-workers/102110/2?u=ifedorov): > With this option to false, every time your code hits a line line for sample in dataloader:, it will create a brand new set of workers to do this loading and will kill them on exit. > Meaning that if you have multiple dataloaders, the workers will be killed when you are done with one instantly. > > If you make them persist, these workers will stay around (with their state) waiting for another call into that dataloader. > > Setting this to True will improve performances when you call into the dataloader multiple times in a row (as creating the workers is expensive). But it also means that the dataloader will have some persistent state even when it is not used (which can use some RAM depending on your dataset). Fixes [#27058](https://github.com/huggingface/transformers/issues/27058) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). It adds description of the parameter to the docs as well. - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? Yes - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. https://github.com/huggingface/transformers/issues/27058 - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). Yes - [ ] Did you write any new necessary tests? No ## Who can review? - trainer: @muellerzr @pacman100
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27189/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27189/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27189", "html_url": "https://github.com/huggingface/transformers/pull/27189", "diff_url": "https://github.com/huggingface/transformers/pull/27189.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27189.patch", "merged_at": 1701672212000 }
https://api.github.com/repos/huggingface/transformers/issues/27188
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27188/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27188/comments
https://api.github.com/repos/huggingface/transformers/issues/27188/events
https://github.com/huggingface/transformers/issues/27188
1,971,011,385
I_kwDOCUB6oc51ez85
27,188
RuntimeError: Failed to import transformers.pipelines because of the following error : cannot import name 'Memory' from 'joblib' (unknown location)
{ "login": "tasmiatasrin", "id": 48873597, "node_id": "MDQ6VXNlcjQ4ODczNTk3", "avatar_url": "https://avatars.githubusercontent.com/u/48873597?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tasmiatasrin", "html_url": "https://github.com/tasmiatasrin", "followers_url": "https://api.github.com/users/tasmiatasrin/followers", "following_url": "https://api.github.com/users/tasmiatasrin/following{/other_user}", "gists_url": "https://api.github.com/users/tasmiatasrin/gists{/gist_id}", "starred_url": "https://api.github.com/users/tasmiatasrin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tasmiatasrin/subscriptions", "organizations_url": "https://api.github.com/users/tasmiatasrin/orgs", "repos_url": "https://api.github.com/users/tasmiatasrin/repos", "events_url": "https://api.github.com/users/tasmiatasrin/events{/privacy}", "received_events_url": "https://api.github.com/users/tasmiatasrin/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Related comment on another issue: https://github.com/huggingface/transformers/issues/14773#issuecomment-1787714316", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,698
1,702
1,702
NONE
null
### System Info Hello, I installed transformers from the conda channel huggingface in linux. The Transformers version installed: "4.32.1". FYI, Python version: 3.9.15, Conda version: 4.10.3, Conda build version:3.21.5 . ### Who can help? I tried to check whether the installation is working by running the python command given in the huggingface transformers installation page, I am getting the following error: Traceback (most recent call last): File "/home/tasmia/anaconda3/envs/minigrid_dev/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1130, in _get_module return importlib.import_module("." + module_name, self.__name__) File "/home/tasmia/anaconda3/envs/minigrid_dev/lib/python3.9/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1030, in _gcd_import File "<frozen importlib._bootstrap>", line 1007, in _find_and_load File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 680, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 850, in exec_module File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed File "/home/tasmia/anaconda3/envs/minigrid_dev/lib/python3.9/site-packages/transformers/pipelines/__init__.py", line 60, in <module> from .document_question_answering import DocumentQuestionAnsweringPipeline File "/home/tasmia/anaconda3/envs/minigrid_dev/lib/python3.9/site-packages/transformers/pipelines/document_question_answering.py", line 29, in <module> from .question_answering import select_starts_ends File "/home/tasmia/anaconda3/envs/minigrid_dev/lib/python3.9/site-packages/transformers/pipelines/question_answering.py", line 9, in <module> from ..data import SquadExample, SquadFeatures, squad_convert_examples_to_features File "/home/tasmia/anaconda3/envs/minigrid_dev/lib/python3.9/site-packages/transformers/data/__init__.py", line 26, in <module> from .metrics import glue_compute_metrics, xnli_compute_metrics File "/home/tasmia/anaconda3/envs/minigrid_dev/lib/python3.9/site-packages/transformers/data/metrics/__init__.py", line 20, in <module> from sklearn.metrics import f1_score, matthews_corrcoef File "/home/tasmia/anaconda3/envs/minigrid_dev/lib/python3.9/site-packages/sklearn/__init__.py", line 83, in <module> from .base import clone File "/home/tasmia/anaconda3/envs/minigrid_dev/lib/python3.9/site-packages/sklearn/base.py", line 19, in <module> from .utils import _IS_32BIT File "/home/tasmia/anaconda3/envs/minigrid_dev/lib/python3.9/site-packages/sklearn/utils/__init__.py", line 19, in <module> from . import _joblib, metadata_routing File "/home/tasmia/anaconda3/envs/minigrid_dev/lib/python3.9/site-packages/sklearn/utils/_joblib.py", line 8, in <module> from joblib import ( ImportError: cannot import name 'Memory' from 'joblib' (unknown location) The above exception was the direct cause of the following exception: Traceback (most recent call last): File "<string>", line 1, in <module> File "<frozen importlib._bootstrap>", line 1055, in _handle_fromlist File "/home/tasmia/anaconda3/envs/minigrid_dev/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1120, in __getattr__ module = self._get_module(self._class_to_module[name]) File "/home/tasmia/anaconda3/envs/minigrid_dev/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1132, in _get_module raise RuntimeError( RuntimeError: Failed to import transformers.pipelines because of the following error (look up to see its traceback): cannot import name 'Memory' from 'joblib' (unknown location) Also, getting the same error for importing transformers.trainer while using image pretraining from https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-pretraining. Is it any issue related to my installation? how to fix this? any help would be truly appreciated. ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction While trying to run this following code from: https://github.com/huggingface/transformers/blob/main/examples/pytorch/image-pretraining/run_mim.py I am going to use my own datasets with images. ### Expected behavior Traceback (most recent call last): File "/home/tasmia/anaconda3/envs/minigrid_dev/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1130, in _get_module return importlib.import_module("." + module_name, self.__name__) File "/home/tasmia/anaconda3/envs/minigrid_dev/lib/python3.9/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1030, in _gcd_import File "<frozen importlib._bootstrap>", line 1007, in _find_and_load File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 680, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 850, in exec_module File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed File "/home/tasmia/anaconda3/envs/minigrid_dev/lib/python3.9/site-packages/transformers/trainer.py", line 59, in <module> from .data.data_collator import DataCollator, DataCollatorWithPadding, default_data_collator File "/home/tasmia/anaconda3/envs/minigrid_dev/lib/python3.9/site-packages/transformers/data/__init__.py", line 26, in <module> from .metrics import glue_compute_metrics, xnli_compute_metrics File "/home/tasmia/anaconda3/envs/minigrid_dev/lib/python3.9/site-packages/transformers/data/metrics/__init__.py", line 20, in <module> from sklearn.metrics import f1_score, matthews_corrcoef File "/home/tasmia/anaconda3/envs/minigrid_dev/lib/python3.9/site-packages/sklearn/__init__.py", line 83, in <module> from .base import clone File "/home/tasmia/anaconda3/envs/minigrid_dev/lib/python3.9/site-packages/sklearn/base.py", line 19, in <module> from .utils import _IS_32BIT File "/home/tasmia/anaconda3/envs/minigrid_dev/lib/python3.9/site-packages/sklearn/utils/__init__.py", line 19, in <module> from . import _joblib, metadata_routing File "/home/tasmia/anaconda3/envs/minigrid_dev/lib/python3.9/site-packages/sklearn/utils/_joblib.py", line 8, in <module> from joblib import ( ImportError: cannot import name 'Memory' from 'joblib' (unknown location) The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/home/tasmia/minigrid_agent/Minigrid-master/minigrid_img_trainer/run_mim.py", line 29, in <module> from transformers import ( File "<frozen importlib._bootstrap>", line 1055, in _handle_fromlist File "/home/tasmia/anaconda3/envs/minigrid_dev/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1120, in __getattr__ module = self._get_module(self._class_to_module[name]) File "/home/tasmia/anaconda3/envs/minigrid_dev/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1132, in _get_module raise RuntimeError( RuntimeError: Failed to import transformers.trainer because of the following error (look up to see its traceback): cannot import name 'Memory' from 'joblib' (unknown location)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27188/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27188/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27187
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27187/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27187/comments
https://api.github.com/repos/huggingface/transformers/issues/27187/events
https://github.com/huggingface/transformers/issues/27187
1,970,923,189
I_kwDOCUB6oc51eea1
27,187
LlamaTokenizer is unbearably slow when dealing with large strings
{ "login": "LuciferianInk", "id": 94832312, "node_id": "U_kgDOBacGuA", "avatar_url": "https://avatars.githubusercontent.com/u/94832312?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LuciferianInk", "html_url": "https://github.com/LuciferianInk", "followers_url": "https://api.github.com/users/LuciferianInk/followers", "following_url": "https://api.github.com/users/LuciferianInk/following{/other_user}", "gists_url": "https://api.github.com/users/LuciferianInk/gists{/gist_id}", "starred_url": "https://api.github.com/users/LuciferianInk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LuciferianInk/subscriptions", "organizations_url": "https://api.github.com/users/LuciferianInk/orgs", "repos_url": "https://api.github.com/users/LuciferianInk/repos", "events_url": "https://api.github.com/users/LuciferianInk/events{/privacy}", "received_events_url": "https://api.github.com/users/LuciferianInk/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Alright, well it seems like the clear solution is to use a batched tokenization strategy. Here's what I'm doing now (though I'm not sure if this is the best method):\r\n```\r\nblock_size = 1024\r\nbatch_size = 100000\r\ncontent = <my_very_large_string>\r\n\r\nbatches = [\r\n content[i : i + batch_size] for i in range(0, len(content), batch_size)\r\n]\r\ntokenized_batches = []\r\nwith tqdm(total=len(batches)) as pbar:\r\n for batch in batches:\r\n tokenized = tokenizer(\r\n batch,\r\n max_length=block_size,\r\n stride=stride,\r\n padding=\"max_length\",\r\n return_overflowing_tokens=True,\r\n truncation=True,\r\n return_tensors=\"np\",\r\n )\r\n tokenized_batches.append(tokenized[\"input_ids\"])\r\n pbar.update(1)\r\n\r\ntokens = np.concatenate(tokenized_batches)\r\n```\r\nI would appreciate if anyone could let me know if there is a more-optimal way to do this batching. I can already see a problem in the current implementation, where each batch is going to end with a bunch of padding that wouldn't be there otherwise (if we tokenized the entire string in one batch).", "Hey, this seems to be related to #25873, but yes you should use the `tokenizer.batch_encode` (or just `encode` which supports batch of strings for the fast version, while the slow should be alright. \r\nI can still try to run some benchmarks on whether or not the latest updates have significantly slowed down the encoding / decoding (this is on my todo list!) \r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,698
1,702
1,702
NONE
null
### System Info - `transformers` version: 4.34.1 - Platform: Linux-6.5.9-arch2-1-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.17.3 - Safetensors version: 0.4.0 - Accelerate version: 0.24.1 - Accelerate config: not found - PyTorch version (GPU?): 2.1.0+cu121 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: true - Using distributed or parallel set-up in script?: false ### Who can help? @ArthurZucker @younesbelkada ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Here is a simplified example of what I'm trying to do: ``` from transformers import AutoTokenizer text = <very_large_string> tokenizer = AutoTokenizer.from_pretrained( "mistralai/Mistral-7B-v0.1", padding="max_length", padding_side="left", return_overflowing_tokens=True, truncation=True, ) tokenized = tokenizer( text, max_length=1024, stride=64, padding="max_length", return_overflowing_tokens=True, truncation=True, return_tensors="np", ) ``` I also tested the `PY007/TinyLlama-1.1B-intermediate-step-480k-1T` implementation of the `LlamaTokenizer`, and it has the same problem. Tokenization is really, really slow. [Here is a link to my tokenization code](https://github.com/LuciferianInk/aigen/blob/9bc76f074053f6e5907cdf2072b70d7f4eac383a/aigen/TokenDataset.py#L131), and [here is a link to the Dockerfile that represents my entire training environment](https://github.com/0-5788719150923125/vtx/blob/main/Dockerfile.x64). ### Expected behavior When tokenizing very large strings with the `LlamaTokenizerFast`, progress is unbearably slow. Even worse, progress becomes slower and slower as the length of a string increases. For example: **Test String 1** (LlamaTokenizer) 768 batches, 1024 tokens each 4 seconds total 0.005 seconds/batch average **Test String 2** (LlamaTokenizer) 141473 batches, 1024 tokens each 9600 seconds total 0.068 seconds/batch average By comparison, if I use the `GPTNeoXTokenizerFast` tokenizer: **Test String 1** (GPTNeoXTokenizer) 68 batches, 2048 tokens each 0.06 seconds total 0.0009 seconds/batch average For every other tokenizer I've used, performance is tolerable. `LlamaTokenizer`, however, has abysmal performance, and I'm not sure why. Some things I've observed: - Other tokenizers will consume a ton of system memory, as they tokenize large strings. `LlamaTokenizer` doesn't do that. Memory usage stays very low, and constant; it doesn't grow at all. - Adjusting the `TOKENIZERS_PARALLELISM` environment variable seems to have little to no effect. - `use_fast=False` does not fix my problem. I'm not sure what to try at this point, and I would appreciate any advice you have. This is an incredibly frustrating problem. Whereas my standard dataset would typically take no longer than 30 minutes to tokenize, this one is going to take a full 24+ hours, at the rate its going now! Clearly, something is wrong; I'm just not sure if it's my environment, or a bug. Thanks in advance for any assistance you can provide.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27187/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27187/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27186
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27186/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27186/comments
https://api.github.com/repos/huggingface/transformers/issues/27186/events
https://github.com/huggingface/transformers/issues/27186
1,970,919,016
I_kwDOCUB6oc51edZo
27,186
Speculative decoding functionality
{ "login": "domgri", "id": 47460259, "node_id": "MDQ6VXNlcjQ3NDYwMjU5", "avatar_url": "https://avatars.githubusercontent.com/u/47460259?v=4", "gravatar_id": "", "url": "https://api.github.com/users/domgri", "html_url": "https://github.com/domgri", "followers_url": "https://api.github.com/users/domgri/followers", "following_url": "https://api.github.com/users/domgri/following{/other_user}", "gists_url": "https://api.github.com/users/domgri/gists{/gist_id}", "starred_url": "https://api.github.com/users/domgri/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/domgri/subscriptions", "organizations_url": "https://api.github.com/users/domgri/orgs", "repos_url": "https://api.github.com/users/domgri/repos", "events_url": "https://api.github.com/users/domgri/events{/privacy}", "received_events_url": "https://api.github.com/users/domgri/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Interesting! @gante WDYT? ", "Hi @domgri 👋 \r\n\r\nI'm very interested in having DeepMind's Speculative decoding when `do_sample=True`, in `assisted_generation`! It should only need a few extra if/elses, to handle the clever token acceptance strategy they came up with :)\r\n\r\nAs for the others, since they don't seem to bring additional benefits, I would like to avoid adding them -- maintaining different generation methods takes up a significant amount of time, so they need to bring benefits for us to justify adding them to the codebase. \r\n\r\nFinally, FYI, I'm trying to design a strategy for issues like this one: some way to leverage the Hub to add new generation methods than don't need to be approved (or maintained) by the transformers team 🙌 ", "Thanks for a quick and encouraging feedback!\r\n\r\nThen for now I will focus on DeepMind's Speculative decoding and see how it goes 🙌. Though, will be interested to see the outcome of strategy question as well 👍.\r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,698
1,702
1,702
NONE
null
### Feature request Hey 👋 Idea is to add speculative decoding functionality in assisted generation code path. There are several similar ways how that can be achieved. Related papers: * [Blockwise Parallel Decoding](https://proceedings.neurips.cc/paper/2018/file/c4127b9194fe8562c64dc0f5bf2c93bc-Paper.pdf), by Google Brain * [Speculative Sampling](https://arxiv.org/abs/2302.01318), by DeepMind * [Fast Inference from Transformers via Speculative Decoding](https://arxiv.org/pdf/2211.17192.pdf) ### Motivation - Shows promising inference time improvements. - Would enable faster expermentation with various models. - Could become a fundamental place for more elaborate experimentation of speculative decoding. Related issues/discussions: #26565 Related blog post explaining current assisted generation implementation and its (improvements and) limitations: [here](https://huggingface.co/blog/assisted-generation) by @gante ### Your contribution I would like to give an attempt and rase a PR for this (e.g. for Google Brain implementation). Although, might need some help with: - Deciding which approach to implement - Decision for code architecture - either combine with assisted generation, partially combine or implement separately - Motivation - maybe I am only one interested in this idea 😊 Any feedback would be greatly appreciated 🙇‍♂️
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27186/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27186/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27185
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27185/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27185/comments
https://api.github.com/repos/huggingface/transformers/issues/27185/events
https://github.com/huggingface/transformers/issues/27185
1,970,851,203
I_kwDOCUB6oc51eM2D
27,185
Small issue in docstring found
{ "login": "rafaelpadilla", "id": 31217453, "node_id": "MDQ6VXNlcjMxMjE3NDUz", "avatar_url": "https://avatars.githubusercontent.com/u/31217453?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rafaelpadilla", "html_url": "https://github.com/rafaelpadilla", "followers_url": "https://api.github.com/users/rafaelpadilla/followers", "following_url": "https://api.github.com/users/rafaelpadilla/following{/other_user}", "gists_url": "https://api.github.com/users/rafaelpadilla/gists{/gist_id}", "starred_url": "https://api.github.com/users/rafaelpadilla/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rafaelpadilla/subscriptions", "organizations_url": "https://api.github.com/users/rafaelpadilla/orgs", "repos_url": "https://api.github.com/users/rafaelpadilla/repos", "events_url": "https://api.github.com/users/rafaelpadilla/events{/privacy}", "received_events_url": "https://api.github.com/users/rafaelpadilla/received_events", "type": "User", "site_admin": false }
[ { "id": 1990918270, "node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw", "url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue", "name": "Good First Issue", "color": "bbf794", "default": false, "description": "" } ]
closed
false
null
[]
[ "@rafaelpadilla can I work on that?" ]
1,698
1,698
1,698
CONTRIBUTOR
null
function `get_resize_output_image_size(...)` is being `# Copied from` in many models [code](https://github.com/search?q=repo%3Ahuggingface%2Ftransformers+get_resize_output_image_size&type=code). However, the docstring has some issues like the `input_image` argument is missing and `image_size` is in docstring but never used. Example found in [image_processing_deformable_detr.py](https://github.com/huggingface/transformers/blob/6b7f8ff1f3db0b21afbedb322770c292bc8dedae/src/transformers/models/deformable_detr/image_processing_deformable_detr.py#L126) ```python def get_resize_output_image_size( input_image: np.ndarray, size: Union[int, Tuple[int, int], List[int]], max_size: Optional[int] = None, input_data_format: Optional[Union[str, ChannelDimension]] = None, ) -> Tuple[int, int]: """ Computes the output image size given the input image size and the desired output size. If the desired output size is a tuple or list, the output image size is returned as is. If the desired output size is an integer, the output image size is computed by keeping the aspect ratio of the input image size. Args: image_size (`Tuple[int, int]`): The input image size. size (`int`): The desired output size. max_size (`int`, *optional*): The maximum allowed output size. input_data_format (`ChannelDimension` or `str`, *optional*): The channel dimension format of the input image. If not provided, it will be inferred from the input image. """ ..... ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27185/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27185/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27184
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27184/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27184/comments
https://api.github.com/repos/huggingface/transformers/issues/27184/events
https://github.com/huggingface/transformers/pull/27184
1,970,845,954
PR_kwDOCUB6oc5eQNUg
27,184
Unify warning styles for better readability
{ "login": "oneonlee", "id": 73745836, "node_id": "MDQ6VXNlcjczNzQ1ODM2", "avatar_url": "https://avatars.githubusercontent.com/u/73745836?v=4", "gravatar_id": "", "url": "https://api.github.com/users/oneonlee", "html_url": "https://github.com/oneonlee", "followers_url": "https://api.github.com/users/oneonlee/followers", "following_url": "https://api.github.com/users/oneonlee/following{/other_user}", "gists_url": "https://api.github.com/users/oneonlee/gists{/gist_id}", "starred_url": "https://api.github.com/users/oneonlee/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/oneonlee/subscriptions", "organizations_url": "https://api.github.com/users/oneonlee/orgs", "repos_url": "https://api.github.com/users/oneonlee/repos", "events_url": "https://api.github.com/users/oneonlee/events{/privacy}", "received_events_url": "https://api.github.com/users/oneonlee/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27184). All of your documentation changes will be reflected on that endpoint." ]
1,698
1,698
1,698
CONTRIBUTOR
null
# What does this PR do? Quick unify warning styles of [examples/pytorch](https://github.com/huggingface/transformers/tree/main/examples/pytorch) for better readability Example: - Before - `10/31/2023 15:41:34 - WARNING - __main__ - Process rank: 0, device: cuda:0, n_gpu: 6distributed training: False, 16-bits training: False` - After - `10/31/2023 15:41:34 - WARNING - __main__ - Process rank: 0, device: cuda:0, n_gpu: 6, distributed training: False, 16-bits training: False` <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @ArthurZucker @amyeroberts @sanchit-gandhi <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27184/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27184/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27184", "html_url": "https://github.com/huggingface/transformers/pull/27184", "diff_url": "https://github.com/huggingface/transformers/pull/27184.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27184.patch", "merged_at": 1698775934000 }
https://api.github.com/repos/huggingface/transformers/issues/27183
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27183/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27183/comments
https://api.github.com/repos/huggingface/transformers/issues/27183/events
https://github.com/huggingface/transformers/issues/27183
1,970,839,170
I_kwDOCUB6oc51eJ6C
27,183
Model loading on meta device
{ "login": "RonanKMcGovern", "id": 78278410, "node_id": "MDQ6VXNlcjc4Mjc4NDEw", "avatar_url": "https://avatars.githubusercontent.com/u/78278410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/RonanKMcGovern", "html_url": "https://github.com/RonanKMcGovern", "followers_url": "https://api.github.com/users/RonanKMcGovern/followers", "following_url": "https://api.github.com/users/RonanKMcGovern/following{/other_user}", "gists_url": "https://api.github.com/users/RonanKMcGovern/gists{/gist_id}", "starred_url": "https://api.github.com/users/RonanKMcGovern/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RonanKMcGovern/subscriptions", "organizations_url": "https://api.github.com/users/RonanKMcGovern/orgs", "repos_url": "https://api.github.com/users/RonanKMcGovern/repos", "events_url": "https://api.github.com/users/RonanKMcGovern/events{/privacy}", "received_events_url": "https://api.github.com/users/RonanKMcGovern/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "hi @RonanKMcGovern \r\nthanks for your issue\r\nI ran:\r\n\r\n```python\r\nimport torch\r\nfrom transformers import AutoModelForCausalLM\r\n\r\nmodel_id = \"tiiuae/falcon-7b\"\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(\r\n model_id,\r\n load_in_4bit=True,\r\n torch_dtype=torch.bfloat16,\r\n)\r\n\r\nfor n, p in model.named_parameters():\r\n if p.device.type == \"meta\":\r\n print(f\"{n} is on meta!\")\r\n```\r\n\r\nand I can confirm I had no parameter on the meta device while having the same error message you shared. Perhaps it is a bug at accelerate. Can you file an issue there and use this small handy snippet?", "done, thanks: https://github.com/huggingface/accelerate/issues/2103" ]
1,698
1,698
1,698
NONE
null
### System Info A6000 GPU on runpod. Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points. - `transformers` version: 4.35.0.dev0 - Platform: Linux-5.4.0-153-generic-x86_64-with-glibc2.35 - Python version: 3.10.6 - Huggingface_hub version: 0.17.3 - Safetensors version: 0.4.0 - Accelerate version: 0.25.0.dev0 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1+cu118 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes A6000 - Using distributed or parallel set-up in script?: Only one GPU, so shouldn't be relevant, but somehow the model is getting loaded to cpu at least in part. ### Who can help? @ArthurZucker @younesbelkada ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ``` !pip install -U -q git+https://github.com/huggingface/transformers.git !pip install -q -U bitsandbytes !pip install -q -U git+https://github.com/huggingface/peft.git !pip install -q -U git+https://github.com/huggingface/accelerate.git !pip install -q datasets !pip install -q -U scipy !pip install -U flash-attn -q !pip install -q -U trl from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig, AutoConfig import torch model_id = "tiiuae/falcon-7b" bnb_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_use_double_quant=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype=torch.bfloat16 ) config = AutoConfig.from_pretrained(model_id) config.max_position_embeddings = 4096 # (input + output) tokens can now be up to 4096 model = AutoModelForCausalLM.from_pretrained( model_id, config=config, quantization_config=bnb_config, # rope_scaling={"type": "linear", "factor": 2.0}, device_map='auto', # trust_remote_code=True, torch_dtype=torch.bfloat16, use_flash_attention_2=True, # works with Llama models and reduces memory reqs cache_dir=cache_dir) ``` ### Expected behavior I would expect this model to easily fit on an A6000 with 48GB of VRAM. Instead, I get this error/notification: ``` WARNING:root:Some parameters are on the meta device device because they were offloaded to the . WARNING:root:Some parameters are on the meta device device because they were offloaded to the cpu/disk. ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27183/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27183/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27182
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27182/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27182/comments
https://api.github.com/repos/huggingface/transformers/issues/27182/events
https://github.com/huggingface/transformers/pull/27182
1,970,690,894
PR_kwDOCUB6oc5ePrZ2
27,182
Fix dropout in `StarCoder`
{ "login": "susnato", "id": 56069179, "node_id": "MDQ6VXNlcjU2MDY5MTc5", "avatar_url": "https://avatars.githubusercontent.com/u/56069179?v=4", "gravatar_id": "", "url": "https://api.github.com/users/susnato", "html_url": "https://github.com/susnato", "followers_url": "https://api.github.com/users/susnato/followers", "following_url": "https://api.github.com/users/susnato/following{/other_user}", "gists_url": "https://api.github.com/users/susnato/gists{/gist_id}", "starred_url": "https://api.github.com/users/susnato/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/susnato/subscriptions", "organizations_url": "https://api.github.com/users/susnato/orgs", "repos_url": "https://api.github.com/users/susnato/repos", "events_url": "https://api.github.com/users/susnato/events{/privacy}", "received_events_url": "https://api.github.com/users/susnato/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27182). All of your documentation changes will be reflected on that endpoint." ]
1,698
1,698
1,698
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This PR fixes the dropout in the `GPTBigCodeFlashAttention2` module. Thanks to @sohamparikh94 for catching this. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 --> cc : @amyeroberts , @younesbelkada
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27182/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27182/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27182", "html_url": "https://github.com/huggingface/transformers/pull/27182", "diff_url": "https://github.com/huggingface/transformers/pull/27182.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27182.patch", "merged_at": 1698767098000 }
https://api.github.com/repos/huggingface/transformers/issues/27181
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27181/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27181/comments
https://api.github.com/repos/huggingface/transformers/issues/27181/events
https://github.com/huggingface/transformers/issues/27181
1,970,679,822
I_kwDOCUB6oc51djAO
27,181
`skip_memory_metrics=False` breaks training loop when on MPS device
{ "login": "Datamance", "id": 8699411, "node_id": "MDQ6VXNlcjg2OTk0MTE=", "avatar_url": "https://avatars.githubusercontent.com/u/8699411?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Datamance", "html_url": "https://github.com/Datamance", "followers_url": "https://api.github.com/users/Datamance/followers", "following_url": "https://api.github.com/users/Datamance/following{/other_user}", "gists_url": "https://api.github.com/users/Datamance/gists{/gist_id}", "starred_url": "https://api.github.com/users/Datamance/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Datamance/subscriptions", "organizations_url": "https://api.github.com/users/Datamance/orgs", "repos_url": "https://api.github.com/users/Datamance/repos", "events_url": "https://api.github.com/users/Datamance/events{/privacy}", "received_events_url": "https://api.github.com/users/Datamance/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Update to the related issue, which adds some insight: adding an `on_step_begin` cache-clearing callback like so:\r\n\r\n```python\r\nclass MpsCacheClearCallback(transformers.TrainerCallback):\r\n def on_step_begin(\r\n self,\r\n args: TrainingArguments,\r\n state: TrainerState,\r\n control: TrainerControl,\r\n **kwargs,\r\n ):\r\n gc.collect()\r\n torch.mps.empty_cache()\r\n gc.collect()\r\n```\r\n\r\ndoes seem to _transiently_ reduce the memory use, however, the ceiling does seem to keep moving up as the training run progresses. My takeaway from this is that it's not a memory leak issue - the Trainer is deliberately storing some tensors that I don't want. This model is a binary classifier; there's no way that the loss or metrics should take up so much space. Which brings me back to the original reason for filing this bug - in order to see what the holdover/cruft is, I need more visibility into memory metrics. Again, I'm open to workarounds as well!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "gently pinging @muellerzr " ]
1,698
1,706
null
NONE
null
### System Info Here's my environment info: - `transformers` version: 4.34.0 - Platform: macOS-14.0-arm64-arm-64bit - Python version: 3.11.5 - Huggingface_hub version: 0.17.3 - Safetensors version: 0.4.0 - Accelerate version: 0.23.0 - Accelerate config: not found - PyTorch version (GPU?): 2.1.0 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): 0.7.4 (cpu) - Jax version: 0.4.18 - JaxLib version: 0.4.18 - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: N/A ### Who can help? @muellerzr @pacman100 ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction 1. Create a Trainer, pass `skip_memory_metrics=False` 2. Receive `ValueError: No available GPU device found!` The problem comes from [this block of code](https://github.com/huggingface/transformers/blob/main/src/transformers/trainer_utils.py#L496) in `TrainerMemoryTracker` that doesn't check for `torch.mps`. ### Expected behavior Unless there's some special case for MPS or apple silicon (in which case, it should be documented), I'd like to be able to log/profile memory metrics with the tooling here. Specifically what I am trying to do is track down what looks like a memory leak in `_inner_training_loop`. Even with a batch size of 1, a single gradient accumulation step, and no eval metrics, Activity Monitor shows the memory footprint of my training script growing by about ~10 MB every 30-40 seconds or so. This wouldn't be a big deal normally, but the assignment I'm working on wants us to only use 15 GB GPU memory. My total memory footprint starts at about 14.3GB and pretty quickly reaches 15 after a few iterations. You can see the way I construct the trainer [here](https://github.com/Datamance/SecondProject/blob/master/data_tools.py#L321). Also open to hearing workarounds for this - it sounds like `TrainerCallback` could be useful here, something like: ```python class MpsCacheClearCallback(transformers.TrainerCallback): def on_epoch_end( self, args: TrainingArguments, state: TrainerState, control: TrainerControl, **kwargs, ): gc.collect() torch.mps.empty_cache() gc.collect() ``` But I'm not clear on when it's a good idea to clear the cache. Also, semi-related shouldn't [`accelerator.free_memory`](https://github.com/huggingface/accelerate/blob/main/src/accelerate/accelerator.py#L2987) / [`release_memory`](https://github.com/huggingface/accelerate/blob/main/src/accelerate/utils/memory.py#L29) [clear the mps cache](https://pytorch.org/docs/stable/generated/torch.mps.empty_cache.html) as well?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27181/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27181/timeline
reopened
null
null
https://api.github.com/repos/huggingface/transformers/issues/27180
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27180/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27180/comments
https://api.github.com/repos/huggingface/transformers/issues/27180/events
https://github.com/huggingface/transformers/pull/27180
1,970,654,260
PR_kwDOCUB6oc5ePjT2
27,180
Translate task summary to chinese
{ "login": "jiaqiw09", "id": 60021713, "node_id": "MDQ6VXNlcjYwMDIxNzEz", "avatar_url": "https://avatars.githubusercontent.com/u/60021713?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jiaqiw09", "html_url": "https://github.com/jiaqiw09", "followers_url": "https://api.github.com/users/jiaqiw09/followers", "following_url": "https://api.github.com/users/jiaqiw09/following{/other_user}", "gists_url": "https://api.github.com/users/jiaqiw09/gists{/gist_id}", "starred_url": "https://api.github.com/users/jiaqiw09/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jiaqiw09/subscriptions", "organizations_url": "https://api.github.com/users/jiaqiw09/orgs", "repos_url": "https://api.github.com/users/jiaqiw09/repos", "events_url": "https://api.github.com/users/jiaqiw09/events{/privacy}", "received_events_url": "https://api.github.com/users/jiaqiw09/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@stevhliu \r\n\r\nhere is another pr. it may result in some merge conflict problem as i have another PR is still under review. \r\n\r\nYou can deal with it later.\r\n\r\nBest", "@stevhliu \r\n\r\nhi, thansk for your review. I have fix the reviews and conflict problem. \r\n\r\nBest ", "It seems there is still some problem with yml file. I will fix it later", "@stevhliu\n\nhi, thansk for your review. I have fixed yml problem.\n\nBest", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27180). All of your documentation changes will be reflected on that endpoint." ]
1,698
1,698
1,698
CONTRIBUTOR
null
# What does this PR do? Part of #26803 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? _not necessary_ ## Who can review? @stevhliu
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27180/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27180/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27180", "html_url": "https://github.com/huggingface/transformers/pull/27180", "diff_url": "https://github.com/huggingface/transformers/pull/27180.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27180.patch", "merged_at": 1698856115000 }
https://api.github.com/repos/huggingface/transformers/issues/27179
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27179/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27179/comments
https://api.github.com/repos/huggingface/transformers/issues/27179/events
https://github.com/huggingface/transformers/issues/27179
1,970,553,271
I_kwDOCUB6oc51dEG3
27,179
Llama inference instability in fp16 producing inf in the middle of the model
{ "login": "fxmarty", "id": 9808326, "node_id": "MDQ6VXNlcjk4MDgzMjY=", "avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fxmarty", "html_url": "https://github.com/fxmarty", "followers_url": "https://api.github.com/users/fxmarty/followers", "following_url": "https://api.github.com/users/fxmarty/following{/other_user}", "gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}", "starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions", "organizations_url": "https://api.github.com/users/fxmarty/orgs", "repos_url": "https://api.github.com/users/fxmarty/repos", "events_url": "https://api.github.com/users/fxmarty/events{/privacy}", "received_events_url": "https://api.github.com/users/fxmarty/received_events", "type": "User", "site_admin": false }
[]
open
false
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[ { "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false } ]
[ "Related to #17937 but there is dummy model.\r\n\r\nWill take a look here.", "@ArthurZucker has been tracking it, and has a draft PR for it: https://github.com/huggingface/transformers/pull/27114\r\n\r\n@fxmarty Can you check if applying this change fixes it?", "@ydshieh @gante Thank you! No this PR is unrelated unfortunately, as it also happens when the prompt `Elon Musk is a South` with `max_length=9` (only one padding token) and the extended attention mask\r\n\r\n![image](https://github.com/huggingface/transformers/assets/9808326/a22b352a-8ca0-4834-ba42-e6f0df29853c)\r\n\r\nthat does not have any inf.\r\n\r\nIt may just be instability in the model, but it feels weird that it arises only when some attention mask rows are fully masked.", "Well, I guess it needs another deep dive 😬 ", "I haven't been able to give a final conclusion, but in `LlamaMLP.forward`, change the `else` block to \r\n\r\n```python\r\n h1 = self.gate_proj(x)\r\n h2 = self.act_fn(h1)\r\n h3 = self.up_proj(x)\r\n h4 = self.down_proj(h2 * h3)\r\n down_proj = h4\r\n```\r\nand print their maximal absolute values, we will see their magnitude get unusually larger than before from layer 29 (0-based), and amplified to `255` in layer 30, than` h4` get `inf`.\r\n\r\n**The question is what happened in `layer 29` for this input**: but I am afraid it's just some numerical issue and we don't really have the control. \r\n\r\nWill take a further look later when I get spare time.\r\n```\r\n------------------------------------\r\nlayer: 29\r\nh1: 13.234375\r\nh2: 4.5\r\nh3: 18.15625\r\nh4: 57.28125\r\n------------------------------------\r\nlayer: 30\r\nh1: 255.875\r\nh2: 255.875\r\nh3: 261.75\r\nh4: inf\r\n------------------------------------\r\nlayer: 31\r\nh1: nan\r\nh2: nan\r\nh3: nan\r\nh4: nan\r\n------------------------------------\r\n```\r\n\r\n\r\nfull\r\n```\r\nlayer: 0\r\nh1: 4.2109375\r\nh2: 4.1484375\r\nh3: 2.220703125\r\nh4: 4.08203125\r\n------------------------------------\r\nlayer: 1\r\nh1: 19.75\r\nh2: 19.75\r\nh3: 18.328125\r\nh4: 753.0\r\n------------------------------------\r\nlayer: 2\r\nh1: 2.8671875\r\nh2: 2.025390625\r\nh3: 2.2421875\r\nh4: 4.1640625\r\n------------------------------------\r\nlayer: 3\r\nh1: 2.259765625\r\nh2: 1.11328125\r\nh3: 1.484375\r\nh4: 0.423583984375\r\n------------------------------------\r\nlayer: 4\r\nh1: 4.4375\r\nh2: 4.38671875\r\nh3: 2.642578125\r\nh4: 7.58203125\r\n------------------------------------\r\nlayer: 5\r\nh1: 3.142578125\r\nh2: 2.416015625\r\nh3: 2.431640625\r\nh4: 1.2734375\r\n------------------------------------\r\nlayer: 6\r\nh1: 2.7265625\r\nh2: 1.98828125\r\nh3: 2.2578125\r\nh4: 1.0556640625\r\n------------------------------------\r\nlayer: 7\r\nh1: 2.650390625\r\nh2: 1.8046875\r\nh3: 2.349609375\r\nh4: 1.3251953125\r\n------------------------------------\r\nlayer: 8\r\nh1: 3.34375\r\nh2: 1.76171875\r\nh3: 2.6796875\r\nh4: 1.8408203125\r\n------------------------------------\r\nlayer: 9\r\nh1: 4.37109375\r\nh2: 2.328125\r\nh3: 3.142578125\r\nh4: 1.734375\r\n------------------------------------\r\nlayer: 10\r\nh1: 4.3046875\r\nh2: 3.1796875\r\nh3: 2.62109375\r\nh4: 1.3212890625\r\n------------------------------------\r\nlayer: 11\r\nh1: 3.853515625\r\nh2: 3.5078125\r\nh3: 3.0078125\r\nh4: 1.62890625\r\n------------------------------------\r\nlayer: 12\r\nh1: 3.33203125\r\nh2: 2.224609375\r\nh3: 2.548828125\r\nh4: 1.005859375\r\n------------------------------------\r\nlayer: 13\r\nh1: 3.560546875\r\nh2: 2.783203125\r\nh3: 3.087890625\r\nh4: 2.23828125\r\n------------------------------------\r\nlayer: 14\r\nh1: 3.841796875\r\nh2: 2.9609375\r\nh3: 2.63671875\r\nh4: 0.67626953125\r\n------------------------------------\r\nlayer: 15\r\nh1: 3.609375\r\nh2: 3.4765625\r\nh3: 3.107421875\r\nh4: 1.8994140625\r\n------------------------------------\r\nlayer: 16\r\nh1: 5.06640625\r\nh2: 4.078125\r\nh3: 4.28515625\r\nh4: 5.421875\r\n------------------------------------\r\nlayer: 17\r\nh1: 5.35546875\r\nh2: 5.33203125\r\nh3: 3.740234375\r\nh4: 2.919921875\r\n------------------------------------\r\nlayer: 18\r\nh1: 4.0\r\nh2: 3.853515625\r\nh3: 3.60546875\r\nh4: 3.271484375\r\n------------------------------------\r\nlayer: 19\r\nh1: 4.46484375\r\nh2: 4.4140625\r\nh3: 3.75\r\nh4: 4.16796875\r\n------------------------------------\r\nlayer: 20\r\nh1: 3.66796875\r\nh2: 2.970703125\r\nh3: 3.658203125\r\nh4: 2.962890625\r\n------------------------------------\r\nlayer: 21\r\nh1: 5.34375\r\nh2: 5.31640625\r\nh3: 3.400390625\r\nh4: 2.0234375\r\n------------------------------------\r\nlayer: 22\r\nh1: 3.318359375\r\nh2: 3.203125\r\nh3: 3.451171875\r\nh4: 1.546875\r\n------------------------------------\r\nlayer: 23\r\nh1: 4.28125\r\nh2: 4.22265625\r\nh3: 4.109375\r\nh4: 2.67578125\r\n------------------------------------\r\nlayer: 24\r\nh1: 4.21484375\r\nh2: 3.220703125\r\nh3: 3.15625\r\nh4: 0.9482421875\r\n------------------------------------\r\nlayer: 25\r\nh1: 3.93359375\r\nh2: 3.7109375\r\nh3: 3.947265625\r\nh4: 3.9921875\r\n------------------------------------\r\nlayer: 26\r\nh1: 4.3359375\r\nh2: 3.865234375\r\nh3: 4.37109375\r\nh4: 1.7041015625\r\n------------------------------------\r\nlayer: 27\r\nh1: 4.55078125\r\nh2: 3.400390625\r\nh3: 3.630859375\r\nh4: 1.4111328125\r\n------------------------------------\r\nlayer: 28\r\nh1: 4.90234375\r\nh2: 4.4453125\r\nh3: 7.54296875\r\nh4: 2.0546875\r\n------------------------------------\r\nlayer: 29\r\nh1: 13.234375\r\nh2: 4.5\r\nh3: 18.15625\r\nh4: 57.28125\r\n------------------------------------\r\nlayer: 30\r\nh1: 255.875\r\nh2: 255.875\r\nh3: 261.75\r\nh4: inf\r\n------------------------------------\r\nlayer: 31\r\nh1: nan\r\nh2: nan\r\nh3: nan\r\nh4: nan\r\n------------------------------------\r\n```", "After taking a further look, this doesn't seem to relate any bug but just the limitation of using fp16, and this is also depending on the input data.\r\n\r\nOne observation I found is: larger tensor values tend to appear when the prompt is (very) short.\r\n\r\nAlso, when this happens, I often see many places in the corresponding multiplications have values with the same sign.\r\n\r\nNothing more I can provide I am afraid.", "Thanks a lot @ydshieh. Did you notice any difference with whether rows are fully masked in the attention mask or not? \r\n\r\nWe can probably close this one - at least it is good to know that (at least) llama 7b has numerical instabilities during inference in fp16.", "> whether rows are fully masked in the attention mask or not?\r\n\r\nOh, I might made a mistake! You have `max_length=9` in the code snippet, so if I use long sequence, there is no padding!\r\nOK, need to recheck !", "I think beam search with ROPE and fp16 has instabilities yes, reported here: #26332 if I am not mistaken this is what we have no? And I think a recent PR to fix this was merged: #26843 . \r\nBut yeah I have a pretty huge list of bugs to process! ", "FYI: here the issue is not even in the generation - the issue comes already in the first step: just encoding the input prompt.", "Same issue in layer 29/30 in https://github.com/PanQiWei/AutoGPTQ/issues/412. Unmasking fully masked padding rows solves the issue there as well.\r\n\r\n![image](https://github.com/huggingface/transformers/assets/9808326/4de83f84-c1e7-4513-b8e8-cc5bb23baa9f)\r\n\r\nAnd the nans indeed start to appear at the padding index if we do not unmask:\r\n\r\n\r\nIn the layer 30 without unmasking:\r\n```\r\nhidden_states after layernorm torch.Size([2, 6, 4096])\r\nhidden_states b=0, seq_idx=0 mean: 0.00121307373046875\r\nhidden_states b=0, seq_idx=1 mean: -0.0168914794921875\r\nhidden_states b=0, seq_idx=2 mean: -0.00237274169921875\r\nhidden_states b=0, seq_idx=3 mean: 0.0007181167602539062\r\nhidden_states b=0, seq_idx=4 mean: -0.0108642578125\r\nhidden_states b=0, seq_idx=5 mean: -0.006961822509765625\r\nhidden_states b=1, seq_idx=0 mean: -0.0016736984252929688\r\nhidden_states b=1, seq_idx=1 mean: 0.0012159347534179688\r\nhidden_states b=1, seq_idx=2 mean: -0.016876220703125\r\nhidden_states b=1, seq_idx=3 mean: -0.0023746490478515625\r\nhidden_states b=1, seq_idx=4 mean: 0.0006799697875976562\r\nhidden_states b=1, seq_idx=5 mean: -0.010833740234375\r\nup_proj, down_proj\r\n--- forward\r\ninput finite tensor(True, device='cuda:0')\r\noutput torch.Size([2, 6, 11008])\r\noutput finite tensor(True, device='cuda:0')\r\noutput absmax tensor(1.0762e+02, device='cuda:0', dtype=torch.float16)\r\noutput absmean tensor(4.6924e-01, device='cuda:0', dtype=torch.float16)\r\n--- forward\r\ninput finite tensor(True, device='cuda:0')\r\noutput torch.Size([2, 6, 11008])\r\noutput finite tensor(True, device='cuda:0')\r\noutput absmax tensor(1.0962e+02, device='cuda:0', dtype=torch.float16)\r\noutput absmean tensor(4.5728e-01, device='cuda:0', dtype=torch.float16)\r\ngate_proj b=0, seq_idx=0 mean: -0.047821, absmax: 14.078125\r\ngate_proj b=0, seq_idx=1 mean: -0.208618, absmax: 23.078125\r\ngate_proj b=0, seq_idx=2 mean: -0.253174, absmax: 23.859375\r\ngate_proj b=0, seq_idx=3 mean: -0.270264, absmax: 27.84375\r\ngate_proj b=0, seq_idx=4 mean: -0.184692, absmax: 14.5078125\r\ngate_proj b=0, seq_idx=5 mean: -0.254639, absmax: 12.8203125\r\ngate_proj b=1, seq_idx=0 mean: 0.309814, absmax: 107.625\r\ngate_proj b=1, seq_idx=1 mean: -0.047852, absmax: 14.078125\r\ngate_proj b=1, seq_idx=2 mean: -0.208496, absmax: 23.234375\r\ngate_proj b=1, seq_idx=3 mean: -0.252930, absmax: 23.96875\r\ngate_proj b=1, seq_idx=4 mean: -0.270508, absmax: 27.984375\r\ngate_proj b=1, seq_idx=5 mean: -0.184937, absmax: 14.6484375\r\nup_proj b=0, seq_idx=0 mean: 0.001290, absmax: 15.0546875\r\nup_proj b=0, seq_idx=1 mean: -0.008339, absmax: 18.40625\r\nup_proj b=0, seq_idx=2 mean: -0.016205, absmax: 18.0\r\nup_proj b=0, seq_idx=3 mean: -0.005768, absmax: 23.234375\r\nup_proj b=0, seq_idx=4 mean: -0.000823, absmax: 6.44921875\r\nup_proj b=0, seq_idx=5 mean: -0.003519, absmax: 11.6171875\r\nup_proj b=1, seq_idx=0 mean: 0.015915, absmax: 109.625\r\nup_proj b=1, seq_idx=1 mean: 0.001284, absmax: 15.046875\r\nup_proj b=1, seq_idx=2 mean: -0.008362, absmax: 18.5625\r\nup_proj b=1, seq_idx=3 mean: -0.016220, absmax: 18.046875\r\nup_proj b=1, seq_idx=4 mean: -0.005787, absmax: 23.34375\r\nup_proj b=1, seq_idx=5 mean: -0.000838, absmax: 6.546875\r\nact_gate b=0, seq_idx=0 mean: -0.011940, absmax: 14.078125\r\nact_gate b=0, seq_idx=1 mean: 0.004330, absmax: 4.80859375\r\nact_gate b=0, seq_idx=2 mean: 0.010277, absmax: 5.859375\r\nact_gate b=0, seq_idx=3 mean: -0.015503, absmax: 6.46875\r\nact_gate b=0, seq_idx=4 mean: 0.031921, absmax: 5.67578125\r\nact_gate b=0, seq_idx=5 mean: -0.006973, absmax: 6.5\r\nact_gate b=1, seq_idx=0 mean: 0.219971, absmax: 107.625\r\nact_gate b=1, seq_idx=1 mean: -0.011948, absmax: 14.078125\r\nact_gate b=1, seq_idx=2 mean: 0.004345, absmax: 4.80859375\r\nact_gate b=1, seq_idx=3 mean: 0.010429, absmax: 5.859375\r\nact_gate b=1, seq_idx=4 mean: -0.015495, absmax: 6.46484375\r\nact_gate b=1, seq_idx=5 mean: 0.031738, absmax: 5.67578125\r\ninter b=0, seq_idx=0 mean: 0.03338623046875, absmax: 212.0\r\ninter b=0, seq_idx=1 mean: 0.00040793418884277344, absmax: 6.7734375\r\ninter b=0, seq_idx=2 mean: 0.0011510848999023438, absmax: 7.125\r\ninter b=0, seq_idx=3 mean: 0.00832366943359375, absmax: 17.46875\r\ninter b=0, seq_idx=4 mean: 0.00707244873046875, absmax: 13.90625\r\ninter b=0, seq_idx=5 mean: 0.0014142990112304688, absmax: 7.62890625\r\ninter b=1, seq_idx=0 mean: 1.3212890625, absmax: 11800.0\r\ninter b=1, seq_idx=1 mean: 0.03338623046875, absmax: 211.875\r\ninter b=1, seq_idx=2 mean: 0.0004088878631591797, absmax: 6.796875\r\ninter b=1, seq_idx=3 mean: 0.0011835098266601562, absmax: 7.1484375\r\ninter b=1, seq_idx=4 mean: 0.008331298828125, absmax: 17.515625\r\ninter b=1, seq_idx=5 mean: 0.007049560546875, absmax: 13.8828125\r\ncall down_proj\r\n--- forward\r\ninput finite tensor(True, device='cuda:0')\r\noutput torch.Size([2, 6, 4096])\r\noutput finite tensor(False, device='cuda:0')\r\noutput absmax tensor(inf, device='cuda:0', dtype=torch.float16)\r\noutput absmean tensor(inf, device='cuda:0', dtype=torch.float16)\r\ndown_proj b=0, seq_idx=0 finite: True\r\ndown_proj b=0, seq_idx=1 finite: True\r\ndown_proj b=0, seq_idx=2 finite: True\r\ndown_proj b=0, seq_idx=3 finite: True\r\ndown_proj b=0, seq_idx=4 finite: True\r\ndown_proj b=0, seq_idx=5 finite: True\r\ndown_proj b=1, seq_idx=0 finite: False\r\ndown_proj b=1, seq_idx=1 finite: True\r\ndown_proj b=1, seq_idx=2 finite: True\r\ndown_proj b=1, seq_idx=3 finite: True\r\ndown_proj b=1, seq_idx=4 finite: True\r\ndown_proj b=1, seq_idx=5 finite: True\r\n```\r\n\r\nIn the layer 30 with unmasking fully masked rows:\r\n```\r\nhidden_states after layernorm torch.Size([2, 6, 4096])\r\nhidden_states b=0, seq_idx=0 mean: 0.0012102127075195312\r\nhidden_states b=0, seq_idx=1 mean: -0.01690673828125\r\nhidden_states b=0, seq_idx=2 mean: -0.002384185791015625\r\nhidden_states b=0, seq_idx=3 mean: 0.0007028579711914062\r\nhidden_states b=0, seq_idx=4 mean: -0.01085662841796875\r\nhidden_states b=0, seq_idx=5 mean: -0.006946563720703125\r\nhidden_states b=1, seq_idx=0 mean: -0.0006947517395019531\r\nhidden_states b=1, seq_idx=1 mean: 0.00121307373046875\r\nhidden_states b=1, seq_idx=2 mean: -0.0168609619140625\r\nhidden_states b=1, seq_idx=3 mean: -0.0023975372314453125\r\nhidden_states b=1, seq_idx=4 mean: 0.0006928443908691406\r\nhidden_states b=1, seq_idx=5 mean: -0.01084136962890625\r\nup_proj, down_proj\r\n--- forward\r\ninput finite tensor(True, device='cuda:0')\r\noutput torch.Size([2, 6, 11008])\r\noutput finite tensor(True, device='cuda:0')\r\noutput absmax tensor(3.3969e+01, device='cuda:0', dtype=torch.float16)\r\noutput absmean tensor(4.5752e-01, device='cuda:0', dtype=torch.float16)\r\n--- forward\r\ninput finite tensor(True, device='cuda:0')\r\noutput torch.Size([2, 6, 11008])\r\noutput finite tensor(True, device='cuda:0')\r\noutput absmax tensor(3.1141e+01, device='cuda:0', dtype=torch.float16)\r\noutput absmean tensor(4.5410e-01, device='cuda:0', dtype=torch.float16)\r\ngate_proj b=0, seq_idx=0 mean: -0.047882, absmax: 14.078125\r\ngate_proj b=0, seq_idx=1 mean: -0.208374, absmax: 23.09375\r\ngate_proj b=0, seq_idx=2 mean: -0.252930, absmax: 23.875\r\ngate_proj b=0, seq_idx=3 mean: -0.270508, absmax: 27.90625\r\ngate_proj b=0, seq_idx=4 mean: -0.184692, absmax: 14.515625\r\ngate_proj b=0, seq_idx=5 mean: -0.254639, absmax: 12.84375\r\ngate_proj b=1, seq_idx=0 mean: -0.073853, absmax: 33.96875\r\ngate_proj b=1, seq_idx=1 mean: -0.047852, absmax: 14.1015625\r\ngate_proj b=1, seq_idx=2 mean: -0.208496, absmax: 23.21875\r\ngate_proj b=1, seq_idx=3 mean: -0.253418, absmax: 23.953125\r\ngate_proj b=1, seq_idx=4 mean: -0.270264, absmax: 27.984375\r\ngate_proj b=1, seq_idx=5 mean: -0.184692, absmax: 14.5546875\r\nup_proj b=0, seq_idx=0 mean: 0.001290, absmax: 15.046875\r\nup_proj b=0, seq_idx=1 mean: -0.008347, absmax: 18.40625\r\nup_proj b=0, seq_idx=2 mean: -0.016235, absmax: 17.984375\r\nup_proj b=0, seq_idx=3 mean: -0.005745, absmax: 23.265625\r\nup_proj b=0, seq_idx=4 mean: -0.000815, absmax: 6.4453125\r\nup_proj b=0, seq_idx=5 mean: -0.003561, absmax: 11.6328125\r\nup_proj b=1, seq_idx=0 mean: -0.004223, absmax: 31.140625\r\nup_proj b=1, seq_idx=1 mean: 0.001290, absmax: 15.078125\r\nup_proj b=1, seq_idx=2 mean: -0.008362, absmax: 18.5625\r\nup_proj b=1, seq_idx=3 mean: -0.016251, absmax: 18.03125\r\nup_proj b=1, seq_idx=4 mean: -0.005783, absmax: 23.328125\r\nup_proj b=1, seq_idx=5 mean: -0.000843, absmax: 6.4765625\r\nact_gate b=0, seq_idx=0 mean: -0.011971, absmax: 14.078125\r\nact_gate b=0, seq_idx=1 mean: 0.004372, absmax: 4.8046875\r\nact_gate b=0, seq_idx=2 mean: 0.010483, absmax: 5.86328125\r\nact_gate b=0, seq_idx=3 mean: -0.015427, absmax: 6.46875\r\nact_gate b=0, seq_idx=4 mean: 0.031860, absmax: 5.67578125\r\nact_gate b=0, seq_idx=5 mean: -0.007015, absmax: 6.4921875\r\nact_gate b=1, seq_idx=0 mean: 0.002026, absmax: 4.19140625\r\nact_gate b=1, seq_idx=1 mean: -0.011955, absmax: 14.1015625\r\nact_gate b=1, seq_idx=2 mean: 0.004314, absmax: 4.8125\r\nact_gate b=1, seq_idx=3 mean: 0.010254, absmax: 5.86328125\r\nact_gate b=1, seq_idx=4 mean: -0.015503, absmax: 6.4609375\r\nact_gate b=1, seq_idx=5 mean: 0.031891, absmax: 5.6640625\r\ninter b=0, seq_idx=0 mean: 0.033355712890625, absmax: 211.875\r\ninter b=0, seq_idx=1 mean: 0.00041985511779785156, absmax: 6.76953125\r\ninter b=0, seq_idx=2 mean: 0.0011568069458007812, absmax: 7.1328125\r\ninter b=0, seq_idx=3 mean: 0.008331298828125, absmax: 17.421875\r\ninter b=0, seq_idx=4 mean: 0.007068634033203125, absmax: 13.8828125\r\ninter b=0, seq_idx=5 mean: 0.0014171600341796875, absmax: 7.63671875\r\ninter b=1, seq_idx=0 mean: 0.0037746429443359375, absmax: 21.890625\r\ninter b=1, seq_idx=1 mean: 0.033477783203125, absmax: 212.625\r\ninter b=1, seq_idx=2 mean: 0.00041794776916503906, absmax: 6.78125\r\ninter b=1, seq_idx=3 mean: 0.001155853271484375, absmax: 7.1328125\r\ninter b=1, seq_idx=4 mean: 0.00830078125, absmax: 17.4375\r\ninter b=1, seq_idx=5 mean: 0.007068634033203125, absmax: 13.828125\r\ncall down_proj\r\n--- forward\r\ninput finite tensor(True, device='cuda:0')\r\noutput torch.Size([2, 6, 4096])\r\noutput finite tensor(True, device='cuda:0')\r\noutput absmax tensor(5.3750e+02, device='cuda:0', dtype=torch.float16)\r\noutput absmean tensor(4.9854e-01, device='cuda:0', dtype=torch.float16)\r\ndown_proj b=0, seq_idx=0 finite: True\r\ndown_proj b=0, seq_idx=1 finite: True\r\ndown_proj b=0, seq_idx=2 finite: True\r\ndown_proj b=0, seq_idx=3 finite: True\r\ndown_proj b=0, seq_idx=4 finite: True\r\ndown_proj b=0, seq_idx=5 finite: True\r\ndown_proj b=1, seq_idx=0 finite: True\r\ndown_proj b=1, seq_idx=1 finite: True\r\ndown_proj b=1, seq_idx=2 finite: True\r\ndown_proj b=1, seq_idx=3 finite: True\r\ndown_proj b=1, seq_idx=4 finite: True\r\ndown_proj b=1, seq_idx=5 finite: True\r\n```\r\n\r\nIt is unclear to me what is happening here and how it relates to fully masked rows.", "Great details! I am thinking if maybe the original training saw the unmasked row but now at inference time, it saw another version, which leads to this large value now. (similar to the different behavior of SDPA between torch 2.0.1 / 2.1.0 on GPU as we saw previously.)", "@ydshieh I want to give a try at some point to the original llama repo to see how padding is handled there.", "not stale", "mark", "I think computing ROPE in float32 percision should partly fix this" ]
1,698
1,706
null
COLLABORATOR
null
### System Info - `transformers` version: 4.35.0.dev0 - Platform: Linux-5.15.0-1023-aws-x86_64-with-glibc2.31 - Python version: 3.9.16 - Huggingface_hub version: 0.17.3 - Safetensors version: 0.3.1 - Accelerate version: 0.25.0.dev0 - Accelerate config: not found - PyTorch version (GPU?): 2.1.0+cu118 (True) - Using GPU in script?: A100 ### Who can help? @ydshieh @fxmarty @gante ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Hi, I encounter inference instability with llama running in fp16 when left padding is used, and especially when full rows are masked out in the 4D attention mask. At some point in the forward, `inf` values may appear in the intermediate logits, ultimately leading to tensors filled with `nan` and raising the error: ``` Traceback (most recent call last): File "=debug.py", line 38, in <module> outputs = model.generate( File "/fsx/felix/condaenvs/fx/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context return func(*args, **kwargs) File "/fsx/felix/transformers/src/transformers/generation/utils.py", line 1704, in generate return self.sample( File "/fsx/felix/transformers/src/transformers/generation/utils.py", line 2822, in sample next_tokens = torch.multinomial(probs, num_samples=1).squeeze(1) RuntimeError: probability tensor contains either `inf`, `nan` or element < 0 ``` Note that the `inf` specifically appear at a padding position. Reproduction: ```python from transformers import AutoTokenizer, pipeline, logging, AutoModelForCausalLM import torch model_name_or_path = "meta-llama/Llama-2-7b-chat-hf" token = "[specify your token]" tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True, token=token) tokenizer.pad_token_id = tokenizer.eos_token_id tokenizer.padding_side = "left" with torch.device("cuda"): model = AutoModelForCausalLM.from_pretrained(model_name_or_path, torch_dtype=torch.float16, token=token) sentence = "Felix Marty is a French" # Alternatively, the issue can be reproduced with: # sentence = "Elon Musk is a South" # max_length=9 inp = tokenizer(sentence, return_tensors='pt', padding="max_length", max_length=9).to("cuda") print("inp", inp["input_ids"].shape) print("inp", inp) torch.set_printoptions(threshold=10000000) print("\n\n*** Generate:") with torch.no_grad(): outputs = model.generate( **inp, max_new_tokens=10, do_sample=True, top_p=0.9, temperature=float(0.01), top_k=40 ) print(tokenizer.batch_decode(outputs)) ``` Printing `torch.all(torch.isfinite())` at some points in the model, it appears the `inf` start to appear in the MLP at [`self.gate_proj(x)) * self.up_proj(x)`](https://github.com/huggingface/transformers/blob/3cd3eaf96048cb76e67a432e72a7cecbdd1630a8/src/transformers/models/llama/modeling_llama.py#L232) and things go crazy from there. What's interesting is that for example fixing (two left padding tokens) ![image](https://github.com/huggingface/transformers/assets/9808326/57474cd8-dc0a-4d46-9e9e-713e3b827902) to ![image](https://github.com/huggingface/transformers/assets/9808326/0df5358a-c4b2-4e70-80d3-8e12fda0fc67) solves the issue. It makes me think that the solution implemented for SDPA to avoid fully masked rows in the attention mask may actually be required for some other cases as this one https://github.com/huggingface/transformers/pull/26572 - but it is unclear why it relates to overflow here. WDYT @gante @ydshieh? Is this something you have ever observed? ### Expected behavior No `inf` spawning in the middle of inference with fp16 model
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27179/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27179/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/27178
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27178/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27178/comments
https://api.github.com/repos/huggingface/transformers/issues/27178/events
https://github.com/huggingface/transformers/pull/27178
1,970,539,429
PR_kwDOCUB6oc5ePKHw
27,178
[`Quantization` / `tests` ] Fix bnb MPT test
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,698
1,698
1,698
CONTRIBUTOR
null
# What does this PR do? Addresses: https://github.com/huggingface/transformers/pull/27145#discussion_r1376240625 https://github.com/mosaicml/llm-foundry/issues/703 being merged, we can now safely unskip the test and make sure to point the trust remote code model into the commit that contains the fix cc @amyeroberts
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27178/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27178/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27178", "html_url": "https://github.com/huggingface/transformers/pull/27178", "diff_url": "https://github.com/huggingface/transformers/pull/27178.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27178.patch", "merged_at": 1698765954000 }
https://api.github.com/repos/huggingface/transformers/issues/27177
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27177/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27177/comments
https://api.github.com/repos/huggingface/transformers/issues/27177/events
https://github.com/huggingface/transformers/issues/27177
1,970,528,559
I_kwDOCUB6oc51c-Ev
27,177
May be detach the loss when evaluating? in case GPU OOM
{ "login": "ziranyang0", "id": 97387929, "node_id": "U_kgDOBc4FmQ", "avatar_url": "https://avatars.githubusercontent.com/u/97387929?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ziranyang0", "html_url": "https://github.com/ziranyang0", "followers_url": "https://api.github.com/users/ziranyang0/followers", "following_url": "https://api.github.com/users/ziranyang0/following{/other_user}", "gists_url": "https://api.github.com/users/ziranyang0/gists{/gist_id}", "starred_url": "https://api.github.com/users/ziranyang0/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ziranyang0/subscriptions", "organizations_url": "https://api.github.com/users/ziranyang0/orgs", "repos_url": "https://api.github.com/users/ziranyang0/repos", "events_url": "https://api.github.com/users/ziranyang0/events{/privacy}", "received_events_url": "https://api.github.com/users/ziranyang0/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I'm not sure why exactly that'd be helpful, since we already detach the loss during `prediction_step`: https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L3407\r\n\r\nAnd convert it to the CPU later on during the call to `nested_numpify`: https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L3249-L3251\r\n\r\nIf you don't have the memory to handle this, you can use `eval_accumulation_steps` instead to have that conversion to the CPU happen faster", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,698
1,702
1,702
NONE
null
### System Info - `transformers` version: 4.29.2 - Platform: Linux-5.15.0-86-generic-x86_64-with-glibc2.31 - Python version: 3.9.16 - Huggingface_hub version: v0.15.0 - Safetensors version: not installed - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: 4*3090 - Using distributed or parallel set-up in script?: default DDP in Trainer ### Who can help? @muellerzr @pacman100 ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction the code below are quite easy, finetuning a gpt2 on wikitext2 If you run this on some 3090, then OOM is expected to be raised when evaluation(i.e. the **results = trainer.evaluate()** line) ``` import torch from transformers import GPT2Tokenizer, GPT2LMHeadModel, DataCollatorForLanguageModeling, TrainingArguments, Trainer from datasets import load_dataset import wandb import os wandb.init(project="gpt2-finetune-wikitext2") # 1. Preparation MODEL_NAME = 'gpt2-medium' tokenizer = GPT2Tokenizer.from_pretrained(MODEL_NAME) model = GPT2LMHeadModel.from_pretrained(MODEL_NAME) tokenizer.pad_token = tokenizer.eos_token # 2. Loading the wikitext-2 dataset wikitext_dataset = load_dataset('wikitext', 'wikitext-2-raw-v1') # Tokenization def tokenize_function(examples): return tokenizer(examples['text'], truncation=True, padding='max_length', max_length=128) train_dataset = wikitext_dataset['train'].map(tokenize_function, batched=True) valid_dataset = wikitext_dataset['validation'].map(tokenize_function, batched=True) # Removing unnecessary columns train_dataset = train_dataset.remove_columns(["text"]) valid_dataset = valid_dataset.remove_columns(["text"]) data_collator = DataCollatorForLanguageModeling( tokenizer=tokenizer, mlm=False ) def compute_metrics(eval_pred): logits, labels = eval_pred loss = torch.nn.CrossEntropyLoss()(logits.view(-1, logits.size(-1)), labels.view(-1)) perplexity = torch.exp(torch.tensor(loss)).item() return {"loss": loss, "perplexity": perplexity} # 3. Fine-tuning training_args = TrainingArguments( output_dir="./gpt2-wikitext2", overwrite_output_dir=True, num_train_epochs=2, per_device_train_batch_size=16, per_device_eval_batch_size=4, eval_steps=400, save_steps=800, warmup_steps=500, logging_dir='./logs', logging_strategy="steps", logging_steps=10, report_to="wandb", # Add this line to log to wandb ) trainer = Trainer( model=model, args=training_args, data_collator=data_collator, train_dataset=train_dataset, eval_dataset=valid_dataset, compute_metrics=compute_metrics, ) trainer.train() # 4. Evaluation results = trainer.evaluate() print(results) perplexity = torch.exp(torch.tensor(results["loss"])) # Log perplexity to wandb wandb.log({"perplexity": perplexity.item()}) print(f"Perplexity: {perplexity.item()}") # Save the model model.save_pretrained("./gpt2-wikitext2") tokenizer.save_pretrained("./gpt2-wikitext2") wandb.finish() ``` ### Expected behavior I'm finetuning a gpt2 on wikitext2. The problem is that during evaluation(which is the line **results = trainer.evaluate()**), an OOM, out of memory error is raised. I think this should not happen since training is OK with a even bigger batchsize, after checking the source code in Trainer(**/transformers/trainer.py**), I found it useful to add 1 line at line 3011 just after ``` loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys) ``` in **evaluation_loop()** the line is: ``` loss = loss.detach().cpu() ``` which detach the loss and the OOM error would not happen any more. after all, I am not sure whether the OOM during evaluation is a bug or a feature, please excuse me if bothering
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27177/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27177/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27176
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27176/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27176/comments
https://api.github.com/repos/huggingface/transformers/issues/27176/events
https://github.com/huggingface/transformers/pull/27176
1,970,509,267
PR_kwDOCUB6oc5ePDbi
27,176
Backward compatibility fix for the Conversation class
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,698
1,698
1,698
MEMBER
null
Improve our handling of the old `past_user_inputs` property a bit, and revert a hack in the tests because it now works properly. Should fix all the tests that were red! cc @ydshieh
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27176/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27176/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27176", "html_url": "https://github.com/huggingface/transformers/pull/27176", "diff_url": "https://github.com/huggingface/transformers/pull/27176.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27176.patch", "merged_at": 1698765126000 }
https://api.github.com/repos/huggingface/transformers/issues/27175
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27175/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27175/comments
https://api.github.com/repos/huggingface/transformers/issues/27175/events
https://github.com/huggingface/transformers/pull/27175
1,970,453,485
PR_kwDOCUB6oc5eO3JD
27,175
Trigger CI if `tiny_model_summary.json` is modified
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,698
1,698
1,698
COLLABORATOR
null
# What does this PR do? When `tests/utils/tiny_model_summary.json` is changed, currently it won't trigger tests, as it is not a python file. However, we expect CI being triggered, as the modification of this file usually means there are new models enabling pipeline testing. (So far, sometimes a PR (usually mine) may have green CI but later red CI on `main` due to this)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27175/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27175/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27175", "html_url": "https://github.com/huggingface/transformers/pull/27175", "diff_url": "https://github.com/huggingface/transformers/pull/27175.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27175.patch", "merged_at": 1698760142000 }
https://api.github.com/repos/huggingface/transformers/issues/27174
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27174/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27174/comments
https://api.github.com/repos/huggingface/transformers/issues/27174/events
https://github.com/huggingface/transformers/issues/27174
1,970,119,894
I_kwDOCUB6oc51baTW
27,174
Bug in greedy_search when use_cache=False and use inputs_embeds as model input
{ "login": "TsuTikgiau", "id": 13706888, "node_id": "MDQ6VXNlcjEzNzA2ODg4", "avatar_url": "https://avatars.githubusercontent.com/u/13706888?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TsuTikgiau", "html_url": "https://github.com/TsuTikgiau", "followers_url": "https://api.github.com/users/TsuTikgiau/followers", "following_url": "https://api.github.com/users/TsuTikgiau/following{/other_user}", "gists_url": "https://api.github.com/users/TsuTikgiau/gists{/gist_id}", "starred_url": "https://api.github.com/users/TsuTikgiau/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TsuTikgiau/subscriptions", "organizations_url": "https://api.github.com/users/TsuTikgiau/orgs", "repos_url": "https://api.github.com/users/TsuTikgiau/repos", "events_url": "https://api.github.com/users/TsuTikgiau/events{/privacy}", "received_events_url": "https://api.github.com/users/TsuTikgiau/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "BTW, I only checked greedy_search. I'm not sure if this bug exists also in other decode strategies.", "Hey @TsuTikgiau 👋 Thank you for opening this issue.\r\n\r\nThe situation you describe is correct; `input_embeds` is not updated throughout `generate`, `input_ids` is. As such, it requires `use_cache=True` to work -- the relevant corresponding inputs are stored in `past_key_values`. \r\n\r\nAdding support for it would add significant complexity to `generate`. As such, my decision for now is *not* to add it, as I haven't seen any request for it. My advice would be to modify the function locally to suit your needs :)\r\n\r\nOther users: if you encounter this issue, let me know (reaction or comment here). If several people request this feature, I might revisit the decision above! ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,698
1,702
1,702
NONE
null
### System Info transformers 4.30.0, python 3.9, ubuntu ### Who can help? @gante ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction outputs = llama2model.generate( inputs_embeds=inputs_embeds, attention_mask=attention_mask, do_sample=False, use_cache=False ) ### Expected behavior I'm using LLama2 model to take token embedding directly as input and do greedy search generation (do_sample=False) with no KV cache (use_cache=False). I faced a runtime error "RuntimeError: shape '[-1, 332]' is invalid for input of size 666" at [this line](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/models/llama/modeling_llama.py#L528). Note that when use_cache=True, the code works. After some debugging I found that this is because inputs_embeds is not updated. After each decode step, the new generated input_id is added at [this line](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/generation/utils.py#L2382) and the new attention mask is added at [this line](https://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/generation/utils.py#L2386). However, the new inputs_embeds is not added. As we set use_cache=False and use the inputs_embeds as input, the original inputs_embeds will be use again in the forward pass with an updated attention mask (length+1), and leads to the size mismatch error. I have checked the latest code in the main branch and notice that the update of inputs_embeds is still missing.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27174/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27174/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27173
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27173/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27173/comments
https://api.github.com/repos/huggingface/transformers/issues/27173/events
https://github.com/huggingface/transformers/pull/27173
1,970,088,294
PR_kwDOCUB6oc5eNmIx
27,173
[doctring] Fix docstring for BlipTextConfig, BlipVisionConfig
{ "login": "Hangsiin", "id": 142411895, "node_id": "U_kgDOCH0Idw", "avatar_url": "https://avatars.githubusercontent.com/u/142411895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Hangsiin", "html_url": "https://github.com/Hangsiin", "followers_url": "https://api.github.com/users/Hangsiin/followers", "following_url": "https://api.github.com/users/Hangsiin/following{/other_user}", "gists_url": "https://api.github.com/users/Hangsiin/gists{/gist_id}", "starred_url": "https://api.github.com/users/Hangsiin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Hangsiin/subscriptions", "organizations_url": "https://api.github.com/users/Hangsiin/orgs", "repos_url": "https://api.github.com/users/Hangsiin/repos", "events_url": "https://api.github.com/users/Hangsiin/events{/privacy}", "received_events_url": "https://api.github.com/users/Hangsiin/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@amyeroberts", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27173). All of your documentation changes will be reflected on that endpoint." ]
1,698
1,698
1,698
CONTRIBUTOR
null
edit docstrings # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27173/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27173/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27173", "html_url": "https://github.com/huggingface/transformers/pull/27173", "diff_url": "https://github.com/huggingface/transformers/pull/27173.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27173.patch", "merged_at": 1698748917000 }
https://api.github.com/repos/huggingface/transformers/issues/27172
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27172/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27172/comments
https://api.github.com/repos/huggingface/transformers/issues/27172/events
https://github.com/huggingface/transformers/issues/27172
1,969,958,894
I_kwDOCUB6oc51ay_u
27,172
Fail to run run_classification.py example
{ "login": "pengwa", "id": 10530022, "node_id": "MDQ6VXNlcjEwNTMwMDIy", "avatar_url": "https://avatars.githubusercontent.com/u/10530022?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pengwa", "html_url": "https://github.com/pengwa", "followers_url": "https://api.github.com/users/pengwa/followers", "following_url": "https://api.github.com/users/pengwa/following{/other_user}", "gists_url": "https://api.github.com/users/pengwa/gists{/gist_id}", "starred_url": "https://api.github.com/users/pengwa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pengwa/subscriptions", "organizations_url": "https://api.github.com/users/pengwa/orgs", "repos_url": "https://api.github.com/users/pengwa/repos", "events_url": "https://api.github.com/users/pengwa/events{/privacy}", "received_events_url": "https://api.github.com/users/pengwa/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @pengwa, thanks for raising this issue! \r\n\r\nFrom the error message you can see that an exception is raised here: \r\n\r\n```\r\n File \"/tmp/transformers/src/transformers/tokenization_utils_base.py\", line 2703, in _get_padding_truncation_strategies\r\n raise ValueError(\r\nValueError: Asking to pad but the tokenizer does not have a padding token. Please select a token to use as `pad_token` `(tokenizer.pad_token = tokenizer.eos_token e.g.)` or add a new pad token via `tokenizer.add_special_tokens({'pad_token': '[PAD]'})`.\r\n```\r\n\r\nThis indicates that the tokenizer being loaded with `meta-llama/Llama-2-7b-hf` doesn't have a pad_token assigned and is required for processing the text. You'll need to modify the script to add a pad token to the tokenizer as per the example in the error message. ", "I see, thank @amyeroberts for the quick response!\r\n\r\nDo you mean something like this?\r\n\r\n```diff\r\n\r\n tokenizer = AutoTokenizer.from_pretrained(\r\n model_args.tokenizer_name if model_args.tokenizer_name else model_args.model_name_or_path,\r\n cache_dir=model_args.cache_dir,\r\n use_fast=model_args.use_fast_tokenizer,\r\n revision=model_args.model_revision,\r\n token=model_args.token,\r\n trust_remote_code=model_args.trust_remote_code,\r\n )\r\n+ tokenizer.pad_token = tokenizer.eos_token\r\n\r\n+ model.config.pad_token_id = tokenizer.pad_token\r\n model = AutoModelForSequenceClassification.from_pretrained(\r\n model_args.model_name_or_path,\r\n from_tf=bool(\".ckpt\" in model_args.model_name_or_path),\r\n config=config,\r\n cache_dir=model_args.cache_dir,\r\n revision=model_args.model_revision,\r\n token=model_args.token,\r\n trust_remote_code=model_args.trust_remote_code,\r\n ignore_mismatched_sizes=model_args.ignore_mismatched_sizes,\r\n load_in_4bit=True,\r\n quantization_config=bnb_config,\r\n )\r\n \r\n```\r\n\r\nSometimes I see model.config.pad_token_id = tokenizer.pad_token is done after `model = AutoModelForSequenceClassification.from_pretrained`, for example \"https://github.com/huggingface/transformers/blob/HEAD/src/transformers/generation/utils.py#L2691-L2692\", is it too late to let model be aware of the token id ? ", "@pengwa, yes, something like that. You'll need to do `model.config.pad_token = tokenizer.pad_token` after the `model = AutoModelForSequenceClassification.from_pretrained` call as the model object doesn't exist until after the from_pretrained call. ", "> @pengwa, yes, something like that. You'll need to do `model.config.pad_token = tokenizer.pad_token` after the `model = AutoModelForSequenceClassification.from_pretrained` call as the model object doesn't exist until after the from_pretrained call.\r\n\r\nReally appreciate your claficiation on this!!! @amyeroberts.\r\n\r\nAnother quick question, it seems doing in below two approaches are equivalent in model definition, while one difference is: in approach 2 when model is initialization nn.embedding in its constructor, it can pass padding token id in nn.Embedding apis, am I right?\r\n\r\nApproach 1: \r\n\r\n```diff\r\n model = AutoModelForSequenceClassification.from_pretrained(\r\n model_args.model_name_or_path,\r\n from_tf=bool(\".ckpt\" in model_args.model_name_or_path),\r\n config=config,\r\n cache_dir=model_args.cache_dir,\r\n revision=model_args.model_revision,\r\n token=model_args.token,\r\n trust_remote_code=model_args.trust_remote_code,\r\n ignore_mismatched_sizes=model_args.ignore_mismatched_sizes,\r\n load_in_4bit=True,\r\n quantization_config=bnb_config,\r\n )\r\n+ model.config.pad_token_id = tokenizer.pad_token\r\n``` \r\n\r\nApproach 2: \r\n\r\n\r\n```diff\r\n+ config.pad_token_id = tokenizer.pad_token\r\n model = AutoModelForSequenceClassification.from_pretrained(\r\n model_args.model_name_or_path,\r\n from_tf=bool(\".ckpt\" in model_args.model_name_or_path),\r\n config=config,\r\n cache_dir=model_args.cache_dir,\r\n revision=model_args.model_revision,\r\n token=model_args.token,\r\n trust_remote_code=model_args.trust_remote_code,\r\n ignore_mismatched_sizes=model_args.ignore_mismatched_sizes,\r\n load_in_4bit=True,\r\n quantization_config=bnb_config,\r\n )\r\n``` ", "Hi @pengwa, depending on the model, they're not necessarily equivalent. \r\n\r\nThe model uses the config to construct itself. Therefore, changing the config input can change how the model is constructed, and logic dependant on the config input wouldn't be affected if the config values are created after model instantiation. For example `padding_idx` [set here](https://github.com/huggingface/transformers/blob/e9dbd3926317a4effb1d033d8454ff18280d0b7d/src/transformers/models/llama/modeling_llama.py#L819) for Llama. \r\n\r\nIf possible, the config should be updated first and then passed to the model. \r\n\r\n", "> Hi @pengwa, depending on the model, they're not necessarily equivalent.\r\n> \r\n> The model uses the config to construct itself. Therefore, changing the config input can change how the model is constructed, and logic dependant on the config input wouldn't be affected if the config values are created after model instantiation. For example `padding_idx` [set here](https://github.com/huggingface/transformers/blob/e9dbd3926317a4effb1d033d8454ff18280d0b7d/src/transformers/models/llama/modeling_llama.py#L819) for Llama.\r\n> \r\n> If possible, the config should be updated first and then passed to the model.\r\n\r\nThank you @amyeroberts . `If possible, the config should be updated first and then passed to the model.` , this is exactly what I am assuming. Thanks a lot for the help! Closing this issue. " ]
1,698
1,699
1,699
NONE
null
### System Info transformers: 4.35.0.dev0, 8211c59b9a8fe84d2861446b26542f89a0260e64 Name: torch: Version: 2.2.0.dev20231007+cu118 Repro command: ``` torchrun --nproc_per_node $num_gpus \ examples/pytorch/text-classification/run_classification.py \ --model_name_or_path meta-llama/Llama-2-7b-hf \ --dataset_name dair-ai/emotion \ --dataset_config_name split \ --shuffle_train_dataset --metric_name accuracy --text_column_name text --label_column_name label \ --max_seq_length 512 \ --per_device_train_batch_size 10 \ --per_device_eval_batch_size 1 \ --do_train \ --fp16 \ --output_dir /tmp/test-clssification --overwrite_output_dir \ --report_to none \ --max_steps 300 --logging_steps 1 \ --token $HUGGINGFACE_TOKEN ``` Error messages: ``` [INFO|modeling_utils.py:3861] 2023-10-31 02:03:32,535 >> Some weights of the model checkpoint at meta-llama/Llama-2-7b-hf were not used when initializing LlamaForSequenceClassification: ['lm_head.weight'] - This IS expected if you are initializing LlamaForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing LlamaForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). [WARNING|modeling_utils.py:3873] 2023-10-31 02:03:32,535 >> Some weights of LlamaForSequenceClassification were not initialized from the model checkpoint at meta-llama/Llama-2-7b-hf and are newly initialized: ['score.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. 2023-10-31 02:03:32,538 __main__ [WARNING] - The label2id key in the model config.json is not equal to the label2id key of this run. You can ignore this if you are doing finetuning. Running tokenizer on dataset: 0%| | 0/16000 [00:00<?, ? examples/s] Traceback (most recent call last): File "examples/pytorch/text-classification/run_classification.py", line 763, in <module> main() File "examples/pytorch/text-classification/run_classification.py", line 588, in main raw_datasets = raw_datasets.map( File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/dataset_dict.py", line 853, in map { File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/dataset_dict.py", line 854, in <dictcomp> k: dataset.map( File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 592, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 557, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 3097, in map for rank, done, content in Dataset._map_single(**dataset_kwargs): File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 3474, in _map_single batch = apply_function_on_filtered_inputs( File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 3353, in apply_function_on_filtered_inputs processed_inputs = function(*fn_args, *additional_args, **fn_kwargs) File "examples/pytorch/text-classification/run_classification.py", line 578, in preprocess_function result = tokenizer(examples["sentence"], padding=padding, max_length=max_seq_length, truncation=True) File "/tmp/transformers/src/transformers/tokenization_utils_base.py", line 2798, in __call__ encodings = self._call_one(text=text, text_pair=text_pair, **all_kwargs) File "/tmp/transformers/src/transformers/tokenization_utils_base.py", line 2884, in _call_one return self.batch_encode_plus( File "/tmp/transformers/src/transformers/tokenization_utils_base.py", line 3066, in batch_encode_plus padding_strategy, truncation_strategy, max_length, kwargs = self._get_padding_truncation_strategies( File "/tmp/transformers/src/transformers/tokenization_utils_base.py", line 2703, in _get_padding_truncation_strategies raise ValueError( ValueError: Asking to pad but the tokenizer does not have a padding token. Please select a token to use as `pad_token` `(tokenizer.pad_token = tokenizer.eos_token e.g.)` or add a new pad token via `tokenizer.add_special_tokens({'pad_token': '[PAD]'})`. [2023-10-31 02:03:38,731] torch.distributed.elastic.multiprocessing.api: [ERROR] failed (exitcode: 1) local_rank: 0 (pid: 29880) of binary: /opt/conda/envs/ptca/bin/python Traceback (most recent call last): File "/opt/conda/envs/ptca/bin/torchrun", line 8, in <module> sys.exit(main()) File "/opt/conda/envs/ptca/lib/python3.8/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper return f(*args, **kwargs) File "/opt/conda/envs/ptca/lib/python3.8/site-packages/torch/distributed/run.py", line 806, in main run(args) File "/opt/conda/envs/ptca/lib/python3.8/site-packages/torch/distributed/run.py", line 797, in run elastic_launch( File "/opt/conda/envs/ptca/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 134, in __call__ return launch_agent(self._config, self._entrypoint, list(args)) File "/opt/conda/envs/ptca/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 264, in launch_agent raise ChildFailedError( torch.distributed.elastic.multiprocessing.errors.ChildFailedError: ============================================================ examples/pytorch/text-classification/run_classification.py FAILED ------------------------------------------------------------ Failures: <NO_OTHER_FAILURES> ------------------------------------------------------------ Root Cause (first observed failure): [0]: time : 2023-10-31_02:03:38 host : node-0 rank : 0 (local_rank: 0) exitcode : 1 (pid: 29880) error_file: <N/A> traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html ``` ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. Run the commandl above using 1 or more than 1 gpus, 2. it failed with above messages. ### Expected behavior The expected behavior is to run successfully.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27172/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27172/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27171
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27171/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27171/comments
https://api.github.com/repos/huggingface/transformers/issues/27171/events
https://github.com/huggingface/transformers/issues/27171
1,969,895,740
I_kwDOCUB6oc51ajk8
27,171
4.34 cause baichuan tokenizer break
{ "login": "nrailg", "id": 14273895, "node_id": "MDQ6VXNlcjE0MjczODk1", "avatar_url": "https://avatars.githubusercontent.com/u/14273895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nrailg", "html_url": "https://github.com/nrailg", "followers_url": "https://api.github.com/users/nrailg/followers", "following_url": "https://api.github.com/users/nrailg/following{/other_user}", "gists_url": "https://api.github.com/users/nrailg/gists{/gist_id}", "starred_url": "https://api.github.com/users/nrailg/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nrailg/subscriptions", "organizations_url": "https://api.github.com/users/nrailg/orgs", "repos_url": "https://api.github.com/users/nrailg/repos", "events_url": "https://api.github.com/users/nrailg/events{/privacy}", "received_events_url": "https://api.github.com/users/nrailg/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "upgrade transformers to 4.33.2", "\r\n\r\n\r\n\r\n> upgrade transformers to 4.33.2\r\n\r\nThanks, I'll try it.\r\n\r\nIt's \"downgrade\", precisely.", "Hey! You should open an issue on the repo as this is a `remote` code, which we have no real control over. The fix is fairly straightforward, but should be done by the authors to make sure they have the expected behaviour. ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,698
1,702
1,702
NONE
null
### System Info transformers==4.34 ### Who can help? @ArthurZucker and @younesbelkada ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Use AutoTokenizer to load Baichuan 7B https://huggingface.co/baichuan-inc/Baichuan-7B/blob/main/tokenization_baichuan.py ``` File "/home/nrwu/work/stanford_alpaca/app/sug-dpo/sft.py", line 94, in <module> tokenizer = AutoTokenizer.from_pretrained( File "/home/pretrainx/conda/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 738, in from_pretrained return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs) File "/home/pretrainx/conda/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2045, in from_pretrained return cls._from_pretrained( File "/home/pretrainx/conda/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2256, in _from_pretrained tokenizer = cls(*init_inputs, **init_kwargs) File "/home/nrwu/work/hf-home/modules/transformers_modules/Baichuan-7B/tokenization_baichuan.py", line 75, in __init__ super().__init__( File "/home/pretrainx/conda/lib/python3.10/site-packages/transformers/tokenization_utils.py", line 366, in __init__ self._add_tokens(self.all_special_tokens_extended, special_tokens=True) File "/home/pretrainx/conda/lib/python3.10/site-packages/transformers/tokenization_utils.py", line 462, in _add_tokens current_vocab = self.get_vocab().copy() File "/home/nrwu/work/hf-home/modules/transformers_modules/Baichuan-7B/tokenization_baichuan.py", line 109, in get_vocab vocab = {self.convert_ids_to_tokens(i): i for i in range(self.vocab_size)} File "/home/nrwu/work/hf-home/modules/transformers_modules/Baichuan-7B/tokenization_baichuan.py", line 105, in vocab_size return self.sp_model.get_piece_size() AttributeError: 'BaiChuanTokenizer' object has no attribute 'sp_model' ``` ### Expected behavior there should be no exception.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27171/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27171/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27170
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27170/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27170/comments
https://api.github.com/repos/huggingface/transformers/issues/27170/events
https://github.com/huggingface/transformers/pull/27170
1,969,856,025
PR_kwDOCUB6oc5eMzYa
27,170
Disable CI runner check
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,698
1,698
1,698
COLLABORATOR
null
# What does this PR do? Now our CIs run on AWS runners, and they are spun up when a action is triggered and removed afterward, it doesn't make sense to check the runner status as before (where the runners existed forever).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27170/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27170/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27170", "html_url": "https://github.com/huggingface/transformers/pull/27170", "diff_url": "https://github.com/huggingface/transformers/pull/27170.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27170.patch", "merged_at": 1698749962000 }
https://api.github.com/repos/huggingface/transformers/issues/27169
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27169/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27169/comments
https://api.github.com/repos/huggingface/transformers/issues/27169/events
https://github.com/huggingface/transformers/issues/27169
1,969,723,212
I_kwDOCUB6oc51Z5dM
27,169
transformers.utils.fx feature support for passes.shape_prop.ShapeProp(graph)
{ "login": "KotaHemanthUC", "id": 114269470, "node_id": "U_kgDOBs-dHg", "avatar_url": "https://avatars.githubusercontent.com/u/114269470?v=4", "gravatar_id": "", "url": "https://api.github.com/users/KotaHemanthUC", "html_url": "https://github.com/KotaHemanthUC", "followers_url": "https://api.github.com/users/KotaHemanthUC/followers", "following_url": "https://api.github.com/users/KotaHemanthUC/following{/other_user}", "gists_url": "https://api.github.com/users/KotaHemanthUC/gists{/gist_id}", "starred_url": "https://api.github.com/users/KotaHemanthUC/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/KotaHemanthUC/subscriptions", "organizations_url": "https://api.github.com/users/KotaHemanthUC/orgs", "repos_url": "https://api.github.com/users/KotaHemanthUC/repos", "events_url": "https://api.github.com/users/KotaHemanthUC/events{/privacy}", "received_events_url": "https://api.github.com/users/KotaHemanthUC/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @KotaHemanthUC, thanks for raising this issue! \r\n\r\nThis seems like an issue which is better placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.\r\n\r\nIf you believe there's a bug in the transformers library, then please provide a minimal code reproducer and full stacktrace of the error encountered. ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,698
1,702
1,702
NONE
null
### System Info Ubuntu x86_64. Hi Dev community, I want to get intermediate tensor shape llm for given input using fx graph. In torch fx, we have out = fx.passes.shape_prop.ShapeProp(graph) out.propogate(sample_input).shape Is there similar feature capability for transformers.utils.fx ? ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I'm getting error: Using symbolic_trace graph with fx.passes.passes.shape_prop.ShapeProp I'm getting raise RuntimeError( RuntimeError: ShapeProp error for: node=%decoder_input_ids : torch.Tensor [num_users=2] = placeholder[target=decoder_input_ids] with meta={} ### Expected behavior Run smoothly for given fx graph , input.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27169/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27169/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27168
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27168/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27168/comments
https://api.github.com/repos/huggingface/transformers/issues/27168/events
https://github.com/huggingface/transformers/issues/27168
1,969,563,786
I_kwDOCUB6oc51ZSiK
27,168
_clamp_coord in FuyuProcessor was not defined
{ "login": "nooodles2023", "id": 29591526, "node_id": "MDQ6VXNlcjI5NTkxNTI2", "avatar_url": "https://avatars.githubusercontent.com/u/29591526?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nooodles2023", "html_url": "https://github.com/nooodles2023", "followers_url": "https://api.github.com/users/nooodles2023/followers", "following_url": "https://api.github.com/users/nooodles2023/following{/other_user}", "gists_url": "https://api.github.com/users/nooodles2023/gists{/gist_id}", "starred_url": "https://api.github.com/users/nooodles2023/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nooodles2023/subscriptions", "organizations_url": "https://api.github.com/users/nooodles2023/orgs", "repos_url": "https://api.github.com/users/nooodles2023/repos", "events_url": "https://api.github.com/users/nooodles2023/events{/privacy}", "received_events_url": "https://api.github.com/users/nooodles2023/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @nooodles2023, thanks for raising this issue! \r\n\r\nThere's currently some active development on the Fuyu processing so it's not-yet in a stable state: #27133, #27007, #27083 \r\n\r\nIt's good that this has been flagged as something we need to address cc @pcuenca @molbap ", "You are right @nooodles2023! As @amyeroberts mentioned we are currently working on a refactor. I removed the calls to `_clamp_coords` in https://github.com/amyeroberts/transformers/pull/113; we don't need the \"transformed image\" any more, only the scale factor that was used to prepare it for inference.", "Thank you for your reply.\r\nFuyu is a really nice VL model, I want to finetune it for object detection in UI screens by text prompt. Would you have any plan to post the finetune script?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Fixed in the refactor." ]
1,698
1,701
1,701
NONE
null
### System Info on transformers master branch ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` def original_to_transformed_h_coords(self, original_coords): # apply crop cropped_coords = ( self._clamp_coords(original_coords, min_value=self.crop_top, max_value=self.crop_bottom) - self.crop_top ) # apply scale scaled_coords = self._scale_coords(cropped_coords, scale=self.scaled_h / self.original_h) # apply pad return scaled_coords + self.padding_top def original_to_transformed_w_coords(self, original_coords): # apply crop cropped_coords = ( self._clamp_coords(original_coords, min_value=self.crop_left, max_value=self.crop_right) - self.crop_left ) # apply scale scaled_coords = self._scale_coords(cropped_coords, scale=self.scaled_w / self.original_w) # apply pad return scaled_coords + self.padding_left def scale_point_to_transformed_image(x: float, y: float) -> List[int]: x_scaled = original_to_transformed_w_coords(np.array([x / 2]))[0] y_scaled = original_to_transformed_h_coords(np.array([y / 2]))[0] return [x_scaled, y_scaled] def scale_bbox_to_transformed_image(top: float, left: float, bottom: float, right: float) -> List[int]: top_scaled = original_to_transformed_w_coords(np.array([top / 2]))[0] left_scaled = original_to_transformed_h_coords(np.array([left / 2]))[0] bottom_scaled = original_to_transformed_w_coords(np.array([bottom / 2]))[0] right_scaled = original_to_transformed_h_coords(np.array([right / 2]))[0] return [top_scaled, left_scaled, bottom_scaled, right_scaled] ``` scale_point_to_transformed_imagegot 2 params, but the caller passed 3 params. ``` # Remove all spaces from num_ints num_ints = [float(num.strip()) for num in num_int_strs] # scale to transformed image siz if len(num_ints) == 2: num_ints_translated = scale_point_to_transformed_image( x=num_ints[0], y=num_ints[1], transformed_image=transformed_image ) ``` ### Expected behavior check the code if there were something wrong when uploaded
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27168/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27168/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27167
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27167/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27167/comments
https://api.github.com/repos/huggingface/transformers/issues/27167/events
https://github.com/huggingface/transformers/pull/27167
1,969,538,191
PR_kwDOCUB6oc5eLuWc
27,167
Corrected grammatical error and improved readability of Readme.md file.
{ "login": "cheta-nyadav", "id": 119566250, "node_id": "U_kgDOByBvqg", "avatar_url": "https://avatars.githubusercontent.com/u/119566250?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cheta-nyadav", "html_url": "https://github.com/cheta-nyadav", "followers_url": "https://api.github.com/users/cheta-nyadav/followers", "following_url": "https://api.github.com/users/cheta-nyadav/following{/other_user}", "gists_url": "https://api.github.com/users/cheta-nyadav/gists{/gist_id}", "starred_url": "https://api.github.com/users/cheta-nyadav/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cheta-nyadav/subscriptions", "organizations_url": "https://api.github.com/users/cheta-nyadav/orgs", "repos_url": "https://api.github.com/users/cheta-nyadav/repos", "events_url": "https://api.github.com/users/cheta-nyadav/events{/privacy}", "received_events_url": "https://api.github.com/users/cheta-nyadav/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@pvl pls review, and merge.", "Hi @cheta-nyadav, thanks for opening a PR an contributing to improving our docs! \r\n\r\nAs both of these changes are grammatically incorrect this PR won't be approved and merged in", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,698
1,702
1,702
NONE
null
# What does this PR do? Corrects a prominent grammatical mistake, using a instead of an before "unified". Changed a line for easier readability. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [x ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27167/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27167/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27167", "html_url": "https://github.com/huggingface/transformers/pull/27167", "diff_url": "https://github.com/huggingface/transformers/pull/27167.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27167.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/27166
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27166/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27166/comments
https://api.github.com/repos/huggingface/transformers/issues/27166/events
https://github.com/huggingface/transformers/issues/27166
1,969,487,595
I_kwDOCUB6oc51Y_7r
27,166
NotImplementedError: Cannot copy out of meta tensor; no data!
{ "login": "fancyerii", "id": 5372812, "node_id": "MDQ6VXNlcjUzNzI4MTI=", "avatar_url": "https://avatars.githubusercontent.com/u/5372812?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fancyerii", "html_url": "https://github.com/fancyerii", "followers_url": "https://api.github.com/users/fancyerii/followers", "following_url": "https://api.github.com/users/fancyerii/following{/other_user}", "gists_url": "https://api.github.com/users/fancyerii/gists{/gist_id}", "starred_url": "https://api.github.com/users/fancyerii/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fancyerii/subscriptions", "organizations_url": "https://api.github.com/users/fancyerii/orgs", "repos_url": "https://api.github.com/users/fancyerii/repos", "events_url": "https://api.github.com/users/fancyerii/events{/privacy}", "received_events_url": "https://api.github.com/users/fancyerii/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @muellerz @pacman100 " ]
1,698
1,700
1,700
NONE
null
### System Info node 2(throw exception) - `Accelerate` version: 0.23.0 - Platform: Linux-5.4.0-148-generic-x86_64-with-glibc2.31 - Python version: 3.9.18 - Numpy version: 1.24.1 - PyTorch version (GPU?): 2.1.0+cu118 (True) - PyTorch XPU available: False - PyTorch NPU available: False - System RAM: 38.44 GB - GPU type: Tesla V100-SXM2-32GB - `Accelerate` default config: - compute_environment: LOCAL_MACHINE - distributed_type: FSDP - mixed_precision: fp16 - use_cpu: False - debug: False - num_processes: 2 - machine_rank: 1 - num_machines: 2 - main_process_ip: 10.8.0.7 - main_process_port: 29500 - rdzv_backend: static - same_network: False - main_training_function: main - fsdp_config: {'fsdp_auto_wrap_policy': 'TRANSFORMER_BASED_WRAP', 'fsdp_backward_prefetch_policy': 'BACKWARD_PRE', 'fsdp_forward_prefetch': True, 'fsdp_offload_params': False, 'fsdp_sharding_strategy': 1, 'fsdp_state_dict_type': 'SHARDED_STATE_DICT', 'fsdp_sync_module_states': False, 'fsdp_use_orig_params': True} - downcast_bf16: no - tpu_use_cluster: False - tpu_use_sudo: False - tpu_env: [] node1: - `Accelerate` version: 0.23.0 - Platform: Linux-5.4.0-153-generic-x86_64-with-glibc2.31 - Python version: 3.9.18 - Numpy version: 1.24.1 - PyTorch version (GPU?): 2.1.0+cu118 (True) - PyTorch XPU available: False - PyTorch NPU available: False - System RAM: 93.55 GB - GPU type: NVIDIA A100-SXM4-40GB - `Accelerate` default config: - compute_environment: LOCAL_MACHINE - distributed_type: FSDP - mixed_precision: fp16 - use_cpu: False - debug: False - num_processes: 2 - machine_rank: 0 - num_machines: 2 - main_process_ip: 10.8.0.7 - main_process_port: 29500 - rdzv_backend: static - same_network: False - main_training_function: main - fsdp_config: {'fsdp_auto_wrap_policy': 'TRANSFORMER_BASED_WRAP', 'fsdp_backward_prefetch_policy': 'BACKWARD_PRE', 'fsdp_forward_prefetch': True, 'fsdp_offload_params': False, 'fsdp_sharding_strategy': 1, 'fsdp_state_dict_type': 'SHARDED_STATE_DICT', 'fsdp_sync_module_states': False, 'fsdp_use_orig_params': True} - downcast_bf16: no - tpu_use_cluster: False - tpu_use_sudo: False - tpu_env: [] ### Who can help? @ArthurZucker @younesbelkada @sgugger @lewtun ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I am following the blog [Fine-tuning Llama 2 70B using PyTorch FSDP](https://huggingface.co/blog/ram-efficient-pytorch-fsdp). I want to use two machine each with 1 gpu card to run llama 2 7B model. I am using the codes provided by the blog [here](https://github.com/pacman100/DHS-LLM-Workshop/tree/main/chat_assistant/training). I am running "accelerate config" on each machine to setup. Please see above for detailed information. I am using the following command line to run on each machine: ``` accelerate launch train.py \ --model_name "/home/lili/models_hf/7B-chat" \ --dataset_name "smangrul/code-chat-assistant-v1" \ --max_seq_len 2048 \ --max_steps 1000 \ --logging_steps 25 \ --eval_steps 100 \ --save_steps 500 \ --bf16 False \ --fp16 True \ --packing True \ --output_dir "full-finetune-llama-chat-asst" \ --per_device_train_batch_size 1 \ --gradient_accumulation_steps 2 \ --dataset_text_field "content" \ --use_gradient_checkpointing \ --learning_rate 5e-5 \ --lr_scheduler_type "cosine" \ --weight_decay 0.01 \ --warmup_ratio 0.03 \ --use_flash_attn False ``` error message: ``` Traceback (most recent call last): File "/nas/lili/deepspeedtest/chat_assistant/training/train.py", line 238, in <module> main(args) File "/nas/lili/deepspeedtest/chat_assistant/training/train.py", line 178, in main model, peft_config, tokenizer = create_and_prepare_model(args) File "/nas/lili/deepspeedtest/chat_assistant/training/utils.py", line 184, in create_and_prepare_model model = AutoModelForCausalLM.from_pretrained( File "/home/lili/miniconda3/envs/py39_torch21/lib/python3.9/site-packages/transformers/models/auto/auto_factory.py", line 565, in from_pret rained return model_class.from_pretrained( File "/home/lili/miniconda3/envs/py39_torch21/lib/python3.9/site-packages/transformers/modeling_utils.py", line 3307, in from_pretrained ) = cls._load_pretrained_model( File "/home/lili/miniconda3/envs/py39_torch21/lib/python3.9/site-packages/transformers/modeling_utils.py", line 3695, in _load_pretrained_m odel new_error_msgs, offload_index, state_dict_index = _load_state_dict_into_meta_model( File "/home/lili/miniconda3/envs/py39_torch21/lib/python3.9/site-packages/transformers/modeling_utils.py", line 741, in _load_state_dict_in to_meta_model set_module_tensor_to_device(model, param_name, param_device, **set_module_kwargs) File "/home/lili/miniconda3/envs/py39_torch21/lib/python3.9/site-packages/accelerate/utils/modeling.py", line 317, in set_module_tensor_to_ device new_value = value.to(device) NotImplementedError: Cannot copy out of meta tensor; no data! ``` And I debug the code and the two machine ran with different path. The second machine with node_rank=1 went: "map_location = "meta"" while the first one went to "cpu". ``` if ( (is_deepspeed_zero3_enabled() or is_fsdp_enabled()) and torch.distributed.is_initialized() and torch.distributed.get_rank() > 0 ): map_location = "meta" else: map_location = "cpu" ``` For fsdp init, only machine with local_rank=0 will load full model to cpu and dispatch sharded parameter to other gpus in this node. But As the codes above shows, it checks rank rather than local rank. So my question is: if there are two nodes each with 4 gpu cards. nodes1 gpu0 gpu1 gpu2 gpu3 nodes2 gpu0 gpu1 gpu2 gpu3 rank 0 1 2 3 4 5 6 7 local_rank 0 1 2 3 0 1 2 3 for fsdp full sharding, nodes1 will load all parameters to it's cpu memory and then dispatch sharded parameters to all the 8 gpu. Am I right? ### Expected behavior no exception
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27166/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27166/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27165
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27165/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27165/comments
https://api.github.com/repos/huggingface/transformers/issues/27165/events
https://github.com/huggingface/transformers/pull/27165
1,969,447,976
PR_kwDOCUB6oc5eLbYw
27,165
fix: Fix typical_p behaviour broken in recent change
{ "login": "njhill", "id": 16958488, "node_id": "MDQ6VXNlcjE2OTU4NDg4", "avatar_url": "https://avatars.githubusercontent.com/u/16958488?v=4", "gravatar_id": "", "url": "https://api.github.com/users/njhill", "html_url": "https://github.com/njhill", "followers_url": "https://api.github.com/users/njhill/followers", "following_url": "https://api.github.com/users/njhill/following{/other_user}", "gists_url": "https://api.github.com/users/njhill/gists{/gist_id}", "starred_url": "https://api.github.com/users/njhill/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/njhill/subscriptions", "organizations_url": "https://api.github.com/users/njhill/orgs", "repos_url": "https://api.github.com/users/njhill/repos", "events_url": "https://api.github.com/users/njhill/events{/privacy}", "received_events_url": "https://api.github.com/users/njhill/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@gante sorry about this! I observed that it can actually make a significant difference to the output when typical_p is used.", "(the CI failed for unrelated reasons, rerunning failed jobs)" ]
1,698
1,698
1,698
CONTRIBUTOR
null
A recent PR https://github.com/huggingface/transformers/pull/26579 fixed an edge case out-of-bounds tensor indexing error in TypicalLogitsWarper, and a related behaviour change was made that we thought fixed a long-standing bug w.r.t. the token inclusion cutoff. However after looking more closely, I am pretty certain that the original logic was correct and that the OOB fix should have been made differently. Specifically the docs state that it should include the "smallest set of tokens that add up to P or higher" and so `last_ind` should actually be one more than the index of the last token satisfying `(cumulative_probs < self.mass)`. We still need a max clamp in case that last token is the very last one in the tensor.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27165/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27165/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27165", "html_url": "https://github.com/huggingface/transformers/pull/27165", "diff_url": "https://github.com/huggingface/transformers/pull/27165.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27165.patch", "merged_at": 1698757796000 }
https://api.github.com/repos/huggingface/transformers/issues/27164
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27164/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27164/comments
https://api.github.com/repos/huggingface/transformers/issues/27164/events
https://github.com/huggingface/transformers/pull/27164
1,969,384,805
PR_kwDOCUB6oc5eLN2G
27,164
Remove broken links to s-JoL/Open-Llama
{ "login": "CSRessel", "id": 6520751, "node_id": "MDQ6VXNlcjY1MjA3NTE=", "avatar_url": "https://avatars.githubusercontent.com/u/6520751?v=4", "gravatar_id": "", "url": "https://api.github.com/users/CSRessel", "html_url": "https://github.com/CSRessel", "followers_url": "https://api.github.com/users/CSRessel/followers", "following_url": "https://api.github.com/users/CSRessel/following{/other_user}", "gists_url": "https://api.github.com/users/CSRessel/gists{/gist_id}", "starred_url": "https://api.github.com/users/CSRessel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/CSRessel/subscriptions", "organizations_url": "https://api.github.com/users/CSRessel/orgs", "repos_url": "https://api.github.com/users/CSRessel/repos", "events_url": "https://api.github.com/users/CSRessel/events{/privacy}", "received_events_url": "https://api.github.com/users/CSRessel/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27164). All of your documentation changes will be reflected on that endpoint." ]
1,698
1,698
1,698
CONTRIBUTOR
null
# What does this PR do? I noticed a broken link in the README, for the source code of the Open-Llama model. This looks like it matches the docs where there's been a warning added: > This model is in maintenance mode only, so we won't accept any new PRs changing its code. To reduce confusion (and link rot!) this PR updates all references to that code, by clarifying that the original source was removed. For the previous discussion on model deprecation, see: https://github.com/huggingface/transformers/pull/24922 ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. Potential reviewers might be @stevhliu or @MKhalusova since it's docs, or @amyeroberts who I've seen approve a lot of README edits today, or @sgugger has previously reviewed PR's related to Open-Llama.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27164/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27164/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27164", "html_url": "https://github.com/huggingface/transformers/pull/27164", "diff_url": "https://github.com/huggingface/transformers/pull/27164.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27164.patch", "merged_at": 1698747474000 }
https://api.github.com/repos/huggingface/transformers/issues/27163
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27163/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27163/comments
https://api.github.com/repos/huggingface/transformers/issues/27163/events
https://github.com/huggingface/transformers/pull/27163
1,969,325,086
PR_kwDOCUB6oc5eLAn6
27,163
[WIP] Bounding box transformations
{ "login": "rafaelpadilla", "id": 31217453, "node_id": "MDQ6VXNlcjMxMjE3NDUz", "avatar_url": "https://avatars.githubusercontent.com/u/31217453?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rafaelpadilla", "html_url": "https://github.com/rafaelpadilla", "followers_url": "https://api.github.com/users/rafaelpadilla/followers", "following_url": "https://api.github.com/users/rafaelpadilla/following{/other_user}", "gists_url": "https://api.github.com/users/rafaelpadilla/gists{/gist_id}", "starred_url": "https://api.github.com/users/rafaelpadilla/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rafaelpadilla/subscriptions", "organizations_url": "https://api.github.com/users/rafaelpadilla/orgs", "repos_url": "https://api.github.com/users/rafaelpadilla/repos", "events_url": "https://api.github.com/users/rafaelpadilla/events{/privacy}", "received_events_url": "https://api.github.com/users/rafaelpadilla/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27163). All of your documentation changes will be reflected on that endpoint." ]
1,698
1,700
1,700
CONTRIBUTOR
null
# What does this PR do? Facilitates bounding box transformations among different formats: `XYXY` <-> `XYWH` <-> `XCYCWH` <-> `RELATIVE_XYXY` <-> `RELATIVE_XYWH` <-> `RELATIVE_XCYCWH` ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? **Still WIP**
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27163/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 2, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27163/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27163", "html_url": "https://github.com/huggingface/transformers/pull/27163", "diff_url": "https://github.com/huggingface/transformers/pull/27163.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27163.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/27162
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27162/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27162/comments
https://api.github.com/repos/huggingface/transformers/issues/27162/events
https://github.com/huggingface/transformers/pull/27162
1,969,323,766
PR_kwDOCUB6oc5eLAVk
27,162
Fixed base model class name extraction from PeftModels
{ "login": "kkteru", "id": 7015292, "node_id": "MDQ6VXNlcjcwMTUyOTI=", "avatar_url": "https://avatars.githubusercontent.com/u/7015292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kkteru", "html_url": "https://github.com/kkteru", "followers_url": "https://api.github.com/users/kkteru/followers", "following_url": "https://api.github.com/users/kkteru/following{/other_user}", "gists_url": "https://api.github.com/users/kkteru/gists{/gist_id}", "starred_url": "https://api.github.com/users/kkteru/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kkteru/subscriptions", "organizations_url": "https://api.github.com/users/kkteru/orgs", "repos_url": "https://api.github.com/users/kkteru/repos", "events_url": "https://api.github.com/users/kkteru/events{/privacy}", "received_events_url": "https://api.github.com/users/kkteru/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @younesbelkada ", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27162). All of your documentation changes will be reflected on that endpoint." ]
1,698
1,698
1,698
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #27161 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @pacman100, @muellerzr <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27162/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27162/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27162", "html_url": "https://github.com/huggingface/transformers/pull/27162", "diff_url": "https://github.com/huggingface/transformers/pull/27162.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27162.patch", "merged_at": 1698955683000 }
https://api.github.com/repos/huggingface/transformers/issues/27161
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27161/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27161/comments
https://api.github.com/repos/huggingface/transformers/issues/27161/events
https://github.com/huggingface/transformers/issues/27161
1,969,320,560
I_kwDOCUB6oc51YXJw
27,161
Trainer doesn't shift labels for CAUSAL_LM PEFT models with label smoothing enabled
{ "login": "kkteru", "id": 7015292, "node_id": "MDQ6VXNlcjcwMTUyOTI=", "avatar_url": "https://avatars.githubusercontent.com/u/7015292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kkteru", "html_url": "https://github.com/kkteru", "followers_url": "https://api.github.com/users/kkteru/followers", "following_url": "https://api.github.com/users/kkteru/following{/other_user}", "gists_url": "https://api.github.com/users/kkteru/gists{/gist_id}", "starred_url": "https://api.github.com/users/kkteru/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kkteru/subscriptions", "organizations_url": "https://api.github.com/users/kkteru/orgs", "repos_url": "https://api.github.com/users/kkteru/repos", "events_url": "https://api.github.com/users/kkteru/events{/privacy}", "received_events_url": "https://api.github.com/users/kkteru/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Created a PR for this quick fix, if that helps. ", "cc @younesbelkada ", "Thanks for the deepdive! I will reply you on the PR itself" ]
1,698
1,698
1,698
CONTRIBUTOR
null
### System Info - `transformers` version: 4.35.0.dev0 - Platform: Linux-5.15.90.1-microsoft-standard-WSL2-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.2 - Accelerate version: 0.24.1 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1+cu118 (True) - Tensorflow version (GPU?): 2.14.0 (True) - Flax version (CPU?/GPU?/TPU?): 0.7.0 (cpu) - Jax version: 0.4.13 - JaxLib version: 0.4.13 - Using GPU in script?: RTX 3090 - Using distributed or parallel set-up in script?: No ### Who can help? @pacman100, @muellerzr ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I am not sure how to articulate this silent bug with a code snippet. I will try to explain it referring to the code and hopefully it will be clear. The [`compute_loss`](https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L2672) function in the [Trainer](https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L229) class gets the `model_name` from `model.base_model` when the model is a `PeftModel` as written [here](https://github.com/huggingface/transformers/blob/84724efd101af52ed3d6af878e41ff8fd651a9cc/src/transformers/trainer.py#L2690). The `base_model` of a `PeftModel` is defined [here](https://github.com/huggingface/peft/blob/884b1ac3a8ef49c9301b5bbf02e8bc64349e95f9/src/peft/peft_model.py#L119) as one of [`PEFT_TYPE_TO_MODEL_MAPPING`](https://github.com/huggingface/peft/blob/main/src/peft/peft_model.py#L68). The 'true' base model is actually stored at `base_model.model` as declared [here](https://github.com/huggingface/peft/blob/884b1ac3a8ef49c9301b5bbf02e8bc64349e95f9/src/peft/tuners/tuners_utils.py#L70). The issue is--in [this](https://github.com/huggingface/transformers/blob/84724efd101af52ed3d6af878e41ff8fd651a9cc/src/transformers/trainer.py#L2693) line of `compute_loss` method, the check to shift labels is done by seeing if `model_name` is inside the [`MODEL_FOR_CAUSAL_LM_MAPPING_NAMES`](https://github.com/huggingface/transformers/blob/main/src/transformers/models/auto/modeling_auto.py#L384) list. Since the `model_name` for a `PeftModel` isn't in that list, the labels aren't shifted. From what I can tell, a simple fix without breaking anything else could be to modify [this](https://github.com/huggingface/transformers/blob/84724efd101af52ed3d6af878e41ff8fd651a9cc/src/transformers/trainer.py#L2690) line to: ``` model_name = unwrap_model(model.base_model.model)._get_name() ``` ### Expected behavior The labels should be shifted for causal language modelling tasks even when using peft models.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27161/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27161/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27160
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27160/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27160/comments
https://api.github.com/repos/huggingface/transformers/issues/27160/events
https://github.com/huggingface/transformers/pull/27160
1,969,313,309
PR_kwDOCUB6oc5eK-Bq
27,160
improving TimmBackbone to support FrozenBatchNorm2d
{ "login": "rafaelpadilla", "id": 31217453, "node_id": "MDQ6VXNlcjMxMjE3NDUz", "avatar_url": "https://avatars.githubusercontent.com/u/31217453?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rafaelpadilla", "html_url": "https://github.com/rafaelpadilla", "followers_url": "https://api.github.com/users/rafaelpadilla/followers", "following_url": "https://api.github.com/users/rafaelpadilla/following{/other_user}", "gists_url": "https://api.github.com/users/rafaelpadilla/gists{/gist_id}", "starred_url": "https://api.github.com/users/rafaelpadilla/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rafaelpadilla/subscriptions", "organizations_url": "https://api.github.com/users/rafaelpadilla/orgs", "repos_url": "https://api.github.com/users/rafaelpadilla/repos", "events_url": "https://api.github.com/users/rafaelpadilla/events{/privacy}", "received_events_url": "https://api.github.com/users/rafaelpadilla/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,698
1,698
1,698
CONTRIBUTOR
null
# What does this PR do? The `TimmBackbone` offers various customizable parameters, such as `num_channels`, `features_only`, and `out_indices`. Currently, we lack direct support for timm's `freeze_batch_norm_2d(...) `function, which substitutes the backbone's `BatchNorm2d` layers with `FrozenBatchNorm2d`. For models like RTDetr, Detr, ConditionalDetr, DeformableDetr, etc that require freezing of batch norm layers, one currently needs to either reimplement and modify the backbone manually or use the `timm.layers.freeze_batch_norm_2d(...)` function, like so: ```python my_backbone = TimmBackbone.from_pretrained("resnet50d") my_backbone = timm.layers.freeze_batch_norm_2d(my_backbone) ``` To simplify this, this PR introduces the `TimmBackboneConfig.freeze_batch_norm_2d` parameter, which defaults to False for backward compatibility. When creating the backbone, if this parameter is `True`, it will apply `timm's freeze_batch_norm_2d(...) ` function. This update simplifies the process of creating backbone models. For models requiring a switch from `BatchNorm2d` to `FrozenBatchNorm2d`, the `TimmBackbone` can be used directly, eliminating additional steps. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @amyeroberts
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27160/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27160/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27160", "html_url": "https://github.com/huggingface/transformers/pull/27160", "diff_url": "https://github.com/huggingface/transformers/pull/27160.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27160.patch", "merged_at": 1698854315000 }
https://api.github.com/repos/huggingface/transformers/issues/27159
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27159/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27159/comments
https://api.github.com/repos/huggingface/transformers/issues/27159/events
https://github.com/huggingface/transformers/pull/27159
1,969,173,398
PR_kwDOCUB6oc5eKe5r
27,159
[WIP] Add implementation of `spectrogram_batch`
{ "login": "ravenouse", "id": 85110830, "node_id": "MDQ6VXNlcjg1MTEwODMw", "avatar_url": "https://avatars.githubusercontent.com/u/85110830?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ravenouse", "html_url": "https://github.com/ravenouse", "followers_url": "https://api.github.com/users/ravenouse/followers", "following_url": "https://api.github.com/users/ravenouse/following{/other_user}", "gists_url": "https://api.github.com/users/ravenouse/gists{/gist_id}", "starred_url": "https://api.github.com/users/ravenouse/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ravenouse/subscriptions", "organizations_url": "https://api.github.com/users/ravenouse/orgs", "repos_url": "https://api.github.com/users/ravenouse/repos", "events_url": "https://api.github.com/users/ravenouse/events{/privacy}", "received_events_url": "https://api.github.com/users/ravenouse/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27159). All of your documentation changes will be reflected on that endpoint.", "Super cool PR @ravenouse, a most welcome upgrade! As mentioned by @ArthurZucker, one other optimisation we can employ is using the `torch.stft` backend for computing the spectrograms: https://github.com/huggingface/transformers/pull/26119#issue-1892916324. This yields approx 4x speed-up for processing with bs=1. I believe @ylacombe is considering adding support for this, which could be a nice parallel to your PR!", "Thanks for working on this @ravenouse, this looks super promising and will clearly be valuable for any audio-related model!\r\n\r\nI'm indeed considering adding a torch alternative of this numpy implementation! What would be really good for these current/future improvements is that we conduct extensive speed benchmark to allow users to make an informed choice when choosing implementation!", "Thank you so much for all the feedbacks and information! @ArthurZucker @sanchit-gandhi @ylacombe \r\n\r\nI am excited to know the 4x speed up brought by the `torch.stft`.\r\nInspired by the experiment from @sanchit-gandhi , I conducted a similar experiment for `spectrogram_batch`, resulting in a 2x speedup when `bs=200`, compared to the original function with `bs=1`.\r\nLink to the experimenting notebook: https://colab.research.google.com/drive/1aXytDfXiMy_tzvjP9A4rM7Z24jmV2Ha-?usp=sharing\r\n\r\nFor further enhancement, I believe implementing code that enables GPU acceleration for feature extraction and providing users with the option to select GPUs would be an incredible step forward. Prior to submitting this PR, I experimented with a CuPy version of `spectrogram_batch`. My initial findings indicate that the CuPy Batch version is even faster than the Numpy Batch version, if the GPU memory is managed effectively. \r\nI anticipate that the torch GPU version can achieve comparable performance. An experimental notebook exploring this aspect will be shared shortly.\r\n\r\nOnce again, I am more than happy to contribute to the package in this direction. Please let me know if there is anything else I can do to further the effort.", "cc @ylacombe and @sanchit-gandhi let's try to maybe merge this now and have the rest as a todo? ", "Hi @ylacombe ,\r\nThank you so much for taking the time to review this PR. I really appreciate the insightful inputs you have shared.\r\n\r\nI will definitely follow the suggested directions, ensuring that the `spectrogram_batch` generates the same results as the original function.\r\n\r\nCurrently, I am working towards a deadline this week. Once that is completed, I will prioritize making the necessary modifications to the PR.\r\n\r\nOnce again, thank you for your valuable input and support!", "Hi @ylacombe,\r\n\r\nI hope you had a wonderful break!\r\n\r\nFollowing your feedback, I've updated the `spectrogram_batch` function to enhance its pre and post-processing capabilities. The key modifications include:\r\n- Utilizing `original_waveform_lengths` and `true_num_frames` to capture and truncate the results to their true lengths.\r\n- Currently, the function relies on the pad function of the `SequenceFeatureExtractor`, as detailed [here](https://github.dev/huggingface/transformers/blob/v4.36.1/src/transformers/models/whisper/feature_extraction_whisper.py#L226). This results in some redundancy in the code.\r\n\r\nYou can review the revised function in this [Colab notebook](https://colab.research.google.com/drive/1iI09ocHoT3J4ErRCoiVfi598WXbYIY5q?usp=sharing). It produces the same results as the original `spectrogram` when tested on `hf-internal-testing/librispeech_asr_dummy`.\r\n\r\nFor the next step, I plan to:\r\n1. Implement a simple and efficient batch padding method, eliminating the current reliance on `SequenceFeatureExtractor`..\r\n2. Implement batch version for the `amplitude_to_db` and `power_to_db`.\r\n3. Add the function annotation and docstrings to the functions.\r\n\r\nPlease let me know what your thoughts on this. \r\nThank you very much!", "Hey @ravenouse, thanks for all the progress so far and happy new year! Could you actually update the code here so that it's easier to review, test and keep track of the comments?\r\nMany thanks!", "Hi @ylacombe, happy new year!\r\n\r\nI have updated the code: I modified the `spectrogram_batch` further, eliminating the previous dependency of the `SequenceFeatureExtractor` for batch padding the waveforms.\r\n\r\nIn case you want to run the test on your own, here is the updated notebook: [Link](https://colab.research.google.com/drive/1q2AsR4RynMT0Sx5p5YkUJgEc20dO6JCC?usp=sharing)\r\n\r\nPlease let me know what you think about this!\r\n\r\nThank you so much for your time!", "Gently pining @ylacombe ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,698
1,708
null
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This pull request introduces the implementation of `spectrogram_batch`, specifically optimized for batch processing through broadcasting techniques. The primary goal is to reduce the data processing time during feature extraction, which is a critical step in working with audio models like `whisper`. ### Motivation In my work and research with the whisper model, I observed that the feature extraction step can be exceedingly time-consuming, taking up to 10 hours for certain audio datasets. <br> In my opinion, the bottleneck is primarily due to the lack of batch processing support in the current `spectrogram` and `FeatureExtractor` implementations, resulting in iterative calls within a for-loop, as illustrated below: ```Python # Reference: https://github.com/huggingface/transformers/blob/main/src/transformers/models/whisper/feature_extraction_whisper.py#L250 input_features = [self._np_extract_fbank_features(waveform) for waveform in input_features[0]] ``` ### Future Work The current branch only adds a basic implementation of the `spectrogram_batch`. To bring this implementation to production level, I believe there are several steps needed to be done: 1. **Extensive Testing**: Implementing a comprehensive suite of tests to evaluate the function’s performance and correctness across different parameter settings. I only tested the implementation in the whisper models' setting. 2. **Integration**: Modifying existing feature extractor codes to incorporate the new batch processing function. I am fully committed to continuing the development and testing of this feature. However, given the extensive changes and the potential impact on the library, I am reaching out for collaboration and support from the community/maintainers. Any guidance, suggestions, or contributions to this effort would be immensely appreciated. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sanchit-gandhi @ArthurZucker <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27159/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27159/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27159", "html_url": "https://github.com/huggingface/transformers/pull/27159", "diff_url": "https://github.com/huggingface/transformers/pull/27159.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27159.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/27158
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27158/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27158/comments
https://api.github.com/repos/huggingface/transformers/issues/27158/events
https://github.com/huggingface/transformers/pull/27158
1,969,111,569
PR_kwDOCUB6oc5eKRRt
27,158
Update README.md
{ "login": "gfggithubleet", "id": 144522681, "node_id": "U_kgDOCJ09uQ", "avatar_url": "https://avatars.githubusercontent.com/u/144522681?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gfggithubleet", "html_url": "https://github.com/gfggithubleet", "followers_url": "https://api.github.com/users/gfggithubleet/followers", "following_url": "https://api.github.com/users/gfggithubleet/following{/other_user}", "gists_url": "https://api.github.com/users/gfggithubleet/gists{/gist_id}", "starred_url": "https://api.github.com/users/gfggithubleet/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gfggithubleet/subscriptions", "organizations_url": "https://api.github.com/users/gfggithubleet/orgs", "repos_url": "https://api.github.com/users/gfggithubleet/repos", "events_url": "https://api.github.com/users/gfggithubleet/events{/privacy}", "received_events_url": "https://api.github.com/users/gfggithubleet/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@amyeroberts ", "Hi @gfggithubleet, thanks for opening a PR and contributing to improving our docs!\r\n\r\nThis is just a stylistic change, rather than a typo. Although it's inconsistently applied throughout the repo (using punctuation at the end of lines) this would make this list inconsistent with the other bullet point lists in this file. As such, this PR won't be approved and merged in. ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,698
1,702
1,702
NONE
null
TYPO FIXED # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27158/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27158/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27158", "html_url": "https://github.com/huggingface/transformers/pull/27158", "diff_url": "https://github.com/huggingface/transformers/pull/27158.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27158.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/27157
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27157/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27157/comments
https://api.github.com/repos/huggingface/transformers/issues/27157/events
https://github.com/huggingface/transformers/pull/27157
1,969,104,939
PR_kwDOCUB6oc5eKP2B
27,157
[KOSMOS-2] Update docs
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,698
1,698
1,698
CONTRIBUTOR
null
# What does this PR do? This PR: - updates the KOSMOS-2 docs to include a figure (makes the docs more vibrant) - makes sure it is placed in the "multimodal" section rather than the "text models" section
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27157/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27157/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27157", "html_url": "https://github.com/huggingface/transformers/pull/27157", "diff_url": "https://github.com/huggingface/transformers/pull/27157.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27157.patch", "merged_at": 1698698540000 }
https://api.github.com/repos/huggingface/transformers/issues/27156
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27156/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27156/comments
https://api.github.com/repos/huggingface/transformers/issues/27156/events
https://github.com/huggingface/transformers/pull/27156
1,969,078,347
PR_kwDOCUB6oc5eKKBz
27,156
Update CONTRIBUTING.md
{ "login": "gfggithubleet", "id": 144522681, "node_id": "U_kgDOCJ09uQ", "avatar_url": "https://avatars.githubusercontent.com/u/144522681?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gfggithubleet", "html_url": "https://github.com/gfggithubleet", "followers_url": "https://api.github.com/users/gfggithubleet/followers", "following_url": "https://api.github.com/users/gfggithubleet/following{/other_user}", "gists_url": "https://api.github.com/users/gfggithubleet/gists{/gist_id}", "starred_url": "https://api.github.com/users/gfggithubleet/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gfggithubleet/subscriptions", "organizations_url": "https://api.github.com/users/gfggithubleet/orgs", "repos_url": "https://api.github.com/users/gfggithubleet/repos", "events_url": "https://api.github.com/users/gfggithubleet/events{/privacy}", "received_events_url": "https://api.github.com/users/gfggithubleet/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@amyeroberts ", "Hi @gfggithubleet, thanks for opening this PR and contributing to improving our docs! The changes in this PR are stylistic and not consistent with the rest of the repo. As such, this PR won't be approved " ]
1,698
1,698
1,698
NONE
null
TYPO FIXED # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27156/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27156/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27156", "html_url": "https://github.com/huggingface/transformers/pull/27156", "diff_url": "https://github.com/huggingface/transformers/pull/27156.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27156.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/27155
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27155/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27155/comments
https://api.github.com/repos/huggingface/transformers/issues/27155/events
https://github.com/huggingface/transformers/pull/27155
1,969,058,093
PR_kwDOCUB6oc5eKFkD
27,155
Fix import of torch.utils.checkpoint
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,698
1,698
1,698
CONTRIBUTOR
null
# What does this PR do? When rebasing on main after #27124 was merged, I couldn't run any test as I got: ``` raise RuntimeError( E RuntimeError: Failed to import transformers.models.transfo_xl.modeling_transfo_xl because of the following error (look up to see its traceback): E module 'torch.utils' has no attribute 'checkpoint' ``` which was happening here: ``` src/transformers/modeling_utils.py:1886: in PreTrainedModel self, enable: bool = True, gradient_checkpointing_func: Callable = torch.utils.checkpoint.checkpoint E AttributeError: module 'torch.utils' has no attribute 'checkpoint' ``` My PyTorch version is 1.13. Then I saw this fix: https://github.com/EleutherAI/gpt-neox/pull/85. So I applied the same.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27155/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27155/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27155", "html_url": "https://github.com/huggingface/transformers/pull/27155", "diff_url": "https://github.com/huggingface/transformers/pull/27155.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27155.patch", "merged_at": 1698696509000 }
https://api.github.com/repos/huggingface/transformers/issues/27154
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27154/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27154/comments
https://api.github.com/repos/huggingface/transformers/issues/27154/events
https://github.com/huggingface/transformers/pull/27154
1,968,994,084
PR_kwDOCUB6oc5eJ3nE
27,154
Fix: typos in README.md
{ "login": "THEFZNKHAN", "id": 124388165, "node_id": "U_kgDOB2oDRQ", "avatar_url": "https://avatars.githubusercontent.com/u/124388165?v=4", "gravatar_id": "", "url": "https://api.github.com/users/THEFZNKHAN", "html_url": "https://github.com/THEFZNKHAN", "followers_url": "https://api.github.com/users/THEFZNKHAN/followers", "following_url": "https://api.github.com/users/THEFZNKHAN/following{/other_user}", "gists_url": "https://api.github.com/users/THEFZNKHAN/gists{/gist_id}", "starred_url": "https://api.github.com/users/THEFZNKHAN/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/THEFZNKHAN/subscriptions", "organizations_url": "https://api.github.com/users/THEFZNKHAN/orgs", "repos_url": "https://api.github.com/users/THEFZNKHAN/repos", "events_url": "https://api.github.com/users/THEFZNKHAN/events{/privacy}", "received_events_url": "https://api.github.com/users/THEFZNKHAN/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@amyeroberts \r\nPlease review my PR \r\nand if anything needs to change then tell me 😊.", "Welcome 😊", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27154). All of your documentation changes will be reflected on that endpoint." ]
1,698
1,698
1,698
CONTRIBUTOR
null
# What does this PR do? fix the typo (out-of-the box ---> out-of-the-box) <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27154/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27154/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27154", "html_url": "https://github.com/huggingface/transformers/pull/27154", "diff_url": "https://github.com/huggingface/transformers/pull/27154.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27154.patch", "merged_at": 1698693130000 }
https://api.github.com/repos/huggingface/transformers/issues/27153
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27153/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27153/comments
https://api.github.com/repos/huggingface/transformers/issues/27153/events
https://github.com/huggingface/transformers/pull/27153
1,968,976,219
PR_kwDOCUB6oc5eJzuU
27,153
Fix CLAP converting script
{ "login": "ylacombe", "id": 52246514, "node_id": "MDQ6VXNlcjUyMjQ2NTE0", "avatar_url": "https://avatars.githubusercontent.com/u/52246514?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ylacombe", "html_url": "https://github.com/ylacombe", "followers_url": "https://api.github.com/users/ylacombe/followers", "following_url": "https://api.github.com/users/ylacombe/following{/other_user}", "gists_url": "https://api.github.com/users/ylacombe/gists{/gist_id}", "starred_url": "https://api.github.com/users/ylacombe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ylacombe/subscriptions", "organizations_url": "https://api.github.com/users/ylacombe/orgs", "repos_url": "https://api.github.com/users/ylacombe/repos", "events_url": "https://api.github.com/users/ylacombe/events{/privacy}", "received_events_url": "https://api.github.com/users/ylacombe/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27153). All of your documentation changes will be reflected on that endpoint." ]
1,698
1,702
1,702
COLLABORATOR
null
# What does this PR do? Following [this hub discussion](https://huggingface.co/lukewys/laion_clap/discussions/3) and the addition of CLAP new checkpoints this summer, I've corrected CLAP converting script and added the new checkpoints to the hub ([here](https://huggingface.co/ylacombe/larger_clap_music), [here](https://huggingface.co/ylacombe/larger_clap_general) and [here](https://huggingface.co/ylacombe/larger_clap_music_and_speech) - yet to be moved to laion organization). Note that I've also used [this fix](https://github.com/LAION-AI/CLAP/pull/118) of the CLAP library from @Vaibhavs10, that is yet to be merged! I've also manually checked that it didn't break the original checkpoint conversion! ## Who can review? Hey @amyeroberts and @sanchit-gandhi ! cc @Vaibhavs10 as well!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27153/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27153/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27153", "html_url": "https://github.com/huggingface/transformers/pull/27153", "diff_url": "https://github.com/huggingface/transformers/pull/27153.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27153.patch", "merged_at": 1702043309000 }
https://api.github.com/repos/huggingface/transformers/issues/27152
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27152/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27152/comments
https://api.github.com/repos/huggingface/transformers/issues/27152/events
https://github.com/huggingface/transformers/pull/27152
1,968,919,533
PR_kwDOCUB6oc5eJnR8
27,152
Fix : Grammatic typo in readme
{ "login": "AaadityaG", "id": 114663382, "node_id": "U_kgDOBtWf1g", "avatar_url": "https://avatars.githubusercontent.com/u/114663382?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AaadityaG", "html_url": "https://github.com/AaadityaG", "followers_url": "https://api.github.com/users/AaadityaG/followers", "following_url": "https://api.github.com/users/AaadityaG/following{/other_user}", "gists_url": "https://api.github.com/users/AaadityaG/gists{/gist_id}", "starred_url": "https://api.github.com/users/AaadityaG/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AaadityaG/subscriptions", "organizations_url": "https://api.github.com/users/AaadityaG/orgs", "repos_url": "https://api.github.com/users/AaadityaG/repos", "events_url": "https://api.github.com/users/AaadityaG/events{/privacy}", "received_events_url": "https://api.github.com/users/AaadityaG/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@amyeroberts ", "@amyeroberts should I close it, Please respond ", "@AaadityaG Please have patience. There are many people who have open PRs and issues who are waiting for a response. We try to address them as soon as we can but this is normally on the order of hours or days rather than minutes. \r\n\r\nYes, you can close this PR. The response from ChatGPT is because it doesn't know that \"Audio Spectrogram Transformer\" is the name of the model. \r\n", "@amyeroberts I am very sorry, I must be patient. " ]
1,698
1,698
1,698
NONE
null
# What does this PR do? Audio -> an Audio <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27152/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27152/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27152", "html_url": "https://github.com/huggingface/transformers/pull/27152", "diff_url": "https://github.com/huggingface/transformers/pull/27152.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27152.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/27151
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27151/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27151/comments
https://api.github.com/repos/huggingface/transformers/issues/27151/events
https://github.com/huggingface/transformers/issues/27151
1,968,733,015
I_kwDOCUB6oc51WHtX
27,151
Evaluatation of the gradients of class probabilities and logits with respect to inputs.
{ "login": "behroozazarkhalili", "id": 80390531, "node_id": "MDQ6VXNlcjgwMzkwNTMx", "avatar_url": "https://avatars.githubusercontent.com/u/80390531?v=4", "gravatar_id": "", "url": "https://api.github.com/users/behroozazarkhalili", "html_url": "https://github.com/behroozazarkhalili", "followers_url": "https://api.github.com/users/behroozazarkhalili/followers", "following_url": "https://api.github.com/users/behroozazarkhalili/following{/other_user}", "gists_url": "https://api.github.com/users/behroozazarkhalili/gists{/gist_id}", "starred_url": "https://api.github.com/users/behroozazarkhalili/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/behroozazarkhalili/subscriptions", "organizations_url": "https://api.github.com/users/behroozazarkhalili/orgs", "repos_url": "https://api.github.com/users/behroozazarkhalili/repos", "events_url": "https://api.github.com/users/behroozazarkhalili/events{/privacy}", "received_events_url": "https://api.github.com/users/behroozazarkhalili/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @behroozazarkhalili, thanks for raising an issue! \r\n\r\nThis is a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.", "> Hi @behroozazarkhalili, thanks for raising an issue!\r\n> \r\n> This is a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.\r\n\r\nOkay. I am going to close the issue and open a new one in the forums." ]
1,698
1,698
1,698
NONE
null
I want to compute the gradient of class probability and class logits with respect to inputs in transformer models. I have read issues #8601 and #8747, as well as the related test in the repository, to develop the following function. ```python def get_attention_hidden_states_grad_logits(hf_model, input, class_index, layer_index): model_outputs = hf_model(**input) hidden_sattes = model.get("hidden_states", []) attentions = model.get("attentions", []) [hidden_states[i].retain_grad() for i in range(len(hidden_states))] [attentions[i].retain_grad() for i in range(len(attentions))] class_logits = model_outputs.get('logits') class_probabilities = model_outputs.get('logits').softmax(dim=-1) class_logits.flatten()[logits_index].backward(retain_graph=True) hidden_states_grad = hidden_states[grad_index].grad attention_grads = attentions[grad_index].grad return hidden_states_grad, attention_grads, class_probabilities, class_logits ``` #### Question 1: Is it necessary to apply the `retain_grad` method on all hidden_sattes and attentions? I think due to using the chain rule, using the following code would be enough (using the `retain_grad` method just for hidden states and attention of the first layer.) ```python def get_attention_hidden_states_grad_logits(hf_model, input, class_index, layer_index): model_outputs = hf_model(**input) hidden_sattes = model.get("hidden_states", []) attentions = model.get("attentions", []) hidden_states[0].retain_grad() attentions[0].retain_grad() class_logits = model_outputs.get('logits') class_probabilities = model_outputs.get('logits').softmax(dim=-1) class_logits.flatten()[logits_index].backward(retain_graph=True) hidden_states_grad = hidden_states[grad_index].grad attention_grads = attentions[grad_index].grad return hidden_states_grad, attention_grads, class_probabilities, class_logits ``` #### Question 2. I'm wondering if it's possible to calculate the gradient of class probabilities. Specifically, I'd like to know if the following code, which substitutes class probabilities with class logits, can be used to evaluate the gradient of class probabilities with respect to attentions and hidden states. ```python def get_attention_hidden_states_grad_probs(hf_model, input, class_index, layer_index): model_outputs = hf_model(**input) hidden_sattes = model.get("hidden_states", []) attentions = model.get("attentions", []) [hidden_states[i].retain_grad() for i in range(len(hidden_states))] [attentions[i].retain_grad() for i in range(len(attentions))] class_logits = model_outputs.get('logits') class_probabilities = model_outputs.get('logits').softmax(dim=-1) class_class_probabilities.flatten()[logits_index].backward(retain_graph=True) hidden_states_grad = hidden_states[grad_index].grad attention_grads = attentions[grad_index].grad return hidden_states_grad, attention_grads, class_probabilities, class_logits ``` The gradients of attentions and hidden states change, which is surprising to me because the gradient of the attentions and hidden states with respect to inputs should not change as the intermediate variables when you change the output variables. #### Question 3. It appears that the following code can also evaluate the gradients of outputs (class probabilities or class logits) with respect to attention and hidden states. Can you please confirm if this is correct? ```python torch.autograd.grad(model_outputs.logits.flatten()[0], model_outputs.attentions[0], create_graph=True) torch.autograd.grad(model_outputs.logits.flatten()[0], model_outputs.hidden_states[0], create_graph=True) torch.autograd.grad(model_outputs.logits.softmax(dim=-1).flatten()[0], model_outputs.attentions[0], create_graph=True) torch.autograd.grad(model_outputs.logits.softmax(dim=-1).flatten()[0], model_outputs.hidden_states[0], create_graph=True) ``` @joeddav @patrickvonplaten Could you please confirm if I have made any errors? It would be tremendously helpful if you could provide me with the correct code to compute the gradient of attentions and hidden states. This is an essential aspect of my projects, and I greatly appreciate your assistance. #### Side Note: **I know that the gradient of attention and hidden states does not change when we change the class_index; It is indeed for another purpose and future tasks**.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27151/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27151/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27150
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27150/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27150/comments
https://api.github.com/repos/huggingface/transformers/issues/27150/events
https://github.com/huggingface/transformers/issues/27150
1,968,606,167
I_kwDOCUB6oc51VovX
27,150
Fast T5 tokenization fails for models with additional special tokens not prefixed by extra_id_
{ "login": "LoicDagnas", "id": 7431237, "node_id": "MDQ6VXNlcjc0MzEyMzc=", "avatar_url": "https://avatars.githubusercontent.com/u/7431237?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LoicDagnas", "html_url": "https://github.com/LoicDagnas", "followers_url": "https://api.github.com/users/LoicDagnas/followers", "following_url": "https://api.github.com/users/LoicDagnas/following{/other_user}", "gists_url": "https://api.github.com/users/LoicDagnas/gists{/gist_id}", "starred_url": "https://api.github.com/users/LoicDagnas/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LoicDagnas/subscriptions", "organizations_url": "https://api.github.com/users/LoicDagnas/orgs", "repos_url": "https://api.github.com/users/LoicDagnas/repos", "events_url": "https://api.github.com/users/LoicDagnas/events{/privacy}", "received_events_url": "https://api.github.com/users/LoicDagnas/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey, the PR was merged and should have been part of the latest release. I no longer have this issue on main, nor on `transformers == 4.35.0`. It was not included in the patch as it was reported after 😉 ", "It works with the latest release, thank you " ]
1,698
1,699
1,699
NONE
null
### System Info - `transformers` version: 4.34.1 - Platform: Windows-10-10.0.22621-SP0 - Python version: 3.9.12 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.2 - Accelerate version: not installed - Accelerate config: not found - PyTorch version (GPU?): 1.13.1+cpu (False) - Tensorflow version (GPU?): 2.13.0 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? @ArthurZucker @amyeroberts ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction As described [here](https://github.com/huggingface/transformers/pull/23909#issuecomment-1785240174), when I try to load this tokenizer ```python from transformers import AutoTokenizer AutoTokenizer.from_pretrained("iarfmoose/t5-base-question-generator") ``` [This exception](https://github.com/huggingface/transformers/blob/e4dad4fe32525c26eccb5790c258aa271476ac33/src/transformers/models/t5/tokenization_t5_fast.py#L127) is raised. ### Expected behavior The tokenizer should be loaded without error as it was for version < 4.34
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27150/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27150/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27149
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27149/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27149/comments
https://api.github.com/repos/huggingface/transformers/issues/27149/events
https://github.com/huggingface/transformers/pull/27149
1,968,496,015
PR_kwDOCUB6oc5eIJ7X
27,149
Remove some Kosmos-2 `copied from`
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,698
1,698
1,698
COLLABORATOR
null
# What does this PR do? Remove some Kosmos-2 `copied from` as there is recent change in #27086. Will update them later to match bart, but for now better not to break `main` CI.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27149/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27149/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27149", "html_url": "https://github.com/huggingface/transformers/pull/27149", "diff_url": "https://github.com/huggingface/transformers/pull/27149.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27149.patch", "merged_at": 1698678447000 }
https://api.github.com/repos/huggingface/transformers/issues/27148
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27148/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27148/comments
https://api.github.com/repos/huggingface/transformers/issues/27148/events
https://github.com/huggingface/transformers/pull/27148
1,968,341,200
PR_kwDOCUB6oc5eHn77
27,148
Update `Kosmos-2` gradient checkpointing
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Move this to #27148 otherwise that PR will fail CI." ]
1,698
1,698
1,698
COLLABORATOR
null
# What does this PR do? Update `Kosmos-2` gradient checkpointing as @younesbelkada told me.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27148/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27148/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27148", "html_url": "https://github.com/huggingface/transformers/pull/27148", "diff_url": "https://github.com/huggingface/transformers/pull/27148.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27148.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/27147
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27147/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27147/comments
https://api.github.com/repos/huggingface/transformers/issues/27147/events
https://github.com/huggingface/transformers/pull/27147
1,968,295,484
PR_kwDOCUB6oc5eHd5v
27,147
Fix some tests using `"common_voice"`
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "That is in the plan :-)" ]
1,698
1,698
1,698
COLLABORATOR
null
# What does this PR do? Fix some tests using `"common_voice"`. Errors: ``` FileNotFoundError: https://voice-prod-bundler-ee1969a6ce8178826482b88e843c335139bd3fb4.s3.amazonaws.com/cv-corpus-6.1-2020-12-11/en.tar.gz ``` and ``` This version of the Common Voice dataset is deprecated. You can download the latest one with >>> load_dataset("mozilla-foundation/common_voice_11_0", "en") ``` We can load the same dataset using `"mozilla-foundation/common_voice_6_1"`, but this requires passing `token` which is not good for testing purpose. So I change it to `"mozilla-foundation/common_voice_11_0"` but update expected outputs.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27147/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27147/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27147", "html_url": "https://github.com/huggingface/transformers/pull/27147", "diff_url": "https://github.com/huggingface/transformers/pull/27147.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27147.patch", "merged_at": 1698676035000 }
https://api.github.com/repos/huggingface/transformers/issues/27146
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27146/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27146/comments
https://api.github.com/repos/huggingface/transformers/issues/27146/events
https://github.com/huggingface/transformers/pull/27146
1,968,278,270
PR_kwDOCUB6oc5eHaFO
27,146
device agnostic models testing
{ "login": "statelesshz", "id": 28150734, "node_id": "MDQ6VXNlcjI4MTUwNzM0", "avatar_url": "https://avatars.githubusercontent.com/u/28150734?v=4", "gravatar_id": "", "url": "https://api.github.com/users/statelesshz", "html_url": "https://github.com/statelesshz", "followers_url": "https://api.github.com/users/statelesshz/followers", "following_url": "https://api.github.com/users/statelesshz/following{/other_user}", "gists_url": "https://api.github.com/users/statelesshz/gists{/gist_id}", "starred_url": "https://api.github.com/users/statelesshz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/statelesshz/subscriptions", "organizations_url": "https://api.github.com/users/statelesshz/orgs", "repos_url": "https://api.github.com/users/statelesshz/repos", "events_url": "https://api.github.com/users/statelesshz/events{/privacy}", "received_events_url": "https://api.github.com/users/statelesshz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks for opening this PR @statelesshz ! Let us know when you'd like a review", "Due to network issue, it's not easy to download the model weights on huggingface hub from my area, so I take `test_modeling_bert.py` as an example to give the execution reaults on NPU.\r\n\r\n```\r\n(hf) [root@localhost /home/hf/transformers]# RUN_SLOW=1 TRANSFORMERS_TEST_BACKEND=\"torch_npu\" TRANSFORMERS_TEST_DEVICE=\"npu:0\" TRANSFORMEs/bert/test_modeling_bert.py\r\n========================================================================================= test session starts ===========================\r\nplatform linux -- Python 3.8.17, pytest-7.4.2, pluggy-1.3.0 -- /root/anaconda3/envs/hf/bin/python\r\ncachedir: .pytest_cache\r\nrootdir: /home/hf/transformers\r\nconfigfile: setup.cfg\r\nplugins: dash-2.13.0, hydra-core-1.3.2, odl-0.7.0, anyio-4.0.0\r\ncollected 135 items \r\n\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_assisted_decoding_matches_greedy_search PASSED \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_assisted_decoding_sample PASSED \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_attention_outputs PASSED \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_beam_sample_generate PASSED \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_beam_sample_generate_dict_output PASSED \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_beam_search_generate PASSED \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_beam_search_generate_dict_output PASSED \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_beam_search_generate_dict_outputs_use_cache PASSED \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_can_use_safetensors PASSED \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_config PASSED \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_constrained_beam_search_generate SKIPPED (unconditional skip) \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_constrained_beam_search_generate_dict_output SKIPPED (unconditional skip) \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_contrastive_generate PASSED \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_contrastive_generate_dict_outputs_use_cache PASSED \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_contrastive_generate_low_memory PASSED \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_correct_missing_keys PASSED \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_cpu_offload SKIPPED (test requires CUDA) \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_decoder_model_past_with_large_inputs PASSED \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_decoder_model_past_with_large_inputs_relative_pos_emb PASSED \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_determinism PASSED \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_disk_offload SKIPPED (test requires CUDA) \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_equivalence_flax_to_pt SKIPPED (test is PT+FLAX test) \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_equivalence_pt_to_flax SKIPPED (test is PT+FLAX test) \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_feed_forward_chunking PASSED \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_flash_attn_2_conversion SKIPPED (test requires Flash Attention) \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_flash_attn_2_fp32_ln SKIPPED (test requires Flash Attention) \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_flash_attn_2_generate_left_padding SKIPPED (test requires Flash Attention) \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_flash_attn_2_generate_padding_right SKIPPED (test requires Flash Attention) \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_flash_attn_2_generate_use_cache SKIPPED (test requires Flash Attention) \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_flash_attn_2_inference SKIPPED (test requires Flash Attention) \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_flash_attn_2_inference_padding_right SKIPPED (test requires Flash Attention)\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_for_causal_lm PASSED \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_for_causal_lm_decoder PASSED \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_for_masked_lm PASSED \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_for_multiple_choice PASSED \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_for_next_sequence_prediction PASSED \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_for_pretraining PASSED \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_for_question_answering PASSED \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_for_sequence_classification PASSED \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_for_token_classification PASSED \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_for_warning_if_padding_and_no_attention_mask PASSED \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_forward_signature PASSED \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_from_pretrained_no_checkpoint PASSED \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_generate_from_inputs_embeds_decoder_only PASSED \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_generate_with_head_masking PASSED \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_generate_without_input_ids PASSED \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_gradient_checkpointing_backward_compatibility PASSED \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_gradient_checkpointing_enable_disable PASSED \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_greedy_generate PASSED \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_greedy_generate_dict_outputs PASSED \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_greedy_generate_dict_outputs_use_cache PASSED \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_group_beam_search_generate PASSED \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_group_beam_search_generate_dict_output PASSED \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_head_pruning PASSED \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_head_pruning_integration PASSED \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_head_pruning_save_load_from_config_init PASSED \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_head_pruning_save_load_from_pretrained PASSED \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_headmasking PASSED \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_hidden_states_output PASSED \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_initialization PASSED \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_inputs_embeds PASSED \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_keep_in_fp32_modules PASSED \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_left_padding_compatibility PASSED \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_load_save_without_tied_weights PASSED \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_load_with_mismatched_shapes PASSED \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_model PASSED \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_model_as_decoder PASSED \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_model_as_decoder_with_default_input_mask PASSED \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_model_common_attributes PASSED \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_model_from_pretrained PASSED \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_model_is_small PASSED \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_model_main_input_name PASSED \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_model_outputs_equivalence PASSED \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_model_parallel_beam_search PASSED \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_model_parallel_equal_results SKIPPED (test requires multiple GPUs) \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_model_parallelism SKIPPED (test requires multiple GPUs) \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_model_parallelization SKIPPED (test requires multiple GPUs) \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_model_various_embeddings PASSED \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_model_weights_reload_no_missing_tied_weights PASSED \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_multi_gpu_data_parallel_forward SKIPPED (test requires multiple GPUs) \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_past_key_values_format PASSED \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_pipeline_audio_classification SKIPPED (BertModelTest::test_pipeline_audio_cl\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_pipeline_automatic_speech_recognition SKIPPED (BertModelTest::test_pipeline_\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_pipeline_conversational SKIPPED (BertModelTest::test_pipeline_conversational\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_pipeline_depth_estimation SKIPPED (BertModelTest::test_pipeline_depth_estima\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_pipeline_document_question_answering SKIPPED (test requires PyTesseract) \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_pipeline_feature_extraction PASSED \r\n\r\n \r\nPASSED [ 65%]\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_pipeline_image_classification SKIPPED (BertModelTest::test_pipeline_image_cl\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_pipeline_image_segmentation SKIPPED (BertModelTest::test_pipeline_image_segm\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_pipeline_image_to_text SKIPPED (BertModelTest::test_pipeline_image_to_text i\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_pipeline_mask_generation SKIPPED (`run_pipeline_test` is currently not imple\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_pipeline_object_detection SKIPPED (BertModelTest::test_pipeline_object_detec\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_pipeline_question_answering PASSED \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_pipeline_summarization SKIPPED (BertModelTest::test_pipeline_summarization i\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_pipeline_table_question_answering SKIPPED (BertModelTest::test_pipeline_tabl\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_pipeline_text2text_generation SKIPPED (BertModelTest::test_pipeline_text2tex\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_pipeline_text_classification PASSED \r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_pipeline_text_generation \r\n\r\n\r\nPASSED [ 73%]\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_pipeline_text_to_audio SKIPPED (BertModelTest::test_pipeline_te...) [ 74%]\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_pipeline_token_classification PASSED [ 74%]\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_pipeline_translation SKIPPED (BertModelTest::test_pipeline_tran...) [ 75%]\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_pipeline_video_classification SKIPPED (test requires decord) [ 76%]\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_pipeline_visual_question_answering SKIPPED (BertModelTest::test...) [ 77%]\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_pipeline_zero_shot PASSED [ 77%]\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_pipeline_zero_shot_audio_classification SKIPPED (BertModelTest:...) [ 78%]\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_pipeline_zero_shot_image_classification SKIPPED (BertModelTest:...) [ 79%]\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_pipeline_zero_shot_object_detection SKIPPED (BertModelTest::tes...) [ 80%]\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_problem_types PASSED [ 80%]\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_pt_tf_model_equivalence SKIPPED (test is PT+TF test) [ 81%]\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_resize_embeddings_untied PASSED [ 82%]\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_resize_position_vector_embeddings PASSED [ 82%]\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_resize_tokens_embeddings PASSED [ 83%]\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_retain_grad_hidden_states_attentions PASSED [ 84%]\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_sample_generate PASSED [ 85%]\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_sample_generate_dict_output PASSED [ 85%]\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_save_load PASSED [ 86%]\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_save_load_fast_init_from_base PASSED [ 87%]\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_save_load_fast_init_to_base PASSED [ 88%]\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_save_load_keys_to_ignore_on_save PASSED [ 88%]\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_tie_model_weights PASSED [ 89%]\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_tied_weights_keys PASSED [ 90%]\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_torch_fx PASSED [ 91%]\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_torch_fx_output_loss PASSED [ 91%]\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_torchscript_device_change PASSED [ 92%]\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_torchscript_output_attentions PASSED [ 93%]\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_torchscript_output_hidden_state PASSED [ 94%]\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_torchscript_simple PASSED [ 94%]\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_training PASSED [ 95%]\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_training_gradient_checkpointing PASSED [ 96%]\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_training_gradient_checkpointing_use_reentrant PASSED [ 97%]\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_training_gradient_checkpointing_use_reentrant_false PASSED [ 97%]\r\ntests/models/bert/test_modeling_bert.py::BertModelIntegrationTest::test_inference_no_head_relative_embedding_key PASSED [ 99%]\r\ntests/models/bert/test_modeling_bert.py::BertModelIntegrationTest::test_inference_no_head_relative_embedding_key_query PASSED [100%]\r\n\r\n=========================================================== warnings summary ===========================================================\r\n../../../root/anaconda3/envs/hf/lib/python3.8/site-packages/_pytest/config/__init__.py:1373\r\n /root/anaconda3/envs/hf/lib/python3.8/site-packages/_pytest/config/__init__.py:1373: PytestConfigWarning: Unknown config option: doctest_glob\r\n \r\n self._warn_or_fail_if_strict(f\"Unknown config option: {key}\\n\")\r\n\r\ntests/test_modeling_common.py:2790\r\n /home/hf/transformers/tests/test_modeling_common.py:2790: PytestUnknownMarkWarning: Unknown pytest.mark.flash_attn_test - is this a typo? You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/how-to/mark.html\r\n @mark.flash_attn_test\r\n\r\ntests/test_modeling_common.py:2817\r\n /home/hf/transformers/tests/test_modeling_common.py:2817: PytestUnknownMarkWarning: Unknown pytest.mark.flash_attn_test - is this a typo? You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/how-to/mark.html\r\n @mark.flash_attn_test\r\n\r\ntests/test_modeling_common.py:2863\r\n /home/hf/transformers/tests/test_modeling_common.py:2863: PytestUnknownMarkWarning: Unknown pytest.mark.flash_attn_test - is this a typo? You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/how-to/mark.html\r\n @mark.flash_attn_test\r\n\r\ntests/test_modeling_common.py:2905\r\n /home/hf/transformers/tests/test_modeling_common.py:2905: PytestUnknownMarkWarning: Unknown pytest.mark.flash_attn_test - is this a typo? You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/how-to/mark.html\r\n @mark.flash_attn_test\r\n\r\ntests/test_modeling_common.py:2942\r\n /home/hf/transformers/tests/test_modeling_common.py:2942: PytestUnknownMarkWarning: Unknown pytest.mark.flash_attn_test - is this a typo? You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/how-to/mark.html\r\n @mark.flash_attn_test\r\n\r\ntests/test_modeling_common.py:2979\r\n /home/hf/transformers/tests/test_modeling_common.py:2979: PytestUnknownMarkWarning: Unknown pytest.mark.flash_attn_test - is this a typo? You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/how-to/mark.html\r\n @mark.flash_attn_test\r\n\r\ntests/test_modeling_common.py:3009\r\n /home/hf/transformers/tests/test_modeling_common.py:3009: PytestUnknownMarkWarning: Unknown pytest.mark.flash_attn_test - is this a typo? You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/how-to/mark.html\r\n @mark.flash_attn_test\r\n\r\nsrc/transformers/deepspeed.py:23\r\n /home/hf/transformers/src/transformers/deepspeed.py:23: FutureWarning: transformers.deepspeed module is deprecated and will be removed in a future version. Please import deepspeed modules directly from transformers.integrations\r\n warnings.warn(\r\n\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_assisted_decoding_matches_greedy_search\r\n /home/hf/transformers/tests/test_modeling_common.py:3069: UserWarning: AutoNonVariableTypeMode is deprecated and will be removed in 1.10 release. For kernel implementations please use AutoDispatchBelowADInplaceOrView instead, If you are looking for a user facing API to enable running your inference-only workload, please use c10::InferenceMode. Using AutoDispatchBelowADInplaceOrView in user code is under risk of producing silent wrong result in some edge cases. See Note [AutoDispatchBelowAutograd] for more details. (Triggered internally at /opt/_internal/cpython-3.8.17/lib/python3.8/site-packages/torch/include/ATen/core/LegacyTypeDispatch.h:74.)\r\n attn_mask[:, 0] = 1\r\n\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_attention_outputs\r\n /root/anaconda3/envs/hf/lib/python3.8/site-packages/torch_npu/utils/storage.py:36: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()\r\n if self.device.type != 'cpu':\r\n\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_beam_sample_generate\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_beam_sample_generate_dict_output\r\n /home/hf/transformers/src/transformers/generation/utils.py:3341: UserWarning: `max_length` is deprecated in this function, use `stopping_criteria=StoppingCriteriaList(MaxLengthCriteria(max_length=max_length))` instead.\r\n warnings.warn(\r\n\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_beam_search_generate\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_beam_search_generate_dict_output\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_beam_search_generate_dict_outputs_use_cache\r\n /home/hf/transformers/src/transformers/generation/utils.py:3005: UserWarning: `max_length` is deprecated in this function, use `stopping_criteria=StoppingCriteriaList(MaxLengthCriteria(max_length=max_length))` instead.\r\n warnings.warn(\r\n\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_greedy_generate\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_greedy_generate_dict_outputs\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_greedy_generate_dict_outputs_use_cache\r\n /home/hf/transformers/src/transformers/generation/utils.py:2450: UserWarning: `max_length` is deprecated in this function, use `stopping_criteria=StoppingCriteriaList([MaxLengthCriteria(max_length=max_length)])` instead.\r\n warnings.warn(\r\n\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_group_beam_search_generate\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_group_beam_search_generate_dict_output\r\n /home/hf/transformers/src/transformers/generation/utils.py:3663: UserWarning: `max_length` is deprecated in this function, use `stopping_criteria=StoppingCriteriaList(MaxLengthCriteria(max_length=max_length))` instead.\r\n warnings.warn(\r\n\r\ntests/models/bert/test_modeling_bert.py: 54 warnings\r\n /root/anaconda3/envs/hf/lib/python3.8/site-packages/urllib3/connectionpool.py:1056: InsecureRequestWarning: Unverified HTTPS request is being made to host '90.253.31.68'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings\r\n warnings.warn(\r\n\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_pipeline_text_generation\r\n /home/hf/transformers/src/transformers/generation/utils.py:1273: UserWarning: Using the model-agnostic default `max_length` (=20) to control the generation length. We recommend setting `max_new_tokens` to control the maximum length of the generation.\r\n warnings.warn(\r\n\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_pipeline_zero_shot\r\n /root/anaconda3/envs/hf/lib/python3.8/site-packages/torch/utils/data/dataloader.py:642: UserWarning: Length of IterableDataset <transformers.pipelines.pt_utils.PipelineChunkIterator object at 0xffff395d15e0> was reported to be 1 (when accessing len(dataloader)), but 2 samples have been fetched. \r\n warnings.warn(warn_msg)\r\n\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_pipeline_zero_shot\r\n /root/anaconda3/envs/hf/lib/python3.8/site-packages/torch/utils/data/dataloader.py:642: UserWarning: Length of IterableDataset <transformers.pipelines.pt_utils.PipelineChunkIterator object at 0xffff395d1790> was reported to be 2 (when accessing len(dataloader)), but 3 samples have been fetched. \r\n warnings.warn(warn_msg)\r\n\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_pipeline_zero_shot\r\n /root/anaconda3/envs/hf/lib/python3.8/site-packages/torch/utils/data/dataloader.py:642: UserWarning: Length of IterableDataset <transformers.pipelines.pt_utils.PipelineChunkIterator object at 0xffff395d1790> was reported to be 2 (when accessing len(dataloader)), but 4 samples have been fetched. \r\n warnings.warn(warn_msg)\r\n\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_pipeline_zero_shot\r\n /root/anaconda3/envs/hf/lib/python3.8/site-packages/torch/utils/data/dataloader.py:642: UserWarning: Length of IterableDataset <transformers.pipelines.pt_utils.PipelineChunkIterator object at 0xffff395be6a0> was reported to be 1 (when accessing len(dataloader)), but 2 samples have been fetched. \r\n warnings.warn(warn_msg)\r\n\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_pipeline_zero_shot\r\n /root/anaconda3/envs/hf/lib/python3.8/site-packages/torch/utils/data/dataloader.py:642: UserWarning: Length of IterableDataset <transformers.pipelines.pt_utils.PipelineChunkIterator object at 0xffff395d10a0> was reported to be 2 (when accessing len(dataloader)), but 3 samples have been fetched. \r\n warnings.warn(warn_msg)\r\n\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_pipeline_zero_shot\r\n /root/anaconda3/envs/hf/lib/python3.8/site-packages/torch/utils/data/dataloader.py:642: UserWarning: Length of IterableDataset <transformers.pipelines.pt_utils.PipelineChunkIterator object at 0xffff395d10a0> was reported to be 2 (when accessing len(dataloader)), but 4 samples have been fetched. \r\n warnings.warn(warn_msg)\r\n\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_sample_generate\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_sample_generate_dict_output\r\n /home/hf/transformers/src/transformers/generation/utils.py:2728: UserWarning: `max_length` is deprecated in this function, use `stopping_criteria=StoppingCriteriaList(MaxLengthCriteria(max_length=max_length))` instead.\r\n warnings.warn(\r\n\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_torch_fx\r\n /root/anaconda3/envs/hf/lib/python3.8/site-packages/torch/overrides.py:110: UserWarning: 'has_cuda' is deprecated, please use 'torch.backends.cuda.is_built()'\r\n torch.has_cuda,\r\n\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_torch_fx\r\n /root/anaconda3/envs/hf/lib/python3.8/site-packages/torch/overrides.py:111: UserWarning: 'has_cudnn' is deprecated, please use 'torch.backends.cudnn.is_available()'\r\n torch.has_cudnn,\r\n\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_torch_fx\r\n /root/anaconda3/envs/hf/lib/python3.8/site-packages/torch/overrides.py:117: UserWarning: 'has_mps' is deprecated, please use 'torch.backends.mps.is_built()'\r\n torch.has_mps,\r\n\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_torch_fx\r\n /root/anaconda3/envs/hf/lib/python3.8/site-packages/torch/overrides.py:118: UserWarning: 'has_mkldnn' is deprecated, please use 'torch.backends.mkldnn.is_available()'\r\n torch.has_mkldnn,\r\n\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_torchscript_device_change\r\n /root/anaconda3/envs/hf/lib/python3.8/site-packages/torch/jit/_trace.py:1093: TracerWarning: Output nr 1. of the traced function does not match the corresponding output of the Python function. Detailed error:\r\n Tensor-likes are not close!\r\n \r\n Mismatched elements: 2911 / 2912 (100.0%)\r\n Greatest absolute difference: 2.786480948328972 at index (10, 5, 26) (up to 1e-05 allowed)\r\n Greatest relative difference: 945.6015975578661 at index (6, 3, 1) (up to 1e-05 allowed)\r\n _check_trace(\r\n\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_torchscript_device_change\r\n /root/anaconda3/envs/hf/lib/python3.8/site-packages/torch/jit/_trace.py:1093: TracerWarning: Output nr 2. of the traced function does not match the corresponding output of the Python function. Detailed error:\r\n Tensor-likes are not close!\r\n \r\n Mismatched elements: 416 / 416 (100.0%)\r\n Greatest absolute difference: 0.20477154292166233 at index (4, 21) (up to 1e-05 allowed)\r\n Greatest relative difference: 46.22814215551245 at index (11, 11) (up to 1e-05 allowed)\r\n _check_trace(\r\n\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_torchscript_device_change\r\n /root/anaconda3/envs/hf/lib/python3.8/site-packages/torch/jit/_trace.py:1093: TracerWarning: Output nr 1. of the traced function does not match the corresponding output of the Python function. Detailed error:\r\n Tensor-likes are not close!\r\n \r\n Mismatched elements: 8917 / 9009 (99.0%)\r\n Greatest absolute difference: 0.27886080741882324 at index (3, 4, 13) (up to 1e-05 allowed)\r\n Greatest relative difference: 5961.7773065868605 at index (5, 1, 61) (up to 1e-05 allowed)\r\n _check_trace(\r\n\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_torchscript_device_change\r\n /root/anaconda3/envs/hf/lib/python3.8/site-packages/torch/jit/_trace.py:1093: TracerWarning: Output nr 1. of the traced function does not match the corresponding output of the Python function. Detailed error:\r\n Tensor-likes are not close!\r\n \r\n Mismatched elements: 8916 / 9009 (99.0%)\r\n Greatest absolute difference: 0.33901818469166756 at index (5, 4, 33) (up to 1e-05 allowed)\r\n Greatest relative difference: 8042.04489700118 at index (11, 5, 78) (up to 1e-05 allowed)\r\n _check_trace(\r\n\r\ntests/models/bert/test_modeling_bert.py::BertModelTest::test_training_gradient_checkpointing\r\n /root/anaconda3/envs/hf/lib/python3.8/site-packages/torch/utils/checkpoint.py:429: UserWarning: torch.utils.checkpoint: please pass in use_reentrant=True or use_reentrant=False explicitly. The default value of use_reentrant will be updated to be False in the future. To maintain current behavior, pass use_reentrant=True. It is recommended that you use use_reentrant=False. Refer to docs for more details on the differences between the two variants.\r\n warnings.warn(\r\n\r\n-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html\r\n======================================= 97 passed, 38 skipped, 93 warnings in 1067.07s (0:17:47) ======================================\r\n```", "Marking this ready for review :-) @ydshieh and @amyeroberts ", "Nice to know bert works well on NPU!", "Thank you @statelesshz the king of NPU!", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27146). All of your documentation changes will be reflected on that endpoint." ]
1,698
1,698
1,698
CONTRIBUTOR
null
# What does this PR do? Part of https://github.com/huggingface/transformers/issues/25654#issuecomment-1783704306 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27146/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27146/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27146", "html_url": "https://github.com/huggingface/transformers/pull/27146", "diff_url": "https://github.com/huggingface/transformers/pull/27146.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27146.patch", "merged_at": 1698772334000 }
https://api.github.com/repos/huggingface/transformers/issues/27145
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27145/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27145/comments
https://api.github.com/repos/huggingface/transformers/issues/27145/events
https://github.com/huggingface/transformers/pull/27145
1,968,064,782
PR_kwDOCUB6oc5eGrFi
27,145
[`tests` / `Quantization`] Fix bnb test
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks! makes sense, adapted the test accordingly and made : https://github.com/mosaicml/llm-foundry/issues/703 to track the issue" ]
1,698
1,698
1,698
CONTRIBUTOR
null
# What does this PR do? Fixes the currently bnb failing test ([Link to failing job](https://github.com/huggingface/transformers/actions/runs/6680718296)) ``` tests/quantization/bnb/test_mixed_int8.py::MixedInt8GPT2Test::test_get_keys_to_not_convert ``` The issue is related to the fact that currently the `trust_remote_code` version of MPT models cannot be loaded from transformers main since https://github.com/huggingface/transformers/pull/27086 removed some private methods from the modeling files, i.e. this simple script fails on main: ```python from accelerate import init_empty_weights from transformers import AutoModelForCausalLM, AutoConfig model_id = "mosaicml/mpt-7b" config = AutoConfig.from_pretrained( model_id, trust_remote_code=True, revision="72e5f594ce36f9cabfa2a9fd8f58b491eb467ee7" ) with init_empty_weights(): model = AutoModelForCausalLM.from_config( config, trust_remote_code=True, code_revision="72e5f594ce36f9cabfa2a9fd8f58b491eb467ee7" ) ``` I raised an issue on the Hub https://huggingface.co/mosaicml/mpt-7b/discussions/83 so the issue should be hopefully fixed soon. I proposed to temporary skip the test for trust_remote_code version of MPT models and revert it back once that issue gets resolved. I believe that should be fine
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27145/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27145/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27145", "html_url": "https://github.com/huggingface/transformers/pull/27145", "diff_url": "https://github.com/huggingface/transformers/pull/27145.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27145.patch", "merged_at": 1698676988000 }