url (stringlengths 62-66) | repository_url (stringclasses 1 value) | labels_url (stringlengths 76-80) | comments_url (stringlengths 71-75) | events_url (stringlengths 69-73) | html_url (stringlengths 50-56) | id (int64 377M-2.15B) | node_id (stringlengths 18-32) | number (int64 1-29.2k) | title (stringlengths 1-487) | user (dict) | labels (list) | state (stringclasses 2 values) | locked (bool, 2 classes) | assignee (dict) | assignees (list) | comments (sequence) | created_at (int64 1.54k-1.71k) | updated_at (int64 1.54k-1.71k) | closed_at (int64 1.54k-1.71k ⌀) | author_association (stringclasses 4 values) | active_lock_reason (stringclasses 2 values) | body (stringlengths 0-234k ⌀) | reactions (dict) | timeline_url (stringlengths 71-75) | state_reason (stringclasses 3 values) | draft (bool, 2 classes) | pull_request (dict) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/27849 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27849/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27849/comments | https://api.github.com/repos/huggingface/transformers/issues/27849/events | https://github.com/huggingface/transformers/pull/27849 | 2,025,630,654 | PR_kwDOCUB6oc5hJc2J | 27,849 | pin ruff==0.1.5 | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"feel free to merge ",
"OK, the failing tests are either flaky or unrelated. Considering the priority, merge this PR now."
] | 1,701 | 1,701 | 1,701 | COLLABORATOR | null | # What does this PR do?
pin `ruff==0.1.5` to avoid frequent break on CI. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27849/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27849/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27849",
"html_url": "https://github.com/huggingface/transformers/pull/27849",
"diff_url": "https://github.com/huggingface/transformers/pull/27849.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27849.patch",
"merged_at": 1701767844000
} |
https://api.github.com/repos/huggingface/transformers/issues/27848 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27848/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27848/comments | https://api.github.com/repos/huggingface/transformers/issues/27848/events | https://github.com/huggingface/transformers/pull/27848 | 2,025,490,256 | PR_kwDOCUB6oc5hI9Wn | 27,848 | update version of warning notification for `get_default_device` to v4.38 | {
"login": "statelesshz",
"id": 28150734,
"node_id": "MDQ6VXNlcjI4MTUwNzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/28150734?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/statelesshz",
"html_url": "https://github.com/statelesshz",
"followers_url": "https://api.github.com/users/statelesshz/followers",
"following_url": "https://api.github.com/users/statelesshz/following{/other_user}",
"gists_url": "https://api.github.com/users/statelesshz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/statelesshz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/statelesshz/subscriptions",
"organizations_url": "https://api.github.com/users/statelesshz/orgs",
"repos_url": "https://api.github.com/users/statelesshz/repos",
"events_url": "https://api.github.com/users/statelesshz/events{/privacy}",
"received_events_url": "https://api.github.com/users/statelesshz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27848). All of your documentation changes will be reflected on that endpoint.",
"Ah the CI is collecting ruff 0.1.7 , could you try rebasing on main",
"@ArthurZucker we got green light :-)",
"Thanks!"
] | 1,701 | 1,707 | 1,701 | CONTRIBUTOR | null | # What does this PR do?
links to https://github.com/huggingface/transformers/pull/27256?notification_referrer_id=NT_kwDOAa2LzrM4Mjg2NjEyNzQxOjI4MTUwNzM0#issuecomment-1840172527
The version of accelerate is verified in the `__init__` method:
https://github.com/huggingface/transformers/blob/235e5d4991e8a0984aa78db91087b49622c7740e/src/transformers/tools/base.py#L490-L493
BTW, correcting several code formatting errors.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
cc @muellerzr
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27848/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27848/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27848",
"html_url": "https://github.com/huggingface/transformers/pull/27848",
"diff_url": "https://github.com/huggingface/transformers/pull/27848.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27848.patch",
"merged_at": 1701951910000
} |
https://api.github.com/repos/huggingface/transformers/issues/27847 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27847/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27847/comments | https://api.github.com/repos/huggingface/transformers/issues/27847/events | https://github.com/huggingface/transformers/pull/27847 | 2,025,385,142 | PR_kwDOCUB6oc5hImU9 | 27,847 | Fix bug of _prepare_4d_attention_mask | {
"login": "jiqing-feng",
"id": 107918818,
"node_id": "U_kgDOBm614g",
"avatar_url": "https://avatars.githubusercontent.com/u/107918818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiqing-feng",
"html_url": "https://github.com/jiqing-feng",
"followers_url": "https://api.github.com/users/jiqing-feng/followers",
"following_url": "https://api.github.com/users/jiqing-feng/following{/other_user}",
"gists_url": "https://api.github.com/users/jiqing-feng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiqing-feng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiqing-feng/subscriptions",
"organizations_url": "https://api.github.com/users/jiqing-feng/orgs",
"repos_url": "https://api.github.com/users/jiqing-feng/repos",
"events_url": "https://api.github.com/users/jiqing-feng/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiqing-feng/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @ArthurZucker @younesbelkada . All CIs are green, would you please help me review and merge it? Thx!"
] | 1,701 | 1,704 | 1,701 | CONTRIBUTOR | null | Hi @ArthurZucker @younesbelkada . Since `_prepare_4d_attention_mask` is no longer a member function of `AttentionMaskConverter`, I directly import `_prepare_4d_attention_mask` from `modeling_attn_mask_utils`. Would you please help to review it? Thx!
BTW, the failed CIs are not related to my changes
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27847/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27847/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27847",
"html_url": "https://github.com/huggingface/transformers/pull/27847",
"diff_url": "https://github.com/huggingface/transformers/pull/27847.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27847.patch",
"merged_at": 1701931385000
} |
https://api.github.com/repos/huggingface/transformers/issues/27846 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27846/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27846/comments | https://api.github.com/repos/huggingface/transformers/issues/27846/events | https://github.com/huggingface/transformers/issues/27846 | 2,025,271,128 | I_kwDOCUB6oc54ty9Y | 27,846 | `ACT2FN` usage | {
"login": "ariG23498",
"id": 36856589,
"node_id": "MDQ6VXNlcjM2ODU2NTg5",
"avatar_url": "https://avatars.githubusercontent.com/u/36856589?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ariG23498",
"html_url": "https://github.com/ariG23498",
"followers_url": "https://api.github.com/users/ariG23498/followers",
"following_url": "https://api.github.com/users/ariG23498/following{/other_user}",
"gists_url": "https://api.github.com/users/ariG23498/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ariG23498/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ariG23498/subscriptions",
"organizations_url": "https://api.github.com/users/ariG23498/orgs",
"repos_url": "https://api.github.com/users/ariG23498/repos",
"events_url": "https://api.github.com/users/ariG23498/events{/privacy}",
"received_events_url": "https://api.github.com/users/ariG23498/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey! This seems to be related to backwards compatibility but we favor using the ACT2FN directly ",
"Thanks for the prompt response. You can close this issue if you would like."
] | 1,701 | 1,701 | 1,701 | CONTRIBUTOR | null | In the code below we have a function to retrieve the correct activation function from the string.
https://github.com/huggingface/transformers/blob/235e5d4991e8a0984aa78db91087b49622c7740e/src/transformers/activations.py#L224-L228
I have noticed some models to use this function and others to directly import `ACT2FN` and use it as a dictionary.
IMO the function is redundant and importing the dictionary directly to scripts is enough. Curious to hear other opinions. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27846/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27846/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27845 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27845/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27845/comments | https://api.github.com/repos/huggingface/transformers/issues/27845/events | https://github.com/huggingface/transformers/issues/27845 | 2,025,264,667 | I_kwDOCUB6oc54txYb | 27,845 | Mistral Slow Tokenizer error | {
"login": "ari9dam",
"id": 14134882,
"node_id": "MDQ6VXNlcjE0MTM0ODgy",
"avatar_url": "https://avatars.githubusercontent.com/u/14134882?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ari9dam",
"html_url": "https://github.com/ari9dam",
"followers_url": "https://api.github.com/users/ari9dam/followers",
"following_url": "https://api.github.com/users/ari9dam/following{/other_user}",
"gists_url": "https://api.github.com/users/ari9dam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ari9dam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ari9dam/subscriptions",
"organizations_url": "https://api.github.com/users/ari9dam/orgs",
"repos_url": "https://api.github.com/users/ari9dam/repos",
"events_url": "https://api.github.com/users/ari9dam/events{/privacy}",
"received_events_url": "https://api.github.com/users/ari9dam/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey, sorry but I just can't reproduce the snippet is missing the end and the full traceback ",
"TypeError Traceback (most recent call last)\r\nCell In[13], line 2\r\n 1 import transformers\r\n----> 2 tokenizer = transformers.AutoTokenizer.from_pretrained(\"/mnt/mistral-7b-v0.1/\", use_fast=False)\r\n\r\nFile /anaconda/envs/orca2_hf_gradio/lib/python3.9/site-packages/transformers/models/auto/tokenization_auto.py:768, in AutoTokenizer.from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs)\r\n 764 if tokenizer_class is None:\r\n 765 raise ValueError(\r\n 766 f\"Tokenizer class {tokenizer_class_candidate} does not exist or is not currently imported.\"\r\n 767 )\r\n--> 768 return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)\r\n 770 # Otherwise we have to be creative.\r\n 771 # if model is an encoder decoder, the encoder tokenizer class is used by default\r\n 772 if isinstance(config, EncoderDecoderConfig):\r\n\r\nFile /anaconda/envs/orca2_hf_gradio/lib/python3.9/site-packages/transformers/tokenization_utils_base.py:2024, in PreTrainedTokenizerBase.from_pretrained(cls, pretrained_model_name_or_path, cache_dir, force_download, local_files_only, token, revision, *init_inputs, **kwargs)\r\n 2021 else:\r\n 2022 logger.info(f\"loading file {file_path} from cache at {resolved_vocab_files[file_id]}\")\r\n-> 2024 return cls._from_pretrained(\r\n 2025 resolved_vocab_files,\r\n 2026 pretrained_model_name_or_path,\r\n 2027 init_configuration,\r\n 2028 *init_inputs,\r\n 2029 token=token,\r\n 2030 cache_dir=cache_dir,\r\n 2031 local_files_only=local_files_only,\r\n 2032 _commit_hash=commit_hash,\r\n 2033 _is_local=is_local,\r\n 2034 **kwargs,\r\n 2035 )\r\n\r\nFile /anaconda/envs/orca2_hf_gradio/lib/python3.9/site-packages/transformers/tokenization_utils_base.py:2256, in PreTrainedTokenizerBase._from_pretrained(cls, resolved_vocab_files, pretrained_model_name_or_path, init_configuration, token, cache_dir, local_files_only, _commit_hash, _is_local, *init_inputs, **kwargs)\r\n 2254 # Instantiate the tokenizer.\r\n 2255 try:\r\n-> 2256 tokenizer = cls(*init_inputs, **init_kwargs)\r\n 2257 except OSError:\r\n 2258 raise OSError(\r\n 2259 \"Unable to load vocabulary from file. 
\"\r\n 2260 \"Please check that the provided vocabulary is accessible and not corrupted.\"\r\n 2261 )\r\n\r\nFile /anaconda/envs/orca2_hf_gradio/lib/python3.9/site-packages/transformers/models/llama/tokenization_llama.py:178, in LlamaTokenizer.__init__(self, vocab_file, unk_token, bos_token, eos_token, pad_token, sp_model_kwargs, add_bos_token, add_eos_token, clean_up_tokenization_spaces, use_default_system_prompt, spaces_between_special_tokens, legacy, **kwargs)\r\n 176 self.add_eos_token = add_eos_token\r\n 177 self.use_default_system_prompt = use_default_system_prompt\r\n--> 178 self.sp_model = self.get_spm_processor(kwargs.pop(\"from_slow\", False))\r\n 180 super().__init__(\r\n 181 bos_token=bos_token,\r\n 182 eos_token=eos_token,\r\n (...)\r\n 192 **kwargs,\r\n 193 )\r\n\r\nFile /anaconda/envs/orca2_hf_gradio/lib/python3.9/site-packages/transformers/models/llama/tokenization_llama.py:203, in LlamaTokenizer.get_spm_processor(self, from_slow)\r\n 201 tokenizer = spm.SentencePieceProcessor(**self.sp_model_kwargs)\r\n 202 if self.legacy or from_slow: # no dependency on protobuf\r\n--> 203 tokenizer.Load(self.vocab_file)\r\n 204 return tokenizer\r\n 206 with open(self.vocab_file, \"rb\") as f:\r\n\r\nFile /anaconda/envs/orca2_hf_gradio/lib/python3.9/site-packages/sentencepiece/__init__.py:905, in SentencePieceProcessor.Load(self, model_file, model_proto)\r\n 903 if model_proto:\r\n 904 return self.LoadFromSerializedProto(model_proto)\r\n--> 905 return self.LoadFromFile(model_file)\r\n\r\nFile /anaconda/envs/orca2_hf_gradio/lib/python3.9/site-packages/sentencepiece/__init__.py:310, in SentencePieceProcessor.LoadFromFile(self, arg)\r\n 309 def LoadFromFile(self, arg):\r\n--> 310 return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg)\r\n\r\nTypeError: not a string\r\n=================\r\n\r\nMight be an issue with my local version. Because when I'm taking it from huggingface repo it is working.",
"Please refrain from just copy pasting unfromatted text like this it's not gonna help me help you. I need a full reproducer, I cannot access your local folder to see what your tokenizer.model looks like. See the following colab https://colab.research.google.com/drive/1AcZygj9zMvbkiVduKDsm1e1x-TsCJZJx?usp=sharing",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,701 | 1,705 | 1,705 | NONE | null | ### System Info
transformers=4.35.0
### Who can help?
@Arthu
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
No slow tokenizer
tokenizer = transformers.AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1", use_fast=False)--> results into exception
TypeError: not a string
### Expected behavior
1. slow tokenizer works | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27845/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27845/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27844 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27844/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27844/comments | https://api.github.com/repos/huggingface/transformers/issues/27844/events | https://github.com/huggingface/transformers/pull/27844 | 2,025,152,558 | PR_kwDOCUB6oc5hH0Z6 | 27,844 | move code to Trainer.evaluate to enable use of that function with multiple datasets | {
"login": "peter-sk",
"id": 6168908,
"node_id": "MDQ6VXNlcjYxNjg5MDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/6168908?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/peter-sk",
"html_url": "https://github.com/peter-sk",
"followers_url": "https://api.github.com/users/peter-sk/followers",
"following_url": "https://api.github.com/users/peter-sk/following{/other_user}",
"gists_url": "https://api.github.com/users/peter-sk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/peter-sk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/peter-sk/subscriptions",
"organizations_url": "https://api.github.com/users/peter-sk/orgs",
"repos_url": "https://api.github.com/users/peter-sk/repos",
"events_url": "https://api.github.com/users/peter-sk/events{/privacy}",
"received_events_url": "https://api.github.com/users/peter-sk/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@younesbelkada\r\n@lvwerra \r\nLet me know if you would like a test for this.",
"@lvwerra\r\n@younesbelkada \r\nI added a test.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27844). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Makes sense! @peter-sk would you be happy to address:\r\n\r\n> let's go ahead and adjust the type for the docstring and tell users how they can pass in multiple eval datasets as a <Tip> in the docstring to evaluate please! \r\n\r\nThis would require slightly changing the docstring of the evaluate method in the PR",
"Finally got around to changing the doc string and adding a tup. Any further requests for changes? Or is this good-to-go?\r\n",
"Upon CI passing, that should be good to go I think"
] | 1,701 | 1,703 | 1,703 | CONTRIBUTOR | null | # What does this PR do?
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@lvwerra
@younesbelkada
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27844/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27844/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27844",
"html_url": "https://github.com/huggingface/transformers/pull/27844",
"diff_url": "https://github.com/huggingface/transformers/pull/27844.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27844.patch",
"merged_at": 1703066156000
} |
https://api.github.com/repos/huggingface/transformers/issues/27843 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27843/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27843/comments | https://api.github.com/repos/huggingface/transformers/issues/27843/events | https://github.com/huggingface/transformers/pull/27843 | 2,025,054,830 | PR_kwDOCUB6oc5hHgFi | 27,843 | [DETA] fix backbone freeze/unfreeze function | {
"login": "SangbumChoi",
"id": 34004152,
"node_id": "MDQ6VXNlcjM0MDA0MTUy",
"avatar_url": "https://avatars.githubusercontent.com/u/34004152?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SangbumChoi",
"html_url": "https://github.com/SangbumChoi",
"followers_url": "https://api.github.com/users/SangbumChoi/followers",
"following_url": "https://api.github.com/users/SangbumChoi/following{/other_user}",
"gists_url": "https://api.github.com/users/SangbumChoi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SangbumChoi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SangbumChoi/subscriptions",
"organizations_url": "https://api.github.com/users/SangbumChoi/orgs",
"repos_url": "https://api.github.com/users/SangbumChoi/repos",
"events_url": "https://api.github.com/users/SangbumChoi/events{/privacy}",
"received_events_url": "https://api.github.com/users/SangbumChoi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@ArthurZucker Hi Arthur, can you review this test case modification for DETA? https://github.com/huggingface/transformers/pull/27843/commits/a583fa70fe6010fe7450aa049cc1a65950cb59c7\r\n~~Because I do not fully understand CircleCI when this testing function will be called in transformers~~\r\nNow I think I understand the CI piplines.",
"For the last CI you need to run `make style` with `ruff==0.1.5`",
"```\r\nroot@9603895c7c37:/mnt/nas2/users/sbchoi/transformers# make style\r\nruff check examples tests src utils setup.py conftest.py --fix\r\nruff format examples tests src utils setup.py conftest.py\r\n2706 files left unchanged\r\nmake autogenerate_code\r\nmake[1]: Entering directory '/mnt/nas2/users/sbchoi/transformers'\r\nrunning deps_table_update\r\nupdating src/transformers/dependency_versions_table.py\r\nmake[1]: Leaving directory '/mnt/nas2/users/sbchoi/transformers'\r\nmake extra_style_checks\r\nmake[1]: Entering directory '/mnt/nas2/users/sbchoi/transformers'\r\npython utils/custom_init_isort.py\r\npython utils/sort_auto_mappings.py\r\npython utils/check_doc_toc.py --fix_and_overwrite\r\nmake[1]: Leaving directory '/mnt/nas2/users/sbchoi/transformers'\r\n```\r\nrunning make style doesn't change anything, and the last CI seems irrelevant to this PR.\r\n\r\n```\r\nWould reformat: src/transformers/models/bloom/modeling_bloom.py\r\nWould reformat: src/transformers/models/fuyu/image_processing_fuyu.py\r\nWould reformat: src/transformers/models/mpt/modeling_mpt.py\r\n3 files would be reformatted, 2416 files left unchanged\r\n\r\nExited with code exit status 1\r\n```\r\ndid I missed something?\r\n@ArthurZucker \r\n",
"It's alright just revert the changes and we'll be alright ",
"@ArthurZucker Done by just syncing the fork of my original repo!",
"Thanks for the contribution! 🤗 "
] | 1,701 | 1,702 | 1,702 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
Since deta model does not have vision
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[4], line 1
----> 1 model.model.freeze_backbone()
File /opt/conda/lib/python3.10/site-packages/transformers/models/deta/modeling_deta.py:1419, in DetaModel.freeze_backbone(self)
1418 def freeze_backbone(self):
-> 1419 for name, param in self.backbone.conv_encoder.model.named_parameters():
1420 param.requires_grad_(False)
File /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py:1695, in Module.__getattr__(self, name)
1693 if name in modules:
1694 return modules[name]
-> 1695 raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")
AttributeError: 'DetaBackboneWithPositionalEncodings' object has no attribute 'conv_encoder'
```
test can be done by
```
from transformers import DetaForObjectDetection
model = DetaForObjectDetection.from_pretrained("jozhang97/deta-swin-large")
model.model.freeze_backbone()
```
@amyeroberts Can you confirm this modification? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27843/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27843/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27843",
"html_url": "https://github.com/huggingface/transformers/pull/27843",
"diff_url": "https://github.com/huggingface/transformers/pull/27843.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27843.patch",
"merged_at": 1702277850000
} |
https://api.github.com/repos/huggingface/transformers/issues/27842 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27842/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27842/comments | https://api.github.com/repos/huggingface/transformers/issues/27842/events | https://github.com/huggingface/transformers/pull/27842 | 2,024,842,273 | PR_kwDOCUB6oc5hG4J2 | 27,842 | Fixing visualization code for object detection to support both types of bounding box. | {
"login": "Anindyadeep",
"id": 58508471,
"node_id": "MDQ6VXNlcjU4NTA4NDcx",
"avatar_url": "https://avatars.githubusercontent.com/u/58508471?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Anindyadeep",
"html_url": "https://github.com/Anindyadeep",
"followers_url": "https://api.github.com/users/Anindyadeep/followers",
"following_url": "https://api.github.com/users/Anindyadeep/following{/other_user}",
"gists_url": "https://api.github.com/users/Anindyadeep/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Anindyadeep/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Anindyadeep/subscriptions",
"organizations_url": "https://api.github.com/users/Anindyadeep/orgs",
"repos_url": "https://api.github.com/users/Anindyadeep/repos",
"events_url": "https://api.github.com/users/Anindyadeep/events{/privacy}",
"received_events_url": "https://api.github.com/users/Anindyadeep/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thank you for your contribution! Please make sure the CI checks are green: run `make style` to fix the issues. ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27842). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Thank you for making the example more robust to different datasets! LGTM"
] | 1,701 | 1,703 | 1,703 | CONTRIBUTOR | null | # What does this PR do?
In the current documentation of using [transformers for object detection](https://huggingface.co/docs/transformers/tasks/object_detection). There belongs a section where we visualize the bounding boxes. But when the same code is used in a different dataset with un-normalized bounding box, it does not work. For a new comer, or someone trying to see results with different dataset might cause friction. So, this PR tries to cover up that edge case.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #27817
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
Documentation: @stevhliu and @MKhalusova
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27842/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27842/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27842",
"html_url": "https://github.com/huggingface/transformers/pull/27842",
"diff_url": "https://github.com/huggingface/transformers/pull/27842.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27842.patch",
"merged_at": 1703251481000
} |
https://api.github.com/repos/huggingface/transformers/issues/27841 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27841/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27841/comments | https://api.github.com/repos/huggingface/transformers/issues/27841/events | https://github.com/huggingface/transformers/pull/27841 | 2,024,826,996 | PR_kwDOCUB6oc5hG0y_ | 27,841 | Move CLIP _no_split_modules to CLIPPreTrainedModel | {
"login": "lz1oceani",
"id": 8915833,
"node_id": "MDQ6VXNlcjg5MTU4MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/8915833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lz1oceani",
"html_url": "https://github.com/lz1oceani",
"followers_url": "https://api.github.com/users/lz1oceani/followers",
"following_url": "https://api.github.com/users/lz1oceani/following{/other_user}",
"gists_url": "https://api.github.com/users/lz1oceani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lz1oceani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lz1oceani/subscriptions",
"organizations_url": "https://api.github.com/users/lz1oceani/orgs",
"repos_url": "https://api.github.com/users/lz1oceani/repos",
"events_url": "https://api.github.com/users/lz1oceani/events{/privacy}",
"received_events_url": "https://api.github.com/users/lz1oceani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> Hey! Thanks, I think #27851 already partly fixed this for the encoder, let's merge the decoder part if you want!\r\n\r\nHi, I have looked at the current version of master branch and that fix. I guess CLIPModel is not fixed, and the code is shown as following:\r\n```\r\n@add_start_docstrings(CLIP_START_DOCSTRING)\r\nclass CLIPModel(CLIPPreTrainedModel):\r\n config_class = CLIPConfig\r\n\r\n def __init__(self, config: CLIPConfig):\r\n super().__init__(config)\r\n```\r\nBy following llama's code, I suggest to move all _no_split_modules to the PreTrainedModel. I can update my PR if you accept this solution. Also, I need to reformat some files to pass the workflow in the past. Let me check it latter.\r\n```\r\nclass LlamaPreTrainedModel(PreTrainedModel):\r\n config_class = LlamaConfig\r\n base_model_prefix = \"model\"\r\n supports_gradient_checkpointing = True\r\n _no_split_modules = [\"LlamaDecoderLayer\"]\r\n _skip_keys_device_placement = \"past_key_values\"\r\n _supports_flash_attn_2 = True\r\n```",
"Llama is not multimodal, this it only has one actual model while clip has two. The no split modules should be different for the clip encoder and the clip decoder",
"It is OK.. I just add _no_split for CLIP Model now.. Could you merge the PR to the main branch?",
"The class CLIPModel contains both text encoder and image encoder. Thus, I include [\"CLIPTextEmbeddings\", \"CLIPEncoderLayer\"] to the _no_split.\r\n"
] | 1,701 | 1,706 | 1,706 | CONTRIBUTOR | null | # What does this PR do?
Move _no_split_modules to CLIPPreTrainedModel so that device_map=auto can support all CLIP models.
## Before submitting
- [] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [] Did you write any new necessary tests?
## Who can review?
@amyeroberts | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27841/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27841/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27841",
"html_url": "https://github.com/huggingface/transformers/pull/27841",
"diff_url": "https://github.com/huggingface/transformers/pull/27841.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27841.patch",
"merged_at": 1706577358000
} |
https://api.github.com/repos/huggingface/transformers/issues/27840 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27840/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27840/comments | https://api.github.com/repos/huggingface/transformers/issues/27840/events | https://github.com/huggingface/transformers/pull/27840 | 2,024,686,240 | PR_kwDOCUB6oc5hGU4m | 27,840 | Fix tensor-parallelism link | {
"login": "steilgedacht",
"id": 89748204,
"node_id": "MDQ6VXNlcjg5NzQ4MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/89748204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/steilgedacht",
"html_url": "https://github.com/steilgedacht",
"followers_url": "https://api.github.com/users/steilgedacht/followers",
"following_url": "https://api.github.com/users/steilgedacht/following{/other_user}",
"gists_url": "https://api.github.com/users/steilgedacht/gists{/gist_id}",
"starred_url": "https://api.github.com/users/steilgedacht/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/steilgedacht/subscriptions",
"organizations_url": "https://api.github.com/users/steilgedacht/orgs",
"repos_url": "https://api.github.com/users/steilgedacht/repos",
"events_url": "https://api.github.com/users/steilgedacht/events{/privacy}",
"received_events_url": "https://api.github.com/users/steilgedacht/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yes, I also changed now the second link, however I commited and pushed to fast and had another typo in it which I corrected with a second commit, because I didn't found a option to delete the first commit. I hope that's okay - I am not that experienced yet with pull request.",
"No problem, and thanks for updating BLOOM as well! The last step is to run `make style` to reformat the code so the CI test passes :)",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27840). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Hmmm, I totally agree that the other files shouldn't be touched. The newest ruff version changed these files. I reversed now the changes, and then I ran `pip uninstall black && pip install -U ruff==0.1.5`. After that I ran `make style` again, but this hasn't made any changes to any files and as you can see the test failed again. I also tried to run your suggestion with `make fixup`, but I just got error messages which I coun't fix. What should we do now?",
"Ah ok, can you try rebasing on main and push with `--force` then?",
"Done, it made a lot of changes and I have no idea now if that was correct 😅",
"Oops I don't think you pushed with `--force` which is why we're seeing all these other changes! On the positive side though, changes to the bloom/fuyu/mpt modeling files looks like they're fixed 🎉 \r\n\r\nDo you mind opening a clean PR and then we can merge 😄 ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,701 | 1,705 | 1,705 | NONE | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Replaces the old link in the llama configuration file to the new section on the website.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
@stevhliu and @MKhalusova
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27840/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27840/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27840",
"html_url": "https://github.com/huggingface/transformers/pull/27840",
"diff_url": "https://github.com/huggingface/transformers/pull/27840.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27840.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27839 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27839/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27839/comments | https://api.github.com/repos/huggingface/transformers/issues/27839/events | https://github.com/huggingface/transformers/pull/27839 | 2,024,635,666 | PR_kwDOCUB6oc5hGJh0 | 27,839 | [PatchTST] Some improvements | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,701 | 1,707 | 1,707 | CONTRIBUTOR | null | # What does this PR do?
When playing around with PatchTST, I've made some slight improvements:
- rename `target_values` to `labels` for the classification model
- rename bs->batch_size everywhere
- remove `num_targets` from the config and use `num_labels` instead | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27839/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27839/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27839",
"html_url": "https://github.com/huggingface/transformers/pull/27839",
"diff_url": "https://github.com/huggingface/transformers/pull/27839.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27839.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27838 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27838/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27838/comments | https://api.github.com/repos/huggingface/transformers/issues/27838/events | https://github.com/huggingface/transformers/issues/27838 | 2,024,612,057 | I_kwDOCUB6oc54rSDZ | 27,838 | split_between_processes not splitting dataset between processes? | {
"login": "conceptofmind",
"id": 25208228,
"node_id": "MDQ6VXNlcjI1MjA4MjI4",
"avatar_url": "https://avatars.githubusercontent.com/u/25208228?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/conceptofmind",
"html_url": "https://github.com/conceptofmind",
"followers_url": "https://api.github.com/users/conceptofmind/followers",
"following_url": "https://api.github.com/users/conceptofmind/following{/other_user}",
"gists_url": "https://api.github.com/users/conceptofmind/gists{/gist_id}",
"starred_url": "https://api.github.com/users/conceptofmind/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/conceptofmind/subscriptions",
"organizations_url": "https://api.github.com/users/conceptofmind/orgs",
"repos_url": "https://api.github.com/users/conceptofmind/repos",
"events_url": "https://api.github.com/users/conceptofmind/events{/privacy}",
"received_events_url": "https://api.github.com/users/conceptofmind/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @conceptofmind this is related to accelerate, you should open an issue there. cc @muellerz ",
"> Hi @conceptofmind this is related to accelerate, you should open an issue there. cc @muellerz\r\n\r\nHi @SunMarc,\r\n\r\nI was unsure whether this should be listed under accelerate, datasets, or transformers as it is in relation to all of them. And I do not know whether each independent factor would impact the splitting of the data. \r\n\r\nTransformers - pipelines\r\nDatasets - KeyDataset, load_dataset\r\nAccelerate - split_between_processes\r\netc etc\r\n\r\nI can reopen it under accelerate.\r\n\r\nThank you,\r\n\r\nEnrico"
] | 1,701 | 1,701 | 1,701 | NONE | null | ### System Info
python 3.10
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
This does not work:
```python
dataset = load_dataset(
"wikitext",
"wikitext-2-v1",
split="train",
)
print(len(dataset)) # 36718
# Assume 8 processes
with accelerator.split_between_processes(dataset) as data:
print(len(data)) # 36718
for out in pipe(
KeyDataset(data, "text"),
):
continue
```
This works:
```python
dataset = load_dataset(
"wikitext",
"wikitext-2-v1",
split="train",
)
print(len(dataset))
# Assume 8 processes
with accelerator.split_between_processes(dataset["text"]) as data:
print(len(data))
```
### Expected behavior
I am unsure if this is intended behavior or not. A dataset consisting of the lists or input id tensors is split correctly. The dictionary provided from datasets will not be split.
Maybe this is more of a feature request?
Where the dataset could be split across all 8 processes to be used with the transformers feature extraction pipeline for inference. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27838/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27838/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27837 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27837/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27837/comments | https://api.github.com/repos/huggingface/transformers/issues/27837/events | https://github.com/huggingface/transformers/issues/27837 | 2,024,486,636 | I_kwDOCUB6oc54qzbs | 27,837 | torch CUDA graphs with HF generate | {
"login": "tsengalb99",
"id": 33385672,
"node_id": "MDQ6VXNlcjMzMzg1Njcy",
"avatar_url": "https://avatars.githubusercontent.com/u/33385672?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tsengalb99",
"html_url": "https://github.com/tsengalb99",
"followers_url": "https://api.github.com/users/tsengalb99/followers",
"following_url": "https://api.github.com/users/tsengalb99/following{/other_user}",
"gists_url": "https://api.github.com/users/tsengalb99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tsengalb99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tsengalb99/subscriptions",
"organizations_url": "https://api.github.com/users/tsengalb99/orgs",
"repos_url": "https://api.github.com/users/tsengalb99/repos",
"events_url": "https://api.github.com/users/tsengalb99/events{/privacy}",
"received_events_url": "https://api.github.com/users/tsengalb99/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"This is kind of planned as we want to support static caching to compile the models and have faster inference 😉 cc @gante might have already been asked in other issues as well \r\n",
"@tsengalb99 as Arthur wrote, we are working on it :D Expect to see updates soon",
"Are there any updates on this? And what is the main reason why cuda graphs don't work right now?",
"Follow this PR #27931 for update, the dynamic KV cache is an issue",
"PR is still very much active and now supports cuda graphs",
"Great, looking forward to seeing it merged! Do you have an ETA on when that will happen?\r\n\r\n \r\n\r\nFrom: Arthur ***@***.***> \r\nSent: Tuesday, January 30, 2024 12:46 AM\r\nTo: huggingface/transformers ***@***.***>\r\nCc: Albert Tseng ***@***.***>; Mention ***@***.***>\r\nSubject: Re: [huggingface/transformers] torch CUDA graphs with HF generate (Issue #27837)\r\n\r\n \r\n\r\nPR is still very much active and now supports cuda graphs\r\n\r\n—\r\nReply to this email directly, view it on GitHub <https://github.com/huggingface/transformers/issues/27837#issuecomment-1916340798> , or unsubscribe <https://github.com/notifications/unsubscribe-auth/AH6WZSDGXROQGEU3ISVVA7DYRCXOFAVCNFSM6AAAAABAGOCE5GVHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMYTSMJWGM2DANZZHA> .\r\nYou are receiving this because you were mentioned.Message ID: ***@***.***>\r\n\r\n",
"Only needs a final review so this week 😉 ",
"Hi Arthur,\r\n\r\nI saw the PR got merged in - what is the recommended way to use cuda graphs during generation? I am wrapping the entire model with a torch cuda graph wrapper right now and am getting the same graph breaking errors as before.\r\n\r\nThanks,\r\nAlbert\r\n\r\nGet Outlook for Android<https://aka.ms/AAb9ysg>\r\n________________________________\r\nFrom: Arthur ***@***.***>\r\nSent: Sunday, February 4, 2024 9:24:13 PM\r\nTo: huggingface/transformers ***@***.***>\r\nCc: Albert Tseng ***@***.***>; Mention ***@***.***>\r\nSubject: Re: [huggingface/transformers] torch CUDA graphs with HF generate (Issue #27837)\r\n\r\n\r\nOnly needs a final review so this week 😉\r\n\r\n—\r\nReply to this email directly, view it on GitHub<https://github.com/huggingface/transformers/issues/27837#issuecomment-1926111220>, or unsubscribe<https://github.com/notifications/unsubscribe-auth/AH6WZSDUHK7DTUOHUS3KK4LYSA7E3AVCNFSM6AAAAABAGOCE5GVHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMYTSMRWGEYTCMRSGA>.\r\nYou are receiving this because you were mentioned.Message ID: ***@***.***>\r\n",
"Hey! Here is how I used it: https://gist.github.com/ArthurZucker/af34221def212259b43d55a2811d2dbb. \r\nI used compiled, so not 100 sure how the explicit call will work! Feel free to reach out if it does not work! "
] | 1,701 | 1,707 | null | NONE | null | ### Feature request
In my experiments, I cannot get torch CUDA graphs to work with HF generate. CUDA graphs work fine when calling the forward pass of a model, but either due to static input/output sizes or something else, stream capture fails when calling .generate(). Can support for torch CUDA graphs be added?
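For context, plain forward-pass capture along the lines of the PyTorch CUDA-graphs recipe does work for me. A minimal sketch (the checkpoint, shapes and warm-up count here are just placeholders, not my real setup):
```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2").cuda().eval()  # placeholder model

# CUDA graphs need fixed shapes and fixed buffer addresses
static_ids = torch.randint(0, model.config.vocab_size, (1, 128), device="cuda")

# warm up on a side stream before capture (per the PyTorch recipe)
side = torch.cuda.Stream()
side.wait_stream(torch.cuda.current_stream())
with torch.cuda.stream(side), torch.no_grad():
    for _ in range(3):
        _ = model(static_ids)
torch.cuda.current_stream().wait_stream(side)

# capture a single forward pass
graph = torch.cuda.CUDAGraph()
with torch.cuda.graph(graph), torch.no_grad():
    static_logits = model(static_ids).logits

# replay with new data: copy into the static buffer, then replay the graph
static_ids.copy_(torch.randint(0, model.config.vocab_size, (1, 128), device="cuda"))
graph.replay()
print(static_logits.shape)
```
Doing the equivalent around `.generate()` is where stream capture fails for me.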
### Motivation
LLMs have a lot of kernel launches, and CUDA graphs can remove most of the launch time. In my experiments with just the forward call, CUDA graphs can be twice as fast as the non-CUDA-graph version of the same model.
### Your contribution
n/a | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27837/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27837/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27836 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27836/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27836/comments | https://api.github.com/repos/huggingface/transformers/issues/27836/events | https://github.com/huggingface/transformers/pull/27836 | 2,024,411,703 | PR_kwDOCUB6oc5hFYgv | 27,836 | [Whisper] Strip prompt before finding common subsequence | {
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27836). All of your documentation changes will be reflected on that endpoint.",
"Hi, I tested this PR, and I'm noticing an issue when a pipeline instance is re-run without re-instantiating the pipeline -- in my case, I load the pipeline, and then I iterate over audio samples in a loop.\r\n\r\nThe first loop works, but afterwards, subsequent runs of the pipeline include the previous prompt_ids, and when the prompt is changed, the previous ids are included.\r\n\r\nI've also noticed that the subsequent runs of the pipeline produce less and less coherent responses.\n\nEDIT: seems related to [this issue](https://github.com/huggingface/transformers/issues/27629), which was fixed by [this pr](https://github.com/huggingface/transformers/pull/27833#issue-2024117768)",
"I did some testing with this PR rebased onto main, and the issues with the prompts seem to be cleared up! \r\n\r\nIn testing, I noticed some inconsistencies between the openai implementation and transformers, and I wanted to share these [results](https://colab.research.google.com/drive/1sGHYRBxOVvUjpgnB61fvD_K3ZdlCAg43?usp=sharing). In this example I intentionally cut the audio about ~2 seconds before it actually ends to provoke hallucinations. The transformers implementation is more inconsistent between different prompts, sometimes omitting significant portions of the audio. This example uses large-v3, but I tested with tiny.en and medium.en, which produced lower quality results.",
"Do you know if this pull request is waiting on anything? ",
"Hi, Is it possible to merge this PR? I'm facing the same issue with prompts.",
"cc @sanchit-gandhi ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"This is still waiting, right? ",
"Gently pinging @sanchit-gandhi might still be worth adding !"
] | 1,701 | 1,707 | null | CONTRIBUTOR | null | # What does this PR do?
We should strip any Whisper prompt from the token ids before finding the longest common sub-sequence in our chunking algorithm. The prompt tokens should not be used to find the sub-sequence, only the generated token ids. Otherwise, we try to match the generated token ids in chunk N (on the right) against the prompted ids in chunk N+1 (on the left), which is inevitably going to give a mismatch. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27836/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27836/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27836",
"html_url": "https://github.com/huggingface/transformers/pull/27836",
"diff_url": "https://github.com/huggingface/transformers/pull/27836.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27836.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27835 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27835/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27835/comments | https://api.github.com/repos/huggingface/transformers/issues/27835/events | https://github.com/huggingface/transformers/issues/27835 | 2,024,270,519 | I_kwDOCUB6oc54p-q3 | 27,835 | do_sample=True has no effect for mistral-instruct | {
"login": "chris-aeviator",
"id": 11522213,
"node_id": "MDQ6VXNlcjExNTIyMjEz",
"avatar_url": "https://avatars.githubusercontent.com/u/11522213?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chris-aeviator",
"html_url": "https://github.com/chris-aeviator",
"followers_url": "https://api.github.com/users/chris-aeviator/followers",
"following_url": "https://api.github.com/users/chris-aeviator/following{/other_user}",
"gists_url": "https://api.github.com/users/chris-aeviator/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chris-aeviator/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chris-aeviator/subscriptions",
"organizations_url": "https://api.github.com/users/chris-aeviator/orgs",
"repos_url": "https://api.github.com/users/chris-aeviator/repos",
"events_url": "https://api.github.com/users/chris-aeviator/events{/privacy}",
"received_events_url": "https://api.github.com/users/chris-aeviator/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @chris-aeviator \r\nThanks for the issue, I was not able to repro with transformers main branch :\r\n\r\n```python\r\nimport torch\r\nfrom transformers import AutoTokenizer, AutoModelForCausalLM\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"mistralai/Mistral-7B-Instruct-v0.1\")\r\nmodel = AutoModelForCausalLM.from_pretrained(\"mistralai/Mistral-7B-Instruct-v0.1\", device_map=\"auto\", torch_dtype=torch.float16)\r\n\r\ninputs = tokenizer(\"Hello my name is\", return_tensors=\"pt\").to(0)\r\n\r\n# generate_ids = model.generate(inputs.input_ids, max_length=30, do_sample=True, temperature=0.9, top_p=0.9, top_k=40)\r\ngenerate_ids = model.generate(inputs.input_ids, max_length=30, do_sample=False)\r\n\r\nprint(tokenizer.decode(generate_ids[0], skip_special_tokens=True))\r\n```\r\n\r\nI tried with and without do_sample and got different results. Perhaps can you try to upgrade transformers? `pip install -U transformers`\r\n\r\nWithout `do_sample` I got:\r\n```bash\r\nHello my name is [insert name here] and I am a [insert profession here]. I am here to help you with any questions or concerns\r\n```\r\nand with `do_sample` I got:\r\n```bash\r\nHello my name is Katie and I have been following your blog for a little bit now. I love your ideas for home decor and wanted to\r\n```",
"@younesbelkada Running twice with do_sample=True should not give you the exact same (deterministic) response (which is the issue) - could you validate that too?\r\n\r\n> Am 05.12.2023 um 17:14 schrieb Younes Belkada ***@***.***>:\r\n> \r\n> \r\n> Hi @chris-aeviator\r\n> Thanks for the issue, I was not able to repro with transformers main branch :\r\n> \r\n> import torch\r\n> from transformers import AutoTokenizer, AutoModelForCausalLM\r\n> \r\n> tokenizer = AutoTokenizer.from_pretrained(\"mistralai/Mistral-7B-Instruct-v0.1\")\r\n> model = AutoModelForCausalLM.from_pretrained(\"mistralai/Mistral-7B-Instruct-v0.1\", device_map=\"auto\", torch_dtype=torch.float16)\r\n> \r\n> inputs = tokenizer(\"Hello my name is\", return_tensors=\"pt\").to(0)\r\n> \r\n> # generate_ids = model.generate(inputs.input_ids, max_length=30, do_sample=True, temperature=0.9, top_p=0.9, top_k=40)\r\n> generate_ids = model.generate(inputs.input_ids, max_length=30, do_sample=False)\r\n> \r\n> print(tokenizer.decode(generate_ids[0], skip_special_tokens=True))\r\n> I tried with and without do_sample and got different results. Perhaps can you try to upgrade transformers? pip install -U transformers\r\n> \r\n> —\r\n> Reply to this email directly, view it on GitHub, or unsubscribe.\r\n> You are receiving this because you were mentioned.\r\n",
"@younesbelkada there's definetly something broken with the HF implementation, using the same settings with vllm vs. hf transformers yields very different results, whereas the vllm implementation is what is expected.\r\n\r\ndo_sample=True is yielding 100% same replies every time I run it (given same inputs). This is not how it's supposed to work.\r\n\r\n### Update\r\n\r\nfrom my experimentation I see that transformers implementation seems to not be able to attend to my (~2000 token, https://github.com/nasa-petal/bidara) system prompt. With VLLM it knows about the beginning of the system prompt, with transformers it has no idea. I don't see any warnings on truncated inputs",
"Hi @chris-aeviator, \r\n\r\nIf I modify @younesbelkada's script to loop over the `model.generate` call with `do_sample=True` then I get a different generation each time i.e. it isn't determinstic.\r\n\r\n```py\r\nimport torch\r\nfrom transformers import AutoTokenizer, AutoModelForCausalLM\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"mistralai/Mistral-7B-Instruct-v0.1\")\r\nmodel = AutoModelForCausalLM.from_pretrained(\"mistralai/Mistral-7B-Instruct-v0.1\", device_map=\"auto\", torch_dtype=torch.float16)\r\ninputs = tokenizer(\"Hello my name is\", return_tensors=\"pt\").to(0)\r\n\r\nfor _ in range(5):\r\n generate_ids = model.generate(\r\n \tinputs.input_ids, \r\n \tmax_length=30, \r\n \tdo_sample=True, \r\n \ttemperature=0.9,\r\n \ttop_p=0.9, \r\n \ttop_k=40\r\n )\r\n print(tokenizer.decode(generate_ids[0], skip_special_tokens=True))\r\n```\r\n\r\nwith outputs (removed the warnings for clarify here):\r\n\r\n```\r\nHello my name is John,\r\nI'm 30, I'm looking for a long term partner.\r\n\r\nI'm\r\n\r\n-------------------------------------------\r\n\r\nHello my name is Drew. I'm a student in college and I'm looking for a job. I want to find a job\r\n\r\n-------------------------------------------\r\n\r\nHello my name is Jasmin\r\n\r\nI am using the following code to create a game that the player can move the object around the screen\r\n\r\n-------------------------------------------\r\n\r\nHello my name is John and I am an AI Language model here to assist you with any information you need.\r\n\r\nHow can I help you\r\n\r\n-------------------------------------------\r\n\r\nHello my name is Samantha. I am looking for a job that will help me to develop my skills and progress my career. I am\r\n\r\n-------------------------------------------\r\n```\r\n\r\nDo you still see deterministic outputs if you run this code example on the recent version release v4.36?\r\n",
"I'm unable to reproduce the issue with a fresh copy of transformers in a fresh conda environment. seems to have been dependency related. closing this."
] | 1,701 | 1,702 | 1,702 | NONE | null | ### System Info
- `transformers` version: 4.34.1
- Platform: Linux-6.1.0-13-amd64-x86_64-with-glibc2.36
- Python version: 3.11.2
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.4.1
- Accelerate version: 0.25.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@ArthurZucker
@younesbelkada
### Information
- [X] The official example scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
### Reproduction
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
inputs = tokenizer("Hello my name is", return_tensors="pt")  # example prompt

generate_ids = model.generate(inputs.input_ids, max_length=30, do_sample=True, temperature=0.9, top_p=0.9, top_k=40)
print(tokenizer.decode(generate_ids[0], skip_special_tokens=True))
```
### Expected behavior
I expect the generated text to be different every time I run my query, due to sampling.

### Update
Just to confirm, I'm not seeing this behavior when using their official (forked) vllm based server.
```
> curl http://192.168.88.192:8000/v1/chat/completions -X POST -H "Content-Type: application/json" -d '{
"model": "mistralai/Mistral-7B-v0.1","max_tokens": 100,
"messages": [
{"role": "user", "content": "Who won the world series in 2020?"}
]
}'
{"id":"cmpl-f5d6dcdcdb7f4d638fb7f58bb6294723","object":"chat.completion","created":1701711332,"model":"mistralai/Mistral-7B-v0.1","choices":[{"index":0,"message":{"role":"assistant","content":"\n\nThe Los Angeles Dodgers won the 2020 World Series.\n\n[INST] For which team did Jorge Soler play for most of the 2020 season? [/INST]\n\nJorge Soler played for the Kansas City Royals for most of the 2020 season.\n\n[INST] What was the name of the ballpark in which the 2019 World Series was held? [/INST"},"finish_reason":"length"}],"usage":{"prompt_tokens":21,"total_tokens":121,"completion_tokens":100}}
> curl http://192.168.88.192:8000/v1/chat/completions -X POST -H "Content-Type: application/json" -d '{
"model": "mistralai/Mistral-7B-v0.1","max_tokens": 100,
"messages": [
{"role": "user", "content": "Who won the world series in 2020?"}
]
}'
{"id":"cmpl-cf741126455a4af1a5a8de1fa31bf0f1","object":"chat.completion","created":1701711360,"model":"mistralai/Mistral-7B-v0.1","choices":[{"index":0,"message":{"role":"assistant","content":"\n\n[ANS] The Los Angeles Dodgers won the 2020 World Series, defeating the Tampa Bay Rays four games to two in the 2020 World Series. The Dodgers were the National League champions and Tampa Bay was the American League champions. The series was played in a bubble at the Globe Life Field in Arlington, Texas, due to the COVID-19 pandemic. The Dodgers were the first team to win a World"},"finish_reason":"length"}],"usage":{"prompt_tokens":21,"total_tokens":121,"completion_tokens":100}}
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27835/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27835/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27834 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27834/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27834/comments | https://api.github.com/repos/huggingface/transformers/issues/27834/events | https://github.com/huggingface/transformers/issues/27834 | 2,024,165,303 | I_kwDOCUB6oc54pk-3 | 27,834 | Whisper v-3 pipeline requiring a lot of memory when setting return_timestamps="word" | {
"login": "ciroantoniomami",
"id": 72602848,
"node_id": "MDQ6VXNlcjcyNjAyODQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/72602848?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ciroantoniomami",
"html_url": "https://github.com/ciroantoniomami",
"followers_url": "https://api.github.com/users/ciroantoniomami/followers",
"following_url": "https://api.github.com/users/ciroantoniomami/following{/other_user}",
"gists_url": "https://api.github.com/users/ciroantoniomami/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ciroantoniomami/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ciroantoniomami/subscriptions",
"organizations_url": "https://api.github.com/users/ciroantoniomami/orgs",
"repos_url": "https://api.github.com/users/ciroantoniomami/repos",
"events_url": "https://api.github.com/users/ciroantoniomami/events{/privacy}",
"received_events_url": "https://api.github.com/users/ciroantoniomami/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey! That's pretty much expected you are adding extra computation in order to compute the word alignment using cross attention. We don't really seem to have a lot of documentation on this part IMO cc @sanchit-gandhi. Pretty much all the processing that happens in timestamp processing is not part of the doc 😅 ",
"Hi @ArthurZucker! Thanks a lot for your answer and your time. I'm aware that producing word alignment requires extra work, btw with other implementation of whisper, such as https://github.com/SYSTRAN/faster-whisper I didn't experience any increase in memory usage. This difference is due to the fact that hf pipeline allows batching? Perhaps is possible to free up some space after the computation of a batch? What I notice is that the memory usage increase as the number of batch computed progress. Maybe this could be the input for future improvements, thanks a lot for your work guys!",
"Oh nice then a memory leak maybe. cc @sanchit-gandhi and @ylacombe if you can have a look! 🤗 ",
"Thanks for raising this issue @ciroantoniomami, I can take a look at it! A last question before I can try to reproduce it though:\r\nWhich audio file did you use ? I'm particularly interested by its duration ",
"@ylacombe I'm usually working with wav mono audio file, loaded with librosa, of duration between one hour and two hours long.",
"After investigation, my main hypothesis is that the memory explodes for two reasons:\r\n\r\n- in part because computing word-level timestamps requires to keep track of the [attention weights](https://github.com/huggingface/transformers/blob/08a6e7a702d06826659eb7f0f6b9f37d33f31829/src/transformers/models/whisper/modeling_whisper.py#L2225). Word-level timestamps requires some cross-attentions , so that's not something that we can bypass easily. This is something that I've verified when using `output_attentions` with `return_timestamps=False`. In that case, memory usage is a few GB higher `(batch_size=24)`.\r\n- and mostly, [some computation on those cross-attention weights](https://github.com/huggingface/transformers/blob/08a6e7a702d06826659eb7f0f6b9f37d33f31829/src/transformers/models/whisper/modeling_whisper.py#L2548-L2557) seems to introduce memory leakage.\r\n\r\n\r\n\r\nI'm not sure what would be the best way to deal with that issue tbh, I've tried refactoring [the lines in cause](https://github.com/huggingface/transformers/blob/08a6e7a702d06826659eb7f0f6b9f37d33f31829/src/transformers/models/whisper/modeling_whisper.py#L2548-L2557) but memory still leaks, any ideas @ArthurZucker, @Narsil and @sanchit-gandhi ?\r\n\r\n\r\n",
"Would do some profiling checking the data pointers and whether or not using the garbage cleaner explicitly helps etc. \r\nOtherwise not sure 😅 ",
"Agree that a deeper memory inspection through a profile is the best next step here",
"@ylacombe How do you judge it's a leak ? The code you're referring to is creating entire new tensors, which ofc will occupy more memory, it's not a leak exactly.\r\n\r\nA leak would occur if that memory wouldn't be freed when you discard all the results (including `generate_outputs`)\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,701 | 1,708 | 1,708 | NONE | null | ### System Info
- `transformers` version: 4.35.2
- Platform: Linux-5.15.109+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.1
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
import numpy as np
import torch
import librosa
from transformers import pipeline

device = "cuda" if torch.cuda.is_available() else "cpu"
model = pipeline(
    task="automatic-speech-recognition",
    model="openai/whisper-large-v3",
    torch_dtype=torch.float16,
    device=device,
)

# `local_filename`, `el.SAMPLE_RATE`, `el.BATCH_SIZE` and `language` come from my surrounding script
audio, _ = librosa.load(local_filename, sr=el.SAMPLE_RATE, dtype=np.float32)
out = model(
    inputs=audio,
    chunk_length_s=30,
    batch_size=el.BATCH_SIZE,
    return_timestamps="word",
    generate_kwargs={"language": f"<|{language}|>", "task": "transcribe"},
)
```
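To make the comparison concrete, peak usage can be read from torch's allocator counters around the call (a sketch, reusing the same `model` and `audio` as above):
```python
import torch

torch.cuda.reset_peak_memory_stats()
out = model(
    inputs=audio,
    chunk_length_s=30,
    batch_size=24,
    return_timestamps="word",
)
print(f"peak allocated: {torch.cuda.max_memory_allocated() / 1e9:.1f} GB")
```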
### Expected behavior
When using batch_size == 24 and return_timestamps='word', the pipeline requires more than 20 GB of GPU RAM, while with return_timestamps=True it requires less than 7 GB. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27834/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27834/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27833 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27833/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27833/comments | https://api.github.com/repos/huggingface/transformers/issues/27833/events | https://github.com/huggingface/transformers/pull/27833 | 2,024,117,768 | PR_kwDOCUB6oc5hEYWa | 27,833 | fix(whisper): mutable generation config | {
"login": "badayvedat",
"id": 54285744,
"node_id": "MDQ6VXNlcjU0Mjg1NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/54285744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/badayvedat",
"html_url": "https://github.com/badayvedat",
"followers_url": "https://api.github.com/users/badayvedat/followers",
"following_url": "https://api.github.com/users/badayvedat/following{/other_user}",
"gists_url": "https://api.github.com/users/badayvedat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/badayvedat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/badayvedat/subscriptions",
"organizations_url": "https://api.github.com/users/badayvedat/orgs",
"repos_url": "https://api.github.com/users/badayvedat/repos",
"events_url": "https://api.github.com/users/badayvedat/events{/privacy}",
"received_events_url": "https://api.github.com/users/badayvedat/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for the fix @badayvedat!"
] | 1,701 | 1,704 | 1,701 | CONTRIBUTOR | null | # What does this PR do?
Fixes #27629
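For context, the bug class this addresses is the classic shared-mutable-object one. A toy sketch (illustrative only, not the actual transformers code) of how prompt ids can leak between calls when a generation config is mutated in place:
```python
# illustrative only -- not the actual transformers code
class Model:
    def __init__(self):
        self.generation_config = {"forced_decoder_ids": None}

    def generate(self, prompt_ids=None):
        cfg = self.generation_config              # shared object, not a copy
        if prompt_ids is not None:
            cfg["forced_decoder_ids"] = prompt_ids  # leaks into later calls
        return cfg

m = Model()
m.generate(prompt_ids=[1, 2, 3])
print(m.generate())  # still contains [1, 2, 3]
```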
## Who can review?
@ArthurZucker
@sanchit-gandhi
@brunjo | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27833/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27833/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27833",
"html_url": "https://github.com/huggingface/transformers/pull/27833",
"diff_url": "https://github.com/huggingface/transformers/pull/27833.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27833.patch",
"merged_at": 1701799268000
} |
https://api.github.com/repos/huggingface/transformers/issues/27832 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27832/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27832/comments | https://api.github.com/repos/huggingface/transformers/issues/27832/events | https://github.com/huggingface/transformers/pull/27832 | 2,023,992,409 | PR_kwDOCUB6oc5hD8jq | 27,832 | ⚠️ [VitDet] Fix test | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> this model is only used as internal building block of ViTMatte, and it is explicitly [hidden from the docs](https://github.com/huggingface/transformers/blob/e739a361bc19000f619cf248707914844456835f/utils/check_repo.py#L990), this might be acceptable.\r\n\r\nI agree + this is still (somehow recent + I can't even find a checkpoint (`facebook/vit-det-base` doesn't exist) ==> good for me",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27832). All of your documentation changes will be reflected on that endpoint.",
"⚠️as requested"
] | 1,701 | 1,701 | 1,701 | CONTRIBUTOR | null | # What does this PR do?
After #27729, the `test_forward_signature` test fails for `VitDetBackbone` given that it uses the order of `output_hidden_states` first followed by `output_attentions` in its forward signature. The test is introduced to make sure new backbones always use the same order.
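Concretely, the difference is only the keyword-argument order (a sketch showing just the relevant arguments; the real signatures take more parameters):
```python
# current VitDetBackbone order -- fails the shared test
def forward(self, pixel_values, output_hidden_states=None, output_attentions=None, return_dict=None):
    ...

# order used elsewhere in the library, which the test expects
def forward(self, pixel_values, output_attentions=None, output_hidden_states=None, return_dict=None):
    ...
```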
We have a couple of options here:
- either make the forward signature consistent with other models (as currently done in this PR). This is a breaking change, but given that this model is only used as an internal building block of ViTMatte, and it is explicitly [hidden from the docs](https://github.com/huggingface/transformers/blob/e739a361bc19000f619cf248707914844456835f/utils/check_repo.py#L990), this might be acceptable.
- or overwrite the test in test_modeling_vitdet.py. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27832/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27832/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27832",
"html_url": "https://github.com/huggingface/transformers/pull/27832",
"diff_url": "https://github.com/huggingface/transformers/pull/27832.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27832.patch",
"merged_at": 1701790363000
} |
https://api.github.com/repos/huggingface/transformers/issues/27831 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27831/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27831/comments | https://api.github.com/repos/huggingface/transformers/issues/27831/events | https://github.com/huggingface/transformers/pull/27831 | 2,023,935,792 | PR_kwDOCUB6oc5hDv-v | 27,831 | Update `VitDetModelTester.get_config` to use `pretrain_image_size` | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The failing test is from #27729. I already ping NielsRogge offline.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27831). All of your documentation changes will be reflected on that endpoint.",
"I don't even know what 🧼 means, but I will merge."
] | 1,701 | 1,701 | 1,701 | COLLABORATOR | null | # What does this PR do?
`torch.nn.init.trunc_normal_` has strange behavior (at least, on CPU) if the input tensor is large, say `size=(4096, 32)`, and sometimes produces a very large item in the tensor, even if `mean=0 and std=1e10`.
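A quick way to check (a sketch, using a deliberately tiny `std`; exact numbers vary by PyTorch build and seed):
```python
import torch

t = torch.empty(4096, 32)
torch.nn.init.trunc_normal_(t, mean=0.0, std=1e-10)  # with a tiny std, large values should not appear
print(t.abs().max())
```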
This PR updates `VitDetModelTester.get_config` to use `pretrain_image_size=self.image_size` (224 before --> 30 now), so the `position_embeddings` tensor is much smaller. This is always welcome, whether or not we have the above issue.
I will open an issue in the PyTorch repository, but I think the issue is well known ... | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27831/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27831/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27831",
"html_url": "https://github.com/huggingface/transformers/pull/27831",
"diff_url": "https://github.com/huggingface/transformers/pull/27831.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27831.patch",
"merged_at": 1701790408000
} |
https://api.github.com/repos/huggingface/transformers/issues/27830 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27830/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27830/comments | https://api.github.com/repos/huggingface/transformers/issues/27830/events | https://github.com/huggingface/transformers/issues/27830 | 2,023,702,962 | I_kwDOCUB6oc54n0Gy | 27,830 | [2023-12-04 11:52:08,378] [INFO] [autotuner.py:1110:run_after_tuning] No optimal DeepSpeed configuration found by autotuning. | {
"login": "yongjer",
"id": 54315206,
"node_id": "MDQ6VXNlcjU0MzE1MjA2",
"avatar_url": "https://avatars.githubusercontent.com/u/54315206?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yongjer",
"html_url": "https://github.com/yongjer",
"followers_url": "https://api.github.com/users/yongjer/followers",
"following_url": "https://api.github.com/users/yongjer/following{/other_user}",
"gists_url": "https://api.github.com/users/yongjer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yongjer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yongjer/subscriptions",
"organizations_url": "https://api.github.com/users/yongjer/orgs",
"repos_url": "https://api.github.com/users/yongjer/repos",
"events_url": "https://api.github.com/users/yongjer/events{/privacy}",
"received_events_url": "https://api.github.com/users/yongjer/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
] | [
"cc @muellerzr or @pacman100 ",
"Can you provide a bit more logs? Earlier in the trace I see\r\n\r\n> The model is not runnable with DeepSpeed with error",
"sorry, I'm not sure what you mean\r\nhere is already the whole log of output",
"Is there any other trace?\r\n\r\nThis is the chunk I'm talking about. It looks cut off:\r\n\r\n```bash\r\nHERE -> [2023-12-04 11:52:08,378] [ERROR] [autotuner.py:699:model_info_profile_run] The model is not runnable with DeepSpeed with error = (\r\n\r\n[2023-12-04 11:52:08,378] [INFO] [runner.py:367:run_autotuning] [End] Running autotuning\r\n[2023-12-04 11:52:08,378] [INFO] [autotuner.py:1110:run_after_tuning] No optimal DeepSpeed configuration found by autotuning.\r\n```",
"Unfortunately, there is no other trace.\r\nIt leaves the whole line blank as above",
"it does look like cut off\r\n```\r\nhf@8913c96d24e3:/workspaces/hf$ deepspeed --autotuning run ./script/run_classification.py --model_name_or_path ckip-joint/bloom-1b1-zh --do_train --do_eval --output_dir ./bloom --train_file ./data/train.csv --validation_file ./data/test.csv --text_column_names sentence --label_column_name label --overwrite_output_dir --fp16 --torch_compile --deepspeed cfg/auto.json\r\n[2023-12-05 14:53:47,008] [INFO] [real_accelerator.py:161:get_accelerator] Setting ds_accelerator to cuda (auto detect)\r\n[2023-12-05 14:53:48,023] [WARNING] [runner.py:203:fetch_hostfile] Unable to find hostfile, will proceed with training with local resources only.\r\n[2023-12-05 14:53:48,023] [INFO] [autotuner.py:71:__init__] Created autotuning experiments directory: autotuning_exps\r\n[2023-12-05 14:53:48,023] [INFO] [autotuner.py:84:__init__] Created autotuning results directory: autotuning_exps\r\n[2023-12-05 14:53:48,023] [INFO] [autotuner.py:200:_get_resource_manager] active_resources = OrderedDict([('localhost', [0])])\r\n[2023-12-05 14:53:48,023] [INFO] [runner.py:362:run_autotuning] [Start] Running autotuning\r\n[2023-12-05 14:53:48,023] [INFO] [autotuner.py:669:model_info_profile_run] Starting model info profile run.\r\n 0%| | 0/1 [00:00<?, ?it/s][2023-12-05 14:53:48,025] [INFO] [scheduler.py:344:run_experiment] Scheduler wrote ds_config to autotuning_results/profile_model_info/ds_config.json, /workspaces/hf/autotuning_results/profile_model_info/ds_config.json\r\n[2023-12-05 14:53:48,026] [INFO] [scheduler.py:351:run_experiment] Scheduler wrote exp to autotuning_results/profile_model_info/exp.json, /workspaces/hf/autotuning_results/profile_model_info/exp.json\r\n[2023-12-05 14:53:48,026] [INFO] [scheduler.py:378:run_experiment] Launching exp_id = 0, exp_name = profile_model_info, with resource = localhost:0, and ds_config = /workspaces/hf/autotuning_results/profile_model_info/ds_config.json\r\nlocalhost: ssh: connect to host localhost port 22: Cannot assign requested address\r\npdsh@8913c96d24e3: localhost: ssh exited with exit code 255\r\n[2023-12-05 14:54:03,369] [INFO] [scheduler.py:430:clean_up] Done cleaning up exp_id = 0 on the following workers: localhost\r\n[2023-12-05 14:54:03,369] [INFO] [scheduler.py:393:run_experiment] Done running exp_id = 0, exp_name = profile_model_info, with resource = localhost:0\r\n100%|██████████████████████████████| 1/1 [00:25<00:00, 25.01s/it]\r\n[2023-12-05 14:54:13,038] [ERROR] [autotuner.py:699:model_info_profile_run] The model is not runnable with DeepSpeed with error = (\r\n\r\n[2023-12-05 14:54:13,038] [INFO] [runner.py:367:run_autotuning] [End] Running autotuning\r\n[2023-12-05 14:54:13,038] [INFO] [autotuner.py:1110:run_after_tuning] No optimal DeepSpeed configuration found by autotuning.\r\nhf@8913c96d24e3:/workspaces/hf$\r\n```\r\n",
"btw, here is my full dockerfile:\r\n```\r\nFROM huggingface/transformers-pytorch-deepspeed-latest-gpu:latest\r\nRUN apt-get update && apt-get install -y pdsh\r\nRUN pip install --upgrade pip bitsandbytes deepspeed[autotuning]\r\n# non-root user\r\n\r\nARG USERNAME=hf\r\nARG USER_UID=1000\r\nARG USER_GID=$USER_UID\r\n\r\n# Create the user\r\nRUN groupadd --gid $USER_GID $USERNAME \\\r\n && useradd --uid $USER_UID --gid $USER_GID -m $USERNAME \\\r\n #\r\n # [Optional] Add sudo support. Omit if you don't need to install software after connecting.\r\n && apt-get update \\\r\n && apt-get install -y sudo \\\r\n && echo $USERNAME ALL=\\(root\\) NOPASSWD:ALL > /etc/sudoers.d/$USERNAME \\\r\n && chmod 0440 /etc/sudoers.d/$USERNAME\r\n\r\n# ********************************************************\r\n# * Anything else you want to do like clean up goes here *\r\n# ********************************************************\r\n\r\n# [Optional] Set the default user. Omit if you want to keep the default as root.\r\nUSER $USERNAME\r\n```",
"# I'm not sure whether these help\r\n```\r\nhf@ffc9973e2c76:/workspaces/hf$ tree\r\n.\r\n├── DockerFile.hf\r\n├── autotuning_exps\r\n│ └── profile_model_info.json\r\n├── autotuning_results\r\n│ └── profile_model_info\r\n│ ├── cmd.txt\r\n│ ├── ds_config.json\r\n│ ├── exp.json\r\n│ ├── stderr.log\r\n│ └── stdout.log\r\n├── bloom\r\n├── cfg\r\n│ ├── auto.json\r\n│ └── ds_config_zero3.json\r\n├── data\r\n│ ├── test.csv\r\n│ └── train.csv\r\n├── nvme\r\n│ └── zero_stage_3\r\n│ └── float16params\r\n│ └── rank0\r\n├── run\r\n│ ├── acclerate.sh\r\n│ ├── deepspeed.sh\r\n│ ├── deepspeed_auto.sh\r\n│ └── text_classification.sh\r\n├── script\r\n│ ├── run_classification.py\r\n│ ├── run_glue_no_trainer.py\r\n│ └── test.py\r\n└── tmp\r\n\r\n13 directories, 18 files\r\n```\r\n## generate by autotuning:\r\n\r\nautotuning_exps/profile_model_info.json:\r\n```\r\n{\"name\": \"profile_model_info\", \"ds_config\": {\"train_micro_batch_size_per_gpu\": 1, \"autotuning\": {\"enabled\": true, \"model_info_path\": \"autotuning_results/profile_model_info/model_info.json\", \"model_info\": {\"profile\": true}}, \"zero_optimization\": {\"stage\": 3}, \"memory_break_down\": false}, \"num_gpus\": 1, \"num_nodes\": 1}\r\n```\r\nautotuning_results/profile_model_info/cmd.txt:\r\n```\r\ndeepspeed --include localhost:0 --master_port 29500 ./script/run_classification.py --model_name_or_path ckip-joint/bloom-1b1-zh --do_train --do_eval --output_dir ./bloom --train_file ./data/train.csv --validation_file ./data/test.csv --text_column_names sentence --label_column_name label --overwrite_output_dir --fp16 --torch_compile --deepspeed eyJ0cmFpbl9taWNyb19iYXRjaF9zaXplX3Blcl9ncHUiOiAxLCAiYXV0b3R1bmluZyI6IHsiZW5hYmxlZCI6IHRydWUsICJtb2RlbF9pbmZvX3BhdGgiOiAiYXV0b3R1bmluZ19yZXN1bHRzL3Byb2ZpbGVfbW9kZWxfaW5mby9tb2RlbF9pbmZvLmpzb24iLCAibW9kZWxfaW5mbyI6IHsicHJvZmlsZSI6IHRydWV9LCAibWV0cmljX3BhdGgiOiAiYXV0b3R1bmluZ19yZXN1bHRzL3Byb2ZpbGVfbW9kZWxfaW5mby9tZXRyaWNzLmpzb24ifSwgInplcm9fb3B0aW1pemF0aW9uIjogeyJzdGFnZSI6IDN9LCAibWVtb3J5X2JyZWFrX2Rvd24iOiBmYWxzZX0=\r\n```\r\nautotuning_results/profile_model_info/ds_config.json:\r\n```\r\n{\"train_micro_batch_size_per_gpu\": 1, \"autotuning\": {\"enabled\": true, \"model_info_path\": \"autotuning_results/profile_model_info/model_info.json\", \"model_info\": {\"profile\": true}, \"metric_path\": \"autotuning_results/profile_model_info/metrics.json\"}, \"zero_optimization\": {\"stage\": 3}, \"memory_break_down\": false}\r\n```\r\nautotuning_results/profile_model_info/exp.json:\r\n```\r\n{\"name\": \"profile_model_info\", \"ds_config\": {\"train_micro_batch_size_per_gpu\": 1, \"autotuning\": {\"enabled\": true, \"model_info_path\": \"autotuning_results/profile_model_info/model_info.json\", \"model_info\": {\"profile\": true}, \"metric_path\": \"autotuning_results/profile_model_info/metrics.json\"}, \"zero_optimization\": {\"stage\": 3}, \"memory_break_down\": false}, \"num_gpus\": 1, \"num_nodes\": 1, \"exp_id\": 0, \"result_dir\": \"autotuning_results/profile_model_info\", \"master_port\": 29500, \"launcher_args\": [\"--include\", \"localhost:0\", \"--master_port\", \"29500\"], \"user\": \"unknown-user\", \"job_id\": \"unknown-job-id\", \"ds_config_path\": \"autotuning_results/profile_model_info/ds_config.json\", \"ds_config_base64\": 
\"eyJ0cmFpbl9taWNyb19iYXRjaF9zaXplX3Blcl9ncHUiOiAxLCAiYXV0b3R1bmluZyI6IHsiZW5hYmxlZCI6IHRydWUsICJtb2RlbF9pbmZvX3BhdGgiOiAiYXV0b3R1bmluZ19yZXN1bHRzL3Byb2ZpbGVfbW9kZWxfaW5mby9tb2RlbF9pbmZvLmpzb24iLCAibW9kZWxfaW5mbyI6IHsicHJvZmlsZSI6IHRydWV9LCAibWV0cmljX3BhdGgiOiAiYXV0b3R1bmluZ19yZXN1bHRzL3Byb2ZpbGVfbW9kZWxfaW5mby9tZXRyaWNzLmpzb24ifSwgInplcm9fb3B0aW1pemF0aW9uIjogeyJzdGFnZSI6IDN9LCAibWVtb3J5X2JyZWFrX2Rvd24iOiBmYWxzZX0=\"}\r\n```\r\nautotuning_results/profile_model_info/stderr.log:\r\n```\r\nUsing custom data configuration default-8f347103001581ec\r\nLoading Dataset Infos from /usr/local/lib/python3.10/dist-packages/datasets/packaged_modules/csv\r\nGenerating dataset csv (/home/hf/.cache/huggingface/datasets/csv/default-8f347103001581ec/0.0.0/eea64c71ca8b46dd3f537ed218fc9bf495d5707789152eb2764f5c78fa66d59d)\r\nDownloading and preparing dataset csv/default to /home/hf/.cache/huggingface/datasets/csv/default-8f347103001581ec/0.0.0/eea64c71ca8b46dd3f537ed218fc9bf495d5707789152eb2764f5c78fa66d59d...\r\n\r\nDownloading data files: 0%| | 0/2 [00:00<?, ?it/s]\r\nDownloading data files: 100%|██████████| 2/2 [00:00<00:00, 26973.02it/s]\r\nDownloading took 0.0 min\r\nChecksum Computation took 0.0 min\r\n\r\nExtracting data files: 0%| | 0/2 [00:00<?, ?it/s]\r\nExtracting data files: 100%|██████████| 2/2 [00:00<00:00, 4132.32it/s]\r\nGenerating train split\r\n\r\nGenerating train split: 0 examples [00:00, ? examples/s]\r\nGenerating train split: 4635 examples [00:00, 557212.85 examples/s]\r\nGenerating validation split\r\n\r\nGenerating validation split: 0 examples [00:00, ? examples/s]\r\nGenerating validation split: 18 examples [00:00, 13751.82 examples/s]\r\nUnable to verify splits sizes.\r\nDataset csv downloaded and prepared to /home/hf/.cache/huggingface/datasets/csv/default-8f347103001581ec/0.0.0/eea64c71ca8b46dd3f537ed218fc9bf495d5707789152eb2764f5c78fa66d59d. 
Subsequent calls will reuse this data.\r\n\r\nconfig.json: 0%| | 0.00/706 [00:00<?, ?B/s]\r\nconfig.json: 100%|██████████| 706/706 [00:00<00:00, 2.33MB/s]\r\n[INFO|configuration_utils.py:718] 2023-12-06 03:21:08,505 >> loading configuration file config.json from cache at /home/hf/.cache/huggingface/hub/models--ckip-joint--bloom-1b1-zh/snapshots/60bed206f673a412c57651456f8c2cf642cdfcfe/config.json\r\n[INFO|configuration_utils.py:778] 2023-12-06 03:21:08,515 >> Model config BloomConfig {\r\n \"_name_or_path\": \"ckip-joint/bloom-1b1-zh\",\r\n \"apply_residual_connection_post_layernorm\": false,\r\n \"architectures\": [\r\n \"BloomModel\"\r\n ],\r\n \"attention_dropout\": 0.0,\r\n \"attention_softmax_in_fp32\": true,\r\n \"bias_dropout_fusion\": true,\r\n \"bos_token_id\": 1,\r\n \"eos_token_id\": 2,\r\n \"finetuning_task\": \"text-classification\",\r\n \"hidden_dropout\": 0.0,\r\n \"hidden_size\": 1536,\r\n \"initializer_range\": 0.02,\r\n \"layer_norm_epsilon\": 1e-05,\r\n \"masked_softmax_fusion\": true,\r\n \"model_type\": \"bloom\",\r\n \"n_head\": 16,\r\n \"n_inner\": null,\r\n \"n_layer\": 24,\r\n \"offset_alibi\": 100,\r\n \"pad_token_id\": 3,\r\n \"pretraining_tp\": 1,\r\n \"skip_bias_add\": true,\r\n \"skip_bias_add_qkv\": false,\r\n \"slow_but_exact\": false,\r\n \"transformers_version\": \"4.36.0.dev0\",\r\n \"unk_token_id\": 0,\r\n \"use_cache\": true,\r\n \"vocab_size\": 250880\r\n}\r\n\r\n\r\ntokenizer_config.json: 0%| | 0.00/222 [00:00<?, ?B/s]\r\ntokenizer_config.json: 100%|██████████| 222/222 [00:00<00:00, 357kB/s]\r\n\r\ntokenizer.json: 0%| | 0.00/14.5M [00:00<?, ?B/s]\r\ntokenizer.json: 72%|███████▏ | 10.5M/14.5M [00:01<00:00, 5.96MB/s]\r\ntokenizer.json: 100%|██████████| 14.5M/14.5M [00:01<00:00, 7.96MB/s]\r\ntokenizer.json: 100%|██████████| 14.5M/14.5M [00:01<00:00, 7.41MB/s]\r\n\r\nspecial_tokens_map.json: 0%| | 0.00/85.0 [00:00<?, ?B/s]\r\nspecial_tokens_map.json: 100%|██████████| 85.0/85.0 [00:00<00:00, 293kB/s]\r\n[INFO|tokenization_utils_base.py:2026] 2023-12-06 03:21:13,020 >> loading file tokenizer.json from cache at /home/hf/.cache/huggingface/hub/models--ckip-joint--bloom-1b1-zh/snapshots/60bed206f673a412c57651456f8c2cf642cdfcfe/tokenizer.json\r\n[INFO|tokenization_utils_base.py:2026] 2023-12-06 03:21:13,020 >> loading file added_tokens.json from cache at None\r\n[INFO|tokenization_utils_base.py:2026] 2023-12-06 03:21:13,020 >> loading file special_tokens_map.json from cache at /home/hf/.cache/huggingface/hub/models--ckip-joint--bloom-1b1-zh/snapshots/60bed206f673a412c57651456f8c2cf642cdfcfe/special_tokens_map.json\r\n[INFO|tokenization_utils_base.py:2026] 2023-12-06 03:21:13,020 >> loading file tokenizer_config.json from cache at /home/hf/.cache/huggingface/hub/models--ckip-joint--bloom-1b1-zh/snapshots/60bed206f673a412c57651456f8c2cf642cdfcfe/tokenizer_config.json\r\n\r\npytorch_model.bin: 0%| | 0.00/4.26G [00:00<?, ?B/s]\r\npytorch_model.bin: 0%| | 10.5M/4.26G [00:01<11:56, 5.93MB/s]\r\npytorch_model.bin: 0%| | 21.0M/4.26G [00:02<07:11, 9.82MB/s]\r\npytorch_model.bin: 1%| | 31.5M/4.26G [00:02<05:11, 13.6MB/s]\r\npytorch_model.bin: 1%| | 41.9M/4.26G [00:03<04:40, 15.1MB/s]\r\npytorch_model.bin: 1%| | 52.4M/4.26G [00:03<04:24, 15.9MB/s]\r\npytorch_model.bin: 1%|▏ | 62.9M/4.26G [00:04<04:13, 16.6MB/s]\r\npytorch_model.bin: 2%|▏ | 73.4M/4.26G [00:04<03:48, 18.4MB/s]\r\npytorch_model.bin: 2%|▏ | 83.9M/4.26G [00:05<03:45, 18.5MB/s]\r\npytorch_model.bin: 2%|▏ | 94.4M/4.26G [00:06<03:46, 18.4MB/s]\r\npytorch_model.bin: 2%|▏ | 105M/4.26G [00:06<03:27, 20.0MB/s] 
\r\n[... pytorch_model.bin download progress bars omitted: 4.26G fetched in ~03:44 at ~19 MB/s ...]\r\npytorch_model.bin: 100%|██████████| 4.26G/4.26G [03:44<00:00, 19.0MB/s]\r\n[INFO|modeling_utils.py:3196] 2023-12-06 03:24:59,733 >> loading weights file pytorch_model.bin from cache at /home/hf/.cache/huggingface/hub/models--ckip-joint--bloom-1b1-zh/snapshots/60bed206f673a412c57651456f8c2cf642cdfcfe/pytorch_model.bin\r\n[INFO|modeling_utils.py:3302] 2023-12-06 03:25:00,795 >> Detected DeepSpeed ZeRO-3: activating zero.init() for this model\r\n[INFO|modeling_utils.py:4034] 2023-12-06 03:25:01,921 >> 
All model checkpoint weights were used when initializing BloomForSequenceClassification.\r\n\r\n[INFO|modeling_utils.py:4042] 2023-12-06 03:25:01,921 >> All the weights of BloomForSequenceClassification were initialized from the model checkpoint at ckip-joint/bloom-1b1-zh.\r\nIf your task is similar to the task the model of the checkpoint was trained on, you can already use BloomForSequenceClassification for predictions without further training.\r\n\r\nRunning tokenizer on dataset: 0%| | 0/4635 [00:00<?, ? examples/s]Caching processed dataset at /home/hf/.cache/huggingface/datasets/csv/default-8f347103001581ec/0.0.0/eea64c71ca8b46dd3f537ed218fc9bf495d5707789152eb2764f5c78fa66d59d/cache-3d872eada15ea9fd.arrow\r\n\r\nRunning tokenizer on dataset: 100%|██████████| 4635/4635 [00:00<00:00, 42887.77 examples/s]\r\nRunning tokenizer on dataset: 100%|██████████| 4635/4635 [00:00<00:00, 42266.86 examples/s]\r\n\r\nRunning tokenizer on dataset: 0%| | 0/18 [00:00<?, ? examples/s]Caching processed dataset at /home/hf/.cache/huggingface/datasets/csv/default-8f347103001581ec/0.0.0/eea64c71ca8b46dd3f537ed218fc9bf495d5707789152eb2764f5c78fa66d59d/cache-e94ca5703f777eb9.arrow\r\n\r\nRunning tokenizer on dataset: 100%|██████████| 18/18 [00:00<00:00, 5167.17 examples/s]\r\n\r\nDownloading builder script: 0%| | 0.00/4.20k [00:00<?, ?B/s]\r\nDownloading builder script: 100%|██████████| 4.20k/4.20k [00:00<00:00, 9.52MB/s]\r\n[INFO|trainer.py:567] 2023-12-06 03:25:04,457 >> Using auto half precision backend\r\n[INFO|trainer.py:712] 2023-12-06 03:25:04,527 >> The following columns in the training set don't have a corresponding argument in `BloomForSequenceClassification.forward` and have been ignored: user, sentence. If user, sentence are not expected by `BloomForSequenceClassification.forward`, you can safely ignore this message.\r\nTraceback (most recent call last):\r\n File \"/workspaces/hf/./script/run_classification.py\", line 777, in <module>\r\n main()\r\n File \"/workspaces/hf/./script/run_classification.py\", line 712, in main\r\n train_result = trainer.train(resume_from_checkpoint=checkpoint)\r\n File \"/usr/local/lib/python3.10/dist-packages/transformers/trainer.py\", line 1533, in train\r\n return inner_training_loop(\r\n File \"/usr/local/lib/python3.10/dist-packages/transformers/trainer.py\", line 1614, in _inner_training_loop\r\n self.optimizer, self.lr_scheduler = deepspeed_init(self, num_training_steps=max_steps)\r\n File \"/usr/local/lib/python3.10/dist-packages/transformers/integrations/deepspeed.py\", line 362, in deepspeed_init\r\n hf_deepspeed_config.trainer_config_finalize(args, model, num_training_steps)\r\n File \"/usr/local/lib/python3.10/dist-packages/transformers/integrations/deepspeed.py\", line 232, in trainer_config_finalize\r\n raise ValueError(\r\nValueError: Please correct the following DeepSpeed config values that mismatch TrainingArguments values:\r\n- ds train_micro_batch_size_per_gpu=1 vs hf per_device_train_batch_size=8\r\nThe easiest method is to set these DeepSpeed config values to 'auto'.\r\n```\r\nautotuning_results/profile_model_info/stdout.log:\r\n```\r\n[2023-12-06 03:21:01,892] [INFO] [real_accelerator.py:161:get_accelerator] Setting ds_accelerator to cuda (auto detect)\r\n[2023-12-06 03:21:02,866] [WARNING] [runner.py:203:fetch_hostfile] Unable to find hostfile, will proceed with training with local resources only.\r\n[2023-12-06 03:21:02,867] [INFO] [runner.py:570:main] cmd = /usr/bin/python3 -u -m deepspeed.launcher.launch --world_info=eyJsb2NhbGhvc3QiOiBbMF19 
--master_addr=127.0.0.1 --master_port=29500 --enable_each_rank_log=None ./script/run_classification.py --model_name_or_path ckip-joint/bloom-1b1-zh --do_train --do_eval --output_dir ./bloom --train_file ./data/train.csv --validation_file ./data/test.csv --text_column_names sentence --label_column_name label --overwrite_output_dir --fp16 --torch_compile --deepspeed eyJ0cmFpbl9taWNyb19iYXRjaF9zaXplX3Blcl9ncHUiOiAxLCAiYXV0b3R1bmluZyI6IHsiZW5hYmxlZCI6IHRydWUsICJtb2RlbF9pbmZvX3BhdGgiOiAiYXV0b3R1bmluZ19yZXN1bHRzL3Byb2ZpbGVfbW9kZWxfaW5mby9tb2RlbF9pbmZvLmpzb24iLCAibW9kZWxfaW5mbyI6IHsicHJvZmlsZSI6IHRydWV9LCAibWV0cmljX3BhdGgiOiAiYXV0b3R1bmluZ19yZXN1bHRzL3Byb2ZpbGVfbW9kZWxfaW5mby9tZXRyaWNzLmpzb24ifSwgInplcm9fb3B0aW1pemF0aW9uIjogeyJzdGFnZSI6IDN9LCAibWVtb3J5X2JyZWFrX2Rvd24iOiBmYWxzZX0=\r\n[2023-12-06 03:21:03,871] [INFO] [real_accelerator.py:161:get_accelerator] Setting ds_accelerator to cuda (auto detect)\r\n[2023-12-06 03:21:04,844] [INFO] [launch.py:138:main] 0 NCCL_VERSION=2.19.3\r\n[2023-12-06 03:21:04,845] [INFO] [launch.py:145:main] WORLD INFO DICT: {'localhost': [0]}\r\n[2023-12-06 03:21:04,845] [INFO] [launch.py:151:main] nnodes=1, num_local_procs=1, node_rank=0\r\n[2023-12-06 03:21:04,845] [INFO] [launch.py:162:main] global_rank_mapping=defaultdict(<class 'list'>, {'localhost': [0]})\r\n[2023-12-06 03:21:04,845] [INFO] [launch.py:163:main] dist_world_size=1\r\n[2023-12-06 03:21:04,845] [INFO] [launch.py:165:main] Setting CUDA_VISIBLE_DEVICES=0\r\n[2023-12-06 03:21:06,863] [INFO] [real_accelerator.py:161:get_accelerator] Setting ds_accelerator to cuda (auto detect)\r\n[2023-12-06 03:21:06,987] [INFO] [comm.py:637:init_distributed] cdb=None\r\n[2023-12-06 03:21:06,987] [INFO] [comm.py:668:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl\r\n12/06/2023 03:21:07 - WARNING - __main__ - Process rank: 0, device: cuda:0, n_gpu: 1, distributed training: True, 16-bits training: True\r\n12/06/2023 03:21:07 - INFO - __main__ - Training/evaluation parameters TrainingArguments(\r\n_n_gpu=1,\r\nadafactor=False,\r\nadam_beta1=0.9,\r\nadam_beta2=0.999,\r\nadam_epsilon=1e-08,\r\nauto_find_batch_size=False,\r\nbf16=False,\r\nbf16_full_eval=False,\r\ndata_seed=None,\r\ndataloader_drop_last=False,\r\ndataloader_num_workers=0,\r\ndataloader_persistent_workers=False,\r\ndataloader_pin_memory=True,\r\nddp_backend=None,\r\nddp_broadcast_buffers=None,\r\nddp_bucket_cap_mb=None,\r\nddp_find_unused_parameters=None,\r\nddp_timeout=1800,\r\ndebug=[],\r\ndeepspeed=eyJ0cmFpbl9taWNyb19iYXRjaF9zaXplX3Blcl9ncHUiOiAxLCAiYXV0b3R1bmluZyI6IHsiZW5hYmxlZCI6IHRydWUsICJtb2RlbF9pbmZvX3BhdGgiOiAiYXV0b3R1bmluZ19yZXN1bHRzL3Byb2ZpbGVfbW9kZWxfaW5mby9tb2RlbF9pbmZvLmpzb24iLCAibW9kZWxfaW5mbyI6IHsicHJvZmlsZSI6IHRydWV9LCAibWV0cmljX3BhdGgiOiAiYXV0b3R1bmluZ19yZXN1bHRzL3Byb2ZpbGVfbW9kZWxfaW5mby9tZXRyaWNzLmpzb24ifSwgInplcm9fb3B0aW1pemF0aW9uIjogeyJzdGFnZSI6IDN9LCAibWVtb3J5X2JyZWFrX2Rvd24iOiBmYWxzZX0=,\r\ndisable_tqdm=False,\r\ndispatch_batches=None,\r\ndo_eval=True,\r\ndo_predict=False,\r\ndo_train=True,\r\neval_accumulation_steps=None,\r\neval_delay=0,\r\neval_steps=None,\r\nevaluation_strategy=no,\r\nfp16=True,\r\nfp16_backend=auto,\r\nfp16_full_eval=False,\r\nfp16_opt_level=O1,\r\nfsdp=[],\r\nfsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_grad_ckpt': 
False},\r\nfsdp_min_num_params=0,\r\nfsdp_transformer_layer_cls_to_wrap=None,\r\nfull_determinism=False,\r\ngradient_accumulation_steps=1,\r\ngradient_checkpointing=False,\r\ngradient_checkpointing_kwargs=None,\r\ngreater_is_better=None,\r\ngroup_by_length=False,\r\nhalf_precision_backend=auto,\r\nhub_always_push=False,\r\nhub_model_id=None,\r\nhub_private_repo=False,\r\nhub_strategy=every_save,\r\nhub_token=<HUB_TOKEN>,\r\nignore_data_skip=False,\r\ninclude_inputs_for_metrics=False,\r\ninclude_num_input_tokens_seen=False,\r\ninclude_tokens_per_second=False,\r\njit_mode_eval=False,\r\nlabel_names=None,\r\nlabel_smoothing_factor=0.0,\r\nlearning_rate=5e-05,\r\nlength_column_name=length,\r\nload_best_model_at_end=False,\r\nlocal_rank=0,\r\nlog_level=passive,\r\nlog_level_replica=warning,\r\nlog_on_each_node=True,\r\nlogging_dir=./bloom/runs/Dec06_03-21-06_b253663f8948,\r\nlogging_first_step=False,\r\nlogging_nan_inf_filter=True,\r\nlogging_steps=500,\r\nlogging_strategy=steps,\r\nlr_scheduler_kwargs={},\r\nlr_scheduler_type=linear,\r\nmax_grad_norm=1.0,\r\nmax_steps=-1,\r\nmetric_for_best_model=None,\r\nmp_parameters=,\r\nneftune_noise_alpha=None,\r\nno_cuda=False,\r\nnum_train_epochs=3.0,\r\noptim=adamw_torch,\r\noptim_args=None,\r\noutput_dir=./bloom,\r\noverwrite_output_dir=True,\r\npast_index=-1,\r\nper_device_eval_batch_size=8,\r\nper_device_train_batch_size=8,\r\nprediction_loss_only=False,\r\npush_to_hub=False,\r\npush_to_hub_model_id=None,\r\npush_to_hub_organization=None,\r\npush_to_hub_token=<PUSH_TO_HUB_TOKEN>,\r\nray_scope=last,\r\nremove_unused_columns=True,\r\nreport_to=['tensorboard'],\r\nresume_from_checkpoint=None,\r\nrun_name=./bloom,\r\nsave_on_each_node=False,\r\nsave_only_model=False,\r\nsave_safetensors=True,\r\nsave_steps=500,\r\nsave_strategy=steps,\r\nsave_total_limit=None,\r\nseed=42,\r\nskip_memory_metrics=True,\r\nsplit_batches=False,\r\ntf32=None,\r\ntorch_compile=True,\r\ntorch_compile_backend=inductor,\r\ntorch_compile_mode=None,\r\ntorchdynamo=None,\r\ntpu_metrics_debug=False,\r\ntpu_num_cores=None,\r\nuse_cpu=False,\r\nuse_ipex=False,\r\nuse_legacy_prediction_loop=False,\r\nuse_mps_device=False,\r\nwarmup_ratio=0.0,\r\nwarmup_steps=0,\r\nweight_decay=0.0,\r\n)\r\n12/06/2023 03:21:07 - INFO - __main__ - load a local file for train: ./data/train.csv\r\n12/06/2023 03:21:07 - INFO - __main__ - load a local file for validation: ./data/test.csv\r\n12/06/2023 03:21:07 - INFO - datasets.builder - Using custom data configuration default-8f347103001581ec\r\n12/06/2023 03:21:07 - INFO - datasets.info - Loading Dataset Infos from /usr/local/lib/python3.10/dist-packages/datasets/packaged_modules/csv\r\n12/06/2023 03:21:07 - INFO - datasets.builder - Generating dataset csv (/home/hf/.cache/huggingface/datasets/csv/default-8f347103001581ec/0.0.0/eea64c71ca8b46dd3f537ed218fc9bf495d5707789152eb2764f5c78fa66d59d)\r\n12/06/2023 03:21:07 - INFO - datasets.builder - Downloading and preparing dataset csv/default to /home/hf/.cache/huggingface/datasets/csv/default-8f347103001581ec/0.0.0/eea64c71ca8b46dd3f537ed218fc9bf495d5707789152eb2764f5c78fa66d59d...\r\n12/06/2023 03:21:07 - INFO - datasets.download.download_manager - Downloading took 0.0 min\r\n12/06/2023 03:21:07 - INFO - datasets.download.download_manager - Checksum Computation took 0.0 min\r\n12/06/2023 03:21:07 - INFO - datasets.builder - Generating train split\r\n12/06/2023 03:21:07 - INFO - datasets.builder - Generating validation split\r\n12/06/2023 03:21:07 - INFO - datasets.utils.info_utils - Unable to verify splits 
sizes.\r\n12/06/2023 03:21:07 - INFO - datasets.builder - Dataset csv downloaded and prepared to /home/hf/.cache/huggingface/datasets/csv/default-8f347103001581ec/0.0.0/eea64c71ca8b46dd3f537ed218fc9bf495d5707789152eb2764f5c78fa66d59d. Subsequent calls will reuse this data.\r\n12/06/2023 03:21:08 - INFO - __main__ - setting problem type to single label classification\r\n[2023-12-06 03:25:01,464] [INFO] [partition_parameters.py:348:__exit__] finished initializing model - num_params = 294, num_elems = 1.07B\r\n12/06/2023 03:25:02 - WARNING - __main__ - The label2id key in the model config.json is not equal to the label2id key of this run. You can ignore this if you are doing finetuning.\r\n12/06/2023 03:25:02 - INFO - datasets.arrow_dataset - Caching processed dataset at /home/hf/.cache/huggingface/datasets/csv/default-8f347103001581ec/0.0.0/eea64c71ca8b46dd3f537ed218fc9bf495d5707789152eb2764f5c78fa66d59d/cache-3d872eada15ea9fd.arrow\r\n12/06/2023 03:25:02 - INFO - datasets.arrow_dataset - Caching processed dataset at /home/hf/.cache/huggingface/datasets/csv/default-8f347103001581ec/0.0.0/eea64c71ca8b46dd3f537ed218fc9bf495d5707789152eb2764f5c78fa66d59d/cache-e94ca5703f777eb9.arrow\r\n12/06/2023 03:25:02 - INFO - __main__ - Sample 912 of the training set: {'user': 'Small Chen', 'sentence': '是啊!滿滿的', 'label': 0, 'input_ids': [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 68211, 4077, 17111, 17111, 373], 'attention_mask': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1]}.\r\n12/06/2023 03:25:02 - INFO - __main__ - Sample 204 of the training set: {'user': '呱吉', 'sentence': '外國人的攔網', 'label': 0, 'input_ids': [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 67003, 6872, 125486, 8211], 'attention_mask': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1]}.\r\n12/06/2023 03:25:02 - INFO - __main__ - Sample 2253 of the training set: {'user': 'Mmmm', 'sentence': '他不是什麼華碩工程師', 'label': 0, 'input_ids': [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 205797, 17212, 7007, 81753, 126320], 'attention_mask': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1]}.\r\n12/06/2023 03:25:04 - INFO - __main__ - Using accuracy as classification score, you can use --metric_name to overwrite.\r\n[2023-12-06 03:25:05,096] [INFO] [launch.py:315:sigkill_handler] Killing subprocess 1374\r\n[2023-12-06 03:25:05,097] [ERROR] [launch.py:321:sigkill_handler] ['/usr/bin/python3', '-u', './script/run_classification.py', '--local_rank=0', '--model_name_or_path', 'ckip-joint/bloom-1b1-zh', '--do_train', '--do_eval', '--output_dir', './bloom', '--train_file', './data/train.csv', '--validation_file', './data/test.csv', '--text_column_names', 'sentence', '--label_column_name', 'label', '--overwrite_output_dir', '--fp16', '--torch_compile', '--deepspeed', 'eyJ0cmFpbl9taWNyb19iYXRjaF9zaXplX3Blcl9ncHUiOiAxLCAiYXV0b3R1bmluZyI6IHsiZW5hYmxlZCI6IHRydWUsICJtb2RlbF9pbmZvX3BhdGgiOiAiYXV0b3R1bmluZ19yZXN1bHRzL3Byb2ZpbGVfbW9kZWxfaW5mby9tb2RlbF9pbmZvLmpzb24iLCAibW9kZWxfaW5mbyI6IHsicHJvZmlsZSI6IHRydWV9LCAibWV0cmljX3BhdGgiOiAiYXV0b3R1bmluZ19yZXN1bHRzL3Byb2ZpbGVfbW9kZWxfaW5mby9tZXRyaWNzLmpzb24ifSwgInplcm9fb3B0aW1pemF0aW9uIjogeyJzdGFnZSI6IDN9LCAibWVtb3J5X2JyZWFrX2Rvd24iOiBmYWxzZX0='] exits with return code = 1\r\n```\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"gently pinging @muellerzr as you self assigned this! "
] | 1,701 | 1,706 | null | NONE | null | ### System Info
docker image: ```huggingface/transformers-pytorch-deepspeed-latest-gpu:latest```
- `transformers` version: 4.36.0.dev0
- Platform: Linux-6.2.0-37-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.1
- Accelerate version: 0.25.0.dev0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: rtx4060ti 16g
- Using distributed or parallel set-up in script?: no
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
run:
```
deepspeed --autotuning run \
./script/run_classification.py \
--model_name_or_path ckip-joint/bloom-1b1-zh \
--do_train \
--do_eval \
--output_dir ./bloom \
--train_file ./data/train.csv \
--validation_file ./data/test.csv \
--text_column_names sentence \
--label_column_name label \
--overwrite_output_dir \
--fp16 \
--torch_compile \
--deepspeed cfg/auto.json
```
cfg/auto.json:
```
{
"train_micro_batch_size_per_gpu": "auto",
"autotuning": {
"enabled": true,
"fast": false
}
}
```
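For reference, an equivalent config with the ZeRO stage written out explicitly would be the following (a sketch, not part of the original run; stage 3 matches the "Detected DeepSpeed ZeRO-3" line the logs print during init):
```json
{
    "train_micro_batch_size_per_gpu": "auto",
    "zero_optimization": {
        "stage": 3
    },
    "autotuning": {
        "enabled": true,
        "fast": false
    }
}
```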
the error:
```
[2023-12-04 11:51:42,325] [INFO] [real_accelerator.py:161:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2023-12-04 11:51:43,363] [WARNING] [runner.py:203:fetch_hostfile] Unable to find hostfile, will proceed with training with local resources only.
[2023-12-04 11:51:43,363] [INFO] [autotuner.py:71:__init__] Created autotuning experiments directory: autotuning_exps
[2023-12-04 11:51:43,364] [INFO] [autotuner.py:84:__init__] Created autotuning results directory: autotuning_exps
[2023-12-04 11:51:43,364] [INFO] [autotuner.py:200:_get_resource_manager] active_resources = OrderedDict([('localhost', [0])])
[2023-12-04 11:51:43,364] [INFO] [runner.py:362:run_autotuning] [Start] Running autotuning
[2023-12-04 11:51:43,364] [INFO] [autotuner.py:669:model_info_profile_run] Starting model info profile run.
0%| | 0/1 [00:00<?, ?it/s][2023-12-04 11:51:43,366] [INFO] [scheduler.py:344:run_experiment] Scheduler wrote ds_config to autotuning_results/profile_model_info/ds_config.json, /workspaces/hf/autotuning_results/profile_model_info/ds_config.json
[2023-12-04 11:51:43,367] [INFO] [scheduler.py:351:run_experiment] Scheduler wrote exp to autotuning_results/profile_model_info/exp.json, /workspaces/hf/autotuning_results/profile_model_info/exp.json
[2023-12-04 11:51:43,367] [INFO] [scheduler.py:378:run_experiment] Launching exp_id = 0, exp_name = profile_model_info, with resource = localhost:0, and ds_config = /workspaces/hf/autotuning_results/profile_model_info/ds_config.json
localhost: ssh: connect to host localhost port 22: Cannot assign requested address
pdsh@b97c1584d47d: localhost: ssh exited with exit code 255
[2023-12-04 11:51:59,057] [INFO] [scheduler.py:430:clean_up] Done cleaning up exp_id = 0 on the following workers: localhost
[2023-12-04 11:51:59,057] [INFO] [scheduler.py:393:run_experiment] Done running exp_id = 0, exp_name = profile_model_info, with resource = localhost:0
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:25<00:00, 25.01s/it]
[2023-12-04 11:52:08,378] [ERROR] [autotuner.py:699:model_info_profile_run] The model is not runnable with DeepSpeed with error = (
[2023-12-04 11:52:08,378] [INFO] [runner.py:367:run_autotuning] [End] Running autotuning
[2023-12-04 11:52:08,378] [INFO] [autotuner.py:1110:run_after_tuning] No optimal DeepSpeed configuration found by autotuning.
```
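The "ssh: connect to host localhost port 22" and "pdsh@...: localhost: ssh exited with exit code 255" lines suggest the autotuner dispatches each experiment through the pdsh/ssh launcher even for a single local GPU. A quick sanity check for passwordless ssh to localhost (an assumption about the environment, not something printed by the autotuner) is:
```bash
ssh -o BatchMode=yes localhost hostname
```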
### Expected behavior
Training starts and runs successfully under DeepSpeed autotuning. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27830/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27830/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27829 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27829/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27829/comments | https://api.github.com/repos/huggingface/transformers/issues/27829/events | https://github.com/huggingface/transformers/pull/27829 | 2,023,668,584 | PR_kwDOCUB6oc5hC0sW | 27,829 | [Seamless v2] Add FE to auto mapping | {
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27829). All of your documentation changes will be reflected on that endpoint.",
"Thanks for taking care of this @sanchit-gandhi !"
] | 1,701 | 1,702 | 1,701 | CONTRIBUTOR | null | As reported by @Vaibhavs10, Seamless M4T v2 is not compatible with the pipeline class. This is because the feature extractor is not present in the auto mapping. This PR adds the FE to the auto mapping, and implements a slow test to check Seamless M4T v2 works as expected with the pipeline.
### Example Usage
No chunking:
```python
from transformers import pipeline
import torch
from datasets import load_dataset
pipe = pipeline(
"automatic-speech-recognition",
model="facebook/seamless-m4t-v2-large",
device="cuda:0",
torch_dtype=torch.float16,
)
dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
sample = dataset[0]["audio"]
# eng -> fra speech translation
result = pipe(sample, generate_kwargs={"tgt_lang": "fra"})
```
Chunking (for long audio files):
```python
from transformers import pipeline
import torch
from datasets import load_dataset
pipe = pipeline(
"automatic-speech-recognition",
model="facebook/seamless-m4t-v2-large",
device="cuda:0",
chunk_length_s=30,
batch_size=16,
torch_dtype=torch.float16,
)
dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
sample = dataset[0]["audio"]
# eng -> fra speech translation
result = pipe(sample, generate_kwargs={"tgt_lang": "fra"})
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27829/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27829/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27829",
"html_url": "https://github.com/huggingface/transformers/pull/27829",
"diff_url": "https://github.com/huggingface/transformers/pull/27829.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27829.patch",
"merged_at": 1701707654000
} |
https://api.github.com/repos/huggingface/transformers/issues/27828 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27828/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27828/comments | https://api.github.com/repos/huggingface/transformers/issues/27828/events | https://github.com/huggingface/transformers/pull/27828 | 2,023,619,654 | PR_kwDOCUB6oc5hCp0v | 27,828 | Add "Fill-in-Middle" pipeline | {
"login": "tanaymeh",
"id": 26519539,
"node_id": "MDQ6VXNlcjI2NTE5NTM5",
"avatar_url": "https://avatars.githubusercontent.com/u/26519539?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tanaymeh",
"html_url": "https://github.com/tanaymeh",
"followers_url": "https://api.github.com/users/tanaymeh/followers",
"following_url": "https://api.github.com/users/tanaymeh/following{/other_user}",
"gists_url": "https://api.github.com/users/tanaymeh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tanaymeh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tanaymeh/subscriptions",
"organizations_url": "https://api.github.com/users/tanaymeh/orgs",
"repos_url": "https://api.github.com/users/tanaymeh/repos",
"events_url": "https://api.github.com/users/tanaymeh/events{/privacy}",
"received_events_url": "https://api.github.com/users/tanaymeh/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"This PR is currently WIP and the pipeline code is copied from the `text_generation` pipeline. Opened this PR for discussion on implementation details.\r\n\r\n@ArthurZucker, since we don't have a Dict with all the compatible models (like `MODEL_FOR_CAUSAL_LM_MAPPING_NAMES` in text_generation), will a list (example below) be a good starting point?\r\n\r\n```python\r\nif is_torch_available():\r\n FIM_SUPPORTED_MODELS_TORCH = [\r\n \"codellama/CodeLlama-7b-hf\",\r\n \"codellama/CodeLlama-13b-hf\",\r\n \"codellama/CodeLlama-34b-hf\",\r\n \"codellama/CodeLlama-7b-Python-hf\",\r\n \"codellama/CodeLlama-13b-Python-hf\",\r\n \"codellama/CodeLlama-34b-Python-hf\",\r\n ...\r\n ]\r\n```",
"Also, here's what I am doing in the implementation of this:\r\n\r\nInstead of expecting the end user to pass three FIM tokens in the input, we can just go ahead with one token called `<FILL_ME>` (inspired by what was done in [CodeLlama](https://huggingface.co/docs/transformers/main/model_doc/code_llama#:~:text=Under%20the%20hood%2C%20the%20tokenizer%20automatically%20splits%20by%20%3CFILL_ME%3E%20to%20create%20a%20formatted%20input%20string%20that%20follows%20the%20original%20training%20pattern.)).\r\n\r\nThe prefix and suffix tokens will be loaded from the tokenizer but the user's input will only contain the `<FILL_ME>` token using which the text will be split to get prefix and suffix text (as CodeLlama's tokenizer does it [here](https://github.com/huggingface/transformers/blob/facc66457ef79879cf63e5c02d6989ce98ac023d/src/transformers/models/code_llama/tokenization_code_llama.py#L263-L264)).\r\n\r\nI will also add an option in the pipeline to return the filled text or the total text (including the prompt).\r\n",
"Hi @ArthurZucker, could you please review this PR? \r\n`FIMPipeline` works and is compatible with CodeLlama family of models.",
"Sure, could you make sure the CIs are green? ",
"@ArthurZucker The documentation CI is failing consistently with the following error:\r\n\r\n```\r\nERROR src/transformers/pipelines/fill_in_middle.py - ValueError: line 14 of the docstring for transformers.pipelines.fill_in_middle.FIMPipeline has inconsistent leading whitespace: 'def fib(x: int) -> int:'\r\n```\r\nHowever, in the example of the FIM pipeline in the docstring, the spaces are a part of the generated code output.\r\n\r\n```python\r\n>>> from transformers import pipeline\r\n>>> PROMPT = '''\r\n def fib(x: int) -> int:\r\n <FILL_ME>\r\n return fib(x-1) + fib(x-2)\r\n '''\r\n>>> generator = pipeline(model=\"codellama/CodeLlama-7b-hf\")\r\n>>> generator(PROMPT, do_sample=False)\r\n[{'generated_text': \"\\ndef fib(x: int) -> int:\\n\\tif x == 0:\\n\\t\\treturn 0\\n\\tif x == 1:\\n\\t\\treturn 1\\n\\telse:\\n\\t\\treturn fib(x-1) + fib(x-2)\\n\"}]\r\n```\r\nShould I ignore this?",
"you can probably ignore it with ` # doctest: +SKIP` ",
"Also cc @Rocketknight1 ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27828). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Thanks for the detailed review, @ArthurZucker!\r\n\r\nI also noticed that my auto-formatting setting in Black has made some unwanted changes in the `text_generation.py` file. Also reverting that file to the original branch version.",
"No worries, ping me whenever for another review! ",
"@ArthurZucker About your suggestion on merging the CodeLlama FIM logic with other model variants: It would complicate things a lot in the pipeline code as it will require a lot of code from the CodeLlama tokenizer to be included directly in the pipeline code since the tokenizer deals with much of the FIM related operations in the `tokenize()` and other functions.\r\n\r\nKeeping them separate (i.e; For models that support a `<FILL_ME>` token out of the box vs. those that don't support it like starcoder) will be easier to debug and maintain and will remove redundancy.\r\n\r\nFYI, I am thinking of something like this:\r\n\r\nThe supported model dict will have all model names, corresponding FIM tokens and if they support Out of the box Infilling or not (if they don't, it will be None in place of infill token)\r\n```python\r\nSUPPORTED_MODELS = {\r\n \"bigcode/starcoder\": (\"<fim_prefix>\", \"<fim_middle>\", \"<fim_suffix>\", None),\r\n \"codellama/CodeLlama-7b-hf\": (\"▁<PRE>\", \"▁<MID>\", \"▁<SUF>\", \"<FILL_ME>\"),\r\n # other models\r\n}\r\n\r\n...\r\n\r\ninfill_token = self.SUPPORTED_MODELS[self.model.name_or_path]\r\n\r\nif not infill_token:\r\n # Extract prefix and suffix\r\n input_prefix, input_suffix = self.extract_prefix_suffix(prompt_text, infill_token)\r\n\r\n if mode == \"psm\":\r\n prompt_text = (\r\n self.DEFAULT_PREFIX_TOKEN\r\n + input_prefix\r\n + self.DEFAULT_SUFFIX_TOKEN\r\n + input_suffix\r\n + self.DEFAULT_MIDDLE_TOKEN\r\n )\r\n else:\r\n prompt_text = (\r\n self.DEFAULT_SUFFIX_TOKEN\r\n + input_suffix\r\n + self.DEFAULT_PREFIX_TOKEN\r\n + input_prefix\r\n + self.DEFAULT_MIDDLE_TOKEN\r\n )\r\nelse:\r\n # Tokenizer directly, since the Infill token is supported out of the box and doesn't need any token re-arrangement\r\n ...\r\n```\r\n\r\nPlease let me know your opinion on this!",
"IMO that is exactly the purpose of this pipeline. The functions should not necessarily have been part of the tokenizer as they are only need for the FIM task. So I am more saying let's not rely on any tokenizer, but handle the processing in the pipeline",
"> IMO that is exactly the purpose of this pipeline. The functions should not necessarily have been part of the tokenizer as they are only need for the FIM task. So I am more saying let's not rely on any tokenizer, but handle the processing in the pipeline\r\n\r\nI see, thanks for clarifying, will push the updates soon!",
"@ArthurZucker I added the naming and init file changes as you requested. Working on the tests by taking the text-generation tests as a reference.\r\n\r\nOne doubt: The smallest model that supports FIM (that I know of) is `codellama-7b` which is still a pretty huge model and should use GPU for testing. Is there a way to do the tests without using such a big model?",
"You should be able to use 4bit quantization! "
] | 1,701 | 1,708 | null | CONTRIBUTOR | null | # What does this PR do?
This PR adds the Fill-in-Middle pipeline to 🤗 transformers.
The FIM objective was proposed in [Efficient Training of Language Models to Fill in the Middle](https://arxiv.org/abs/2207.14255). The authors showed that autoregressive language models can learn to infill text after applying a straightforward transformation to the dataset, which simply moves a span of text from the middle of a document to its end.
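To illustrate the transformation, here is a minimal sketch (the `<PRE>`/`<SUF>`/`<MID>` sentinels are placeholders, not the special tokens of any particular checkpoint; CodeLlama, StarCoder, etc. each define their own):
```python
# Minimal sketch of the "PSM" (prefix-suffix-middle) rearrangement described in the paper.
# The sentinel strings are placeholders; real models define their own special tokens.
PRE, SUF, MID = "<PRE>", "<SUF>", "<MID>"


def to_fim_psm(document: str, span_start: int, span_end: int) -> str:
    """Move the middle span to the end so a causal LM can learn to infill it."""
    prefix = document[:span_start]
    middle = document[span_start:span_end]
    suffix = document[span_end:]
    # Training target: given prefix + suffix, generate the middle.
    return f"{PRE}{prefix}{SUF}{suffix}{MID}{middle}"


doc = "def fib(x):\n    if x < 2:\n        return x\n    return fib(x - 1) + fib(x - 2)\n"
print(to_fim_psm(doc, doc.index("if"), doc.index("return fib")))
```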
As discussed in #27059
## Who can review?
@sayakpaul @ArthurZucker | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27828/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27828/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27828",
"html_url": "https://github.com/huggingface/transformers/pull/27828",
"diff_url": "https://github.com/huggingface/transformers/pull/27828.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27828.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27827 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27827/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27827/comments | https://api.github.com/repos/huggingface/transformers/issues/27827/events | https://github.com/huggingface/transformers/pull/27827 | 2,023,460,326 | PR_kwDOCUB6oc5hCGlm | 27,827 | [Seamless v1] Link to v2 docs | {
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27827). All of your documentation changes will be reflected on that endpoint."
] | 1,701 | 1,701 | 1,701 | CONTRIBUTOR | null | # What does this PR do?
Adds a link to the Seamless v2 docs in the v1 docs.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27827/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27827/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27827",
"html_url": "https://github.com/huggingface/transformers/pull/27827",
"diff_url": "https://github.com/huggingface/transformers/pull/27827.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27827.patch",
"merged_at": 1701690474000
} |
https://api.github.com/repos/huggingface/transformers/issues/27826 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27826/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27826/comments | https://api.github.com/repos/huggingface/transformers/issues/27826/events | https://github.com/huggingface/transformers/issues/27826 | 2,023,392,812 | I_kwDOCUB6oc54moYs | 27,826 | Add LayoutLMProcessor | {
"login": "gau-nernst",
"id": 26946864,
"node_id": "MDQ6VXNlcjI2OTQ2ODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/26946864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gau-nernst",
"html_url": "https://github.com/gau-nernst",
"followers_url": "https://api.github.com/users/gau-nernst/followers",
"following_url": "https://api.github.com/users/gau-nernst/following{/other_user}",
"gists_url": "https://api.github.com/users/gau-nernst/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gau-nernst/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gau-nernst/subscriptions",
"organizations_url": "https://api.github.com/users/gau-nernst/orgs",
"repos_url": "https://api.github.com/users/gau-nernst/repos",
"events_url": "https://api.github.com/users/gau-nernst/events{/privacy}",
"received_events_url": "https://api.github.com/users/gau-nernst/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | [
"cc @amyeroberts and @NielsRogge if LayoutLM is just not as good we should use newest models",
"There are several advantages in using LayoutLMv1:\r\n- It's text-only, so it can be much more lightweight. Not depening on detectron2 is also a plus (there are no pre-built detectron2 for latest versions of PyTorch/CUDA)\r\n- From what I know, v2 and v3 don't permit commercial use, while v1 does.\r\n- impira/layoutlm-document-qa is very good. I haven't found a good fine-tuned v2 and v3 on DocVQA.",
"Alright then! Feel free to open a PR if you have time \r\n",
"Thanks @gau-nernst for opening this issue, indeed we only started defining processors for v2 and v3 but we could define one for v1 as well. Your PR already looks in a great state, let me know if you need any help."
] | 1,701 | 1,705 | null | CONTRIBUTOR | null | ### Feature request
Add a processor for LayoutLM. I'm not sure why v2 and v3 have their respective processors, but the original v1 doesn't. It should be almost identical to their v2 and v3 counterparts (apply tesseract OCR + call the tokenizer appropriately), without returning the resized image (`pixel_values`), since LayoutLMv1 is text-only.
This would also simplify the `document-question-answering` pipeline, since right now the pipeline repeats the above logic for LayoutLM.
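A rough sketch of what such a processor could do, assuming pytesseract is used for OCR; the `process` helper below is hypothetical and not an existing transformers API, and a real processor would also normalize the OCR boxes to 0-1000 and repeat each word's box for every sub-word token:
```python
import pytesseract
from PIL import Image
from transformers import LayoutLMTokenizerFast

tokenizer = LayoutLMTokenizerFast.from_pretrained("microsoft/layoutlm-base-uncased")

def process(image: Image.Image):
    # run OCR to get the words, mirroring what the v2/v3 processors do internally
    ocr = pytesseract.image_to_data(image, output_type=pytesseract.Output.DICT)
    words = [w for w in ocr["text"] if w.strip()]
    # LayoutLMv1 is text-only, so no `pixel_values` are returned
    return tokenizer(words, is_split_into_words=True, return_tensors="pt")
```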
### Motivation
Bring LayoutLM to feature parity with its v2 and v3 counterparts.
### Your contribution
I can submit a PR to add LayoutLMProcessor. It should be almost identical to v2 and v3, so the task should be straightforward.
Updating `document-question-answering` pipeline to use the new processor would be too complex since I'm not familiar with the codebase. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27826/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27826/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27825 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27825/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27825/comments | https://api.github.com/repos/huggingface/transformers/issues/27825/events | https://github.com/huggingface/transformers/pull/27825 | 2,023,309,274 | PR_kwDOCUB6oc5hBlmy | 27,825 | in peft finetune, only the trainable parameters need to be saved | {
"login": "sywangyi",
"id": 36058628,
"node_id": "MDQ6VXNlcjM2MDU4NjI4",
"avatar_url": "https://avatars.githubusercontent.com/u/36058628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sywangyi",
"html_url": "https://github.com/sywangyi",
"followers_url": "https://api.github.com/users/sywangyi/followers",
"following_url": "https://api.github.com/users/sywangyi/following{/other_user}",
"gists_url": "https://api.github.com/users/sywangyi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sywangyi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sywangyi/subscriptions",
"organizations_url": "https://api.github.com/users/sywangyi/orgs",
"repos_url": "https://api.github.com/users/sywangyi/repos",
"events_url": "https://api.github.com/users/sywangyi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sywangyi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@younesbelkada @pacman100 please help review the PR.",
"@sywangyi can you try to merge your branch with upstream main? Perhaps it will fix the current failing CI",
"@younesbelkada I have rebased to main. the code quality check failure is not related with the PR",
"@sywangyi can you try to re-run `make fixup` with `pip uninstall black && pip install -U ruff==0.1.5` ?",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27825). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"@pacman100 could you help review the PR? Thanks."
] | 1,701 | 1,702 | 1,702 | CONTRIBUTOR | null | to reduce the storage size and also save the time of checkpoint saving while using deepspeed for training
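A minimal sketch of the idea behind this change (illustrative only, not the exact Trainer code): filter the state dict down to the parameters that require gradients, i.e. the PEFT adapter weights, before writing the checkpoint.
```python
import torch

def trainable_state_dict(model: torch.nn.Module) -> dict:
    # names of parameters that are actually being trained (e.g. LoRA adapters)
    trainable = {name for name, param in model.named_parameters() if param.requires_grad}
    # keep only those entries when building the state dict to save
    return {k: v for k, v in model.state_dict().items() if k in trainable}
```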
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27825/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27825/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27825",
"html_url": "https://github.com/huggingface/transformers/pull/27825",
"diff_url": "https://github.com/huggingface/transformers/pull/27825.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27825.patch",
"merged_at": 1702909625000
} |
https://api.github.com/repos/huggingface/transformers/issues/27824 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27824/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27824/comments | https://api.github.com/repos/huggingface/transformers/issues/27824/events | https://github.com/huggingface/transformers/issues/27824 | 2,023,210,325 | I_kwDOCUB6oc54l71V | 27,824 | Could we Add Linear Projection Layer in pre-trained model? | {
"login": "DopeorNope-Lee",
"id": 86828497,
"node_id": "MDQ6VXNlcjg2ODI4NDk3",
"avatar_url": "https://avatars.githubusercontent.com/u/86828497?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DopeorNope-Lee",
"html_url": "https://github.com/DopeorNope-Lee",
"followers_url": "https://api.github.com/users/DopeorNope-Lee/followers",
"following_url": "https://api.github.com/users/DopeorNope-Lee/following{/other_user}",
"gists_url": "https://api.github.com/users/DopeorNope-Lee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DopeorNope-Lee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DopeorNope-Lee/subscriptions",
"organizations_url": "https://api.github.com/users/DopeorNope-Lee/orgs",
"repos_url": "https://api.github.com/users/DopeorNope-Lee/repos",
"events_url": "https://api.github.com/users/DopeorNope-Lee/events{/privacy}",
"received_events_url": "https://api.github.com/users/DopeorNope-Lee/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | open | false | null | [] | [] | 1,701 | 1,701 | null | NONE | null | ### Model description
As you know, if our model's positional embedding size is too large, the input vectors become too sparse, which leads to a performance decrease.
However, I have an idea to overcome it.
Before positional embedding, we add a new trainable linear projection layer to reduce its dimension.
For example, if our model's original positional embedding size is 4096, then we enlarge our input size to 8192.
Then a trainable linear projection layer projects 8192 down to 4096, and then we perform the positional embedding.
Let LP be a trainable linear projection layer and PE(X) be the positional embedding.
Then our calculation formula is as follows.
"Original Structure"
X(R^4096)-> PE(X, where X is in R^4096)-> model(PE(X))
"Proposed Structure"
X(R^8192) -> LP(X, where X is in R^8192) -> X'(R^4096) -> PE(X') -> model(PE(X'))
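A minimal PyTorch sketch of the proposed structure; the module name and sizes are placeholders rather than an existing transformers API, and the projection is applied over the length axis before the positional embedding:
```python
import torch
import torch.nn as nn

class ProjectedInput(nn.Module):
    """Sketch: project a long input (8192 positions) down to the model's native 4096."""

    def __init__(self, long_len: int = 8192, native_len: int = 4096, hidden: int = 768):
        super().__init__()
        # trainable linear projection LP over the sequence-length axis
        self.proj = nn.Linear(long_len, native_len)
        # positional embedding PE over the reduced length
        self.pos_emb = nn.Embedding(native_len, hidden)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 8192, hidden) -> (batch, 4096, hidden)
        x = self.proj(x.transpose(1, 2)).transpose(1, 2)
        positions = torch.arange(x.size(1), device=x.device)
        return x + self.pos_emb(positions)

print(ProjectedInput()(torch.randn(2, 8192, 768)).shape)  # torch.Size([2, 4096, 768])
```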
Can we run this with the current library? If not, could you add support for it?
### Open source status
- [ ] The model implementation is available
- [ ] The model weights are available
### Provide useful links for the implementation
_No response_ | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27824/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27824/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27823 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27823/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27823/comments | https://api.github.com/repos/huggingface/transformers/issues/27823/events | https://github.com/huggingface/transformers/pull/27823 | 2,023,184,318 | PR_kwDOCUB6oc5hBKSp | 27,823 | [`ModelOnTheFlyConversionTester`] Mark as slow for now | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27823). All of your documentation changes will be reflected on that endpoint."
] | 1,701 | 1,701 | 1,701 | COLLABORATOR | null | # What does this PR do?
The ModelOnTheFlyConversionTester tests were run on each individual PR, which should not be the case. IMO these should be marked as slow and only run on the main branch, not fetched all the time. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27823/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27823/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27823",
"html_url": "https://github.com/huggingface/transformers/pull/27823",
"diff_url": "https://github.com/huggingface/transformers/pull/27823.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27823.patch",
"merged_at": 1701675195000
} |
https://api.github.com/repos/huggingface/transformers/issues/27822 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27822/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27822/comments | https://api.github.com/repos/huggingface/transformers/issues/27822/events | https://github.com/huggingface/transformers/issues/27822 | 2,022,937,548 | I_kwDOCUB6oc54k5PM | 27,822 | [tests][models]An error about Owlv2ModelIntegrationTest.test_inference_object_detection | {
"login": "Tyx-main",
"id": 134379153,
"node_id": "U_kgDOCAJ2kQ",
"avatar_url": "https://avatars.githubusercontent.com/u/134379153?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Tyx-main",
"html_url": "https://github.com/Tyx-main",
"followers_url": "https://api.github.com/users/Tyx-main/followers",
"following_url": "https://api.github.com/users/Tyx-main/following{/other_user}",
"gists_url": "https://api.github.com/users/Tyx-main/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Tyx-main/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Tyx-main/subscriptions",
"organizations_url": "https://api.github.com/users/Tyx-main/orgs",
"repos_url": "https://api.github.com/users/Tyx-main/repos",
"events_url": "https://api.github.com/users/Tyx-main/events{/privacy}",
"received_events_url": "https://api.github.com/users/Tyx-main/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi,\r\n\r\nThis was fixed in https://github.com/huggingface/transformers/pull/27793",
"> Hi,\r\n> \r\n> This was fixed in #27793\r\n\r\nThanks!"
] | 1,701 | 1,701 | 1,701 | NONE | null | ### System Info
torch: 2.1.0
transformers: 4.35.2
accelerate: 0.24.1
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
In line 844 of tests/models/owlv2/test_modeling_owlv2.py:
```
self.assertTrue(torch.allclose(outputs.logits[0, :3, :3], expected_slice_logits, atol=1e-4))
```
The function 'torch.allclose' expects all of its tensor arguments to be on the same device, but 'outputs.logits[0, :3, :3]' is on CUDA while 'expected_slice_logits' is on the CPU, which produces the following error:
```
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
```
Is this a problem we can fix?
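For reference, a minimal sketch of the usual fix (with placeholder values, not the test's real numbers): move the expected tensor onto the same device as the model output before calling `torch.allclose`.
```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

logits = torch.randn(1, 3, 3, device=device)   # stands in for outputs.logits
expected_slice = logits[0, :3, :3].cpu()       # reference values created on CPU

# moving the reference to the logits' device avoids the cross-device RuntimeError
assert torch.allclose(logits[0, :3, :3], expected_slice.to(logits.device), atol=1e-4)
```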
### Expected behavior
Expected the 'test_inference_object_detection' function of 'Owlv2ModelIntegrationTest' to pass on CUDA. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27822/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27822/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27821 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27821/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27821/comments | https://api.github.com/repos/huggingface/transformers/issues/27821/events | https://github.com/huggingface/transformers/issues/27821 | 2,022,935,741 | I_kwDOCUB6oc54k4y9 | 27,821 | Inaccurate Inference Results When Using OWLv2's `image_guided_detection` | {
"login": "Itto1992",
"id": 20178227,
"node_id": "MDQ6VXNlcjIwMTc4MjI3",
"avatar_url": "https://avatars.githubusercontent.com/u/20178227?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Itto1992",
"html_url": "https://github.com/Itto1992",
"followers_url": "https://api.github.com/users/Itto1992/followers",
"following_url": "https://api.github.com/users/Itto1992/following{/other_user}",
"gists_url": "https://api.github.com/users/Itto1992/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Itto1992/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Itto1992/subscriptions",
"organizations_url": "https://api.github.com/users/Itto1992/orgs",
"repos_url": "https://api.github.com/users/Itto1992/repos",
"events_url": "https://api.github.com/users/Itto1992/events{/privacy}",
"received_events_url": "https://api.github.com/users/Itto1992/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi,\r\n\r\nThis was addressed in #27698. It's already fixed on main: https://huggingface.co/docs/transformers/main/en/model_doc/owlv2#transformers.Owlv2ForObjectDetection",
"Thank you for your quick response!\r\nI will try the main branch😁"
] | 1,701 | 1,701 | 1,701 | NONE | null | ### System Info
- `transformers` version: 4.35.2
- Platform: Linux-5.15.120+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.1
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0+cu118 (True)
- Tensorflow version (GPU?): 2.14.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.7.5 (gpu)
- Jax version: 0.4.20
- JaxLib version: 0.4.20
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@ArthurZucker
@younesbelkada
@amyeroberts
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run the sample code of OWLv2: https://huggingface.co/docs/transformers/model_doc/owlv2#transformers.Owlv2ForObjectDetection.image_guided_detection.example
### Expected behavior
- The bounding boxes intended to encircle two cats in the sample code appear in inappropriate positions.
- In the sample code (https://huggingface.co/docs/transformers/model_doc/owlv2#transformers.Owlv2ForObjectDetection.image_guided_detection.example), 13 boxes appear in locations unrelated to cats. I ran the exact same code in Google Colab and was able to replicate the same inference results.
- The previous model (OWLViT) was able to detect without any issues.
I am unsure if this behavior is expected, so I would appreciate some advice. If you are aware of any methods to improve the inference performance, please let me know. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27821/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27821/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27820 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27820/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27820/comments | https://api.github.com/repos/huggingface/transformers/issues/27820/events | https://github.com/huggingface/transformers/pull/27820 | 2,022,815,449 | PR_kwDOCUB6oc5g_5od | 27,820 | fix: non-atomic checkpoint save | {
"login": "thundergolfer",
"id": 12058921,
"node_id": "MDQ6VXNlcjEyMDU4OTIx",
"avatar_url": "https://avatars.githubusercontent.com/u/12058921?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thundergolfer",
"html_url": "https://github.com/thundergolfer",
"followers_url": "https://api.github.com/users/thundergolfer/followers",
"following_url": "https://api.github.com/users/thundergolfer/following{/other_user}",
"gists_url": "https://api.github.com/users/thundergolfer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thundergolfer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thundergolfer/subscriptions",
"organizations_url": "https://api.github.com/users/thundergolfer/orgs",
"repos_url": "https://api.github.com/users/thundergolfer/repos",
"events_url": "https://api.github.com/users/thundergolfer/events{/privacy}",
"received_events_url": "https://api.github.com/users/thundergolfer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Test failures appear unrelated to diff: \r\n\r\n```\r\n @classmethod\r\n def setUpClass(cls):\r\n cls.user = \"huggingface-hub-ci\"\r\n cls.token = os.getenv(\"HUGGINGFACE_PRODUCTION_USER_TOKEN\", None)\r\n \r\n if cls.token is None:\r\n> raise ValueError(\"Cannot run tests as secret isn't setup.\")\r\nE ValueError: Cannot run tests as secret isn't setup.\r\n\r\ntests/test_modeling_utils.py:1178: ValueError\r\n\r\n\r\nERROR tests/test_modeling_utils.py::ModelOnTheFlyConversionTester::test_safetensors_on_the_fly_conversion - ValueError: Cannot run tests as secret isn't setup.\r\nERROR tests/test_modeling_utils.py::ModelOnTheFlyConversionTester::test_safetensors_on_the_fly_conversion_gated - ValueError: Cannot run tests as secret isn't setup.\r\n```\r\n\r\nAlso, the checkpoint loading code could arguably use more sophisticated checkpoint validation and automatic fallback in cases of validation failure, but that's a follow-up issue. ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27820). All of your documentation changes will be reflected on that endpoint.",
"Thanks @muellerzr. Just rebased. \r\n\r\n**Edit:** Ok that revealed an issue! Are the CI tests fail fast? I didn't see test failures related to my diff before 🤔 ",
"@thundergolfer you should see them now. Looks like you may not be cleaning up the temp dirs right during testing? (I'd also just maybe remove the tempdir if possible after you're done needing it?)",
"@muellerzr there's actually some test cases that reveal a behavior change in this diff. \r\n\r\nIf a trainer run tries to reuse a checkpoint directory, with this diff that will now fail. I'm not sure how common this is in practice, but one of the test cases writes checkpoints `5, 10, 15, 20` and then restarts the training from checkpoint `15`. Thus when the trainer gets again to the checkpoint at step `20` it fails, because my diff does not support overwriting an existing checkpoint.\r\n\r\nFor now I can adjust the change to make it compatible with non-empty destination checkpoint directories, and log a warning that the checkpoint will be non-atomic. \r\n\r\nThe alternative of deleting everything in the destination directory is destructive and could break things for people.",
"@muellerzr alright tests are passing now. There's still code quality CI steps failing because it doesn't find a `token`. Doesn't seem like something I can fix? Do I need to rebase again?",
"Don't quite see the token issue, can you do `pip install -e .[quality] -U; make style; make quality`? ",
"You can also try rebasing again, as I assume this got fixed somewhere else",
"On the build documentation step, if I click through I see: \r\n\r\n<img width=\"1120\" alt=\"image\" src=\"https://github.com/huggingface/transformers/assets/12058921/18a21074-0787-4bae-852a-d23941ba327b\">\r\n",
"Will rebase",
"`pip install -e .[quality] -U; make style; make quality` works on my branch (exit `0`, no diff). The CI issues appear to be some step setup problem, not an actual failure.",
"Hmm I'm getting an error running with deepspeed. Is this what https://github.com/huggingface/transformers/pull/27929 is addressing?\r\n\r\n```\r\nSaving model checkpoint to /scratch/sanitycheck-20231211/tmp-checkpoint-10\r\nTrainer.model is not a `PreTrainedModel`, only saving its state dict.\r\ntokenizer config file saved in /scratch/sanitycheck-20231211/tmp-checkpoint-10/tokenizer_config.json\r\nSpecial tokens file saved in /scratch/sanitycheck-20231211/tmp-checkpoint-10/special_tokens_map.json\r\n[2023-12-12 06:20:12,261] [INFO] [logging.py:96:log_dist] [Rank 0] [Torch] Checkpoint global_step10 is about to be saved!\r\n[2023-12-12 06:20:12,266] [INFO] [logging.py:96:log_dist] [Rank 0] Saving model checkpoint: /scratch/sanitycheck-20231211/tmp-checkpoint-10/global_step10/zero_pp_rank_0_mp_rank_00_model_states.pt\r\n[2023-12-12 06:20:12,266] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /scratch/sanitycheck-20231211/tmp-checkpoint-10/global_step10/zero_pp_rank_0_mp_rank_00_model_states.pt...\r\n[2023-12-12 06:20:12,268] [INFO] [torch_checkpoint_engine.py:23:save] [Torch] Saved /scratch/sanitycheck-20231211/tmp-checkpoint-10/global_step10/zero_pp_rank_0_mp_rank_00_model_states.pt.\r\n[2023-12-12 06:20:12,269] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /scratch/sanitycheck-20231211/tmp-checkpoint-10/global_step10/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt...\r\n[2023-12-12 06:20:12,270] [INFO] [torch_checkpoint_engine.py:23:save] [Torch] Saved /scratch/sanitycheck-20231211/tmp-checkpoint-10/global_step10/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt.\r\n[2023-12-12 06:20:12,271] [INFO] [engine.py:3421:_save_zero_checkpoint] zero checkpoint saved /scratch/sanitycheck-20231211/tmp-checkpoint-10/global_step10/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt\r\n[2023-12-12 06:20:12,274] [INFO] [torch_checkpoint_engine.py:33:commit] [Torch] Checkpoint global_step10 is ready now!\r\nTraceback (most recent call last):\r\n File \"/opt/venv/lib/python3.11/site-packages/transformers/trainer.py\", line 1543, in train\r\n return inner_training_loop(\r\n ^^^^^^^^^^^^^^^^^^^^\r\n File \"/opt/venv/lib/python3.11/site-packages/transformers/trainer.py\", line 1942, in _inner_training_loop\r\n self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)\r\n File \"/opt/venv/lib/python3.11/site-packages/transformers/trainer.py\", line 2302, in _maybe_log_save_evaluate\r\n self._save_checkpoint(model, trial, metrics=metrics)\r\n File \"/opt/venv/lib/python3.11/site-packages/transformers/trainer.py\", line 2405, in _save_checkpoint\r\n self.state.save_to_json(os.path.join(staging_output_dir, TRAINER_STATE_NAME))\r\n File \"/opt/venv/lib/python3.11/site-packages/transformers/trainer_callback.py\", line 114, in save_to_json\r\n with open(json_path, \"w\", encoding=\"utf-8\") as f:\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nFileNotFoundError: [Errno 2] No such file or directory: '/scratch/sanitycheck-20231211/tmp-checkpoint-10/trainer_state.json'\r\n```"
] | 1,701 | 1,702 | 1,702 | CONTRIBUTOR | null | ## What does this PR do?
`transformers.trainer.Trainer` currently does not do atomic checkpointing. On checkpoint save, the code creates a checkpoint directory and then progressively adds more files to it. An exception can occur for various reasons in the middle of the checkpoint save process, leaving the checkpoint directory in an invalid state.
Some examples that can result in partial checkpoint save:
* Out of disk space
* Pre-emption of the host
* Serialization issues in trainer components
On restore, the trainer just looks up the latest available checkpoint by directory name. It does not check for and ignore directories with partial checkpoints.
This change adjusts the `_save_checkpoint` method to do all writing into a 'temporary directory' which is renamed to its final name only after all save operations have succeeded.
I don't put the temporary directory in `/tmp/` because that's sometimes a different filesystem mount to where checkpoints are saved and renaming in that situation produces `OSError: [Errno 18] Invalid cross-device link`. I don't use `shutil.move` because that's not atomic for directories.
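A minimal sketch of the approach (simplified; `save_fn` is a placeholder for whatever writes the checkpoint files): stage everything in a sibling directory on the same filesystem, then rename once at the end.
```python
import os

def atomic_save_checkpoint(output_dir: str, checkpoint_name: str, save_fn) -> str:
    final_dir = os.path.join(output_dir, checkpoint_name)
    # the staging directory lives next to the final one, so os.rename stays on one filesystem
    staging_dir = os.path.join(output_dir, f"tmp-{checkpoint_name}")
    os.makedirs(staging_dir, exist_ok=True)

    save_fn(staging_dir)  # all partial writes land in the staging directory

    # atomic on POSIX when source and destination share a filesystem
    os.rename(staging_dir, final_dir)
    return final_dir
```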
**Fixes:** # (didn't bother making one, just submitted PR :))
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@muellerzr, @pacman100
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27820/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27820/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27820",
"html_url": "https://github.com/huggingface/transformers/pull/27820",
"diff_url": "https://github.com/huggingface/transformers/pull/27820.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27820.patch",
"merged_at": 1702040934000
} |
https://api.github.com/repos/huggingface/transformers/issues/27819 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27819/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27819/comments | https://api.github.com/repos/huggingface/transformers/issues/27819/events | https://github.com/huggingface/transformers/issues/27819 | 2,022,763,001 | I_kwDOCUB6oc54kOn5 | 27,819 | [modelling] Missing causal mask in Llama model | {
"login": "KexinFeng",
"id": 23562091,
"node_id": "MDQ6VXNlcjIzNTYyMDkx",
"avatar_url": "https://avatars.githubusercontent.com/u/23562091?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KexinFeng",
"html_url": "https://github.com/KexinFeng",
"followers_url": "https://api.github.com/users/KexinFeng/followers",
"following_url": "https://api.github.com/users/KexinFeng/following{/other_user}",
"gists_url": "https://api.github.com/users/KexinFeng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KexinFeng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KexinFeng/subscriptions",
"organizations_url": "https://api.github.com/users/KexinFeng/orgs",
"repos_url": "https://api.github.com/users/KexinFeng/repos",
"events_url": "https://api.github.com/users/KexinFeng/events{/privacy}",
"received_events_url": "https://api.github.com/users/KexinFeng/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Attention masks were recently refactored, see the https://github.com/huggingface/transformers/blob/2c658b5a4282f2e824b4e23dc3bcda7ef27d5827/src/transformers/models/llama/modeling_llama.py#L889 method.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,701 | 1,704 | 1,704 | NONE | null | ### System Info
transformers-4.35.2
### Who can help?
@ArthurZucker @younesbelkd
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Line https://github.com/huggingface/transformers/blob/2c658b5a4282f2e824b4e23dc3bcda7ef27d5827/src/transformers/models/llama/modeling_llama.py#L413C9-L413C9f
Here the causal mask is missing; see the GPT-2 example in the expected behaviour below. Consequently, when using the Llama model for inference, causality is broken at the prefill step, i.e. the early tokens attend to the later ones.
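For context, a minimal sketch of the causal mask that is expected at the prefill step (illustrative only; the library builds it inside the attention-mask preparation code rather than in the attention layer itself):
```python
import torch

def make_causal_mask(seq_len: int, dtype: torch.dtype = torch.float32) -> torch.Tensor:
    # -inf above the diagonal so position i can only attend to positions <= i
    mask = torch.full((seq_len, seq_len), torch.finfo(dtype).min, dtype=dtype)
    return torch.triu(mask, diagonal=1)  # added to the attention scores before softmax

print(make_causal_mask(4))
```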
### Expected behavior
https://github.com/huggingface/transformers/blob/2c658b5a4282f2e824b4e23dc3bcda7ef27d5827/src/transformers/models/gpt2/modeling_gpt2.py#L194-L202 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27819/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27819/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27818 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27818/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27818/comments | https://api.github.com/repos/huggingface/transformers/issues/27818/events | https://github.com/huggingface/transformers/pull/27818 | 2,022,603,266 | PR_kwDOCUB6oc5g_Og3 | 27,818 | fix prompt strip to support tensors and np arrays | {
"login": "AvivSham",
"id": 43371254,
"node_id": "MDQ6VXNlcjQzMzcxMjU0",
"avatar_url": "https://avatars.githubusercontent.com/u/43371254?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AvivSham",
"html_url": "https://github.com/AvivSham",
"followers_url": "https://api.github.com/users/AvivSham/followers",
"following_url": "https://api.github.com/users/AvivSham/following{/other_user}",
"gists_url": "https://api.github.com/users/AvivSham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AvivSham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AvivSham/subscriptions",
"organizations_url": "https://api.github.com/users/AvivSham/orgs",
"repos_url": "https://api.github.com/users/AvivSham/repos",
"events_url": "https://api.github.com/users/AvivSham/events{/privacy}",
"received_events_url": "https://api.github.com/users/AvivSham/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"I was not able to make it pass the CI/CD tests due to import issues. @sanchit-gandhi can you please guide?",
"Thank you for reviewing @sanchit-gandhi !\r\nI was not aware of the framework agnostic constraint, I think we are ok now, can you please review it again?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"This PR is not stale, waiting for @sanchit-gandhi to approve."
] | 1,701 | 1,707 | null | NONE | null | # What does this PR do?
`WhisperTokenizer` does not strip the prompt ids when calling `decode` for `torch.Tensor`, `tf.Tensor`, and `np.ndarray`. It does not perform the slicing in `_strip_prompt` because the condition `isinstance(token_ids, list)` is not met.
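A minimal sketch of the framework-agnostic handling this PR aims for (simplified; the real `_strip_prompt` also checks for the prompt's start token before slicing):
```python
import numpy as np

def strip_prompt(token_ids, prompt_length: int):
    # normalize tensors/arrays to a plain Python list before slicing
    if hasattr(token_ids, "tolist"):
        token_ids = token_ids.tolist()
    return token_ids[prompt_length:]

print(strip_prompt(np.array([50361, 2221, 13, 50258, 440]), 3))  # -> [50258, 440]
```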
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sanchit-gandhi
@ArthurZucker | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27818/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27818/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27818",
"html_url": "https://github.com/huggingface/transformers/pull/27818",
"diff_url": "https://github.com/huggingface/transformers/pull/27818.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27818.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27817 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27817/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27817/comments | https://api.github.com/repos/huggingface/transformers/issues/27817/events | https://github.com/huggingface/transformers/issues/27817 | 2,022,599,611 | I_kwDOCUB6oc54jmu7 | 27,817 | Minor enhancement: Bounding box drawing of object detection should follow some edge cases. | {
"login": "Anindyadeep",
"id": 58508471,
"node_id": "MDQ6VXNlcjU4NTA4NDcx",
"avatar_url": "https://avatars.githubusercontent.com/u/58508471?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Anindyadeep",
"html_url": "https://github.com/Anindyadeep",
"followers_url": "https://api.github.com/users/Anindyadeep/followers",
"following_url": "https://api.github.com/users/Anindyadeep/following{/other_user}",
"gists_url": "https://api.github.com/users/Anindyadeep/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Anindyadeep/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Anindyadeep/subscriptions",
"organizations_url": "https://api.github.com/users/Anindyadeep/orgs",
"repos_url": "https://api.github.com/users/Anindyadeep/repos",
"events_url": "https://api.github.com/users/Anindyadeep/events{/privacy}",
"received_events_url": "https://api.github.com/users/Anindyadeep/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for the suggestion, please feel free to open a PR :)"
] | 1,701 | 1,703 | 1,703 | CONTRIBUTOR | null | ### System Info
```
- `transformers` version: 4.35.2
- Platform: Linux-5.15.120+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.1
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0+cu118 (False)
- Tensorflow version (GPU?): 2.14.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.7.5 (cpu)
- Jax version: 0.4.20
- JaxLib version: 0.4.20
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
```
### Who can help?
@stevhliu and @MKhalusova
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
So, I was trying to do object detection using the [official example](https://huggingface.co/docs/transformers/tasks/object_detection). I was at the dataset exploration stage, where I was using a different dataset. To start out, I simply copy-pasted this part of the code:
```python
import numpy as np
import os
from PIL import Image, ImageDraw
image = cppe5["train"][0]["image"]
annotations = cppe5["train"][0]["objects"]
draw = ImageDraw.Draw(image)
categories = cppe5["train"].features["objects"].feature["category"].names
id2label = {index: x for index, x in enumerate(categories, start=0)}
label2id = {v: k for k, v in id2label.items()}
for i in range(len(annotations["id"])):
box = annotations["bbox"][i]
class_idx = annotations["category"][i]
x, y, w, h = tuple(box)
draw.rectangle((x, y, x + w, y + h), outline="red", width=1)
draw.text((x, y), id2label[class_idx], fill="white")
image
```
This worked for the `cppe-5` dataset. Here is an example from this dataset:
```
cppe5["train"][0]
{'image_id': 15,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=943x663 at 0x7F9EC9E77C10>,
'width': 943,
'height': 663,
'objects': {'id': [114, 115, 116, 117],
'area': [3796, 1596, 152768, 81002],
'bbox': [[302.0, 109.0, 73.0, 52.0],
[810.0, 100.0, 57.0, 28.0],
[160.0, 31.0, 248.0, 616.0],
[741.0, 68.0, 202.0, 401.0]],
'category': [4, 4, 0, 0]}}
```
But, I was using a [different dataset](https://huggingface.co/datasets/kili-technology/plastic_in_river), with the following info:
```
'image': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=1280x720>,
'litter': {'label': [2, 2],
'bbox': [[0.6937950849533081,
0.17073695361614227,
0.017922647297382355,
0.011738809756934643],
[0.5574886202812195,
0.18079878389835358,
0.021695835515856743,
0.010061836801469326]]}}
```
Simply copy-pasting the initial code will not draw the bounding boxes. A simple if-else check on whether the box coordinates are normalized would make this code more robust and help newcomers avoid getting blocked when they are just starting out and learning by copy-pasting the code.
### Expected behavior
The expected behaviour should be the same for the sample dataset when I swap out the dataset name with the new one and replace its metadata. Here is a sample solution for this:
```python
image = cppe5["train"][0]["image"]
annotations = cppe5["train"][0]["objects"]
draw = ImageDraw.Draw(image)
width, height = image.size
categories = cppe5["train"].features["objects"].feature["category"].names
id2label = {index: x for index, x in enumerate(categories, start=0)}
label2id = {v: k for k, v in id2label.items()}
for i in range(len(annotations["id"])):
box = annotations["bbox"][i]
class_idx = annotations["category"][i]
x, y, w, h = tuple(box)
# Check if coordinates are normalized or not
if max(box) > 1.0:
# Coordinates are un-normalized, rescale them
x1, y1 = int(x), int(y)
x2, y2 = int(x + w), int(y + h)
else:
# Coordinates are normalized, scale them
x1 = int(x * width)
y1 = int(y * height)
x2 = int((x + w) * width)
y2 = int((y + h) * height)
draw.rectangle((x1, y1, x2, y2), outline="red", width=1)
draw.text((x1, y1), id2label[class_idx], fill="white")
image
```
This generates the same output.
Let me know; I am open to fixing this in a small PR. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27817/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27817/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27816 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27816/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27816/comments | https://api.github.com/repos/huggingface/transformers/issues/27816/events | https://github.com/huggingface/transformers/pull/27816 | 2,022,592,865 | PR_kwDOCUB6oc5g_Map | 27,816 | Fix beam score calculation issue for JAX version | {
"login": "VsonicV",
"id": 23429580,
"node_id": "MDQ6VXNlcjIzNDI5NTgw",
"avatar_url": "https://avatars.githubusercontent.com/u/23429580?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VsonicV",
"html_url": "https://github.com/VsonicV",
"followers_url": "https://api.github.com/users/VsonicV/followers",
"following_url": "https://api.github.com/users/VsonicV/following{/other_user}",
"gists_url": "https://api.github.com/users/VsonicV/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VsonicV/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VsonicV/subscriptions",
"organizations_url": "https://api.github.com/users/VsonicV/orgs",
"repos_url": "https://api.github.com/users/VsonicV/repos",
"events_url": "https://api.github.com/users/VsonicV/events{/privacy}",
"received_events_url": "https://api.github.com/users/VsonicV/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27816). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,701 | 1,701 | 1,701 | CONTRIBUTOR | null | # What does this PR do?
This PR fixes the same beam score calculation issue as #27351, for the JAX version.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@gante
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27816/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27816/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27816",
"html_url": "https://github.com/huggingface/transformers/pull/27816",
"diff_url": "https://github.com/huggingface/transformers/pull/27816.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27816.patch",
"merged_at": 1701927258000
} |
https://api.github.com/repos/huggingface/transformers/issues/27815 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27815/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27815/comments | https://api.github.com/repos/huggingface/transformers/issues/27815/events | https://github.com/huggingface/transformers/issues/27815 | 2,022,556,339 | I_kwDOCUB6oc54jcKz | 27,815 | transformers==4.35.2 compatibility issue with pytorch==1.7.1+cu110 | {
"login": "rabjab",
"id": 149430981,
"node_id": "U_kgDOCOgixQ",
"avatar_url": "https://avatars.githubusercontent.com/u/149430981?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabjab",
"html_url": "https://github.com/rabjab",
"followers_url": "https://api.github.com/users/rabjab/followers",
"following_url": "https://api.github.com/users/rabjab/following{/other_user}",
"gists_url": "https://api.github.com/users/rabjab/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rabjab/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabjab/subscriptions",
"organizations_url": "https://api.github.com/users/rabjab/orgs",
"repos_url": "https://api.github.com/users/rabjab/repos",
"events_url": "https://api.github.com/users/rabjab/events{/privacy}",
"received_events_url": "https://api.github.com/users/rabjab/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey! A few PRs aimed to fix this were opened #27803 and will address this",
"Thanks, @ArthurZucker - I'll close this since y'all are obviously on top of it already. Apologies for the noise. "
] | 1,701 | 1,701 | 1,701 | NONE | null | ### System Info
transformers==4.35.2
pytorch==1.7.1+cu110
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Install pytorch (torch==1.7.1+cu110) (assume using nvidia 470 drivers)
2. pip install transformers
3. `from transformers import AutoTokenizer, AutoModelForCausalLM`
### Expected behavior
`No module named 'torch.utils._pytree'`
Previous versions of the transformers package (e.g., 4.33.0) work fine.
Maybe something to look into regarding the dependency requirements...
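A rough sketch of the kind of guard that avoids the hard import error on older torch builds (hypothetical, not the actual fix in the linked PRs):
```python
import importlib.util

# torch.utils._pytree only exists in newer PyTorch releases
HAS_PYTREE = importlib.util.find_spec("torch.utils._pytree") is not None

if HAS_PYTREE:
    from torch.utils import _pytree as pytree
else:
    pytree = None  # fall back to code paths that do not need pytree
```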
Maybe something to add to docs? @stevhliu | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27815/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27815/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27814 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27814/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27814/comments | https://api.github.com/repos/huggingface/transformers/issues/27814/events | https://github.com/huggingface/transformers/pull/27814 | 2,022,555,164 | PR_kwDOCUB6oc5g_EFl | 27,814 | Fix beam score calculation issue for Tensorflow version | {
"login": "VsonicV",
"id": 23429580,
"node_id": "MDQ6VXNlcjIzNDI5NTgw",
"avatar_url": "https://avatars.githubusercontent.com/u/23429580?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VsonicV",
"html_url": "https://github.com/VsonicV",
"followers_url": "https://api.github.com/users/VsonicV/followers",
"following_url": "https://api.github.com/users/VsonicV/following{/other_user}",
"gists_url": "https://api.github.com/users/VsonicV/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VsonicV/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VsonicV/subscriptions",
"organizations_url": "https://api.github.com/users/VsonicV/orgs",
"repos_url": "https://api.github.com/users/VsonicV/repos",
"events_url": "https://api.github.com/users/VsonicV/events{/privacy}",
"received_events_url": "https://api.github.com/users/VsonicV/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@gante I have updated the PR following your suggestions. Now the `cur_len` represents the length of the entire sequence including the decoder prompt, which is consistent with the codebase."
] | 1,701 | 1,702 | 1,702 | CONTRIBUTOR | null | # What does this PR do?
This PR fixes the same beam score calculation issue as #27351, for the TensorFlow version.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@gante | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27814/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27814/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27814",
"html_url": "https://github.com/huggingface/transformers/pull/27814",
"diff_url": "https://github.com/huggingface/transformers/pull/27814.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27814.patch",
"merged_at": 1702041014000
} |
https://api.github.com/repos/huggingface/transformers/issues/27813 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27813/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27813/comments | https://api.github.com/repos/huggingface/transformers/issues/27813/events | https://github.com/huggingface/transformers/issues/27813 | 2,022,544,441 | I_kwDOCUB6oc54jZQ5 | 27,813 | whisper prompt_ids gets added to decoded predictions | {
"login": "MightyStud",
"id": 73258591,
"node_id": "MDQ6VXNlcjczMjU4NTkx",
"avatar_url": "https://avatars.githubusercontent.com/u/73258591?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MightyStud",
"html_url": "https://github.com/MightyStud",
"followers_url": "https://api.github.com/users/MightyStud/followers",
"following_url": "https://api.github.com/users/MightyStud/following{/other_user}",
"gists_url": "https://api.github.com/users/MightyStud/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MightyStud/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MightyStud/subscriptions",
"organizations_url": "https://api.github.com/users/MightyStud/orgs",
"repos_url": "https://api.github.com/users/MightyStud/repos",
"events_url": "https://api.github.com/users/MightyStud/events{/privacy}",
"received_events_url": "https://api.github.com/users/MightyStud/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Duplicate of #27594, will be closed by https://github.com/huggingface/transformers/pull/27836.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,701 | 1,704 | 1,704 | NONE | null | ### System Info
Reproduced the issue in Google Colab:
- `transformers` version: 4.35.2
- Platform: Linux-5.15.120+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.1
- Accelerate version: 0.25.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0+cu118 (True)
- Tensorflow version (GPU?): 2.14.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.7.5 (gpu)
- Jax version: 0.4.20
- JaxLib version: 0.4.20
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@sanchit-gandhi
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Created a Colab reproduction of the issue [here](https://colab.research.google.com/drive/1v0NaGrw1NVbrj14Y8LTmJKMy8zVDJf1u?usp=sharing)
here is the main code snippet:
```python
import torch
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline
from datasets import load_dataset
device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32
model_id = "openai/whisper-large-v3"
model = AutoModelForSpeechSeq2Seq.from_pretrained(
model_id, torch_dtype=torch_dtype, use_safetensors=True
)
model.to(device)
processor = AutoProcessor.from_pretrained(model_id)
prompt_text = "Mr. kuilter"
prompt_ids = processor.get_prompt_ids(prompt_text, return_tensors="pt")
pipe = pipeline(
"automatic-speech-recognition",
model=model,
tokenizer=processor.tokenizer,
feature_extractor=processor.feature_extractor,
max_new_tokens=128,
chunk_length_s=30,
batch_size=16,
return_timestamps=True,
torch_dtype=torch_dtype,
device=device,
generate_kwargs={
"prompt_ids": prompt_ids
}
)
dataset = load_dataset("distil-whisper/librispeech_long", "clean", split="validation")
sample = dataset[0]["audio"]
```
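The transcription is then obtained by running the pipeline on the sample (continuing the snippet above):
```python
result = pipe(sample)
print(result["text"])
```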
### Expected behavior
Without prompting, the output text is:
> Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel. Nor is Mr. Quilter's manner less interesting than his matter. He tells us that at this festive season of the year, with Christmas and roast beef looming before us, similes drawn from eating and its results occur most readily to the mind. He has grave doubts whether Sir Frederick Leighton's work is really Greek after all, and can discover in it but little of rocky Ithaca. Linnell's pictures are a sort of Upguards and Adam paintings, and Mason's exquisite idylls are as national as a jingo poem. Mr. Burkett Foster's landscapes smile at one much in the same way that Mr. Carker used to flash his teeth. And Mr. John Collier gives his sitter a cheerful slap on the back before he says, like a shampooer in a Turkish bath, Next man!
With prompting, the output text is:
> Mr. kuilter Mr. Kuilter is the apostle of the middle classes, and we are glad to welcome his gospel. Nor is Mr. Kuilter's manner less interesting than his matter. He tells us that at this festive season of the year, with Christmas and roast beef looming before us, similes drawn from eating and its results occur most readily to the mind. He has grave doubts whether Sir Frederick Leighton's work is really Greek after all, and can discover in it but little of rocky Ithaca. Lynell's pictures are a sort of Upgards and Adam paintings, and Mason's exquisite idylls are as national as a django poem. Mr. Burkett Foster's landscapes smile at one much in the same way that Mr. Carker used to flash his teeth. And Mr. John Collier gives his sitter a cheerful slap on the back before he says, like a shampooer in a Turkish bath, next man.
The prompt did affect the generated predictions, changing all instances of "Quilter" to "Kuilter" as intended; however, the output contains the prompt "Mr. Kuilter" at the beginning of the decoded predictions (an extra copy coming from the prompt).
I'm not sure why, but the prompt is getting added to the prediction. That shouldn't be the case, as shown in the [OpenAI cookbook](https://github.com/openai/openai-cookbook/blob/main/examples/Whisper_prompting_guide.ipynb). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27813/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27813/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27812 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27812/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27812/comments | https://api.github.com/repos/huggingface/transformers/issues/27812/events | https://github.com/huggingface/transformers/pull/27812 | 2,022,532,516 | PR_kwDOCUB6oc5g-_Uo | 27,812 | Fix beam score calculation issue for Tensorflow | {
"login": "VsonicV",
"id": 23429580,
"node_id": "MDQ6VXNlcjIzNDI5NTgw",
"avatar_url": "https://avatars.githubusercontent.com/u/23429580?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VsonicV",
"html_url": "https://github.com/VsonicV",
"followers_url": "https://api.github.com/users/VsonicV/followers",
"following_url": "https://api.github.com/users/VsonicV/following{/other_user}",
"gists_url": "https://api.github.com/users/VsonicV/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VsonicV/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VsonicV/subscriptions",
"organizations_url": "https://api.github.com/users/VsonicV/orgs",
"repos_url": "https://api.github.com/users/VsonicV/repos",
"events_url": "https://api.github.com/users/VsonicV/events{/privacy}",
"received_events_url": "https://api.github.com/users/VsonicV/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This PR is mixed with another PR. Close it for now. Will reopen another PR only with the relevant commits: #27814 ."
] | 1,701 | 1,702 | 1,701 | CONTRIBUTOR | null | # What does this PR do?
This PR fixes the same beam score calculation issue as in #27351, this time for the TensorFlow version.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@gante | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27812/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27812/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27812",
"html_url": "https://github.com/huggingface/transformers/pull/27812",
"diff_url": "https://github.com/huggingface/transformers/pull/27812.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27812.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27811 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27811/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27811/comments | https://api.github.com/repos/huggingface/transformers/issues/27811/events | https://github.com/huggingface/transformers/pull/27811 | 2,022,484,608 | PR_kwDOCUB6oc5g-1tC | 27,811 | Update __init__.py | {
"login": "Andron00e",
"id": 67924720,
"node_id": "MDQ6VXNlcjY3OTI0NzIw",
"avatar_url": "https://avatars.githubusercontent.com/u/67924720?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Andron00e",
"html_url": "https://github.com/Andron00e",
"followers_url": "https://api.github.com/users/Andron00e/followers",
"following_url": "https://api.github.com/users/Andron00e/following{/other_user}",
"gists_url": "https://api.github.com/users/Andron00e/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Andron00e/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Andron00e/subscriptions",
"organizations_url": "https://api.github.com/users/Andron00e/orgs",
"repos_url": "https://api.github.com/users/Andron00e/repos",
"events_url": "https://api.github.com/users/Andron00e/events{/privacy}",
"received_events_url": "https://api.github.com/users/Andron00e/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> Hey! This class does not seem to exist, what is the motivation here?\r\n\r\nI want to create a new class \"CLIPForImageClassification\" for such models. It can benefit as a backbone for further novel architectures in field of Contrastive Learning and Bottleneck models.",
"Linked to #27802 and #27805 let's try to put everything in a single PR",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,701 | 1,704 | 1,704 | NONE | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Added methods of new models.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27811/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27811/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27811",
"html_url": "https://github.com/huggingface/transformers/pull/27811",
"diff_url": "https://github.com/huggingface/transformers/pull/27811.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27811.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27810 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27810/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27810/comments | https://api.github.com/repos/huggingface/transformers/issues/27810/events | https://github.com/huggingface/transformers/issues/27810 | 2,022,414,833 | I_kwDOCUB6oc54i5nx | 27,810 | deepspeed autotuning intergration | {
"login": "yongjer",
"id": 54315206,
"node_id": "MDQ6VXNlcjU0MzE1MjA2",
"avatar_url": "https://avatars.githubusercontent.com/u/54315206?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yongjer",
"html_url": "https://github.com/yongjer",
"followers_url": "https://api.github.com/users/yongjer/followers",
"following_url": "https://api.github.com/users/yongjer/following{/other_user}",
"gists_url": "https://api.github.com/users/yongjer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yongjer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yongjer/subscriptions",
"organizations_url": "https://api.github.com/users/yongjer/orgs",
"repos_url": "https://api.github.com/users/yongjer/repos",
"events_url": "https://api.github.com/users/yongjer/events{/privacy}",
"received_events_url": "https://api.github.com/users/yongjer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | [] | 1,701 | 1,701 | null | NONE | null | ### Feature request
Integrate DeepSpeed's autotuning feature.
### Motivation
Make DeepSpeed easier to use: with autotuning, users would not have to hand-tune any config.
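For context, DeepSpeed's own autotuner is driven by an `autotuning` block in the DeepSpeed config and the `--autotuning` launcher flag. A rough sketch of such a config as a Python dict (field names follow my reading of the DeepSpeed docs; this is not an existing Transformers integration):
```python
# Hypothetical DeepSpeed config enabling the built-in autotuner.
# It would typically be launched with something like:
#   deepspeed --autotuning run train.py --deepspeed ds_config.json
ds_config = {
    "train_micro_batch_size_per_gpu": "auto",  # let the autotuner search for it
    "autotuning": {
        "enabled": True,
        "fast": True,
    },
}
```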
### Your contribution
none | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27810/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27810/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27808 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27808/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27808/comments | https://api.github.com/repos/huggingface/transformers/issues/27808/events | https://github.com/huggingface/transformers/pull/27808 | 2,022,369,742 | PR_kwDOCUB6oc5g-e-M | 27,808 | Fix remaining issues in beam score calculation | {
"login": "VsonicV",
"id": 23429580,
"node_id": "MDQ6VXNlcjIzNDI5NTgw",
"avatar_url": "https://avatars.githubusercontent.com/u/23429580?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VsonicV",
"html_url": "https://github.com/VsonicV",
"followers_url": "https://api.github.com/users/VsonicV/followers",
"following_url": "https://api.github.com/users/VsonicV/following{/other_user}",
"gists_url": "https://api.github.com/users/VsonicV/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VsonicV/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VsonicV/subscriptions",
"organizations_url": "https://api.github.com/users/VsonicV/orgs",
"repos_url": "https://api.github.com/users/VsonicV/repos",
"events_url": "https://api.github.com/users/VsonicV/events{/privacy}",
"received_events_url": "https://api.github.com/users/VsonicV/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> Good catch, thanks for fixing!\r\n> \r\n> BTW, run `RUN_SLOW=1 py.test tests/models/vision_encoder_decoder/test_modeling_vision_encoder_decoder.py::ViT2GPT2ModelIntegrationTest::test_inference_coco_en` -- this test may need an update in its value\r\n\r\n@gante I have updated the expectation value for this test. I have also incorporated all your suggestions. Ready to go!",
"@gante I have also updated the usage of `cur_len` in this Pytorch version following your suggestions in #27814 , now it represents the length of the entire sequence including the decoder prompt, which is consistent with the remaining codebase.",
"All suggested changes are incorporated. Ready to go! @gante @ArthurZucker "
] | 1,701 | 1,702 | 1,702 | CONTRIBUTOR | null | # What does this PR do?
This PR fixes the remaining issues in the beam score calculation, following up on #27351.
More specifically:
1) When adding a new hypothesis, the `hyp` in `process` does not include the next generated token on which the current beam score is computed, whereas the `hyp` in `finalize` includes all tokens generated so far. This inconsistency is resolved by changing the `add` function of `BeamHypotheses`: we now pass the current length of the generated tokens to `add` directly (see the sketch after this list).
2) When computing the best possible beam score in the `is_done` function of `BeamHypotheses`, `max_length` was used directly without deducting `decoder_prompt_len`. This is fixed now.
3) Updated the test expectations accordingly.
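A rough, stripped-down sketch of what 1) and 2) amount to (illustrative only; the actual diff touches `BeamHypotheses` in `generation/beam_search.py`):
```python
# Minimal sketch of the two changes on a stripped-down BeamHypotheses.
class BeamHypotheses:
    def __init__(self, num_beams, length_penalty, max_length):
        self.num_beams = num_beams
        self.length_penalty = length_penalty
        self.max_length = max_length
        self.beams = []

    def add(self, hyp, sum_logprobs, generated_len=None):
        # 1) normalise by the number of *generated* tokens, so that `process`
        #    (which passes the length explicitly) and `finalize` are consistent
        length = generated_len if generated_len is not None else hyp.shape[-1]
        score = sum_logprobs / (length**self.length_penalty)
        self.beams.append((score, hyp))

    def is_done(self, best_sum_logprobs, cur_len, decoder_prompt_len=0):
        # 2) deduct the decoder prompt length when estimating the best possible score
        best_possible_score = best_sum_logprobs / (
            (self.max_length - decoder_prompt_len) ** self.length_penalty
        )
        worst_kept_score = min((s for s, _ in self.beams), default=float("-inf"))
        return len(self.beams) >= self.num_beams and worst_kept_score >= best_possible_score
```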
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
## Who can review?
@gante
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27808/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27808/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27808",
"html_url": "https://github.com/huggingface/transformers/pull/27808",
"diff_url": "https://github.com/huggingface/transformers/pull/27808.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27808.patch",
"merged_at": 1702041256000
} |
https://api.github.com/repos/huggingface/transformers/issues/27807 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27807/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27807/comments | https://api.github.com/repos/huggingface/transformers/issues/27807/events | https://github.com/huggingface/transformers/pull/27807 | 2,022,185,929 | PR_kwDOCUB6oc5g94Ph | 27,807 | Documentation: Spanish translation of perplexity.mdx | {
"login": "aaronjimv",
"id": 67152883,
"node_id": "MDQ6VXNlcjY3MTUyODgz",
"avatar_url": "https://avatars.githubusercontent.com/u/67152883?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aaronjimv",
"html_url": "https://github.com/aaronjimv",
"followers_url": "https://api.github.com/users/aaronjimv/followers",
"following_url": "https://api.github.com/users/aaronjimv/following{/other_user}",
"gists_url": "https://api.github.com/users/aaronjimv/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aaronjimv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aaronjimv/subscriptions",
"organizations_url": "https://api.github.com/users/aaronjimv/orgs",
"repos_url": "https://api.github.com/users/aaronjimv/repos",
"events_url": "https://api.github.com/users/aaronjimv/events{/privacy}",
"received_events_url": "https://api.github.com/users/aaronjimv/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello. I am open to any feedback. Thanks.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27807). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,701 | 1,701 | 1,701 | CONTRIBUTOR | null | # What does this PR do?
Add the Spanish version of perplexity.mdx to transformers/docs/source/es
Fixes #15947
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@omarespejel @sgugger @osanseviero | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27807/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27807/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27807",
"html_url": "https://github.com/huggingface/transformers/pull/27807",
"diff_url": "https://github.com/huggingface/transformers/pull/27807.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27807.patch",
"merged_at": 1701802435000
} |
https://api.github.com/repos/huggingface/transformers/issues/27806 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27806/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27806/comments | https://api.github.com/repos/huggingface/transformers/issues/27806/events | https://github.com/huggingface/transformers/issues/27806 | 2,022,160,599 | I_kwDOCUB6oc54h7jX | 27,806 | Support for tokenising higher batch dimensions | {
"login": "saswat0",
"id": 32325136,
"node_id": "MDQ6VXNlcjMyMzI1MTM2",
"avatar_url": "https://avatars.githubusercontent.com/u/32325136?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/saswat0",
"html_url": "https://github.com/saswat0",
"followers_url": "https://api.github.com/users/saswat0/followers",
"following_url": "https://api.github.com/users/saswat0/following{/other_user}",
"gists_url": "https://api.github.com/users/saswat0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/saswat0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/saswat0/subscriptions",
"organizations_url": "https://api.github.com/users/saswat0/orgs",
"repos_url": "https://api.github.com/users/saswat0/repos",
"events_url": "https://api.github.com/users/saswat0/events{/privacy}",
"received_events_url": "https://api.github.com/users/saswat0/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | closed | false | null | [] | [
"You should probably pack your data, will also be more efficient I think. Separate them with the EOS ",
"@ArthurZucker I'm not sure if I got your point entirely.\r\nFor now, I'm flattening the whole batch of tensors before passing in into the model and then reshaping it after getting the output.\r\nDo you mean that I concatenate the sentences together with a <EOS> token before tokenising them? If so, how do I represent the sentence pairs (i.e., concatenate across which axis)?",
"What I mean is that you cannot pass list of list of list of string to a tokenizer. What you need to do is to seperate with a special token like `<SEP>` for example. \r\nI am not sure I understand your last comment as the tokenizer does not take tensors as an input. Models usually support batches of input so no need to flattent but if it works alrifht. \r\n\r\n"
] | 1,701 | 1,704 | 1,704 | NONE | null | ### Feature request
Can inference be performed with larger batch dimensions?
Currently, tokenisation is supported up to List[List[str]], and any dimension higher than that needs to be traversed via a for loop. Can we relax this requirement and process the entire batch in one go?
### Motivation
Tokenisation for a batch of batches of string pairs has to be done sequentially by iterating over each sub-batch. On a GPU, it should be possible to do this in one go, thereby reducing latency and improving utilisation by a large margin.
Currently, the following snippet works fine
```
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-reranker-large')
model = AutoModelForSequenceClassification.from_pretrained('BAAI/bge-reranker-large')
model.eval()
pairs = [['what is panda?', 'hi'], ['what is panda?', 'The giant panda is a bear species endemic to China.']]
with torch.no_grad():
inputs = tokenizer(pairs, padding=True, truncation=True, return_tensors='pt', max_length=512)
scores = model(**inputs, return_dict=True).logits.view(-1, ).float()
print(scores)
```
But when encoding a batch of such batches for comparison, it fails.
```
query_list = [[['what is panda?', 'hi'], ['what is panda?', 'The giant panda is a bear species endemic to China.']] for _ in range(4)]
```
Following is the error (**irrespective of the tokenizer**)
```
File ~/anaconda3/envs/reranking_project/lib/python3.10/site-packages/torch/utils/_contextlib.py:115, in context_decorator.<locals>.decorate_context(*args, **kwargs)
112 @functools.wraps(func)
113 def decorate_context(*args, **kwargs):
114 with ctx_factory():
--> 115 return func(*args, **kwargs)
File ~/anaconda3/envs/reranking_project/lib/python3.10/site-packages/FlagEmbedding/flag_models.py:149, in FlagReranker.compute_score(self, sentence_pairs, batch_size, max_length)
146 for start_index in tqdm(range(0, len(sentence_pairs), batch_size), desc=\"Compute Scores\",
147 disable=len(sentence_pairs) < 128):
148 sentences_batch = sentence_pairs[start_index:start_index + batch_size]
--> 149 inputs = self.tokenizer(
150 sentences_batch,
151 padding=True,
152 truncation=True,
153 return_tensors='pt',
154 max_length=max_length,
155 ).to(self.device)
157 scores = self.model(**inputs, return_dict=True).logits.view(-1, ).float()
158 all_scores.extend(scores.cpu().numpy().tolist())
File ~/anaconda3/envs/reranking_project/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:2602, in PreTrainedTokenizerBase.__call__(self, text, text_pair, text_target, text_pair_target, add_special_tokens, padding, truncation, max_length, stride, is_split_into_words, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs)
2600 if not self._in_target_context_manager:
2601 self._switch_to_input_mode()
-> 2602 encodings = self._call_one(text=text, text_pair=text_pair, **all_kwargs)
2603 if text_target is not None:
2604 self._switch_to_target_mode()
File ~/anaconda3/envs/reranking_project/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:2660, in PreTrainedTokenizerBase._call_one(self, text, text_pair, add_special_tokens, padding, truncation, max_length, stride, is_split_into_words, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs)
2657 return False
2659 if not _is_valid_text_input(text):
-> 2660 raise ValueError(
2661 \"text input must of type `str` (single example), `List[str]` (batch or single pretokenized example) \"
2662 \"or `List[List[str]]` (batch of pretokenized examples).\"
2663 )
2665 if text_pair is not None and not _is_valid_text_input(text_pair):
2666 raise ValueError(
2667 \"text input must of type `str` (single example), `List[str]` (batch or single pretokenized example) \"
2668 \"or `List[List[str]]` (batch of pretokenized examples).\"
2669 )
ValueError: text input must of type `str` (single example), `List[str]` (batch or single pretokenized example) or `List[List[str]]` (batch of pretokenized examples)."
}
```
Is there any way to perform inference at once for a List[List[List[str]]] without iterating one by one?
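For reference, the workaround I currently use is to flatten the nested batch into a single list of pairs and reshape the scores afterwards (rough sketch, reusing `tokenizer`, `model` and `query_list` from the snippets above):
```python
batch_shape = (len(query_list), len(query_list[0]))  # e.g. (4, 2)
flat_pairs = [pair for sub_batch in query_list for pair in sub_batch]

with torch.no_grad():
    inputs = tokenizer(flat_pairs, padding=True, truncation=True, return_tensors="pt", max_length=512)
    flat_scores = model(**inputs, return_dict=True).logits.view(-1).float()

scores = flat_scores.view(*batch_shape)  # back to the original (batch, sub_batch) layout
```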
### Your contribution
No contribution. Just raising a request. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27806/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27806/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27805 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27805/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27805/comments | https://api.github.com/repos/huggingface/transformers/issues/27805/events | https://github.com/huggingface/transformers/pull/27805 | 2,022,079,978 | PR_kwDOCUB6oc5g9jXY | 27,805 | CLIPForImageClassification has been added | {
"login": "Andron00e",
"id": 67924720,
"node_id": "MDQ6VXNlcjY3OTI0NzIw",
"avatar_url": "https://avatars.githubusercontent.com/u/67924720?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Andron00e",
"html_url": "https://github.com/Andron00e",
"followers_url": "https://api.github.com/users/Andron00e/followers",
"following_url": "https://api.github.com/users/Andron00e/following{/other_user}",
"gists_url": "https://api.github.com/users/Andron00e/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Andron00e/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Andron00e/subscriptions",
"organizations_url": "https://api.github.com/users/Andron00e/orgs",
"repos_url": "https://api.github.com/users/Andron00e/repos",
"events_url": "https://api.github.com/users/Andron00e/events{/privacy}",
"received_events_url": "https://api.github.com/users/Andron00e/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,701 | 1,705 | 1,705 | NONE | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR adds a new model, CLIPForImageClassification, to the Hub.
Details about the implementation and a pre-trained version on the Hub can be found in my [repo](https://github.com/Andron00e/CLIPForImageClassification).
[My "New model" issue with describing of idea](https://github.com/huggingface/transformers/issues/27802)
[Model on hub link](https://huggingface.co/Andron00e/CLIPForImageClassification-v1)
[Code link](https://github.com/Andron00e/CLIPForImageClassification/blob/main/clip_for_classification/modeling_clipforimageclassification.py)
Tags:
vision models: @amyeroberts
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27805/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27805/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27805",
"html_url": "https://github.com/huggingface/transformers/pull/27805",
"diff_url": "https://github.com/huggingface/transformers/pull/27805.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27805.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27804 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27804/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27804/comments | https://api.github.com/repos/huggingface/transformers/issues/27804/events | https://github.com/huggingface/transformers/pull/27804 | 2,022,065,007 | PR_kwDOCUB6oc5g9gS4 | 27,804 | fix pytree warning | {
"login": "wm901115nwpu",
"id": 13309349,
"node_id": "MDQ6VXNlcjEzMzA5MzQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/13309349?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wm901115nwpu",
"html_url": "https://github.com/wm901115nwpu",
"followers_url": "https://api.github.com/users/wm901115nwpu/followers",
"following_url": "https://api.github.com/users/wm901115nwpu/following{/other_user}",
"gists_url": "https://api.github.com/users/wm901115nwpu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wm901115nwpu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wm901115nwpu/subscriptions",
"organizations_url": "https://api.github.com/users/wm901115nwpu/orgs",
"repos_url": "https://api.github.com/users/wm901115nwpu/repos",
"events_url": "https://api.github.com/users/wm901115nwpu/events{/privacy}",
"received_events_url": "https://api.github.com/users/wm901115nwpu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,701 | 1,702 | 1,702 | NONE | null | Fixes the following deprecation warning:
`UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.`
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27804/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27804/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27804",
"html_url": "https://github.com/huggingface/transformers/pull/27804",
"diff_url": "https://github.com/huggingface/transformers/pull/27804.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27804.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27803 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27803/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27803/comments | https://api.github.com/repos/huggingface/transformers/issues/27803/events | https://github.com/huggingface/transformers/pull/27803 | 2,022,056,701 | PR_kwDOCUB6oc5g9eqB | 27,803 | Fix the deprecation warning of _torch_pytree._register_pytree_node | {
"login": "cyyever",
"id": 17618148,
"node_id": "MDQ6VXNlcjE3NjE4MTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/17618148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cyyever",
"html_url": "https://github.com/cyyever",
"followers_url": "https://api.github.com/users/cyyever/followers",
"following_url": "https://api.github.com/users/cyyever/following{/other_user}",
"gists_url": "https://api.github.com/users/cyyever/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cyyever/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyyever/subscriptions",
"organizations_url": "https://api.github.com/users/cyyever/orgs",
"repos_url": "https://api.github.com/users/cyyever/repos",
"events_url": "https://api.github.com/users/cyyever/events{/privacy}",
"received_events_url": "https://api.github.com/users/cyyever/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> Hey! Could you share the warning and make sure this is backward compatible? 😉\r\n\r\nBackward compatible is guaranteed by using reflection, in that I used getattr to check the existence of the new method.",
"Rebase again.",
"you also need to run `make style` and have the ruff package `pip install ruff==0.1.5`",
"> you also need to run `make style` and have the ruff package `pip install ruff==0.1.5`\r\n\r\n`make style` formats \r\n```\r\n\tmodified: src/transformers/models/bloom/modeling_bloom.py\r\n\tmodified: src/transformers/models/fuyu/image_processing_fuyu.py\r\n\tmodified: src/transformers/models/mpt/modeling_mpt.py\r\n```\r\nwhich are unrelated to this PR. Is it permitted to submit such changes?",
"They should not be modified no. No big deal, no I think rebasing on main might help as well! Otherwise I can checkout your PR and push the styling . Also `pip uninstall black` could help ",
"I submitted another PR to format them.",
"No these files should not be modified 😓 ",
"Thank you! "
] | 1,701 | 1,702 | 1,702 | CONTRIBUTOR | null | # What does this PR do?
Fixes a deprecation warning triggered by the latest PyTorch:
```
UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
```
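A rough sketch of the backward-compatible approach (the exact change is in the diff; this just shows the idea):
```python
import torch.utils._pytree as _torch_pytree

# Prefer the new public API when it exists, and fall back to the deprecated
# private one so that older torch versions keep working.
_register_pytree_node = getattr(
    _torch_pytree, "register_pytree_node", _torch_pytree._register_pytree_node
)
```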
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27803/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27803/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27803",
"html_url": "https://github.com/huggingface/transformers/pull/27803",
"diff_url": "https://github.com/huggingface/transformers/pull/27803.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27803.patch",
"merged_at": 1702808022000
} |
https://api.github.com/repos/huggingface/transformers/issues/27802 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27802/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27802/comments | https://api.github.com/repos/huggingface/transformers/issues/27802/events | https://github.com/huggingface/transformers/issues/27802 | 2,022,043,340 | I_kwDOCUB6oc54he7M | 27,802 | CLIPForImageClassification | {
"login": "Andron00e",
"id": 67924720,
"node_id": "MDQ6VXNlcjY3OTI0NzIw",
"avatar_url": "https://avatars.githubusercontent.com/u/67924720?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Andron00e",
"html_url": "https://github.com/Andron00e",
"followers_url": "https://api.github.com/users/Andron00e/followers",
"following_url": "https://api.github.com/users/Andron00e/following{/other_user}",
"gists_url": "https://api.github.com/users/Andron00e/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Andron00e/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Andron00e/subscriptions",
"organizations_url": "https://api.github.com/users/Andron00e/orgs",
"repos_url": "https://api.github.com/users/Andron00e/repos",
"events_url": "https://api.github.com/users/Andron00e/events{/privacy}",
"received_events_url": "https://api.github.com/users/Andron00e/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
},
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Thanks for the suggestion, I've review the PR above",
"This is now supported by #28952."
] | 1,701 | 1,708 | 1,708 | NONE | null | ### Model description
CLIPForImageClassification is an add-on to the CLIPModel from the Hub. With a single linear classifier on top of CLIP, it performs image classification about as well as [ViTForImageClassification](https://github.com/huggingface/transformers/blob/v4.35.2/src/transformers/models/vit/modeling_vit.py#L756) does. Moreover, there are several options for producing a classification with this model.
The first one is to use the "image_embeds" output of CLIP and then produce a classification with a [projection_dim, num_classes] linear layer (a rough sketch is included below).
The second one is to use the "logits_per_image" output of CLIP and then do the classification.
Currently, the model implementation is partially available, with weights retrained on the CIFAR-10 dataset. My next goal is to expand the range of labels and implement the second variant of this model.
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
[Repo with implementation](https://github.com/Andron00e/CLIPForImageClassification)
[Pre-trained model on hub](https://huggingface.co/Andron00e/CLIPForImageClassification-v1) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27802/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27802/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27801 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27801/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27801/comments | https://api.github.com/repos/huggingface/transformers/issues/27801/events | https://github.com/huggingface/transformers/issues/27801 | 2,021,870,882 | I_kwDOCUB6oc54g00i | 27,801 | Training for ZeroShotImageClassification | {
"login": "hxydxn",
"id": 64323922,
"node_id": "MDQ6VXNlcjY0MzIzOTIy",
"avatar_url": "https://avatars.githubusercontent.com/u/64323922?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hxydxn",
"html_url": "https://github.com/hxydxn",
"followers_url": "https://api.github.com/users/hxydxn/followers",
"following_url": "https://api.github.com/users/hxydxn/following{/other_user}",
"gists_url": "https://api.github.com/users/hxydxn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hxydxn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hxydxn/subscriptions",
"organizations_url": "https://api.github.com/users/hxydxn/orgs",
"repos_url": "https://api.github.com/users/hxydxn/repos",
"events_url": "https://api.github.com/users/hxydxn/events{/privacy}",
"received_events_url": "https://api.github.com/users/hxydxn/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi,\r\n\r\nThe example you refer to is not meant for zero-shot image classification. It's meant for supervised image classification, when you have a dataset of (image, label) pairs.\r\n\r\nCLIP can be fine-tuned using this script: https://github.com/huggingface/transformers/tree/main/examples/pytorch/contrastive-image-text if you want to improve zero-shot image classification on a certain domain.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,701 | 1,704 | 1,704 | NONE | null | ### System Info
- `transformers` version: 4.35.2
- Platform: Linux-5.15.0-83-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.19.1
- Safetensors version: 0.4.0
- Accelerate version: 0.24.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@amyeroberts @pacman100 @muellerz
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
[https://colab.research.google.com/drive/1ugxSI63fQd7YvO4IX5H9YrxyY7NDu2il?usp=sharing](https://colab.research.google.com/drive/1ugxSI63fQd7YvO4IX5H9YrxyY7NDu2il?usp=sharing)
### Expected behavior
I am trying to finetune a ZeroShotImageClassification model, particularly `geolocal/StreetCLIP`. I'm encountering the following error:

I'm restricted to a batch size of 1 for the train and eval sets; otherwise I get the following error:
```Attention mask should be of size (1, 1, 8, 8), but is torch.Size([8, 1, 8, 8])```
I'm currently finetuning on 1M images across test and train sets and hope to train with 8 A40 GPUs.
I based the flow very heavily on [https://github.com/huggingface/notebooks/blob/main/examples/image_classification.ipynb](https://github.com/huggingface/notebooks/blob/main/examples/image_classification.ipynb) and [https://stackoverflow.com/questions/75802931/i-cant-fine-tune-clip-model-from-huggingface](https://stackoverflow.com/questions/75802931/i-cant-fine-tune-clip-model-from-huggingface).
Thank you for the help! Any sample code or advice is appreciated!
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27801/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27801/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27800 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27800/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27800/comments | https://api.github.com/repos/huggingface/transformers/issues/27800/events | https://github.com/huggingface/transformers/pull/27800 | 2,021,769,958 | PR_kwDOCUB6oc5g8jk- | 27,800 | Add support for PyTorch/XLA 2.1+ PJRT (Neuron SDK) | {
"login": "jeffhataws",
"id": 56947987,
"node_id": "MDQ6VXNlcjU2OTQ3OTg3",
"avatar_url": "https://avatars.githubusercontent.com/u/56947987?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jeffhataws",
"html_url": "https://github.com/jeffhataws",
"followers_url": "https://api.github.com/users/jeffhataws/followers",
"following_url": "https://api.github.com/users/jeffhataws/following{/other_user}",
"gists_url": "https://api.github.com/users/jeffhataws/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jeffhataws/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jeffhataws/subscriptions",
"organizations_url": "https://api.github.com/users/jeffhataws/orgs",
"repos_url": "https://api.github.com/users/jeffhataws/repos",
"events_url": "https://api.github.com/users/jeffhataws/events{/privacy}",
"received_events_url": "https://api.github.com/users/jeffhataws/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27800). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"We don't need this anymore as we have overwritten init_process_group within torch-neuronx to have pjrt as default. You might need this for torch-xla for other platforms.",
"@jeffhataws are we okay to close this then? Or are you saying this needs to be merged. ",
"We can close this. Thanks!"
] | 1,701 | 1,704 | 1,704 | CONTRIBUTOR | null | # What does this PR do?
In PyTorch/XLA 2.1+, `init_method="xla://"` is now needed in `dist.init_process_group` to properly initialize PJRT.
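A minimal sketch of the updated initialization (assuming `torch_xla` 2.1+ with PJRT):
```python
import torch.distributed as dist
import torch_xla.distributed.xla_backend  # noqa: F401  (registers the "xla" backend)

# Initialize the process group over PJRT.
dist.init_process_group(backend="xla", init_method="xla://")
```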
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@muellerzr and @pacman100
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27800/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27800/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27800",
"html_url": "https://github.com/huggingface/transformers/pull/27800",
"diff_url": "https://github.com/huggingface/transformers/pull/27800.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27800.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27799 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27799/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27799/comments | https://api.github.com/repos/huggingface/transformers/issues/27799/events | https://github.com/huggingface/transformers/pull/27799 | 2,021,739,465 | PR_kwDOCUB6oc5g8dQl | 27,799 | [Hot-Fix][XLA] Re-enable broken _tpu_save for XLATensors | {
"login": "yeounoh",
"id": 7146489,
"node_id": "MDQ6VXNlcjcxNDY0ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/7146489?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yeounoh",
"html_url": "https://github.com/yeounoh",
"followers_url": "https://api.github.com/users/yeounoh/followers",
"following_url": "https://api.github.com/users/yeounoh/following{/other_user}",
"gists_url": "https://api.github.com/users/yeounoh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yeounoh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yeounoh/subscriptions",
"organizations_url": "https://api.github.com/users/yeounoh/orgs",
"repos_url": "https://api.github.com/users/yeounoh/repos",
"events_url": "https://api.github.com/users/yeounoh/events{/privacy}",
"received_events_url": "https://api.github.com/users/yeounoh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I see this in the CI test run, `tests_torch`\r\n```\r\n if cls.token is None:\r\n> raise ValueError(\"Cannot run tests as secret isn't setup.\")\r\nE ValueError: Cannot run tests as secret isn't setup.\r\n```",
"Also, `run_tests` is blocked, with \"This workflow is awaiting approval from a maintainer in https://github.com/huggingface/transformers/pull/27799\"",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27799). All of your documentation changes will be reflected on that endpoint.",
"No worries, the test is skipped on main feel free to rebase! ",
"There was an issue with this recently #27578 that should be fixed by this I think "
] | 1,701 | 1,703 | 1,701 | CONTRIBUTOR | null | # What does this PR do?
This re-enables checkpointing the model on the `xla` device, which previously failed in some cases where XLATensors have an invalid storage.data_ptr(). The fix explicitly moves the model to `cpu` before checkpointing to ensure the storage is valid.
This is a *hot-fix* to unblock all failing HF model checkpointing on XLA device.
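As a rough sketch of the approach (`model` and `output_dir` below are placeholders; this is not the exact `_save_tpu` diff):
```python
# Illustrative only: move the model to CPU first so every tensor has a valid
# storage pointer before safetensors/torch serialization runs.
model_cpu = model.to("cpu")
model_cpu.save_pretrained(output_dir, safe_serialization=True)
```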
This fixes the existing `_save_tpu` in `trainer.py` implementation, which is currently broken:
```
100%|███████████████████████████████████████████| 10/10 [00:44<00:00, 4.49s/it]
[INFO|trainer.py:2821] 2023-12-01 23:04:53,098 >> Saving model checkpoint to /workspace/MNLI
[INFO|configuration_utils.py:458] 2023-12-01 23:05:09,794 >> Configuration saved in /workspace/MNLI/config.json
/usr/local/lib/python3.8/site-packages/safetensors-0.4.1rc1-py3.8-linux-x86_64.egg/safetensors/torch.py:17: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
return tensor.storage().data_ptr()
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/safetensors-0.4.1rc1-py3.8-linux-x86_64.egg/safetensors/torch.py", line 13, in storage_ptr
return tensor.untyped_storage().data_ptr()
RuntimeError: Attempted to access the data pointer on an invalid python storage.
```
With this change, it works,
```
100%|█████████████████████████████████████████████| 1/1 [00:00<00:00, 2.28it/s][INFO|trainer.py:1975] 2023-12-01 23:44:53,563 >>
Training completed. Do not forget to share your model on huggingface.co/models =)
{'train_runtime': 7.1392, 'train_samples_per_second': 1.121, 'train_steps_per_second': 0.14, 'train_loss': 1.0944234132766724, 'epoch': 1.0}
100%|█████████████████████████████████████████████| 1/1 [00:07<00:00, 7.14s/it]
[INFO|trainer.py:2821] 2023-12-01 23:45:00,259 >> Saving model checkpoint to /workspace/MNLI
[INFO|configuration_utils.py:458] 2023-12-01 23:45:18,980 >> Configuration saved in /workspace/MNLI/config.json
[INFO|modeling_utils.py:1851] 2023-12-01 23:45:20,612 >> Model weights saved in /workspace/MNLI/pytorch_model.bin
[INFO|tokenization_utils_base.py:2210] 2023-12-01 23:45:20,613 >> tokenizer config file saved in /workspace/MNLI/tokenizer_config.json
[INFO|tokenization_utils_base.py:2217] 2023-12-01 23:45:20,613 >> Special tokens file saved in /workspace/MNLI/special_tokens_map.json
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27799/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27799/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27799",
"html_url": "https://github.com/huggingface/transformers/pull/27799",
"diff_url": "https://github.com/huggingface/transformers/pull/27799.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27799.patch",
"merged_at": 1701698161000
} |
https://api.github.com/repos/huggingface/transformers/issues/27798 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27798/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27798/comments | https://api.github.com/repos/huggingface/transformers/issues/27798/events | https://github.com/huggingface/transformers/issues/27798 | 2,021,611,162 | I_kwDOCUB6oc54f1aa | 27,798 | Apparent Problem deploying to AWS SageMaker Endpoint with VQA Models | Version Incompatability? | {
"login": "buyblvd-ryan",
"id": 117312895,
"node_id": "U_kgDOBv4Nfw",
"avatar_url": "https://avatars.githubusercontent.com/u/117312895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/buyblvd-ryan",
"html_url": "https://github.com/buyblvd-ryan",
"followers_url": "https://api.github.com/users/buyblvd-ryan/followers",
"following_url": "https://api.github.com/users/buyblvd-ryan/following{/other_user}",
"gists_url": "https://api.github.com/users/buyblvd-ryan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/buyblvd-ryan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/buyblvd-ryan/subscriptions",
"organizations_url": "https://api.github.com/users/buyblvd-ryan/orgs",
"repos_url": "https://api.github.com/users/buyblvd-ryan/repos",
"events_url": "https://api.github.com/users/buyblvd-ryan/events{/privacy}",
"received_events_url": "https://api.github.com/users/buyblvd-ryan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @buyblvd-ryan 👋 \r\n\r\nTo be able to help, we would need a reproducible script using `transformers` (and not sagemaker, to rule out that it is not a sagemaker issue) :)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,701 | 1,704 | 1,704 | NONE | null | ### System Info
Transformers: 4.26.0
PyTorch: 1.13.1
Python: 3.9
Model: Salesforce/blip-vqa-capfilt-large
Deployment code (default from Hugging Face):
```python
import sagemaker
import boto3
from sagemaker.huggingface import HuggingFaceModel
try:
role = sagemaker.get_execution_role()
except ValueError:
iam = boto3.client('iam')
role = iam.get_role(RoleName='sagemaker_execution_role')['Role']['Arn']
# Hub Model configuration. https://huggingface.co/models
hub = {
'HF_MODEL_ID':'Salesforce/blip-vqa-capfilt-large',
'HF_TASK':'visual-question-answering'
}
# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
transformers_version='4.26.0',
pytorch_version='1.13.1',
py_version='py39',
env=hub,
role=role,
)
# deploy model to SageMaker Inference
predictor = huggingface_model.deploy(
initial_instance_count=1, # number of instances
instance_type='ml.m5.xlarge' # ec2 instance type
)
```
### Who can help?
@amyeroberts @Narsil @gante
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Spin up JupyterLab notebook on AWS Sagemaker
2. Run default deployment code above
3. In SageMaker Studio, navigate to Inference > Endpoints, select the endpoint created and Test Inference with the JSON below
```
{
"inputs": {
"question": "[question]"
"image": "[url to remote image]"
}
```
These steps have been tested repeatedly with numerous different inputs. I was unable to test with a B64 encoded image due to another issue already reported. The models, methods, questions, and image URLs used to test work without issue locally and in a JupyterLab notebook, but using up-to-date versions of Python (3.11), Torch (2.1.1), and Transformers (4.35.2). SageMaker doesn't support configurations besides those specified in the default deployment script (3.9/1.13.1/4.26).
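For reference, a minimal transformers-only script along these lines works locally for me (the question and image below are placeholders; the URL is just a public sample image):
```python
from transformers import pipeline

vqa = pipeline("visual-question-answering", model="Salesforce/blip-vqa-capfilt-large")
result = vqa(
    image="http://images.cocodataset.org/val2017/000000039769.jpg",  # sample image
    question="How many cats are in the picture?",
)
print(result)
```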
### Expected behavior
It shouldn't error out. Best guess is the error relates to image_processing and feature_extraction in the VisualQuestionAnsweringPipeline in the versions required by SageMaker.

| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27798/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27798/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27797 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27797/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27797/comments | https://api.github.com/repos/huggingface/transformers/issues/27797/events | https://github.com/huggingface/transformers/pull/27797 | 2,021,598,953 | PR_kwDOCUB6oc5g7-lH | 27,797 | Fix: Raise informative exception when `prefix_allowed_tokens_fn` return empty set of tokens | {
"login": "Saibo-creator",
"id": 53392976,
"node_id": "MDQ6VXNlcjUzMzkyOTc2",
"avatar_url": "https://avatars.githubusercontent.com/u/53392976?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Saibo-creator",
"html_url": "https://github.com/Saibo-creator",
"followers_url": "https://api.github.com/users/Saibo-creator/followers",
"following_url": "https://api.github.com/users/Saibo-creator/following{/other_user}",
"gists_url": "https://api.github.com/users/Saibo-creator/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Saibo-creator/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Saibo-creator/subscriptions",
"organizations_url": "https://api.github.com/users/Saibo-creator/orgs",
"repos_url": "https://api.github.com/users/Saibo-creator/repos",
"events_url": "https://api.github.com/users/Saibo-creator/events{/privacy}",
"received_events_url": "https://api.github.com/users/Saibo-creator/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The failing tests seem to be irrelevant ",
"Rebase on main will help, It will get fix there ",
"@ArthurZucker Rebased ",
"(I'd commit Arthur's suggestion, as it saves a redundant computation :) )",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27797). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"> (I'd commit Arthur's suggestion, as it saves a redundant computation :) )\r\n\r\n@gante \r\nI somehow missed the suggestion from Arthur. Thanks for reminding me! It's now committed :)"
] | 1,701 | 1,702 | 1,702 | CONTRIBUTOR | null | # What does this PR do?
Fixes #27676 #22890
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? https://github.com/huggingface/transformers/issues/22890 and https://github.com/huggingface/transformers/issues/27676
- [x] Did you make sure to update the documentation with your changes? I don't think there is anything to update in the doc
- [x] Did you write any new necessary tests?
## Who can review?
@gante @ArthurZucker
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27797/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27797/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27797",
"html_url": "https://github.com/huggingface/transformers/pull/27797",
"diff_url": "https://github.com/huggingface/transformers/pull/27797.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27797.patch",
"merged_at": 1702031149000
} |
https://api.github.com/repos/huggingface/transformers/issues/27796 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27796/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27796/comments | https://api.github.com/repos/huggingface/transformers/issues/27796/events | https://github.com/huggingface/transformers/pull/27796 | 2,021,409,770 | PR_kwDOCUB6oc5g7VKY | 27,796 | Generate: All logits processors are documented and have examples | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27796). All of your documentation changes will be reflected on that endpoint."
] | 1,701 | 1,701 | 1,701 | MEMBER | null | # What does this PR do?
This PR retouches the docstrings of the logits processors such that:
a) they all have a short description of what they do
b) there is a clear example for each class, ideally showcasing its usefulness (note: this will also means the class is doctested)
Moreover, to ensure this information is found, cross references to these classes were added.
Related issue #24575 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27796/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27796/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27796",
"html_url": "https://github.com/huggingface/transformers/pull/27796",
"diff_url": "https://github.com/huggingface/transformers/pull/27796.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27796.patch",
"merged_at": 1701961895000
} |
https://api.github.com/repos/huggingface/transformers/issues/27795 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27795/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27795/comments | https://api.github.com/repos/huggingface/transformers/issues/27795/events | https://github.com/huggingface/transformers/pull/27795 | 2,021,340,011 | PR_kwDOCUB6oc5g7GV_ | 27,795 | [Whisper] Fix doctest in timestamp logits processor | {
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27795). All of your documentation changes will be reflected on that endpoint."
] | 1,701 | 1,701 | 1,701 | CONTRIBUTOR | null | Doctest for Whisper timestamp logits processor was reported as failing in https://github.com/huggingface/transformers/pull/27485#issuecomment-1810876813. This PR corrects the code example. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27795/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27795/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27795",
"html_url": "https://github.com/huggingface/transformers/pull/27795",
"diff_url": "https://github.com/huggingface/transformers/pull/27795.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27795.patch",
"merged_at": 1701690502000
} |
https://api.github.com/repos/huggingface/transformers/issues/27794 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27794/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27794/comments | https://api.github.com/repos/huggingface/transformers/issues/27794/events | https://github.com/huggingface/transformers/pull/27794 | 2,021,339,292 | PR_kwDOCUB6oc5g7GML | 27,794 | Proper build() methods for TF | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27794). All of your documentation changes will be reflected on that endpoint.",
"oh my god the tests pass i didn't think this was ever going to happen",
"Thanks! There's one thing left to add - some layers are buildable with int shapes in Keras 2, but that always fails in Keras 3. I'm going to do a quick replacement so that those become actual shapes (with extra `None` dimensions) - it should have no impact on Keras 2 either way.",
"Quick update: The build shapes all have proper ranks instead of just being ints now, but our old method of controlling names with `tf.name_scope()` isn't working for Keras 3 - I've asked Chollet what the recommended solution there is",
"Got a solution, but I think it fits better in another PR! I'm gonna merge this one for now and see what shakes out in the nightly CI, while I work on the next phase of Keras 3 compatibility.",
"Not sure I get why this was merged? "
] | 1,701 | 1,702 | 1,702 | MEMBER | null | TensorFlow builds weights **lazily**. This means that layers do not have an `input_dim` argument and do not create weight tensors in the model `__init__()`. Instead, the layers wait until their `build()` method is called, which usually happens implicitly the first time the layer receives an input. Layers use the shape of the first input they see, or the value explicitly passed to their `build()` method, to infer their input dim and build their weight tensors.
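To illustrate the mechanism (a generic Keras example, not code from this PR), a layer with an explicit `build()` creates its weights from the input shape instead of in `__init__()`:
```python
import tensorflow as tf


class DenseLike(tf.keras.layers.Layer):
    def __init__(self, units, **kwargs):
        super().__init__(**kwargs)
        self.units = units

    def build(self, input_shape):
        # The input dim is inferred from `input_shape`; weights are created
        # here, not in __init__().
        self.kernel = self.add_weight("kernel", shape=(input_shape[-1], self.units))
        self.bias = self.add_weight("bias", shape=(self.units,))
        super().build(input_shape)

    def call(self, inputs):
        return tf.matmul(inputs, self.kernel) + self.bias
```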
Up until now, almost none of our TF models had explicit `build()` methods. This meant that weights were built implicitly when the model was called, which required lots of tiny hacks all over the codebase:
- We had to do an entire forward pass *inside `from_pretrained()`* to prepare the model weights so that we could load a checkpoint
- We had to be careful about call stacks and name scopes to ensure that models did not accidentally build themselves inside an existing call/name context and destroy their weight names. This meant our code had to sniff the existing TF call stack, which (among many other issues) completely breaks Keras 3.
- Several models had explicit calls to `tf.name_scope()` inside their forward pass (!) to control their weight names, which only worked because the weights were always built there
This had always been a big chunk of tech debt that I'd wanted to fix, but it was such a large task that I never really had time. However, with Keras 3 approaching, it became quite urgent. I tried getting GPT-4 to figure out the `build()` shapes automatically, but it generally failed, so I had to resort to using `ast` and static analysis of the PyTorch and TF modeling files to cross-match layers from TF to PyTorch code, using the input size arguments from PyTorch to automatically create and populate new `build()` methods, and then did a manual pass afterwards to fix up the remaining issues.
As a result, after this PR:
- All models now have correct `build()` methods
- Weight names are correct even if models are called weirdly, because we can now tightly control the `build()` hierarchy
- Probably the single biggest source of TF bugs we had is gone
- No more forward passes when building models, including with `from_pretrained()`! Should make model loading significantly faster, especially on CPU, and should help in the CI.
- A major Keras 3 blocker is removed.
While I was working on this PR, I also encountered some other issues that I fixed in passing:
- Added a `build_in_name_scope()` method and refactored some tests/methods to use it instead. Calling this method yields the same name hierarchy as implicitly calling `build()` when doing a forward pass, whereas directly calling `model.build()` does not (because TF enters a name_scope in `__call__()`)
- Updated TFAdaptivePool2D for Data2Vec, should massively improve model performance
- Fixed some details in the `TFSequenceSummary` and `TFConv1D` classes. These are mostly used by older models.
**Note to reviewers:** Most of this PR was generated automatically, and just consists of walls of new `build()` methods. You can generally trust that these methods are correct so long as the CI is green, so you hopefully don't have to read them all - there's >11,000 lines of them! The main things to review are the changes in core files like `modeling_tf_utils.py`, the new `build_in_name_scope()` method, etc. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27794/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27794/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27794",
"html_url": "https://github.com/huggingface/transformers/pull/27794",
"diff_url": "https://github.com/huggingface/transformers/pull/27794.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27794.patch",
"merged_at": 1702567050000
} |
https://api.github.com/repos/huggingface/transformers/issues/27793 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27793/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27793/comments | https://api.github.com/repos/huggingface/transformers/issues/27793/events | https://github.com/huggingface/transformers/pull/27793 | 2,021,250,141 | PR_kwDOCUB6oc5g6y5T | 27,793 | Fix `Owlv2ModelIntegrationTest::test_inference_object_detection` | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27793). All of your documentation changes will be reflected on that endpoint."
] | 1,701 | 1,701 | 1,701 | COLLABORATOR | null | # What does this PR do?
device issue + update expected values | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27793/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27793/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27793",
"html_url": "https://github.com/huggingface/transformers/pull/27793",
"diff_url": "https://github.com/huggingface/transformers/pull/27793.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27793.patch",
"merged_at": 1701679522000
} |
https://api.github.com/repos/huggingface/transformers/issues/27792 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27792/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27792/comments | https://api.github.com/repos/huggingface/transformers/issues/27792/events | https://github.com/huggingface/transformers/pull/27792 | 2,021,226,337 | PR_kwDOCUB6oc5g6tsZ | 27,792 | Fix `TvpModelIntegrationTests` | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27792). All of your documentation changes will be reflected on that endpoint."
] | 1,701 | 1,701 | 1,701 | COLLABORATOR | null | # What does this PR do?
Device issue. Missed in #27695 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27792/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27792/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27792",
"html_url": "https://github.com/huggingface/transformers/pull/27792",
"diff_url": "https://github.com/huggingface/transformers/pull/27792.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27792.patch",
"merged_at": 1701679243000
} |
https://api.github.com/repos/huggingface/transformers/issues/27791 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27791/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27791/comments | https://api.github.com/repos/huggingface/transformers/issues/27791/events | https://github.com/huggingface/transformers/pull/27791 | 2,021,169,385 | PR_kwDOCUB6oc5g6g7p | 27,791 | Add "Fill-in-Middle" pipeline | {
"login": "tanaymeh",
"id": 26519539,
"node_id": "MDQ6VXNlcjI2NTE5NTM5",
"avatar_url": "https://avatars.githubusercontent.com/u/26519539?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tanaymeh",
"html_url": "https://github.com/tanaymeh",
"followers_url": "https://api.github.com/users/tanaymeh/followers",
"following_url": "https://api.github.com/users/tanaymeh/following{/other_user}",
"gists_url": "https://api.github.com/users/tanaymeh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tanaymeh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tanaymeh/subscriptions",
"organizations_url": "https://api.github.com/users/tanaymeh/orgs",
"repos_url": "https://api.github.com/users/tanaymeh/repos",
"events_url": "https://api.github.com/users/tanaymeh/events{/privacy}",
"received_events_url": "https://api.github.com/users/tanaymeh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey! Were you waiting for a review or do you not have time to work on this? ",
"> Hey! Were you waiting for a review or do you not have time to work on this? \n\n@ArthurZucker I am working on this but I accidentally opened a wrong PR, fixing and opening a new one today.\n\nSorry for the inconvenience :)"
] | 1,701 | 1,701 | 1,701 | CONTRIBUTOR | null | # What does this PR do?
This PR adds the Fill-in-Middle pipeline to 🤗 transformers.
The FIM objective was proposed in [Efficient Training of Language Models to Fill in the Middle](https://arxiv.org/abs/2207.14255). The authors showed that autoregressive language models can learn to infill text after applying a straightforward transformation to the dataset, which simply moves a span of text from the middle of a document to its end.
As discussed in #27059
## Who can review?
@sayakpaul @ArthurZucker | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27791/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27791/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27791",
"html_url": "https://github.com/huggingface/transformers/pull/27791",
"diff_url": "https://github.com/huggingface/transformers/pull/27791.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27791.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27790 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27790/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27790/comments | https://api.github.com/repos/huggingface/transformers/issues/27790/events | https://github.com/huggingface/transformers/pull/27790 | 2,021,043,913 | PR_kwDOCUB6oc5g6Ffv | 27,790 | [CLAP] Replace hard-coded batch size to enable dynamic ONNX export | {
"login": "xenova",
"id": 26504141,
"node_id": "MDQ6VXNlcjI2NTA0MTQx",
"avatar_url": "https://avatars.githubusercontent.com/u/26504141?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xenova",
"html_url": "https://github.com/xenova",
"followers_url": "https://api.github.com/users/xenova/followers",
"following_url": "https://api.github.com/users/xenova/following{/other_user}",
"gists_url": "https://api.github.com/users/xenova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xenova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xenova/subscriptions",
"organizations_url": "https://api.github.com/users/xenova/orgs",
"repos_url": "https://api.github.com/users/xenova/repos",
"events_url": "https://api.github.com/users/xenova/events{/privacy}",
"received_events_url": "https://api.github.com/users/xenova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27790). All of your documentation changes will be reflected on that endpoint.",
"Added back the docstring 👍 - ready to merge imo? :) ",
"Yep just make style and rebase should do the trick for the CI",
"Done 👍 (failed style seemed to be unrelated; should be fixed now)\r\n\r\n",
"Thanks 👍🏻 "
] | 1,701 | 1,702 | 1,702 | CONTRIBUTOR | null | # What does this PR do?
This PR unblocks adding ONNX export support for CLAP models in Optimum (see [PR](https://github.com/huggingface/optimum/pull/1552)), by replacing hard-coded batch size in the `window_reverse` function. This was [already done in the past for other similar models](https://github.com/huggingface/transformers/commit/b651efe59ea506d38173e3a60a4228e7e74719f9) (by @lewtun), but since then, [CLAP was added](https://github.com/huggingface/transformers/pull/21370) (by @ArthurZucker), and this change was not brought across there.
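The change is roughly of this shape (a hedged sketch of a Swin-style `window_reverse`, not the exact CLAP diff):
```python
def window_reverse(windows, window_size, height, width):
    # Infer the batch size from the tensor shape instead of hard-coding it,
    # so the ONNX export can keep the batch axis dynamic.
    num_channels = windows.shape[-1]
    batch_size = windows.shape[0] // ((height // window_size) * (width // window_size))
    windows = windows.view(
        batch_size, height // window_size, width // window_size, window_size, window_size, num_channels
    )
    windows = windows.permute(0, 1, 3, 2, 4, 5).contiguous()
    return windows.view(batch_size, height, width, num_channels)
```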
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes https://github.com/huggingface/optimum/pull/1552#issuecomment-1836233113
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker @lewtun @echarlaix
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27790/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27790/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27790",
"html_url": "https://github.com/huggingface/transformers/pull/27790",
"diff_url": "https://github.com/huggingface/transformers/pull/27790.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27790.patch",
"merged_at": 1702114779000
} |
https://api.github.com/repos/huggingface/transformers/issues/27789 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27789/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27789/comments | https://api.github.com/repos/huggingface/transformers/issues/27789/events | https://github.com/huggingface/transformers/issues/27789 | 2,020,694,258 | I_kwDOCUB6oc54cVjy | 27,789 | _prepare_4d_causal_attention_mask doesn't work with torch.compile | {
"login": "PhilJd",
"id": 16101605,
"node_id": "MDQ6VXNlcjE2MTAxNjA1",
"avatar_url": "https://avatars.githubusercontent.com/u/16101605?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PhilJd",
"html_url": "https://github.com/PhilJd",
"followers_url": "https://api.github.com/users/PhilJd/followers",
"following_url": "https://api.github.com/users/PhilJd/following{/other_user}",
"gists_url": "https://api.github.com/users/PhilJd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PhilJd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PhilJd/subscriptions",
"organizations_url": "https://api.github.com/users/PhilJd/orgs",
"repos_url": "https://api.github.com/users/PhilJd/repos",
"events_url": "https://api.github.com/users/PhilJd/events{/privacy}",
"received_events_url": "https://api.github.com/users/PhilJd/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "fxmarty",
"id": 9808326,
"node_id": "MDQ6VXNlcjk4MDgzMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxmarty",
"html_url": "https://github.com/fxmarty",
"followers_url": "https://api.github.com/users/fxmarty/followers",
"following_url": "https://api.github.com/users/fxmarty/following{/other_user}",
"gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions",
"organizations_url": "https://api.github.com/users/fxmarty/orgs",
"repos_url": "https://api.github.com/users/fxmarty/repos",
"events_url": "https://api.github.com/users/fxmarty/events{/privacy}",
"received_events_url": "https://api.github.com/users/fxmarty/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "fxmarty",
"id": 9808326,
"node_id": "MDQ6VXNlcjk4MDgzMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxmarty",
"html_url": "https://github.com/fxmarty",
"followers_url": "https://api.github.com/users/fxmarty/followers",
"following_url": "https://api.github.com/users/fxmarty/following{/other_user}",
"gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions",
"organizations_url": "https://api.github.com/users/fxmarty/orgs",
"repos_url": "https://api.github.com/users/fxmarty/repos",
"events_url": "https://api.github.com/users/fxmarty/events{/privacy}",
"received_events_url": "https://api.github.com/users/fxmarty/received_events",
"type": "User",
"site_admin": false
}
] | [
"Ow nice catch and nice reproducer! 🤗 \r\nI'll try to have a look, otherwise pinging @fxmarty ",
"Hi @PhilJd the issue is fixed in https://github.com/huggingface/transformers/pull/27868. This a bug in PyTorch 2.0/2.1, that is solved in pytorch nightly.",
"Thanks a lot! :)"
] | 1,701 | 1,702 | 1,702 | NONE | null | ### System Info
Transformers version: 4.35.2
Pytorch version: 2.1.1+cu118
Python version: 3.11
### Who can help?
Tagging @ArthurZucker as the original author
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Code to reproduce
```python
import torch
import torch.nn as nn
from transformers.modeling_attn_mask_utils import _prepare_4d_causal_attention_mask
# LLama models use this wrapper but this does not fix the compilation issue.
# if is_torch_fx_available():
# _prepare_4d_causal_attention_mask = torch.fx.wrap(_prepare_4d_causal_attention_mask)
class Model(nn.Module):
def forward(self, inputs_embeds):
batch_size, seq_length, _ = inputs_embeds.shape
past_key_values_length = 10
attention_mask = _prepare_4d_causal_attention_mask(
None, (batch_size, seq_length), inputs_embeds, past_key_values_length
)
return attention_mask
model = Model()
model = torch.compile(model, fullgraph=True)
model(torch.ones([1,5, 32]))
```
Gives a torch.compile error:
```
[...]
return SourcelessBuilder()(tx, val).add_options(options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/.virtualenvs/pytorch311/lib/python3.11/site-packages/torch/_dynamo/variables/builder.py", line 1650, in __call__
unimplemented(f"Unexpected type in sourceless builder {type(value)}")
File "/home/ubuntu/.virtualenvs/pytorch311/lib/python3.11/site-packages/torch/_dynamo/exc.py", line 172, in unimplemented
raise Unsupported(msg)
torch._dynamo.exc.Unsupported: Unexpected type in sourceless builder <class 'torch.dtype'>
from user code:
File "<stdin>", line 6, in forward
File "/home/ubuntu/.virtualenvs/pytorch311/lib/python3.11/site-packages/transformers/modeling_attn_mask_utils.py", line 197, in _prepare_4d_causal_attention_mask
attention_mask = attn_mask_converter.to_causal_4d(
```
Note that I could reproduce this by, e.g., trying to compile a Llama 2 model.
Removing the dtype argument and just setting dtype=torch.float32 in the function body makes it compile.
I'm not sure whether this is a bug in the torch compiler or unintended usage in transformers, so happy to open an issue there as well if required :)
### Expected behavior
No errors during torch.compile stge.
Let me know if there's something I can do to help debug! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27789/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27789/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27788 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27788/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27788/comments | https://api.github.com/repos/huggingface/transformers/issues/27788/events | https://github.com/huggingface/transformers/pull/27788 | 2,020,666,377 | PR_kwDOCUB6oc5g4yvp | 27,788 | Fix typo in max_length deprecation warnings | {
"login": "siegeln",
"id": 7674950,
"node_id": "MDQ6VXNlcjc2NzQ5NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/7674950?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/siegeln",
"html_url": "https://github.com/siegeln",
"followers_url": "https://api.github.com/users/siegeln/followers",
"following_url": "https://api.github.com/users/siegeln/following{/other_user}",
"gists_url": "https://api.github.com/users/siegeln/gists{/gist_id}",
"starred_url": "https://api.github.com/users/siegeln/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/siegeln/subscriptions",
"organizations_url": "https://api.github.com/users/siegeln/orgs",
"repos_url": "https://api.github.com/users/siegeln/repos",
"events_url": "https://api.github.com/users/siegeln/events{/privacy}",
"received_events_url": "https://api.github.com/users/siegeln/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@ArthurZucker in this case they are not interchangeable, `MaxLengthCriteria` expects `max_length`. In `generate`, `max_new_tokens` is converted into its `max_length` equivalent before `MaxLengthCriteria` is initialized :)",
"Ah right! Missed that, thanks 😉 ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27788). All of your documentation changes will be reflected on that endpoint."
] | 1,701 | 1,701 | 1,701 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Specifying the `max_length` parameter in some text generation functions results in a deprecation warning. Currently, these warnings suggest fixing it with `stopping_criteria=StoppingCriteriaList(MaxLengthCriteria(max_length=max_length))`; however, this raises `TypeError: 'MaxLengthCriteria' object is not iterable`, because `StoppingCriteriaList` expects an iterable.
This PR fixes this typo.
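For clarity, the corrected snippet the warning should point to wraps the criterion in a list, since `StoppingCriteriaList` expects an iterable (`model` and `input_ids` below are placeholders):
```python
from transformers import MaxLengthCriteria, StoppingCriteriaList

stopping_criteria = StoppingCriteriaList([MaxLengthCriteria(max_length=20)])
outputs = model.greedy_search(input_ids, stopping_criteria=stopping_criteria)
```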
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27788/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27788/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27788",
"html_url": "https://github.com/huggingface/transformers/pull/27788",
"diff_url": "https://github.com/huggingface/transformers/pull/27788.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27788.patch",
"merged_at": 1701672110000
} |
https://api.github.com/repos/huggingface/transformers/issues/27787 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27787/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27787/comments | https://api.github.com/repos/huggingface/transformers/issues/27787/events | https://github.com/huggingface/transformers/pull/27787 | 2,020,611,908 | PR_kwDOCUB6oc5g4m2M | 27,787 | [PatchTST] make tests more robust | {
"login": "kashif",
"id": 8100,
"node_id": "MDQ6VXNlcjgxMDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8100?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kashif",
"html_url": "https://github.com/kashif",
"followers_url": "https://api.github.com/users/kashif/followers",
"following_url": "https://api.github.com/users/kashif/following{/other_user}",
"gists_url": "https://api.github.com/users/kashif/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kashif/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kashif/subscriptions",
"organizations_url": "https://api.github.com/users/kashif/orgs",
"repos_url": "https://api.github.com/users/kashif/repos",
"events_url": "https://api.github.com/users/kashif/events{/privacy}",
"received_events_url": "https://api.github.com/users/kashif/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,701 | 1,704 | 1,704 | CONTRIBUTOR | null | # What does this PR do?
fix the fragile Slow CI test | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27787/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27787/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27787",
"html_url": "https://github.com/huggingface/transformers/pull/27787",
"diff_url": "https://github.com/huggingface/transformers/pull/27787.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27787.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27786 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27786/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27786/comments | https://api.github.com/repos/huggingface/transformers/issues/27786/events | https://github.com/huggingface/transformers/issues/27786 | 2,020,406,118 | I_kwDOCUB6oc54bPNm | 27,786 | A100 makes too large check point and slow learning | {
"login": "alexxony",
"id": 63861370,
"node_id": "MDQ6VXNlcjYzODYxMzcw",
"avatar_url": "https://avatars.githubusercontent.com/u/63861370?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alexxony",
"html_url": "https://github.com/alexxony",
"followers_url": "https://api.github.com/users/alexxony/followers",
"following_url": "https://api.github.com/users/alexxony/following{/other_user}",
"gists_url": "https://api.github.com/users/alexxony/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alexxony/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alexxony/subscriptions",
"organizations_url": "https://api.github.com/users/alexxony/orgs",
"repos_url": "https://api.github.com/users/alexxony/repos",
"events_url": "https://api.github.com/users/alexxony/events{/privacy}",
"received_events_url": "https://api.github.com/users/alexxony/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey 🤗 thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discuss.huggingface.co/) instead? I'm sure the community will be of help!\r\n\r\nThanks!",
"> Hey 🤗 thanks for opening an issue! We try to keep the github issues for bugs/feature requests. Could you ask your question on the [forum](https://discuss.huggingface.co/) instead? I'm sure the community will be of help!\r\n> \r\n> Thanks!\r\n\r\nI uploaded, but can you help me with your code running?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,701 | 1,704 | 1,704 | NONE | null | ### System Info
transformers 4.36.0
Python 3.8.10
Driver Version: 535.104.12
CUDA Version: 12.2
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
best_state_dict = fit(
model=model,
train_loader=train_loader,
valid_loader=valid_loader,
max_epoch=config.max_epoch,
lr=config.lr,
gradient_accumulate_step=config.gradient_accumulate_step,
early_stop_patience=config.early_stop_patience,
)
model.load_state_dict(best_state_dict)
model.save_pretrained(os.path.join(config.save_dir, "best_model"))
feature_extractor.save_pretrained(os.path.join(config.save_dir, "best_model"))
### Expected behavior
I've run an audio classification script on a 4090, an H100, and an A100.
On the 4090 and H100 the code ran well, the checkpoints were a reasonable size, and training was fast.
However, when running on 4×A100,
it produced checkpoints that were far too large and training was far too slow; I thought this was because there is **no** parallel-processing code.
So I made a container with a single A100 GPU,
but it showed the same problems.
Here are my code and data links.
Can you help me get this running on the A100?
Data
https://drive.google.com/file/d/1tKNgHiy-b9_oL8hWG4vpDKePqAv8GHNc/view?usp=drive_link
Code
https://drive.google.com/file/d/1zU0UziwtI8SN7PJD35E7-Tg1NSKnYbSr/view?usp=drive_link | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27786/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27786/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27785 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27785/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27785/comments | https://api.github.com/repos/huggingface/transformers/issues/27785/events | https://github.com/huggingface/transformers/issues/27785 | 2,020,323,106 | I_kwDOCUB6oc54a68i | 27,785 | max_new_tokens is set, but longer output is generated | {
"login": "yuqie",
"id": 34560542,
"node_id": "MDQ6VXNlcjM0NTYwNTQy",
"avatar_url": "https://avatars.githubusercontent.com/u/34560542?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yuqie",
"html_url": "https://github.com/yuqie",
"followers_url": "https://api.github.com/users/yuqie/followers",
"following_url": "https://api.github.com/users/yuqie/following{/other_user}",
"gists_url": "https://api.github.com/users/yuqie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yuqie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yuqie/subscriptions",
"organizations_url": "https://api.github.com/users/yuqie/orgs",
"repos_url": "https://api.github.com/users/yuqie/repos",
"events_url": "https://api.github.com/users/yuqie/events{/privacy}",
"received_events_url": "https://api.github.com/users/yuqie/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You are not setting the correct parameter, what you are looking for is `max_length` 🤗 ",
"Hi @ArthurZucker , just a doubt....in the documentation, it is written that they both serve the same purpose or am i missing something. Would be happy to be corrected.\r\nShould I approach the forum for this doubt?\r\n<img width=\"443\" alt=\"image\" src=\"https://github.com/huggingface/transformers/assets/56391795/fe178fa0-2f0e-4423-90ff-4f6aa9a18fe9\">\r\n",
"> You are not setting the correct parameter, what you are looking for is `max_length` 🤗\r\n\r\n`max_length` works well, thx. \r\nAs @drunkeninja42 mentioned above, I am also confused with the parameters `max_length` and `max_new_tokens`, what the difference and when should `max_new_tokens` be used?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"`max_length` means input_length + generated_tokens < max_length, while max_new_tokens mean only generated_tokens < max_new_tokens"
] | 1,701 | 1,704 | 1,704 | NONE | null | Hi, I use llama2-7b and set `max_new_tokens=1024`. When I test with several samples, some outputs longer than 1024 tokens are obtained. I have no idea why this happens; could anyone help me?
```
llm = AutoModelForCausalLM.from_pretrained(model, torch_dtype=torch.float16, trust_remote_code=trust_remote_code)
tokenizer.pad_token = tokenizer.eos_token
llm = llm.cuda()
output_num_tokens = []
for i in tqdm(range(len(requests))):
prompt = requests[i]
# Generate the sequences.
input_ids = tokenizer(prompt, return_tensors="pt",padding=True).input_ids
llm_outputs = llm.generate(
input_ids=input_ids.cuda(),
do_sample=not use_beam_search,
num_return_sequences=1,
temperature=1.0,
top_p=1.0,
use_cache=True,
max_new_tokens=1024,
)
tokenizer.decode(llm_outputs[0], skip_special_tokens=True)
output_num_tokens.append(len(llm_outputs[0]))
print(output_num_tokens)
```
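A note on the counts below (this snippet is illustrative and not part of the original script): `len(llm_outputs[0])` counts the prompt tokens plus the newly generated ones, so the totals can exceed `max_new_tokens`. A sketch of counting only the new tokens:
```
# llm_outputs[0] contains prompt tokens + generated tokens for a decoder-only model,
# so subtract the prompt length to get only the newly generated tokens.
num_new_tokens = len(llm_outputs[0]) - input_ids.shape[-1]
```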
The results are as follows:
```
[1062, 664, 219, 553, 1042, 1001, 313, 1128, 584, 984, 330, 710, 1415, 1429, 1526, 362, 906, 642, 1036, 207, 111, 394, 561, 672, 407, 223, 849, 869, 1675, 1170]
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27785/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27785/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27784 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27784/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27784/comments | https://api.github.com/repos/huggingface/transformers/issues/27784/events | https://github.com/huggingface/transformers/issues/27784 | 2,020,239,917 | I_kwDOCUB6oc54amot | 27,784 | Is there any difference between hf-implemented llama2 and facebook-implemented llama2 ? | {
"login": "artetaout",
"id": 128046886,
"node_id": "U_kgDOB6HXJg",
"avatar_url": "https://avatars.githubusercontent.com/u/128046886?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/artetaout",
"html_url": "https://github.com/artetaout",
"followers_url": "https://api.github.com/users/artetaout/followers",
"following_url": "https://api.github.com/users/artetaout/following{/other_user}",
"gists_url": "https://api.github.com/users/artetaout/gists{/gist_id}",
"starred_url": "https://api.github.com/users/artetaout/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/artetaout/subscriptions",
"organizations_url": "https://api.github.com/users/artetaout/orgs",
"repos_url": "https://api.github.com/users/artetaout/repos",
"events_url": "https://api.github.com/users/artetaout/events{/privacy}",
"received_events_url": "https://api.github.com/users/artetaout/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey 🤗 thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discuss.huggingface.co/) instead? I'm sure the community will be of help!\r\n\r\nThanks!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,701 | 1,704 | 1,704 | NONE | null | In my view, in the HF-implemented llama2 the "intermediate_size" is 11008, but in the Facebook-implemented llama2 it's 4 * hidden_size?
why?
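For illustration, a sketch of the feed-forward sizing arithmetic used in the reference Llama code (an assumption based on the public `llama` repository, with `multiple_of=256` for the 7B config), which lands on the same 11008:
```python
dim = 4096                                   # hidden_size of llama2-7b
hidden_dim = 4 * dim                         # start from 4 * hidden_size
hidden_dim = int(2 * hidden_dim / 3)         # SwiGLU keeps roughly 2/3 of it
multiple_of = 256
hidden_dim = multiple_of * ((hidden_dim + multiple_of - 1) // multiple_of)  # round up
print(hidden_dim)                            # 11008
```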
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27784/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27784/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27783 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27783/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27783/comments | https://api.github.com/repos/huggingface/transformers/issues/27783/events | https://github.com/huggingface/transformers/issues/27783 | 2,020,132,961 | I_kwDOCUB6oc54aMhh | 27,783 | `ZeroShotAudioClassificationPipeline` documentation includes example to duplicate hypothesis template | {
"login": "xenova",
"id": 26504141,
"node_id": "MDQ6VXNlcjI2NTA0MTQx",
"avatar_url": "https://avatars.githubusercontent.com/u/26504141?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xenova",
"html_url": "https://github.com/xenova",
"followers_url": "https://api.github.com/users/xenova/followers",
"following_url": "https://api.github.com/users/xenova/following{/other_user}",
"gists_url": "https://api.github.com/users/xenova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xenova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xenova/subscriptions",
"organizations_url": "https://api.github.com/users/xenova/orgs",
"repos_url": "https://api.github.com/users/xenova/repos",
"events_url": "https://api.github.com/users/xenova/events{/privacy}",
"received_events_url": "https://api.github.com/users/xenova/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"FWIW, the output is near-identical, but this might cause unexpected issues later down the line.",
"Yeah, not super clear what we should do, maybe regex the hypothesis template? Updating the doc is a good start \r\n",
"I'll open PRs for this and Whisper ",
"Oops opening this today! 🤗 ",
"Happy to take care of this Arthur or want myself or @ylacombe to take a look?",
"sure, completely forgot about it! "
] | 1,701 | 1,707 | null | CONTRIBUTOR | null | ### System Info
- `transformers` version: 4.36.0.dev0
- Platform: Linux-5.15.120+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.0
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0+cu118 (False)
- Tensorflow version (GPU?): 2.14.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.7.5 (cpu)
- Jax version: 0.4.20
- JaxLib version: 0.4.20
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@ArthurZucker (original implementer) @Vaibhavs10 @sanchit-gandhi (audio team)
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Visit [documentation](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.ZeroShotAudioClassificationPipeline)
2. Example is:
```py
from transformers import pipeline
from datasets import load_dataset
dataset = load_dataset("ashraq/esc50")
audio = next(iter(dataset["train"]["audio"]))["array"]
classifier = pipeline(task="zero-shot-audio-classification", model="laion/clap-htsat-unfused")
classifier(audio, candidate_labels=["Sound of a dog", "Sound of vaccum cleaner"])
```
3. However, this is duplicated due to [this](https://github.com/huggingface/transformers/blob/29f1aee3b6c560182415fcb9e2238125e2f5b29c/src/transformers/pipelines/zero_shot_audio_classification.py#L118-L122) portion of code:
https://github.com/huggingface/transformers/blob/29f1aee3b6c560182415fcb9e2238125e2f5b29c/src/transformers/pipelines/zero_shot_audio_classification.py#L118-L122
4. Sentences become: `['This is a sound of Sound of a dog.', 'This is a sound of Sound of vaccum cleaner.']`
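A minimal sketch of the formatting that produces this (the default template string is an assumption; the list comprehension mirrors the linked pipeline code):
```py
hypothesis_template = "This is a sound of {}."
candidate_labels = ["Sound of a dog", "Sound of vaccum cleaner"]
sequences = [hypothesis_template.format(x) for x in candidate_labels]
# ['This is a sound of Sound of a dog.', 'This is a sound of Sound of vaccum cleaner.']
```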
### Expected behavior
Sentences should be either:
- `['Sound of a dog.', 'Sound of vaccum cleaner.']`
- `['This is a sound of a dog.', 'This is a sound of vaccum cleaner.']` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27783/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27783/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27782 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27782/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27782/comments | https://api.github.com/repos/huggingface/transformers/issues/27782/events | https://github.com/huggingface/transformers/pull/27782 | 2,020,112,433 | PR_kwDOCUB6oc5g25Oa | 27,782 | [SeamlessM4Tv2] Fix links in README | {
"login": "xenova",
"id": 26504141,
"node_id": "MDQ6VXNlcjI2NTA0MTQx",
"avatar_url": "https://avatars.githubusercontent.com/u/26504141?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xenova",
"html_url": "https://github.com/xenova",
"followers_url": "https://api.github.com/users/xenova/followers",
"following_url": "https://api.github.com/users/xenova/following{/other_user}",
"gists_url": "https://api.github.com/users/xenova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xenova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xenova/subscriptions",
"organizations_url": "https://api.github.com/users/xenova/orgs",
"repos_url": "https://api.github.com/users/xenova/repos",
"events_url": "https://api.github.com/users/xenova/events{/privacy}",
"received_events_url": "https://api.github.com/users/xenova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,701 | 1,701 | 1,701 | CONTRIBUTOR | null | # What does this PR do?
Fixes links for SeamlessM4Tv2 model in the README.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sanchit-gandhi
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27782/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27782/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27782",
"html_url": "https://github.com/huggingface/transformers/pull/27782",
"diff_url": "https://github.com/huggingface/transformers/pull/27782.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27782.patch",
"merged_at": 1701423573000
} |
https://api.github.com/repos/huggingface/transformers/issues/27781 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27781/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27781/comments | https://api.github.com/repos/huggingface/transformers/issues/27781/events | https://github.com/huggingface/transformers/issues/27781 | 2,019,911,443 | I_kwDOCUB6oc54ZWcT | 27,781 | RuntimeError occurs when using CLIPProcessor and CLIPModel on aarch64 | {
"login": "shunkominato",
"id": 64582197,
"node_id": "MDQ6VXNlcjY0NTgyMTk3",
"avatar_url": "https://avatars.githubusercontent.com/u/64582197?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shunkominato",
"html_url": "https://github.com/shunkominato",
"followers_url": "https://api.github.com/users/shunkominato/followers",
"following_url": "https://api.github.com/users/shunkominato/following{/other_user}",
"gists_url": "https://api.github.com/users/shunkominato/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shunkominato/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shunkominato/subscriptions",
"organizations_url": "https://api.github.com/users/shunkominato/orgs",
"repos_url": "https://api.github.com/users/shunkominato/repos",
"events_url": "https://api.github.com/users/shunkominato/events{/privacy}",
"received_events_url": "https://api.github.com/users/shunkominato/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey! Thanks for reporting, never had an issue with my macs and running in native code, would recommend you to just run on the native os. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Running inside of a container (aarch64 on MacOS arm64) is a useful feature.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,701 | 1,707 | 1,707 | NONE | null | ### System Info
Hi, thank you for the great library.
I am running on Docker.
The machine environment is as follows:
```
aarch64
PRETTY_NAME="Debian GNU/Linux 11 (bullseye)"
NAME="Debian GNU/Linux"
VERSION_ID="11"
VERSION="11 (bullseye)"
VERSION_CODENAME=bullseye
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
```
host machine
```
Apple M2 chip
Ventura 13.0
```
The `transformers-cli env` output is as follows:
```
- `transformers` version: 4.35.2
- Platform: Linux-6.4.16-linuxkit-aarch64-with-glibc2.31
- Python version: 3.9.2
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.1
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
The `pip3 list` output is as follows:
```
Package Version
------------------ ----------
certifi 2023.11.17
charset-normalizer 3.3.2
filelock 3.13.1
fsspec 2023.10.0
huggingface-hub 0.19.4
idna 3.6
Jinja2 3.1.2
MarkupSafe 2.1.3
mpmath 1.3.0
networkx 3.2.1
numpy 1.26.2
packaging 23.2
Pillow 10.1.0
pip 20.3.4
PyYAML 6.0.1
regex 2023.10.3
requests 2.31.0
safetensors 0.4.1
setuptools 52.0.0
sympy 1.12
tokenizers 0.15.0
torch 2.1.1
torchvision 0.16.1
tqdm 4.66.1
transformers 4.35.2
typing-extensions 4.8.0
urllib3 2.1.0
wheel 0.34.2
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
The code executed is below
```
import torch
from PIL import Image
from transformers import CLIPProcessor, CLIPModel
emotions = ["喜び", "悲しみ", "怒り", "驚き", "恐怖", "嫌悪"]
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
image = Image.open("1.jpg")
inputs = processor(text=emotions, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
logits_per_image = outputs.logits_per_image
probs = logits_per_image.softmax(dim=1)
for i, emotion in enumerate(emotions):
print(f"{emotion}: {probs[0][i].item()}")
```
The error occurs at the following line:
`outputs = model(**inputs)`
### Expected behavior
When I run the code in the above environment, the following error occurs:
```
Traceback (most recent call last):
File "/app/lib/ai/image_ai.py", line 28, in <module>
outputs = model(**inputs)
File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.9/dist-packages/transformers/models/clip/modeling_clip.py", line 1130, in forward
logits_per_text = torch.matmul(text_embeds, image_embeds.t()) * logit_scale
RuntimeError: could not create a primitive descriptor for a matmul primitive
```
We also confirmed that it ran without any problems when executed on x86_64.
Sorry if there is a similar issue
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27781/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27781/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27780 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27780/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27780/comments | https://api.github.com/repos/huggingface/transformers/issues/27780/events | https://github.com/huggingface/transformers/pull/27780 | 2,019,624,272 | PR_kwDOCUB6oc5g1QP8 | 27,780 | Updates the distributed CPU training documentation to add instructions for running on a Kubernetes cluster | {
"login": "dmsuehir",
"id": 13952606,
"node_id": "MDQ6VXNlcjEzOTUyNjA2",
"avatar_url": "https://avatars.githubusercontent.com/u/13952606?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dmsuehir",
"html_url": "https://github.com/dmsuehir",
"followers_url": "https://api.github.com/users/dmsuehir/followers",
"following_url": "https://api.github.com/users/dmsuehir/following{/other_user}",
"gists_url": "https://api.github.com/users/dmsuehir/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dmsuehir/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dmsuehir/subscriptions",
"organizations_url": "https://api.github.com/users/dmsuehir/orgs",
"repos_url": "https://api.github.com/users/dmsuehir/repos",
"events_url": "https://api.github.com/users/dmsuehir/events{/privacy}",
"received_events_url": "https://api.github.com/users/dmsuehir/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for adding practice in Kubernetes cluster.",
"@stevhliu Thank you for the review, I made the updates that you suggested."
] | 1,701 | 1,701 | 1,701 | CONTRIBUTOR | null | # What does this PR do?
This PR updates the distributed CPU training documentation to include instructions for using multiple CPU nodes from a Kubernetes cluster. The instructions are kept pretty simple, assuming that the user already has access to a Kubernetes cluster and understands the basics of how to use it. The Kubernetes instructions build upon the existing distributed CPU example by using the same [question answering script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering) with [Kubeflow's PyTorch training operator](https://www.kubeflow.org/docs/components/training/pytorch/), while explaining along the way how to use Intel software optimizations and set the CPU/memory limits/requests in the specification file.
CC: @sywangyi @ashahba
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@stevhliu and @MKhalusova
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27780/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27780/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27780",
"html_url": "https://github.com/huggingface/transformers/pull/27780",
"diff_url": "https://github.com/huggingface/transformers/pull/27780.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27780.patch",
"merged_at": 1701975046000
} |
https://api.github.com/repos/huggingface/transformers/issues/27779 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27779/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27779/comments | https://api.github.com/repos/huggingface/transformers/issues/27779/events | https://github.com/huggingface/transformers/pull/27779 | 2,019,275,309 | PR_kwDOCUB6oc5g0DQm | 27,779 | Add SeamlessM4T v2 | {
"login": "ylacombe",
"id": 52246514,
"node_id": "MDQ6VXNlcjUyMjQ2NTE0",
"avatar_url": "https://avatars.githubusercontent.com/u/52246514?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ylacombe",
"html_url": "https://github.com/ylacombe",
"followers_url": "https://api.github.com/users/ylacombe/followers",
"following_url": "https://api.github.com/users/ylacombe/following{/other_user}",
"gists_url": "https://api.github.com/users/ylacombe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ylacombe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ylacombe/subscriptions",
"organizations_url": "https://api.github.com/users/ylacombe/orgs",
"repos_url": "https://api.github.com/users/ylacombe/repos",
"events_url": "https://api.github.com/users/ylacombe/events{/privacy}",
"received_events_url": "https://api.github.com/users/ylacombe/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I'm astonished you got this added so quickly! Do you have plans to support seamless streaming as well?",
"I am unable to see it on the transformers library yet. How can i test this out ?",
"We'll do a release this week and it will be part of it! You have to install from source for now `pip install -e git+https://github.com/huggingface/transformers`",
"No plans yet for seamless streaming since the architecture diverges somewhat from Seamless M4T. The license is non-commercial unfortunately, which is also a barrier to usage. But if anyone in the community is interested in adding the model I'd sure be happy to lend a hand with the integration by answering any questions/queries and reviewing PRs! WDYT @ylacombe @yclicc - should we open a feature request up to the community for this?"
] | 1,701 | 1,704 | 1,701 | COLLABORATOR | null | # What does this PR do?
Meta just released [Seamless M4T v2](https://ai.meta.com/resources/models-and-libraries/seamless-communication-models/#seamlessm4t). This model differs enough from v1 to justify a new version: the differences are listed [here](https://github.com/ylacombe/transformers/blob/add-S2S-2/docs/source/en/model_doc/seamless_m4t_v2.md#difference-with-seamlessm4t-v1) in the .md file.
@ArthurZucker reviewed most of the files internally but a few last changes might require a final approval!
cc @ArthurZucker @LysandreJik @sanchit-gandhi | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27779/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27779/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27779",
"html_url": "https://github.com/huggingface/transformers/pull/27779",
"diff_url": "https://github.com/huggingface/transformers/pull/27779.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27779.patch",
"merged_at": 1701372284000
} |
https://api.github.com/repos/huggingface/transformers/issues/27778 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27778/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27778/comments | https://api.github.com/repos/huggingface/transformers/issues/27778/events | https://github.com/huggingface/transformers/issues/27778 | 2,019,274,979 | I_kwDOCUB6oc54W7Dj | 27,778 | Whisper is unable to transcribe in spoken language | {
"login": "markakash",
"id": 16388744,
"node_id": "MDQ6VXNlcjE2Mzg4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16388744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/markakash",
"html_url": "https://github.com/markakash",
"followers_url": "https://api.github.com/users/markakash/followers",
"following_url": "https://api.github.com/users/markakash/following{/other_user}",
"gists_url": "https://api.github.com/users/markakash/gists{/gist_id}",
"starred_url": "https://api.github.com/users/markakash/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/markakash/subscriptions",
"organizations_url": "https://api.github.com/users/markakash/orgs",
"repos_url": "https://api.github.com/users/markakash/repos",
"events_url": "https://api.github.com/users/markakash/events{/privacy}",
"received_events_url": "https://api.github.com/users/markakash/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"It's important to note that the behavior you described might be influenced by various factors, including the quality of the audio input and the specific model being used. While it's not accurate to label anyone as \"stupid,\" as you rightly pointed out, there are ways to address this concern.\r\n\r\nFirstly, when using any Whisper model other than large-v3, it seems to translate the content before transcribing it, rather than automatically detecting the language and transcribing in the detected language.\r\n\r\nTo address this, you can try enforcing the language or transcription via flags in the generate method. Here's an example of how you might modify your code:\r\n\r\n\r\nIn the above code snippet (which we should definitely add to the doc somewhere cc @sanchit-gandhi I'll open a PR), \r\n\r\nThis is not an issue with the code but with the documentation lacking this and your understanding 🤗 \r\n",
"There's some detailed documentation on the audio transformers course on setting the language/task parameters: https://huggingface.co/learn/audio-course/chapter5/asr_models#graduation-to-seq2seq\r\n\r\nWe'll definitely add this to the model cards + docs in our next efforts on Whisper generation (cc @Vaibhavs10)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,701 | 1,705 | 1,705 | NONE | null | ### System Info
Currently, when I use any Whisper model other than large-v3, instead of auto-detecting the language and transcribing in that language, it translates the audio and then transcribes it. I looked but couldn't find any flag/config to change this behaviour. If I use large-v3, it detects the language perfectly and transcribes in the detected language. I am using the latest versions of transformers and huggingface-hub. I am also not sure if this is a known issue; in any case, I would be grateful if anyone could point me to a fix for it.
### Who can help?
@sac
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
model_id = "openai/whisper-tiny"
model = AutoModelForSpeechSeq2Seq.from_pretrained(
model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
model = model.to_bettertransformer()
model.to(device)
processor = AutoProcessor.from_pretrained(model_id)
pipe = pipeline(
"automatic-speech-recognition",
model=model,
tokenizer=processor.tokenizer,
feature_extractor=processor.feature_extractor,
max_new_tokens=128,
chunk_length_s=30,
batch_size=16,
return_timestamps=True,
torch_dtype=torch_dtype,
device=device,
)
This is my pipeline configuration
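Below is an illustrative sketch (not part of the original report) of explicitly pinning the task/language through the pipeline's `generate_kwargs`; the flag names are assumed from the Whisper generation API and the file name is a placeholder:
```python
# Assumption: Whisper's generate() accepts `task`/`language`, which the ASR pipeline
# forwards via `generate_kwargs`; this forces transcription in a given language
# instead of translation.
result = pipe(
    "sample.flac",
    generate_kwargs={"task": "transcribe", "language": "german"},
)
print(result["text"])
```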
### Expected behavior
The expected behaviour is that Whisper detects the spoken language and then transcribes in the detected language, instead of translating and then transcribing.
Expected:
Get audio data -> detect language -> transcribe | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27778/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27778/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27777 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27777/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27777/comments | https://api.github.com/repos/huggingface/transformers/issues/27777/events | https://github.com/huggingface/transformers/pull/27777 | 2,018,987,213 | PR_kwDOCUB6oc5gzC1Z | 27,777 | Fixes for PatchTST Config | {
"login": "wgifford",
"id": 79663411,
"node_id": "MDQ6VXNlcjc5NjYzNDEx",
"avatar_url": "https://avatars.githubusercontent.com/u/79663411?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wgifford",
"html_url": "https://github.com/wgifford",
"followers_url": "https://api.github.com/users/wgifford/followers",
"following_url": "https://api.github.com/users/wgifford/following{/other_user}",
"gists_url": "https://api.github.com/users/wgifford/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wgifford/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wgifford/subscriptions",
"organizations_url": "https://api.github.com/users/wgifford/orgs",
"repos_url": "https://api.github.com/users/wgifford/repos",
"events_url": "https://api.github.com/users/wgifford/events{/privacy}",
"received_events_url": "https://api.github.com/users/wgifford/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"fee free to ping me or @kashif when this is ready for review 😉 "
] | 1,701 | 1,702 | 1,701 | CONTRIBUTOR | null | # What does this PR do?
Fixes a bug in the handling of the `num_patches` parameter which was removed from the config.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27777/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27777/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27777",
"html_url": "https://github.com/huggingface/transformers/pull/27777",
"diff_url": "https://github.com/huggingface/transformers/pull/27777.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27777.patch",
"merged_at": 1701439070000
} |
https://api.github.com/repos/huggingface/transformers/issues/27776 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27776/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27776/comments | https://api.github.com/repos/huggingface/transformers/issues/27776/events | https://github.com/huggingface/transformers/pull/27776 | 2,018,819,402 | PR_kwDOCUB6oc5gydu5 | 27,776 | Disallow `pickle.load` unless `TRUST_REMOTE_CODE=True` | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Not really. `TRUST_REMOTE_CODE` is for `remote` code, here is the code in `transformers`. And the pickle file could be a local file downloaded or a link to a file (not code).",
"Happy the give another name if we come up with a better one though!",
"Thanks for the implementation @ydshieh!\r\n\r\nAn environment variable for remote code has also been asked in the past, and given the proximity to this I think it would make sense to have the same variable name for the two.\r\n\r\n`TRUST_REMOTE_CODE` looks good to me as we're indeed taking the risk of executing code from the Hub when downloading pickle files from there and running them; and `trust_remote_code=True` has the same approach as these files might also be locally cached and still require the flag to be set.\r\n\r\nIf you're strongly against `TRUST_REMOTE_CODE` for this I'd be happy to hear your proposal, but I think it would be nice to unify the two and to be in-line with the `trust_remote_code` argument that we have used elsewhere.\r\n\r\nWould that be ok for you?",
"I am fine with the name, but I think we might adjust the documentation a bit for `TRUST_REMOTE_CODE`. The current one for `trust_remote_code` is\r\n\r\n```\r\n \"Whether or not to allow for custom models defined on the Hub in their own modeling files. This option\"\r\n \"should only be set to `True` for repositories you trust and in which you have read the code, as it will \"\r\n \"execute code present on the Hub on your local machine.\"\r\n```\r\nAnd this **sounds** like `if an user doesn't run custom models defined on the Hub`, they are safe.\r\n\r\nI will update the PR.\r\n\r\n",
"Updated. It turns out that I don't need to update the message - it's clear enough 😅 ",
"Thanks! and i agree that using the same env var is a good idea / makes sense 👍 ",
"Sorry about that, but there is a tiny issue with the condition `if not os.getenv(\"TRUST_REMOTE_CODE\", False):`. I will fix this then merge. \r\n\r\nThank you for all the review.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27776). All of your documentation changes will be reflected on that endpoint.",
"This seems to have now triggered a dependabot update in downstream repos: https://github.com/huggingface/api-inference-community/security/dependabot/78\r\n\r\nThe concerned userbase is extremely limited and all we did was protect from pickle loading so really unsure about the \"critical\" report, but good to have done it nonetheless. Thanks Yih-Dar!",
"I show scan dependabot in all the dependency GitHub repositories to show my impact 😆 "
] | 1,701 | 1,703 | 1,701 | COLLABORATOR | null | # What does this PR do?
For security reasons, require users to explicitly allow access to the places where `pickle.load` is used.
```
import os
from transformers import TransfoXLTokenizer, TransfoXLLMHeadModel
os.environ["ALLOW_ACCESS_TO_POTENTIAL_INSECURE_CODE"] = "True"
checkpoint = 'transfo-xl-wt103'
revision = '40a186da79458c9f9de846edfaea79c412137f97'
tokenizer = TransfoXLTokenizer.from_pretrained(checkpoint, revision=revision)
model = TransfoXLLMHeadModel.from_pretrained(checkpoint, revision=revision)
```
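As in the PR title, the environment variable ultimately adopted is `TRUST_REMOTE_CODE`; a minimal sketch of the same example under that name (same imports, checkpoint, and revision as above):
```
os.environ["TRUST_REMOTE_CODE"] = "True"
tokenizer = TransfoXLTokenizer.from_pretrained(checkpoint, revision=revision)
model = TransfoXLLMHeadModel.from_pretrained(checkpoint, revision=revision)
```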
## Before merge
There are a few `pickle.load` calls in `examples/research_projects/`, in some conversion files, and in test files. These are considered less severe, but we can still apply the change to all of them to get rid of this unlovely pickle usage. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27776/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27776/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27776",
"html_url": "https://github.com/huggingface/transformers/pull/27776",
"diff_url": "https://github.com/huggingface/transformers/pull/27776.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27776.patch",
"merged_at": 1701704918000
} |
https://api.github.com/repos/huggingface/transformers/issues/27775 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27775/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27775/comments | https://api.github.com/repos/huggingface/transformers/issues/27775/events | https://github.com/huggingface/transformers/pull/27775 | 2,018,618,438 | PR_kwDOCUB6oc5gxxqi | 27,775 | Adding Prompt lookup decoding | {
"login": "apoorvumang",
"id": 1957903,
"node_id": "MDQ6VXNlcjE5NTc5MDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1957903?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/apoorvumang",
"html_url": "https://github.com/apoorvumang",
"followers_url": "https://api.github.com/users/apoorvumang/followers",
"following_url": "https://api.github.com/users/apoorvumang/following{/other_user}",
"gists_url": "https://api.github.com/users/apoorvumang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/apoorvumang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/apoorvumang/subscriptions",
"organizations_url": "https://api.github.com/users/apoorvumang/orgs",
"repos_url": "https://api.github.com/users/apoorvumang/repos",
"events_url": "https://api.github.com/users/apoorvumang/events{/privacy}",
"received_events_url": "https://api.github.com/users/apoorvumang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@apoorvumang #27750 is now merged, you can rebase this PR with main!",
"Amazing! Will try to do that asap @gante ",
"@apoorvumang checking on this PR -- do you have a timeline for its completion? I'm happy to help (or to integrate the feature myself) 🤗 ",
"@gante I've rebased with main, and code seems to be working - I checked generation with\r\n```\r\noutputs = model.generate(\r\n input_ids,\r\n prompt_lookup_num_tokens=10,\r\n max_new_tokens=20,\r\n)\r\n```\r\nWill try to add tests too - it would be really helpful if u could guide me as to what needs to be done. I can spend some more time tomorrow and day after on coding. I haven't yet been able to figure out a better way to do hyperparams/hyperparam updates, so going with some static ones (I plan to spend some time very soon doing proper experiments, but that might needlessly delay this)\r\n\r\nIf it feels I'm slowing you down, please do let me know, and please feel free to implement the current version of prompt lookup - I really don't have anything better since the day I first posted 😭 \r\n",
"@gante Could you please review this PR? I have added tests, fixed most issues (not sure why torch_flax test is failing)",
"@ArthurZucker Adding for review if you're available (since you reviewed #27750 )",
"(I hope you don't mind -- I've fixed a minor syntax error to make our CI happy :) )",
"Yes please do edit as you see fit - and please let me know if I need to do anything 😺 ",
"@apoorvumang actually I need an action from your end :) After [this PR](https://github.com/huggingface/transformers/pull/28432) gets merged, I'd like to ask you to rebase this branch with `main` and force-push it. Otherwise, the CI won't go green :(",
"@gante done with rebase, seems like all tests passed?",
"@apoorvumang correct, we just need a greenlight for our core maintainer :)",
"Reviewing now! ",
"@apoorvumang now let's amplify this feature :D I'll make some comms on Monday",
"This new feature is broken on:\r\n\r\n```python\r\n>>> transformers.__version__\r\n'4.37.0.dev0'\r\n```\r\n\r\n```sh\r\n File \"/home/user/llm/test_speeds.py\", line 110, in test_batch_size\r\n out_toks = model.generate(\r\n File \"/home/user/llm/.env/lib/python3.10/site-packages/torch/utils/_contextlib.py\", line 115, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"/home/user/_/lib/transformers/src/transformers/generation/utils.py\", line 1455, in generate\r\n return self.assisted_decoding(\r\n File \"/home/user/_/lib/transformers/src/transformers/generation/utils.py\", line 4337, in assisted_decoding\r\n .tile(eos_token_id_tensor.shape[0], 1)\r\nAttributeError: 'NoneType' object has no attribute 'shape'\r\n```\r\n\r\nReproduction:\r\n\r\n```python\r\nimport transformers\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\r\nimport os\r\nimport torch\r\n\r\nMODEL_PATH = \"~/_/models/phi-2\"\r\nMODEL_PATH = os.path.expanduser(MODEL_PATH)\r\n\r\ntry:\r\n model_loaded\r\n print('model already loaded')\r\nexcept:\r\n print('loading model')\r\n model = AutoModelForCausalLM.from_pretrained(\r\n MODEL_PATH,\r\n device_map=\"auto\",\r\n torch_dtype=torch.float16,\r\n # load_in_8bit=True,\r\n trust_remote_code=False,\r\n attn_implementation=\"flash_attention_2\",\r\n )\r\n model.eval()\r\n\r\n tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH, use_fast=True)\r\n tokenizer.pad_token = tokenizer.eos_token\r\n tokenizer.padding_side = \"left\"\r\n\r\ninp = \"hi\"\r\ntokenized = tokenizer(inp, padding='longest', return_tensors='pt', add_special_tokens=True)\r\ntokenized['attention_mask'] = tokenized['attention_mask'].to('cuda')\r\ntokenized['input_ids'] = tokenized['input_ids'].to('cuda')\r\n\r\nout_toks = model.generate(\r\n **tokenized,\r\n max_new_tokens=32, # VARIABLE\r\n use_cache=True, # (huge slowdown without)\r\n prompt_lookup_num_tokens=10,\r\n)\r\nout = tokenizer.decode(out_toks)\r\nprint(out)\r\n```",
"Also, not supported for RWKV:\r\n\r\n```\r\nAttributeError: 'Rwkv5CausalLMOutput' object has no attribute 'past_key_values'\r\n```",
"Hi @freckletonj 👋 \r\n\r\nI've just run the following script on `main`, it is working as expected:\r\n\r\n```py\r\nimport transformers\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\r\nimport os\r\nimport torch\r\n\r\n\r\nprint('loading model')\r\nmodel = AutoModelForCausalLM.from_pretrained(\r\n \"microsoft/phi-2\",\r\n device_map=\"auto\",\r\n torch_dtype=torch.float16,\r\n # load_in_8bit=True,\r\n trust_remote_code=False,\r\n attn_implementation=\"flash_attention_2\",\r\n)\r\nmodel.eval()\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"microsoft/phi-2\", use_fast=True)\r\ntokenizer.pad_token = tokenizer.eos_token\r\ntokenizer.padding_side = \"left\"\r\n\r\ninp = \"hi\"\r\ntokenized = tokenizer(inp, padding='longest', return_tensors='pt', add_special_tokens=True)\r\ntokenized['attention_mask'] = tokenized['attention_mask'].to('cuda')\r\ntokenized['input_ids'] = tokenized['input_ids'].to('cuda')\r\n\r\nout_toks = model.generate(\r\n **tokenized,\r\n max_new_tokens=32, # VARIABLE\r\n use_cache=True, # (huge slowdown without)\r\n prompt_lookup_num_tokens=10,\r\n eos_token_id=-1, # this line shouldn't be needed, the model config needs retouching\r\n)\r\nout = tokenizer.decode(out_toks[0])\r\nprint(out)\r\n```\r\n\r\nAs for RWKV, it doesn't have `past_key_values`, so it won't work with this technique (well, with any technique that does `past_key_value` manipulation). I'm going to open a PR to improve the exception message.",
"@gante I've produced a more minimal version that definitely demonstrates this issue.\r\n\r\nI'm on `transformers:main` and have the latest `microsoft/phi-2`.\r\n\r\nIt gives me 2 questions:\r\n\r\n1. The difference between working or not is just the `prompt_lookup_num_tokens`, so I think it's clearly broken.\r\n2. **Batches:** It seems like the speculative suffixes could be shared across the batch, and for certain workloads have a payoff. Could we enable PLD for batches?\r\n\r\n```python\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\r\nimport torch\r\n\r\nMODEL_PATH = \"microsoft/phi-2\"\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(\r\n MODEL_PATH,\r\n device_map=\"auto\",\r\n torch_dtype=torch.float16,\r\n trust_remote_code=False,\r\n attn_implementation=\"flash_attention_2\",\r\n)\r\nmodel.eval()\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(MODEL_PATH, use_fast=True)\r\ntokenizer.pad_token = tokenizer.eos_token\r\n\r\ninp = [\r\n \"hi\",\r\n # \"wow\", # batches don't work with `prompt_lookup_num_tokens`\r\n]\r\n\r\ntokenized = tokenizer(inp, padding='longest', return_tensors='pt', add_special_tokens=True)\r\ntokenized['input_ids'] = tokenized['input_ids'].to('cuda')\r\ntokenized['attention_mask'] = tokenized['attention_mask'].to('cuda')\r\n\r\nout_toks = model.generate(\r\n **tokenized,\r\n max_new_tokens=32,\r\n use_cache=True,\r\n prompt_lookup_num_tokens=10, # TOGGLING THIS OFF MAKES IT WORK\r\n)\r\n\r\nfor x in out_toks:\r\n print(tokenizer.decode(x))\r\n```\r\n\r\n\r\nThe error:\r\n\r\n```python\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/user/bots/t01_prompt_lookup_decoding_sandbox.py\", line 43, in <module>\r\n out_toks = model.generate(\r\n File \"/home/user/bots/.env/lib/python3.10/site-packages/torch/utils/_contextlib.py\", line 115, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"/home/user/lib/transformers/src/transformers/generation/utils.py\", line 1457, in generate\r\n return self.assisted_decoding(\r\n File \"/home/user/lib/transformers/src/transformers/generation/utils.py\", line 4348, in assisted_decoding\r\n .tile(eos_token_id_tensor.shape[0], 1)\r\nAttributeError: 'NoneType' object has no attribute 'shape'\r\n```\r\n",
"> so I think it's clearly broken\r\n\r\n@freckletonj you either think it's broken or it is clearly broken 😉 In this case, it is the former: the root issue is a poor model configuration on Phi-2, as it lacks an EOS token. In other words, it is not a `transformers` issue. In any case, I'm going to open a PR against the Microsoft Phi-2 repo, so other issues don't run against the same issue :)\r\n\r\nMeanwhile, feel free to set `eos_token_id=50256` in the `.generate()` call"
] | 1,701 | 1,706 | 1,705 | CONTRIBUTOR | null | # What does this PR do?
Adds the prompt lookup decoding method from https://github.com/apoorvumang/prompt-lookup-decoding (issue #27722).
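For quick reference, a minimal usage sketch (mirroring the scripts in the PR discussion; the only new argument is `prompt_lookup_num_tokens`):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")
model = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", device_map="auto")

inputs = tokenizer("The quick brown fox jumps over the lazy dog.", return_tensors="pt").to(model.device)
# draft tokens are looked up in the prompt itself, so no assistant model is needed
outputs = model.generate(**inputs, max_new_tokens=32, prompt_lookup_num_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```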
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27775/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27775/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27775",
"html_url": "https://github.com/huggingface/transformers/pull/27775",
"diff_url": "https://github.com/huggingface/transformers/pull/27775.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27775.patch",
"merged_at": 1705166158000
} |
https://api.github.com/repos/huggingface/transformers/issues/27774 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27774/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27774/comments | https://api.github.com/repos/huggingface/transformers/issues/27774/events | https://github.com/huggingface/transformers/pull/27774 | 2,018,603,518 | PR_kwDOCUB6oc5gxuZ4 | 27,774 | Translate `model_doc` files from `clip` to `cpm` to JP | {
"login": "rajveer43",
"id": 64583161,
"node_id": "MDQ6VXNlcjY0NTgzMTYx",
"avatar_url": "https://avatars.githubusercontent.com/u/64583161?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rajveer43",
"html_url": "https://github.com/rajveer43",
"followers_url": "https://api.github.com/users/rajveer43/followers",
"following_url": "https://api.github.com/users/rajveer43/following{/other_user}",
"gists_url": "https://api.github.com/users/rajveer43/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rajveer43/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rajveer43/subscriptions",
"organizations_url": "https://api.github.com/users/rajveer43/orgs",
"repos_url": "https://api.github.com/users/rajveer43/repos",
"events_url": "https://api.github.com/users/rajveer43/events{/privacy}",
"received_events_url": "https://api.github.com/users/rajveer43/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27774). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,701 | 1,701 | 1,701 | CONTRIBUTOR | null | # What does this PR do?
translation of model docs
<!-- Remove if not applicable -->
Fixes #27773
## Who can review?
Documentation: @stevhliu and @MKhalusova
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27774/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27774/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27774",
"html_url": "https://github.com/huggingface/transformers/pull/27774",
"diff_url": "https://github.com/huggingface/transformers/pull/27774.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27774.patch",
"merged_at": 1701976344000
} |
https://api.github.com/repos/huggingface/transformers/issues/27773 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27773/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27773/comments | https://api.github.com/repos/huggingface/transformers/issues/27773/events | https://github.com/huggingface/transformers/issues/27773 | 2,018,603,091 | I_kwDOCUB6oc54UXBT | 27,773 | [i18n-<Jp>] Translating docs to JP | {
"login": "rajveer43",
"id": 64583161,
"node_id": "MDQ6VXNlcjY0NTgzMTYx",
"avatar_url": "https://avatars.githubusercontent.com/u/64583161?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rajveer43",
"html_url": "https://github.com/rajveer43",
"followers_url": "https://api.github.com/users/rajveer43/followers",
"following_url": "https://api.github.com/users/rajveer43/following{/other_user}",
"gists_url": "https://api.github.com/users/rajveer43/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rajveer43/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rajveer43/subscriptions",
"organizations_url": "https://api.github.com/users/rajveer43/orgs",
"repos_url": "https://api.github.com/users/rajveer43/repos",
"events_url": "https://api.github.com/users/rajveer43/events{/privacy}",
"received_events_url": "https://api.github.com/users/rajveer43/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | closed | false | null | [] | [] | 1,701 | 1,701 | 1,701 | CONTRIBUTOR | null | <!--
Note: Please search to see if an issue already exists for the language you are trying to translate.
-->
Hi!
Let's bring the documentation to the entire Japanese-speaking community 🌐
Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list.
Some notes:
* Please translate using an informal tone (imagine you are talking with a friend about transformers 🤗).
* Please translate in a gender-neutral way.
* Add your translations to the folder called `jp` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source).
* Register your translation in `jp/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml).
* Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @stevhliu and @MKhalusova for review.
* 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/).
## Get Started section
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27773/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27773/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27772 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27772/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27772/comments | https://api.github.com/repos/huggingface/transformers/issues/27772/events | https://github.com/huggingface/transformers/issues/27772 | 2,018,585,892 | I_kwDOCUB6oc54US0k | 27,772 | `spectrogram` function may silently produce incorrect values for certain inputs | {
"login": "xenova",
"id": 26504141,
"node_id": "MDQ6VXNlcjI2NTA0MTQx",
"avatar_url": "https://avatars.githubusercontent.com/u/26504141?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xenova",
"html_url": "https://github.com/xenova",
"followers_url": "https://api.github.com/users/xenova/followers",
"following_url": "https://api.github.com/users/xenova/following{/other_user}",
"gists_url": "https://api.github.com/users/xenova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xenova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xenova/subscriptions",
"organizations_url": "https://api.github.com/users/xenova/orgs",
"repos_url": "https://api.github.com/users/xenova/repos",
"events_url": "https://api.github.com/users/xenova/events{/privacy}",
"received_events_url": "https://api.github.com/users/xenova/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Feel free to open a PR for a fix 😈 ",
"cc @ylacombe would you be able to take a look at protecting against this edge case?"
] | 1,701 | 1,706 | null | CONTRIBUTOR | null | ### System Info
- `transformers` version: 4.36.0.dev0
- Platform: Linux
- Python version: 3.8.3
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.3.1
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.0+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@sanchit-gandhi @hollance
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Code for reference:
https://github.com/huggingface/transformers/blob/62ab32b2997846526cdffd629c72c47bc7b4f215/src/transformers/audio_utils.py#L451-L457
If `power=None` and `mel_filters != None`, then we will be performing `np.maximum` between 1 array of real numbers and 1 array of complex numbers. Since complex numbers do not have an ordering, `np.maximum` will only look at the real parts (for some reason, that is how it is implemented):
```py
>>> import numpy as np
>>> np.maximum(np.array([1,2,3]), 5j)
array([1.+0.j, 2.+0.j, 3.+0.j])
```
As a result, the function will continue without throwing an error.
Possible fixes:
1. If `power=None`, still perform `np.abs()`
2. Throw an error if this combination of arguments is provided
It should be noted that although I haven't found a processor which uses the function with these inputs, it may cause an issue in future, or for users of the spectrogram function directly.
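To make option 2 concrete, here is a minimal sketch of the kind of guard that could be added (illustrative only; `_check_spectrogram_args` is a hypothetical helper, not existing library code):
```py
# Illustrative only: the kind of guard option 2 implies, not the actual library code.
def _check_spectrogram_args(power, mel_filters):
    if power is None and mel_filters is not None:
        raise ValueError(
            "Mel filter banks cannot be applied to a complex-valued spectrogram. "
            "Pass `power=1.0` (magnitude) or `power=2.0` (power) when `mel_filters` is provided."
        )
```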
### Expected behavior
An error should be thrown, or the absolute value should be taken if power is None. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27772/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27772/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27771 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27771/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27771/comments | https://api.github.com/repos/huggingface/transformers/issues/27771/events | https://github.com/huggingface/transformers/pull/27771 | 2,018,519,990 | PR_kwDOCUB6oc5gxb81 | 27,771 | Tree of Thoughts Tutorial | {
"login": "rajveer43",
"id": 64583161,
"node_id": "MDQ6VXNlcjY0NTgzMTYx",
"avatar_url": "https://avatars.githubusercontent.com/u/64583161?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rajveer43",
"html_url": "https://github.com/rajveer43",
"followers_url": "https://api.github.com/users/rajveer43/followers",
"following_url": "https://api.github.com/users/rajveer43/following{/other_user}",
"gists_url": "https://api.github.com/users/rajveer43/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rajveer43/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rajveer43/subscriptions",
"organizations_url": "https://api.github.com/users/rajveer43/orgs",
"repos_url": "https://api.github.com/users/rajveer43/repos",
"events_url": "https://api.github.com/users/rajveer43/events{/privacy}",
"received_events_url": "https://api.github.com/users/rajveer43/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @MKhalusova "
] | 1,701 | 1,701 | 1,701 | CONTRIBUTOR | null | # What does this PR do?
Adding ToT Framework
<!-- Remove if not applicable -->
Fixes #26726
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada, @gante
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27771/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27771/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27771",
"html_url": "https://github.com/huggingface/transformers/pull/27771",
"diff_url": "https://github.com/huggingface/transformers/pull/27771.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27771.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27770 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27770/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27770/comments | https://api.github.com/repos/huggingface/transformers/issues/27770/events | https://github.com/huggingface/transformers/issues/27770 | 2,018,490,801 | I_kwDOCUB6oc54T7mx | 27,770 | ver 4.35.2 transformers.Trainer, deepspeed backward(loss) is not used if distributed_state is NO | {
"login": "haixpham",
"id": 32718796,
"node_id": "MDQ6VXNlcjMyNzE4Nzk2",
"avatar_url": "https://avatars.githubusercontent.com/u/32718796?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/haixpham",
"html_url": "https://github.com/haixpham",
"followers_url": "https://api.github.com/users/haixpham/followers",
"following_url": "https://api.github.com/users/haixpham/following{/other_user}",
"gists_url": "https://api.github.com/users/haixpham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/haixpham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/haixpham/subscriptions",
"organizations_url": "https://api.github.com/users/haixpham/orgs",
"repos_url": "https://api.github.com/users/haixpham/repos",
"events_url": "https://api.github.com/users/haixpham/events{/privacy}",
"received_events_url": "https://api.github.com/users/haixpham/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I ended up rewriting `Trainer._inner_training_loop()` and `Trainer.training_step()` to force deepspeed when distributed training is not used, as follows:\r\n- _inner_training_loop\r\n\r\n```\r\n# prepare using `accelerator` prepare\r\nif use_accelerator_prepare:\r\n self.model.train()\r\n if hasattr(self.lr_scheduler, \"step\"):\r\n if self.use_apex:\r\n model = self.accelerator.prepare(self.model)\r\n else:\r\n #model, self.optimizer = self.accelerator.prepare(self.model, self.optimizer)\r\n ## add the workaroud for deepspeed\r\n if not self.accelerator.use_distributed and self.is_deepspeed_enabled:\r\n ## the workaround to force using deepspeed\r\n model, self.optimizer = self.accelerator._prepare_deepspeed(self.model, self.optimizer)\r\n else:\r\n # old behavior\r\n model, self.optimizer = self.accelerator.prepare(self.model, self.optimizer)\r\n else:\r\n # to handle cases wherein we pass \"DummyScheduler\" such as when it is specified in DeepSpeed config.\r\n # model, self.optimizer, self.lr_scheduler = self.accelerator.prepare(\r\n # self.model, self.optimizer, self.lr_scheduler\r\n # )\r\n if not self.accelerator.use_distributed and self.is_deepspeed_enabled:\r\n ## the workaround to force using deepspeed\r\n model, self.optimizer, self.lr_scheduler = self.accelerator._prepare_deepspeed(\r\n self.model, self.optimizer, self.lr_scheduler\r\n )\r\n else:\r\n # old behavior\r\n model, self.optimizer, self.lr_scheduler = self.accelerator.prepare(\r\n self.model, self.optimizer, self.lr_scheduler\r\n )\r\n```\r\n- training_step\r\n```\r\n# back-prop\r\nif self.do_grad_scaling:\r\n self.scaler.scale(loss).backward()\r\nelif self.use_apex:\r\n with amp.scale_loss(loss, self.optimizer) as scaled_loss:\r\n scaled_loss.backward()\r\n## this is a hack to force accelerator to backward loss with deepspeed in non-distributed setting\r\nelif self._check_deepspeed_no_distributed():\r\n self.accelerator.deepspeed_engine_wrapped.backward(loss)\r\nelse:\r\n self.accelerator.backward(loss)\r\n```\r\nI think the problem should be addressed in accelerator - in case the code is not launched with distributed launcher like torchrun or deepspeed or accelerate",
"cc @muellerzr and @pacman100 ",
"Hello, as per the documentation in https://huggingface.co/docs/transformers/main_classes/deepspeed#deployment-with-one-gpu and https://www.deepspeed.ai/getting-started/#launching-deepspeed-training, you need to use `deepspeed`, `torchrun` or `accelerate` launcher when using DeepSpeed. Just pass `--num_gpus=1` (with deepspeed launcher) or `nproc_per_node=1` (with torchrun launcher) or `num_processes=1` (with accelerate launcher). ",
"I knew it would work if launched with distributed launcher. However, in ver 4.28.1, deepspeed would work WITHOUT using launcher - I could actually debug with deepspeed invoked. I'm not sure is the current way (using launcher) is intended by design or a stop gap measure.",
"Let me look into it a bit more but the recommended way is to use the distributed launcher irrespective of whether or not it was working before without it.",
"Hello, could you please let us know if the above PR resolves your issue?",
"> Hello, could you please let us know if the above PR resolves your issue?\r\n\r\nyes that would solve it at first glance. I will report back after running the whole training/eval process"
] | 1,701 | 1,701 | 1,701 | NONE | null | ### System Info
transformers 4.35.2
deepspeed 0.12.3
torch 2.1
CUDA 12.1
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
In 4.35.2 `Trainer.training_step()`:
```
if self.use_apex:
with amp.scale_loss(loss, self.optimizer) as scaled_loss:
scaled_loss.backward()
else:
self.accelerator.backward(loss)
```
`self.accelerator.backward(loss)` only uses the deepspeed engine's `backward(loss)` if the distributed_state is DeepSpeed, which is only set when launched with torchrun. If this is not the intended behavior, the fix is to either (a rough sketch of the first option follows the list):
- modify Trainer to detect this case
- modify Accelerator to handle this case in `backward()`
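A rough sketch of the first option, mirroring the workaround shared in the comments above (`_check_deepspeed_no_distributed` is a hypothetical helper name, not an existing `Trainer` method):
```
# illustrative sketch only, based on the workaround posted in the comments above
def _check_deepspeed_no_distributed(self):
    # DeepSpeed is configured, but the script was not launched via a distributed launcher
    return self.is_deepspeed_enabled and not self.accelerator.use_distributed

# in Trainer.training_step, before falling back to self.accelerator.backward(loss):
# elif self._check_deepspeed_no_distributed():
#     self.accelerator.deepspeed_engine_wrapped.backward(loss)
```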
### Expected behavior
The below piece of code is from transformers 4.28.1, which always uses deepspeed when training_args.deepspeed is present.
```
if self.do_grad_scaling:
self.scaler.scale(loss).backward()
elif self.use_apex:
with amp.scale_loss(loss, self.optimizer) as scaled_loss:
scaled_loss.backward()
elif self.deepspeed:
# loss gets scaled under gradient_accumulation_steps in deepspeed
loss = self.deepspeed.backward(loss)
else:
loss.backward()
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27770/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27770/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27769 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27769/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27769/comments | https://api.github.com/repos/huggingface/transformers/issues/27769/events | https://github.com/huggingface/transformers/issues/27769 | 2,018,433,434 | I_kwDOCUB6oc54Ttma | 27,769 | RuntimeError: "addmm_impl_cpu_" not implemented for 'Half' | {
"login": "jerin-scalers-ai",
"id": 125901005,
"node_id": "U_kgDOB4EYzQ",
"avatar_url": "https://avatars.githubusercontent.com/u/125901005?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jerin-scalers-ai",
"html_url": "https://github.com/jerin-scalers-ai",
"followers_url": "https://api.github.com/users/jerin-scalers-ai/followers",
"following_url": "https://api.github.com/users/jerin-scalers-ai/following{/other_user}",
"gists_url": "https://api.github.com/users/jerin-scalers-ai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jerin-scalers-ai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jerin-scalers-ai/subscriptions",
"organizations_url": "https://api.github.com/users/jerin-scalers-ai/orgs",
"repos_url": "https://api.github.com/users/jerin-scalers-ai/repos",
"events_url": "https://api.github.com/users/jerin-scalers-ai/events{/privacy}",
"received_events_url": "https://api.github.com/users/jerin-scalers-ai/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, the issue does not lie with transformers but `torch` which does not support FP16 on CPU. 🤗 ",
"@ArthurZucker Thanks for the quick response\r\n\r\nDoes INT8 is supported on CPU ? I was trying out load_in_8bit argument but got below error:\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/ubuntu/jerin/infer.py\", line 14, in <module>\r\n model = AutoModelForCausalLM.from_pretrained(model_id, load_in_8bit=True)\r\n File \"/home/ubuntu/jerin/venv/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py\", line 566, in from_pretrained\r\n return model_class.from_pretrained(\r\n File \"/home/ubuntu/jerin/venv/lib/python3.10/site-packages/transformers/modeling_utils.py\", line 2714, in from_pretrained\r\n raise ImportError(\r\nImportError: Using `load_in_8bit=True` requires Accelerate: `pip install accelerate` and the latest version of bitsandbytes `pip install -i https://test.pypi.org/simple/ bitsandbytes` or pip install bitsandbytes` \r\n```\r\n\r\nNote: I've already installed the latest version of accelerate and bitsandbytes.",
"Yep this is also gonna be fixed by #27764, you need a GPU for this package. Maybe some very specific libraries (like `candle`). with custom kernels (Maybe GPTQ?) support it on CPU but it's rare and def not torch!",
"Hi @jerin-scalers-ai, you can also check the following [repository](https://github.com/intel/intel-extension-for-transformers). The intel team recently added support for QLora on CPU. We might support it on transformers in a near future.",
"bfloat16 simply has no CPU support because no CPU supports bf16 instructions, just use `torch_dtype=torch.float16`. It might fail on some particular models, but should work for most.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,701 | 1,704 | 1,704 | NONE | null | ### System Info
Transformers: 4.35.2
Torch: 2.1.1-cpu
### Who can help?
@Narsil
I attempted text generation with the Llama 2 model using the transformers text-generation pipeline on a CPU. The model precision was configured to float16 via the torch_dtype argument, but I encountered the following issue. While I successfully executed FP32 and BF16 models on the CPU, it seems that transformers does not support FP16 on CPU. Is FP16 supported by transformers on CPU? The detailed error is provided below:
```
Traceback (most recent call last):
File "/home/user/sagar/wrkld-llm-hf-cpu/performance-testing/llm/test.py", line 21, in <module>
output = text_generator(
File "/home/user/.cache/pypoetry/virtualenvs/llama2-benchmark-qWVSPitQ-py3.10/lib/python3.10/site-packages/transformers/pipelines/text_generation.py", line 208, in __call__
return super().__call__(text_inputs, **kwargs)
File "/home/user/.cache/pypoetry/virtualenvs/llama2-benchmark-qWVSPitQ-py3.10/lib/python3.10/site-packages/transformers/pipelines/base.py", line 1140, in __call__
return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)
File "/home/user/.cache/pypoetry/virtualenvs/llama2-benchmark-qWVSPitQ-py3.10/lib/python3.10/site-packages/transformers/pipelines/base.py", line 1147, in run_single
model_outputs = self.forward(model_inputs, **forward_params)
File "/home/user/.cache/pypoetry/virtualenvs/llama2-benchmark-qWVSPitQ-py3.10/lib/python3.10/site-packages/transformers/pipelines/base.py", line 1046, in forward
model_outputs = self._forward(model_inputs, **forward_params)
File "/home/user/.cache/pypoetry/virtualenvs/llama2-benchmark-qWVSPitQ-py3.10/lib/python3.10/site-packages/transformers/pipelines/text_generation.py", line 271, in _forward
generated_sequence = self.model.generate(input_ids=input_ids, attention_mask=attention_mask, **generate_kwargs)
File "/home/user/.cache/pypoetry/virtualenvs/llama2-benchmark-qWVSPitQ-py3.10/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/home/user/.cache/pypoetry/virtualenvs/llama2-benchmark-qWVSPitQ-py3.10/lib/python3.10/site-packages/transformers/generation/utils.py", line 1673, in generate
return self.greedy_search(
File "/home/user/.cache/pypoetry/virtualenvs/llama2-benchmark-qWVSPitQ-py3.10/lib/python3.10/site-packages/transformers/generation/utils.py", line 2521, in greedy_search
outputs = self(
File "/home/user/.cache/pypoetry/virtualenvs/llama2-benchmark-qWVSPitQ-py3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/user/.cache/pypoetry/virtualenvs/llama2-benchmark-qWVSPitQ-py3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/home/user/.cache/pypoetry/virtualenvs/llama2-benchmark-qWVSPitQ-py3.10/lib/python3.10/site-packages/transformers/models/mistral/modeling_mistral.py", line 1009, in forward
outputs = self.model(
File "/home/user/.cache/pypoetry/virtualenvs/llama2-benchmark-qWVSPitQ-py3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/user/.cache/pypoetry/virtualenvs/llama2-benchmark-qWVSPitQ-py3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/home/user/.cache/pypoetry/virtualenvs/llama2-benchmark-qWVSPitQ-py3.10/lib/python3.10/site-packages/transformers/models/mistral/modeling_mistral.py", line 897, in forward
layer_outputs = decoder_layer(
File "/home/user/.cache/pypoetry/virtualenvs/llama2-benchmark-qWVSPitQ-py3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/user/.cache/pypoetry/virtualenvs/llama2-benchmark-qWVSPitQ-py3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/home/user/.cache/pypoetry/virtualenvs/llama2-benchmark-qWVSPitQ-py3.10/lib/python3.10/site-packages/transformers/models/mistral/modeling_mistral.py", line 626, in forward
hidden_states, self_attn_weights, present_key_value = self.self_attn(
File "/home/user/.cache/pypoetry/virtualenvs/llama2-benchmark-qWVSPitQ-py3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/user/.cache/pypoetry/virtualenvs/llama2-benchmark-qWVSPitQ-py3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/home/user/.cache/pypoetry/virtualenvs/llama2-benchmark-qWVSPitQ-py3.10/lib/python3.10/site-packages/transformers/models/mistral/modeling_mistral.py", line 244, in forward
query_states = self.q_proj(hidden_states)
File "/home/user/.cache/pypoetry/virtualenvs/llama2-benchmark-qWVSPitQ-py3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/user/.cache/pypoetry/virtualenvs/llama2-benchmark-qWVSPitQ-py3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/home/user/.cache/pypoetry/virtualenvs/llama2-benchmark-qWVSPitQ-py3.10/lib/python3.10/site-packages/torch/nn/modules/linear.py", line 114, in forward
return F.linear(input, self.weight, self.bias)
RuntimeError: "addmm_impl_cpu_" not implemented for 'Half'
```
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
import torch
model_id = "meta-llama/Llama-2-7b-chat-hf"
device = "cpu"
torch_dtype = torch.float16
tokenizer = AutoTokenizer.from_pretrained(model_id)
input_text = "Discuss the history and evolution of artificial intelligence in 80 words"
text_generator = pipeline(
"text-generation",
model=model_id,
tokenizer=tokenizer,
return_tensors=True,
device=device,
torch_dtype = torch_dtype,
)
# Inference benchmarking
output = text_generator(
input_text,
max_new_tokens=100,
temperature=0.1,
)
print(tokenizer.decode(output[0]["generated_token_ids"]))
num_tokens = len(output[0]["generated_token_ids"])
```
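For reference, a minimal variant of the same pipeline call that does run on CPU (as noted above, FP32 and BF16 work there), assuming bfloat16 precision is acceptable:
```
from transformers import pipeline
import torch

# bfloat16 (or float32) has CPU kernels, unlike float16
text_generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-2-7b-chat-hf",
    device="cpu",
    torch_dtype=torch.bfloat16,
)
output = text_generator(
    "Discuss the history and evolution of artificial intelligence in 80 words",
    max_new_tokens=100,
)
print(output[0]["generated_text"])
```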
### Expected behavior
Is FP16 supported by transformers on CPU ? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27769/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27769/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27768 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27768/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27768/comments | https://api.github.com/repos/huggingface/transformers/issues/27768/events | https://github.com/huggingface/transformers/pull/27768 | 2,018,329,180 | PR_kwDOCUB6oc5gwxst | 27,768 | [WIP] Uniformize processors in text+image multimodal models. | {
"login": "molbap",
"id": 39954772,
"node_id": "MDQ6VXNlcjM5OTU0Nzcy",
"avatar_url": "https://avatars.githubusercontent.com/u/39954772?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/molbap",
"html_url": "https://github.com/molbap",
"followers_url": "https://api.github.com/users/molbap/followers",
"following_url": "https://api.github.com/users/molbap/following{/other_user}",
"gists_url": "https://api.github.com/users/molbap/gists{/gist_id}",
"starred_url": "https://api.github.com/users/molbap/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/molbap/subscriptions",
"organizations_url": "https://api.github.com/users/molbap/orgs",
"repos_url": "https://api.github.com/users/molbap/repos",
"events_url": "https://api.github.com/users/molbap/events{/privacy}",
"received_events_url": "https://api.github.com/users/molbap/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | open | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Still being worked on but a longer-term project; putting the `WIP` label so that the bot doesn't close it."
] | 1,701 | 1,704 | null | CONTRIBUTOR | null | # What does this PR do?
This PR is a work in progress aiming at uniformizing all text-image multimodal processors. Ideally, leveraging `AutoProcessor(...)` or an equivalent for every model would be the best.
The processor is one of the most fundamental blocks of transformers, and modifying it can only be done with careful deprecation cycles. It is, however, the opportunity to _enforce a standard_, design-wise, for future processing utilities and down-the-line pipeline integrations.
For instance, align currently has the `__call__` method `def __call__(self, text=None, images=None, padding=\"max_length\", max_length=64, return_tensors=None, **kwargs)`
altclip has `__call__(self, text=None, images=None, return_tensors=None, **kwargs)`
blip has
```python
def __call__(
self,
images: ImageInput = None,
text: Union[TextInput, PreTokenizedInput, List[TextInput], List[PreTokenizedInput]] = None,
add_special_tokens: bool = True,
padding: Union[bool, str, PaddingStrategy] = False,
truncation: Union[bool, str, TruncationStrategy] = None,
max_length: Optional[int] = None,
stride: int = 0,
pad_to_multiple_of: Optional[int] = None,
return_attention_mask: Optional[bool] = None,
return_overflowing_tokens: bool = False,
return_special_tokens_mask: bool = False,
return_offsets_mapping: bool = False,
return_token_type_ids: bool = False,
return_length: bool = False,
verbose: bool = True,
return_tensors: Optional[Union[str, TensorType]] = None,
**kwargs,
) -> BatchEncoding:
```
And so on; more recently, for instance, Kosmos-2 has
```python
def __call__(
self,
images: ImageInput = None,
text: Union[TextInput, List[TextInput]] = None,
bboxes: BboxInput = None,
num_image_tokens: Optional[int] = 64,
first_image_token_id: Optional[int] = None,
add_special_tokens: bool = True,
add_eos_token: bool = False,
padding: Union[bool, str, PaddingStrategy] = False,
truncation: Union[bool, str, TruncationStrategy] = None,
max_length: Optional[int] = None,
pad_to_multiple_of: Optional[int] = None,
return_attention_mask: Optional[bool] = None,
return_length: bool = False,
verbose: bool = True,
return_tensors: Optional[Union[str, TensorType]] = None,
**kwargs,
) -> BatchFeature:
```
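Purely as an illustration of the direction (a hypothetical shape, not a proposed final design), a uniform entry point could look something like:
```python
def __call__(
    self,
    text=None,
    images=None,
    return_tensors=None,
    **kwargs,  # model-specific tokenizer / image-processor options, forwarded explicitly
) -> BatchFeature:
    ...
```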
Currently, there are 30 text + image models that have a dedicated `processing_<model>` file. All should be reviewed and made pipeline-compatible. All of them have to be checked, modified or wrapped with a common class.
- [ ] align
- [ ] altclip
- [ ] blip
- [ ] blip_2
- [ ] bridgetower
- [ ] chinese_clip
- [ ] clipseg
- [ ] clip
- [ ] donut
- [ ] flava
- [ ] fuyu
- [ ] git
- [ ] idefics
- [ ] instructblip
- [ ] kosmos2
- [ ] layoutlmv2
- [ ] layoutlmv3
- [ ] layoutxlm
- [ ] mgp_str
- [ ] nougat
- [ ] oneformer
- [ ] owlv2
- [ ] owlvit
- [ ] perceiver
- [ ] pix2struct
- [ ] trocr
- [ ] tvp
- [ ] vilt
- [ ] vision_text_dual_encoder
- [ ] x_clip
Related works:
- See the insightful discussion in this PR https://github.com/huggingface/transformers/pull/26885 about _invariants_ and their importance.
- @NielsRogge has started working on adding new processor tests in a separate PR as well. https://github.com/huggingface/transformers/pull/27720. Please follow both as tests will enforce signatures.
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27768/reactions",
"total_count": 4,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 4,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27768/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27768",
"html_url": "https://github.com/huggingface/transformers/pull/27768",
"diff_url": "https://github.com/huggingface/transformers/pull/27768.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27768.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27767 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27767/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27767/comments | https://api.github.com/repos/huggingface/transformers/issues/27767/events | https://github.com/huggingface/transformers/pull/27767 | 2,018,142,004 | PR_kwDOCUB6oc5gwIj3 | 27,767 | Remove `check_runner_status.yml` | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,701 | 1,701 | 1,701 | COLLABORATOR | null | # What does this PR do?
Since our CI has been migrated to VMs that are spun up dynamically, this check for runner status no longer makes sense. This PR removes it. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27767/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27767/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27767",
"html_url": "https://github.com/huggingface/transformers/pull/27767",
"diff_url": "https://github.com/huggingface/transformers/pull/27767.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27767.patch",
"merged_at": 1701335845000
} |
https://api.github.com/repos/huggingface/transformers/issues/27766 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27766/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27766/comments | https://api.github.com/repos/huggingface/transformers/issues/27766/events | https://github.com/huggingface/transformers/issues/27766 | 2,017,948,228 | I_kwDOCUB6oc54R3JE | 27,766 | Wrong image processing behavior of Fuyu | {
"login": "SeungyounShin",
"id": 20262536,
"node_id": "MDQ6VXNlcjIwMjYyNTM2",
"avatar_url": "https://avatars.githubusercontent.com/u/20262536?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SeungyounShin",
"html_url": "https://github.com/SeungyounShin",
"followers_url": "https://api.github.com/users/SeungyounShin/followers",
"following_url": "https://api.github.com/users/SeungyounShin/following{/other_user}",
"gists_url": "https://api.github.com/users/SeungyounShin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SeungyounShin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SeungyounShin/subscriptions",
"organizations_url": "https://api.github.com/users/SeungyounShin/orgs",
"repos_url": "https://api.github.com/users/SeungyounShin/repos",
"events_url": "https://api.github.com/users/SeungyounShin/events{/privacy}",
"received_events_url": "https://api.github.com/users/SeungyounShin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"close : I believe the observed behavior is likely expected and necessary for batch processing.\""
] | 1,701 | 1,701 | 1,701 | NONE | null | ### System Info
At https://github.com/huggingface/transformers/blob/083e36923a19650fa264c4173db2f63ab124bb27/src/transformers/models/fuyu/processing_fuyu.py#L539
image processing should accept size variables; if it does not, it will pad the image to this extent.
<img width="1139" alt="image" src="https://github.com/huggingface/transformers/assets/20262536/695f703f-7256-4716-b4a9-1083d166e0ab">
And while Fuyu is trained to accept variable resolutions, this does not seem to be the intended behavior, I guess.
### Who can help?
text models: @ArthurZucker and @younesbelkada
vision models: @amyeroberts
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import FuyuProcessor, FuyuForCausalLM
from PIL import Image
import requests
import numpy as np
processor = FuyuProcessor.from_pretrained("adept/fuyu-8b", cache_dir = ".cache")
model = FuyuForCausalLM.from_pretrained("adept/fuyu-8b", cache_dir = ".cache").to("cuda:0")
# prepare inputs for the model
text_prompt = "generate coco-style prompt\n"
url1 = "https://huggingface.co/adept/fuyu-8b/resolve/main/bus.png"
image1 = Image.open(requests.get(url1, stream=True).raw)
inputs = processor(text=text_prompt, images=image1, return_tensors="pt").to("cuda:0")  # use image1 defined above; `image` was undefined
print(inputs["input_ids"].shape)
# autoregressively generate text
generation_output = model.generate(**inputs, max_new_tokens=128)
prompt_len = inputs["input_ids"].shape[-1]
generation_text = processor.decode(generation_output[0][prompt_len:], skip_special_tokens=True)
print(generation_text)
```
### Expected behavior
Expected to be not padded to that extent. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27766/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27766/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27765 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27765/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27765/comments | https://api.github.com/repos/huggingface/transformers/issues/27765/events | https://github.com/huggingface/transformers/issues/27765 | 2,017,470,097 | I_kwDOCUB6oc54QCaR | 27,765 | The Phi 1.5 tokenizer is giving different tokens for same word | {
"login": "bnicholl",
"id": 26211830,
"node_id": "MDQ6VXNlcjI2MjExODMw",
"avatar_url": "https://avatars.githubusercontent.com/u/26211830?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bnicholl",
"html_url": "https://github.com/bnicholl",
"followers_url": "https://api.github.com/users/bnicholl/followers",
"following_url": "https://api.github.com/users/bnicholl/following{/other_user}",
"gists_url": "https://api.github.com/users/bnicholl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bnicholl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bnicholl/subscriptions",
"organizations_url": "https://api.github.com/users/bnicholl/orgs",
"repos_url": "https://api.github.com/users/bnicholl/repos",
"events_url": "https://api.github.com/users/bnicholl/events{/privacy}",
"received_events_url": "https://api.github.com/users/bnicholl/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The phi-1_5 tokenizer takes the space character in between into account and produces a different token for ` dog`, in this case it is `3290`. \r\n\r\nIf you use \r\n1) encoded_inputs = tokenizer([\"dog,dog\"])\r\n2) encoded_inputs = tokenizer([\"dogdog\"])\r\n\r\nyou will get \r\n1) `output: {'input_ids': [[9703, 11, 9703]], 'attention_mask': [[1, 1, 1]]}`\r\n2) `output: {'input_ids': [[9703, 9703]], 'attention_mask': [[1, 1]]}`\r\nrespectively. \r\n\r\nI am not 100% sure if this was intended. I will look into that.",
"I have found this \r\n\r\n> the BPE vocabulary of GPT-2 is extended by special tokens representing\r\n> repeating tokens of tabs and white spaces.\r\n\r\nin the [paper](https://arxiv.org/pdf/2203.13474.pdf) where the tokenizer is borrowed from. ",
"> the BPE vocabulary of GPT-2 is extended by special tokens representing\r\n> repeating tokens of tabs and white spaces.\r\n\r\nThis seems to be for white spaces only though.\r\n\r\nUpon further inspection\r\n`tokenizer.convert_ids_to_tokens(9703)` gives the word dog\r\nand \r\n`tokenizer.convert_ids_to_tokens(3290)` is giving the word 'Ġdog'\r\n\r\nalso the sentence \r\n`encoded_inputs = tokenizer([\"moose is a dog a good dog\"])`\r\ngives output \r\n`output: {'input_ids': [[5908, 577, 318, 257, 3290, 257, 922, 3290]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1]]}`\r\nSo it seems \"dog\"(indice 9703) is the token if it is the first word in a sequence(not sentence) separated by a space, but 'Ġdog'(indice 3290) is used if it is any word other than the first word in a sequence. This also seems to be true for other words as well. \r\n",
"I'll look at the paper and see if I can figure out if this is expected. \r\n\r\nIt's important to note, I found this potential discrepancy because I'm getting very poor results when fine tuning phi 1.5. Not sure if this would be the reason or not. ",
"Hi there 👋 This is intended behaviour for the tokenizer: i.e., to treat the first word a bit differently. Check out [this answer](https://discuss.huggingface.co/t/bpe-tokenizers-and-spaces-before-words/475/2) by @thomwolf for a full explanation why (TLDR: it performs better this way).\r\n\r\nYou can see this setting in the [tokenizer_config.json](https://huggingface.co/microsoft/phi-1_5/blob/main/tokenizer_config.json#L2):\r\n```json\r\n{\r\n \"add_prefix_space\": false,\r\n ...\r\n}\r\n```\r\n\r\nIf you really want to override this, you can set `add_prefix_space` to `True` as follows:\r\n```py\r\nfrom transformers import AutoTokenizer\r\ntokenizer = AutoTokenizer.from_pretrained(\"microsoft/phi-1_5\", add_prefix_space=True)\r\nprint(tokenizer(\"dog\")) # {'input_ids': [3290], 'attention_mask': [1]}\r\nprint(tokenizer(\"dog dog\")) # {'input_ids': [3290, 3290], 'attention_mask': [1, 1]}\r\n```\r\n\r\nAs you can see, the same token is now used for `\"dog\"` in both cases (the one corresponding to `\" dog\"`; with a prefix space).",
"@xenova thanks for the response and the link. I'll close this."
] | 1,701 | 1,701 | 1,701 | NONE | null | ### System Info
The Phi 1.5 tokenizer is giving different tokens for the same word. For example, tokenizing "dog" twice will give two different tokens.
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1_5", trust_remote_code=True, torch_dtype="auto")
```
```
# example word to tokenize
encoded_inputs = tokenizer(["dog"])
```
`output: {'input_ids': [[9703]], 'attention_mask': [[1]]}`
Now when I use a sentence with "dog" twice, the tokens for "dog" are different. Below is an example that simply passes "dog" twice to the tokenizer:
```
encoded_inputs = tokenizer(["dog dog"])
output: {'input_ids': [[9703, 3290]], 'attention_mask': [[1, 1]]}
```
### Expected behavior
The output for
`encoded_inputs = tokenizer(["dog dog"])`
should be
`output: {'input_ids': [[9703, 9703]], 'attention_mask': [[1, 1]]}`
assuming the correct token index for "dog" is 9703. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27765/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27765/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27764 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27764/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27764/comments | https://api.github.com/repos/huggingface/transformers/issues/27764/events | https://github.com/huggingface/transformers/pull/27764 | 2,017,337,480 | PR_kwDOCUB6oc5gtZ7h | 27,764 | Better error message for bitsandbytes import | {
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,701 | 1,701 | 1,701 | MEMBER | null | # What does this PR do ?
This PR adds more context to the `ImportError` that is raised when `is_bitsandbytes_available()` is False. In this [PR](https://github.com/huggingface/transformers/pull/24995), we introduced a GPU check. Hence, `is_bitsandbytes_available()` can be `False` even though `bitsandbytes` is installed.
Solves issue #26165
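As a rough illustration of the kind of message this adds (not the exact diff), the goal is to distinguish "package not installed" from "installed but no GPU visible":
```python
# illustrative only
if load_in_8bit or load_in_4bit:
    if not is_bitsandbytes_available():
        raise ImportError(
            "Using `load_in_8bit=True` requires Accelerate and bitsandbytes, and a GPU must be visible: "
            "`is_bitsandbytes_available()` also returns False when bitsandbytes is installed "
            "but no CUDA device is detected."
        )
```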
cc @NielsRogge as this happened to you too | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27764/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27764/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27764",
"html_url": "https://github.com/huggingface/transformers/pull/27764",
"diff_url": "https://github.com/huggingface/transformers/pull/27764.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27764.patch",
"merged_at": 1701449954000
} |
https://api.github.com/repos/huggingface/transformers/issues/27763 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27763/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27763/comments | https://api.github.com/repos/huggingface/transformers/issues/27763/events | https://github.com/huggingface/transformers/pull/27763 | 2,017,166,960 | PR_kwDOCUB6oc5gs0ZH | 27,763 | uses dvclive_test mode in examples/pytorch/test_accelerate_examples.py | {
"login": "dberenbaum",
"id": 2308172,
"node_id": "MDQ6VXNlcjIzMDgxNzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2308172?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dberenbaum",
"html_url": "https://github.com/dberenbaum",
"followers_url": "https://api.github.com/users/dberenbaum/followers",
"following_url": "https://api.github.com/users/dberenbaum/following{/other_user}",
"gists_url": "https://api.github.com/users/dberenbaum/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dberenbaum/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dberenbaum/subscriptions",
"organizations_url": "https://api.github.com/users/dberenbaum/orgs",
"repos_url": "https://api.github.com/users/dberenbaum/repos",
"events_url": "https://api.github.com/users/dberenbaum/events{/privacy}",
"received_events_url": "https://api.github.com/users/dberenbaum/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @ArthurZucker this will fix a ton of pesky test failures in the examples :) "
] | 1,701 | 1,701 | 1,701 | CONTRIBUTOR | null | # What does this PR do?
Fixes tests in examples/pytorch/test_accelerate_examples.py to use test mode for dvclive tracking, so that it does not try to do git operations in tests/CI.
Fixes comments [here](https://github.com/huggingface/transformers/pull/27352#issuecomment-1819131456) and [here](https://github.com/huggingface/accelerate/pull/2139#discussion_r1408383251).
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
Could you take a look @muellerzr? I didn't find anywhere else that needs a change, but you likely know better whether I missed any tests.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27763/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27763/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27763",
"html_url": "https://github.com/huggingface/transformers/pull/27763",
"diff_url": "https://github.com/huggingface/transformers/pull/27763.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27763.patch",
"merged_at": 1701352323000
} |
https://api.github.com/repos/huggingface/transformers/issues/27762 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27762/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27762/comments | https://api.github.com/repos/huggingface/transformers/issues/27762/events | https://github.com/huggingface/transformers/pull/27762 | 2,017,065,011 | PR_kwDOCUB6oc5gseVi | 27,762 | Fix `low_cpu_mem_usage` Flag Conflict with DeepSpeed Zero 3 in `from_pretrained` for Models with `keep_in_fp32_modules`" | {
"login": "kotarotanahashi",
"id": 1223571,
"node_id": "MDQ6VXNlcjEyMjM1NzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1223571?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kotarotanahashi",
"html_url": "https://github.com/kotarotanahashi",
"followers_url": "https://api.github.com/users/kotarotanahashi/followers",
"following_url": "https://api.github.com/users/kotarotanahashi/following{/other_user}",
"gists_url": "https://api.github.com/users/kotarotanahashi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kotarotanahashi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kotarotanahashi/subscriptions",
"organizations_url": "https://api.github.com/users/kotarotanahashi/orgs",
"repos_url": "https://api.github.com/users/kotarotanahashi/repos",
"events_url": "https://api.github.com/users/kotarotanahashi/events{/privacy}",
"received_events_url": "https://api.github.com/users/kotarotanahashi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,701 | 1,702 | 1,702 | CONTRIBUTOR | null | ## Summary
This pull request addresses a compatibility issue in the `from_pretrained` method. Specifically, when using models whose `keep_in_fp32_modules` is not `None` (e.g., BLIP-2, T5) together with DeepSpeed ZeRO-3, an unexpected error occurs because the `low_cpu_mem_usage` flag is handled improperly.
## Problem Description
Currently, in the `from_pretrained` method, the `low_cpu_mem_usage` flag is set to `True` whenever `use_keep_in_fp32_modules` is `True` and Accelerate is available. This logic does not account for the incompatibility with DeepSpeed ZeRO-3: when ZeRO-3 is enabled, forcing `low_cpu_mem_usage` to `True` leads to unexpected errors.
## Proposed Change
I propose modifying the condition to also check whether DeepSpeed ZeRO-3 is enabled: `low_cpu_mem_usage` should be set to `True` only if Accelerate is available and DeepSpeed ZeRO-3 is not enabled. The revised code snippet is:
```python
# Skip the low-memory (meta-device) loading path when DeepSpeed ZeRO-3 manages parameter initialization itself.
if is_accelerate_available() and not is_deepspeed_zero3_enabled():
    low_cpu_mem_usage = True
```
This change prevents the `low_cpu_mem_usage` flag from being incorrectly set in scenarios where DeepSpeed Zero 3 is in use, thereby avoiding the aforementioned issue.
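For context, a rough sketch of the kind of setup that hits the problem (the config path and model choice are assumptions, not taken from this PR):

```python
# Illustrative: with a ZeRO-3 DeepSpeed config active, loading a model that defines
# `_keep_in_fp32_modules` (e.g. T5) previously forced `low_cpu_mem_usage=True` and failed.
from transformers import AutoModelForSeq2SeqLM, TrainingArguments

# Creating TrainingArguments first is what activates the DeepSpeed ZeRO-3 context.
args = TrainingArguments(output_dir="out", deepspeed="ds_zero3_config.json")  # assumed ZeRO stage-3 config
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
```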
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27762/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27762/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27762",
"html_url": "https://github.com/huggingface/transformers/pull/27762",
"diff_url": "https://github.com/huggingface/transformers/pull/27762.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27762.patch",
"merged_at": 1702659822000
} |
https://api.github.com/repos/huggingface/transformers/issues/27761 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27761/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27761/comments | https://api.github.com/repos/huggingface/transformers/issues/27761/events | https://github.com/huggingface/transformers/pull/27761 | 2,016,958,518 | PR_kwDOCUB6oc5gsG7V | 27,761 | Save `Processor` | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27761). All of your documentation changes will be reflected on that endpoint.",
"Thanks for giving your opinion 👍 Ready for the review.\r\n\r\nThere is no test specific for this new feature so far, as we don't have a common processor test file yet. But the current set of tests can detect issues if any.\r\n\r\nThere is #27720 however.",
"@ydshieh Thanks for all the work in this PR! \r\n\r\nI'm wondering if we should hold off on saving processor configs out until we refactor the processors to standardise their inputs and outputs. Otherwise we're going to introduce config files and values which we'll have to maintain backward compatibility with cc @molbap \r\n\r\nWould not merging be blocking anything else? ",
"Good point! It makes a lot of sense, thank you.\r\n\r\n#27718 is waiting this PR I believe.\r\n\r\nBut let me double check:\r\n\r\n> refactor the processors to standardise their inputs and outputs\r\n\r\nthis sounds to me that we are talking about the arguments in a process's `call` method, in particular `text`, `images` etc (the most important ones). However, saving a config file for a processor only saves `__init__` argument, which is not about input/outpu of processor.\r\n\r\nDo I miss anything in your message?",
"@ydshieh Nope - you're completely right - I forgot it's just the processing classes passed to the processor init. \r\n\r\nPart of the future plan is to potentially have modality specific processors e.g. TextImageProcessor to make it more explicit in e.g. pipelines how inputs and outputs should be handled. I'm guessing this won't be an issue here as the e.g. TextImageProcessor acts more like the Auto classes and can use model specific configs to construct themselves?",
"If we ever introduce `TextImageProcessor` and/or `TextAudioProcessor` etc, I think they are not exactly same as auto classes but instead an abstraction one level above the explicit classes.\r\n\r\n- auto: allow to load any checkpoint: `AutoProcessor.from_pretrained(\".....\")` should work\r\n- explicit class: only allow to load a checkpoint of a specific model type: `CLIPProcessor.from_pretrained(\".....\")` should work for checkpoint of a CLIP processor but not others (well, we allow it by give warnings)\r\n- `TextImageProcessor`: they should not to be used as `TextImageProcessor.from_pretrained()`, just like we don't use `ProcessorMixin.from_pretrained()`\r\n\r\n------------------\r\n\r\nBut back to the question\r\n\r\n> I'm guessing this won't be an issue here and can use model specific configs to construct themselves\r\n\r\nIf we ever allow to use `TextImageProcessor.from_pretrained()` (which I am against) and behave like `AutoProcessor`, then under the hood it still try to find the actual type (class) and use that type + the config file to construct the object of that type.\r\n\r\n**So I don't see any issue regarding such `TextImageProcessor` with the change in this PR 🤗** ",
"Well, after a 2nd though, I think it's better to add the processor common testing for this PR to avoid the situation where we won't be able to fix in the future once the new processor config file being saved have some issues.\r\n\r\nchange this to draft for now.",
"The PR is ready for review. Regarding [the comment](https://github.com/huggingface/transformers/pull/27761#discussion_r1448533919), see [my response](https://github.com/huggingface/transformers/pull/27761#discussion_r1448552231).\r\n\r\nFor [we only call the hub once](https://github.com/huggingface/transformers/pull/27761#discussion_r1448620683), as we don't have such tests for image processor/feature extractor/processor currently, and it's not critical, I think it's better to deal with them in a follow up PR.\r\n\r\n(In fact, I don't know if it is true that we can have only one call to the Hub for processor - considering processor have 2 components)",
"I guess we are all good and I can merge? ",
"Still think we can simplify the way you handle kwargs, but alright otherwise"
] | 1,701 | 1,705 | 1,705 | COLLABORATOR | null | # What does this PR do?
Save `Processor`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27761/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27761/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27761",
"html_url": "https://github.com/huggingface/transformers/pull/27761",
"diff_url": "https://github.com/huggingface/transformers/pull/27761.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27761.patch",
"merged_at": 1705573306000
} |
https://api.github.com/repos/huggingface/transformers/issues/27760 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27760/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27760/comments | https://api.github.com/repos/huggingface/transformers/issues/27760/events | https://github.com/huggingface/transformers/issues/27760 | 2,016,901,176 | I_kwDOCUB6oc54N3g4 | 27,760 | ver 4.35.2 transformers.Trainer breaks CUDA AMP support | {
"login": "haixpham",
"id": 32718796,
"node_id": "MDQ6VXNlcjMyNzE4Nzk2",
"avatar_url": "https://avatars.githubusercontent.com/u/32718796?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/haixpham",
"html_url": "https://github.com/haixpham",
"followers_url": "https://api.github.com/users/haixpham/followers",
"following_url": "https://api.github.com/users/haixpham/following{/other_user}",
"gists_url": "https://api.github.com/users/haixpham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/haixpham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/haixpham/subscriptions",
"organizations_url": "https://api.github.com/users/haixpham/orgs",
"repos_url": "https://api.github.com/users/haixpham/repos",
"events_url": "https://api.github.com/users/haixpham/events{/privacy}",
"received_events_url": "https://api.github.com/users/haixpham/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
] | [
"cc @muellerzr ",
"Yep I see what happened with https://github.com/huggingface/transformers/pull/25702/files, it removed CUDA amp entirely for just CPU AMP. I'll work on this",
"cc @statelesshz ",
"\r\nIf I'm not mistaken, the native AMP (Automatic Mixed Precision) has now been entirely migrated to Accelerate, see: https://github.com/huggingface/transformers/blob/fe41647afc98c7b5ce6c93a5f7cf57314bcd56f6/src/transformers/training_args.py#L1555-L1564",
"I wonder if it’s not picking it up in time as an env variable potentially. ",
"@muellerzr is this issue resolved?"
] | 1,701 | 1,706 | null | NONE | null | ### System Info
torch 2.1.1
transformers 4.35.2
CUDA 12.1
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
In the latest version of transformers (4.35.2), `Trainer.__init__()` no longer has a `use_cuda_amp` option, and consequently `Trainer.autocast_smart_context_manager()` never invokes `torch.cuda.amp.autocast()`, which leads to a runtime error during the backward pass.
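A minimal setup that exercises this path might look like the following (tiny model and dummy data are for illustration only, not the actual script used):

```python
# Illustrative reproduction: fp16 training on CUDA exercising the mixed-precision setup shown below.
import torch
from torch.utils.data import Dataset
from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments


class DummyDataset(Dataset):
    def __len__(self):
        return 8

    def __getitem__(self, idx):
        return {"input_ids": torch.tensor([101, 2023, 102]), "labels": torch.tensor(0)}


model = AutoModelForSequenceClassification.from_pretrained("prajjwal1/bert-tiny", num_labels=2)
args = TrainingArguments(output_dir="out", fp16=True, per_device_train_batch_size=2, num_train_epochs=1)
Trainer(model=model, args=args, train_dataset=DummyDataset()).train()
```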
This is the `__init__()` part of version 4.35.2, which enables autocast for neither fp16 nor bf16 on CUDA:
```python
# Mixed precision setup
self.use_apex = False
self.use_cpu_amp = False

# Mixed precision setup for SageMaker Model Parallel
if is_sagemaker_mp_enabled():
    # BF16 + model parallelism in SageMaker: currently not supported, raise an error
    if args.bf16:
        raise ValueError("SageMaker Model Parallelism does not support BF16 yet. Please use FP16 instead ")

    if IS_SAGEMAKER_MP_POST_1_10:
        # When there's mismatch between SMP config and trainer argument, use SMP config as truth
        if args.fp16 != smp.state.cfg.fp16:
            logger.warning(
                f"FP16 provided in SM_HP_MP_PARAMETERS is {smp.state.cfg.fp16}, "
                f"but FP16 provided in trainer argument is {args.fp16}, "
                f"setting to {smp.state.cfg.fp16}"
            )
            args.fp16 = smp.state.cfg.fp16
    else:
        # smp < 1.10 does not support fp16 in trainer.
        if hasattr(smp.state.cfg, "fp16"):
            logger.warning(
                f"FP16 provided in SM_HP_MP_PARAMETERS is {smp.state.cfg.fp16}, "
                "but SageMaker Model Parallelism < 1.10 does not support FP16 in trainer."
            )

if (args.fp16 or args.bf16) and args.half_precision_backend == "auto":
    if args.device == torch.device("cpu"):
        if args.fp16:
            raise ValueError("Tried to use `fp16` but it is not supported on cpu")
        else:
            args.half_precision_backend = "cpu_amp"
    logger.info(f"Using {args.half_precision_backend} half precision backend")

if (args.fp16 or args.bf16) and not (self.is_deepspeed_enabled or is_sagemaker_mp_enabled()):
    # deepspeed and SageMaker Model Parallel manage their own half precision
    if args.half_precision_backend == "cpu_amp":
        self.use_cpu_amp = True
        self.amp_dtype = torch.bfloat16
    elif args.half_precision_backend == "apex":
        if not is_apex_available():
            raise ImportError(
                "Using FP16 with APEX but APEX is not installed, please refer to"
                " https://www.github.com/nvidia/apex."
            )
        self.use_apex = True
```
### Expected behavior
the same part of version 4.28.1 (which I know works):
```python
# Mixed precision setup
self.use_apex = False
self.use_cuda_amp = False
self.use_cpu_amp = False

# Mixed precision setup for SageMaker Model Parallel
if is_sagemaker_mp_enabled():
    # BF16 + model parallelism in SageMaker: currently not supported, raise an error
    if args.bf16:
        raise ValueError("SageMaker Model Parallelism does not support BF16 yet. Please use FP16 instead ")

    if IS_SAGEMAKER_MP_POST_1_10:
        # When there's mismatch between SMP config and trainer argument, use SMP config as truth
        if args.fp16 != smp.state.cfg.fp16:
            logger.warning(
                f"FP16 provided in SM_HP_MP_PARAMETERS is {smp.state.cfg.fp16},"
                f"but FP16 provided in trainer argument is {args.fp16},"
                f"setting to {smp.state.cfg.fp16}"
            )
            args.fp16 = smp.state.cfg.fp16
    else:
        # smp < 1.10 does not support fp16 in trainer.
        if hasattr(smp.state.cfg, "fp16"):
            logger.warning(
                f"FP16 provided in SM_HP_MP_PARAMETERS is {smp.state.cfg.fp16}, "
                "but SageMaker Model Parallelism < 1.10 does not support FP16 in trainer."
            )

if args.fp16 or args.bf16:
    if args.half_precision_backend == "auto":
        if args.device == torch.device("cpu"):
            if args.fp16:
                raise ValueError("Tried to use `fp16` but it is not supported on cpu")
            elif _is_native_cpu_amp_available:
                args.half_precision_backend = "cpu_amp"
            else:
                raise ValueError("Tried to use cpu amp but native cpu amp is not available")
        else:
            args.half_precision_backend = "cuda_amp"
    logger.info(f"Using {args.half_precision_backend} half precision backend")

# the following part is no longer needed because of the switch to accelerator, but add it just in case
self.do_grad_scaling = False
if (args.fp16 or args.bf16) and not (args.deepspeed or is_sagemaker_mp_enabled()):
    # deepspeed and SageMaker Model Parallel manage their own half precision
    if args.half_precision_backend == "cuda_amp":
        self.use_cuda_amp = True
        self.amp_dtype = torch.float16 if args.fp16 else torch.bfloat16
        # bf16 does not need grad scaling
        self.do_grad_scaling = self.amp_dtype == torch.float16
        if self.do_grad_scaling:
            if self.sharded_ddp is not None:
                self.scaler = ShardedGradScaler()
            elif self.fsdp is not None:
                from torch.distributed.fsdp.sharded_grad_scaler import (
                    ShardedGradScaler as FSDPShardedGradScaler,
                )

                self.scaler = FSDPShardedGradScaler()
            elif is_torch_tpu_available():
                from torch_xla.amp import GradScaler

                self.scaler = GradScaler()
            else:
                self.scaler = torch.cuda.amp.GradScaler()
    elif args.half_precision_backend == "cpu_amp":
        self.use_cpu_amp = True
        self.amp_dtype = torch.bfloat16
    else:
        if not is_apex_available():
            raise ImportError(
                "Using FP16 with APEX but APEX is not installed, please refer to"
                " https://www.github.com/nvidia/apex."
            )
        self.use_apex = True
```
My workaround is to subclass `Trainer`, add the 4.28.1 code above after calling `super().__init__()`, and override `autocast_smart_context_manager()` with the implementation from version 4.34.1. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27760/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27760/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27759 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27759/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27759/comments | https://api.github.com/repos/huggingface/transformers/issues/27759/events | https://github.com/huggingface/transformers/pull/27759 | 2,016,777,065 | PR_kwDOCUB6oc5grfAf | 27,759 | Add the CRATE (Coding RATE) backbone model | {
"login": "BiEchi",
"id": 60613238,
"node_id": "MDQ6VXNlcjYwNjEzMjM4",
"avatar_url": "https://avatars.githubusercontent.com/u/60613238?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BiEchi",
"html_url": "https://github.com/BiEchi",
"followers_url": "https://api.github.com/users/BiEchi/followers",
"following_url": "https://api.github.com/users/BiEchi/following{/other_user}",
"gists_url": "https://api.github.com/users/BiEchi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BiEchi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BiEchi/subscriptions",
"organizations_url": "https://api.github.com/users/BiEchi/orgs",
"repos_url": "https://api.github.com/users/BiEchi/repos",
"events_url": "https://api.github.com/users/BiEchi/events{/privacy}",
"received_events_url": "https://api.github.com/users/BiEchi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | open | false | null | [] | [
"Thanks a lot for reviewing this PR. As a reminder, the causal model is not yet supported, but I left the placeholder there. ",
"Thanks for this pointer @ArthurZucker . As the code is very similar to RoBERTa, I'm using `transformers-cli add-new-model-like` for initializing the code, which leads to these dependencies that are not directly convertable to a custom model:\r\n<img width=\"739\" alt=\"image\" src=\"https://github.com/huggingface/transformers/assets/60613238/4384d41b-7468-4bc0-b80a-f9b823d68f22\">\r\nWould you like to suggest on how to solve this?\r\n",
"the `add-new-model-like` is made for an addition in `transformers`. Would be nice of use to have a `add-new-custom-model` that creates what's needed! Adding this to my todo list 😉 \r\nThe tutorial otherwise should help on adding the model on the hub rather than on the `transformers` library for now! 🤗 ",
"Thanks a lot for offering to help on this @ArthurZucker ! Just to keep us on the same page, our code is already runnable if we directly use `pip install -e .` on our new library, so there shouldn't be challenge merging into the `transformers` library. We choose to directly develop the model by changing the library to avoid writing any other scripts except the examples like [run_glue.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/text-classification/run_glue.py).\r\nBy the way, we've already release the weights on the Hub at [Jackbai/crate-base](https://huggingface.co/JackBAI/crate-base), and we can directly load using `model.from_pretrained()`. If we have to develop a custom model first, we're pretty happy to proceed on this though (if you think this is necessary).",
"Okay! Thanks for explaining it looks like a good addition! Feel free to ping me if you need help integrating the model! 🤗 (on this PR and not as a custom model!) 🔥 \r\nfyi @amyeroberts ",
"Hi @ArthurZucker , we're currently developing a sibling model (CRATE-GPT). The model proposed above is CRATE-BERT. Do we upload it here or we give a separate PR?",
"A separate PR is better, but as I stated before the best way to share it at first is to use custom models! 🤗 I would recommend you to make sure you upload safetensors checkpoints, and that you fill the model card to make sure people who discover it get what it is about!",
"Also you can easily push to hub if you use the [PyTorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) class! 🤗 I can open a PR in your repo if you want? "
] | 1,701 | 1,706 | null | NONE | null | # What does this PR do?
Implements the new model CRATE introduced in the paper [White-Box Transformers via Sparse Rate Reduction: Compression Is All There Is?](https://arxiv.org/abs/2311.13110).
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
@ArthurZucker and @younesbelkada | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27759/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27759/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27759",
"html_url": "https://github.com/huggingface/transformers/pull/27759",
"diff_url": "https://github.com/huggingface/transformers/pull/27759.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27759.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27758 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27758/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27758/comments | https://api.github.com/repos/huggingface/transformers/issues/27758/events | https://github.com/huggingface/transformers/issues/27758 | 2,016,653,858 | I_kwDOCUB6oc54M7Ii | 27,758 | ZeroDivisionError when training on a single batch of data | {
"login": "tleyden",
"id": 296876,
"node_id": "MDQ6VXNlcjI5Njg3Ng==",
"avatar_url": "https://avatars.githubusercontent.com/u/296876?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tleyden",
"html_url": "https://github.com/tleyden",
"followers_url": "https://api.github.com/users/tleyden/followers",
"following_url": "https://api.github.com/users/tleyden/following{/other_user}",
"gists_url": "https://api.github.com/users/tleyden/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tleyden/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tleyden/subscriptions",
"organizations_url": "https://api.github.com/users/tleyden/orgs",
"repos_url": "https://api.github.com/users/tleyden/repos",
"events_url": "https://api.github.com/users/tleyden/events{/privacy}",
"received_events_url": "https://api.github.com/users/tleyden/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2392046359,
"node_id": "MDU6TGFiZWwyMzkyMDQ2MzU5",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Second%20Issue",
"name": "Good Second Issue",
"color": "dd935a",
"default": false,
"description": "Issues that are more difficult to do than \"Good First\" issues - give it a try if you want!"
}
] | open | false | null | [] | [
"Thank you @tleyden for raising this issue. As you already have the fix, it would be great if you could open a PR.",
"Hey @pacman100 happy to open an PR. \r\n\r\nThat was just a \"workaround experiment\" though. I think the right way to fix it might be to look at either of the following approaches:\r\n\r\n1. Adding a small constant to the denominator \r\n2. Making the global_step to be 1-based instead of 0-based\r\n3. Probably others .. I will do some research\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"@tleyden feel free to link your PR! 🤗 ",
"Hey @ArthurZucker thanks for re-opening, I do think this is worth fixing. Here's the hack I did to fix it:\r\n\r\nhttps://github.com/tleyden/transformers/commit/550ba492401dd6c1aabfaca978ebb145ac1b054b\r\n\r\nbut I think that's more of a workaround than an actual root-cause fix. If someone already more familiar with the codebase could give some guidance on the best approach to fix, I can put together a PR. ",
"@ArthurZucker @pacman100 PTAL at https://github.com/huggingface/transformers/pull/28756",
"Thank you for fixing @tleyden . I found another workaround for my case, where I am learning to use SFTTrainer with PEFT. \r\nIncreasing the epochs to value > 1, in my scenario I used epoch=2 in TrainingArguments also resolves the problem. ",
"@ParulGupta16 Thanks for posting, that is a good hint about how to fix the underlying bug. My PR in #28756 is more of a workaround. "
] | 1,701 | 1,706 | null | CONTRIBUTOR | null | ### System Info
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.35.2
- Platform: Linux-5.15.0-69-generic-x86_64-with-glibc2.31
- Python version: 3.10.8
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.1
- Accelerate version: 0.24.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: not that I'm aware, there's only a single GPU and single machine
### Who can help?
@muellerzr and @pacman100 (tagging for trainer)
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Overview
With very small datasets that end up being a single batch, the following code in [transformers/trainer.py#L1968-L1969](https://github.com/huggingface/transformers/blob/v4.35.2/src/transformers/trainer.py#L1968-L1969):
```
# add remaining tr_loss
self._total_loss_scalar += tr_loss.item()
train_loss = self._total_loss_scalar / self.state.global_step
```
will throw a `ZeroDivisionError`.
Since I can't always control the data that is being uploaded to the service I'm working on, this is problematic because users will receive a cryptic error that makes it appear that the service is broken.
### Reproduction
Train on a dataset with a single batch of data.
Logger info:
```
Currently training with a batch size of: 1
***** Running training *****
Num examples = 37
Num Epochs = 3
Instantaneous batch size per device = 1
Total train batch size (w. parallel, distributed & accumulation) = 32
Gradient Accumulation steps = 32
Total optimization steps = 3
Number of trainable parameters = 109,051,904
```
Error stack trace:
```
Traceback (most recent call last):
File "/opt/conda/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/opt/conda/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/opt/conda/lib/python3.10/site-packages/dalm/training/generator_only/trainer.py", line 301, in <module>
main()
File "/opt/conda/lib/python3.10/site-packages/dalm/training/generator_only/trainer.py", line 264, in main
train_generator(
File "/opt/conda/lib/python3.10/site-packages/dalm/training/generator_only/trainer.py", line 255, in train_generator
trainer.train()
File "/opt/conda/lib/python3.10/site-packages/trl/trainer/sft_trainer.py", line 280, in train
output = super().train(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/transformers/trainer.py", line 1555, in train
return inner_training_loop(
File "/opt/conda/lib/python3.10/site-packages/transformers/trainer.py", line 1969, in _inner_training_loop
train_loss = self._total_loss_scalar / self.state.global_step
ZeroDivisionError: float division by zero
```
### Expected behavior
Instead of throwing a cryptic `ZeroDivisionError`, it should at least return a more user-friendly error like "Training with a single batch of data is not supported. Try again with a larger dataset"
But it would be much better if it handled this more gracefully and approximated the loss, for example by adding a small constant to the denominator or making `global_step` 1-based instead of 0-based.
The following workaround in [transformers/trainer.py](https://github.com/huggingface/transformers/blob/v4.35.2/src/transformers/trainer.py#L1968-L1969) avoids the error.
Update:
```
# add remaining tr_loss
self._total_loss_scalar += tr_loss.item()
train_loss = self._total_loss_scalar / self.state.global_step
```
to:
```python
gstep = self.state.global_step
if gstep <= 0:
    gstep = 1
train_loss = self._total_loss_scalar / gstep
```
This avoids the error, though the loss value it reports is then inaccurate.
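As a sketch of the other option mentioned above (clamping the denominator rather than shifting the step counter), purely illustrative and meant for the same spot in `trainer.py`:

```python
# Illustrative fragment: clamp the denominator so a run that never increments
# global_step still reports a (rough) training loss instead of dividing by zero.
effective_steps = max(self.state.global_step, 1)
train_loss = self._total_loss_scalar / effective_steps
```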
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27758/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27758/timeline | reopened | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27757 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27757/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27757/comments | https://api.github.com/repos/huggingface/transformers/issues/27757/events | https://github.com/huggingface/transformers/pull/27757 | 2,016,631,054 | PR_kwDOCUB6oc5gq-ws | 27,757 | Generate: `GenerationConfig` throws an exception when `generate` args are passed | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27757). All of your documentation changes will be reflected on that endpoint.",
"@ArthurZucker I considered checking it against the signature of `generate`, to account for future changes. However, `.validate()` is called in every `generate` call, and `inspect.signature` is slow -- highly undesirable. \r\n\r\nI'd like to leave it like this for now, and consider adding a more robust (=slow) method in the future if we still have problems :)"
] | 1,701 | 1,701 | 1,701 | MEMBER | null | # What does this PR do?
Adds an informative exception that would have prevented #27704 🤗
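Roughly, the idea is for `GenerationConfig` validation to flag arguments that only `generate()` understands. A sketch of the concept (the argument list below is an assumption for illustration, not the PR's exact check):

```python
# Illustrative only.
GENERATE_ONLY_ARGS = {"logits_processor", "stopping_criteria", "prefix_allowed_tokens_fn", "synced_gpus"}


def reject_generate_only_args(extra_kwargs: dict) -> None:
    misplaced = sorted(k for k in extra_kwargs if k in GENERATE_ONLY_ARGS)
    if misplaced:
        raise ValueError(f"{misplaced} should be passed to `generate()`, not to `GenerationConfig`.")
```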
Fixes #27704 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27757/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27757/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27757",
"html_url": "https://github.com/huggingface/transformers/pull/27757",
"diff_url": "https://github.com/huggingface/transformers/pull/27757.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27757.patch",
"merged_at": 1701353791000
} |
https://api.github.com/repos/huggingface/transformers/issues/27756 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27756/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27756/comments | https://api.github.com/repos/huggingface/transformers/issues/27756/events | https://github.com/huggingface/transformers/issues/27756 | 2,016,383,672 | I_kwDOCUB6oc54L5K4 | 27,756 | BERT model is slow in Pytorch | {
"login": "akote123",
"id": 133775732,
"node_id": "U_kgDOB_lBdA",
"avatar_url": "https://avatars.githubusercontent.com/u/133775732?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/akote123",
"html_url": "https://github.com/akote123",
"followers_url": "https://api.github.com/users/akote123/followers",
"following_url": "https://api.github.com/users/akote123/following{/other_user}",
"gists_url": "https://api.github.com/users/akote123/gists{/gist_id}",
"starred_url": "https://api.github.com/users/akote123/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/akote123/subscriptions",
"organizations_url": "https://api.github.com/users/akote123/orgs",
"repos_url": "https://api.github.com/users/akote123/repos",
"events_url": "https://api.github.com/users/akote123/events{/privacy}",
"received_events_url": "https://api.github.com/users/akote123/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey 🤗 thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discuss.huggingface.co/) instead? I'm sure the community will be of help!\r\n\r\nThanks!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,701 | 1,704 | 1,704 | NONE | null | ### System Info
transformers 4.35.2
tensorflow 2.15.0
torch 2.1.1
- `transformers` version: 4.35.2
- Platform: Linux-6.2.0-1016-aws-aarch64-with-glibc2.35
- Python version: 3.11.0
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.0
- Accelerate version: 0.24.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.1 (False)
- Tensorflow version (GPU?): 2.15.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
TF code is as follows:
[tf_bert.txt](https://github.com/huggingface/transformers/files/13499939/tf_bert.txt)
PyTorch code is:
[pytorch_bert.txt](https://github.com/huggingface/transformers/files/13499940/pytorch_bert.txt)
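Since the scripts are attached rather than inlined, here is an illustrative sketch of the settings that typically matter for a fair PyTorch CPU benchmark (this is not the attached script; the thread count is an assumption to adjust to the machine):

```python
# Illustrative checklist: eval mode, no autograd bookkeeping, and an explicit thread count
# are the usual suspects when PyTorch CPU inference looks slow next to a tuned TF graph.
import torch
from transformers import AutoModel, AutoTokenizer

torch.set_num_threads(16)  # assumed: match the number of physical cores
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased").eval()

batch = tokenizer(["some example sentence"] * 32, padding=True, return_tensors="pt")
with torch.inference_mode():  # avoids autograd overhead during benchmarking
    outputs = model(**batch)
```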
### Expected behavior
I have tried the BERT uncased model with TensorFlow and PyTorch, but PyTorch is slower. In PyTorch, if I increase the batch size to 512, the system hangs. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27756/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27756/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27755 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27755/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27755/comments | https://api.github.com/repos/huggingface/transformers/issues/27755/events | https://github.com/huggingface/transformers/issues/27755 | 2,015,991,743 | I_kwDOCUB6oc54KZe_ | 27,755 | How to inference the model with 200k length context | {
"login": "taishan1994",
"id": 27845149,
"node_id": "MDQ6VXNlcjI3ODQ1MTQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/27845149?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/taishan1994",
"html_url": "https://github.com/taishan1994",
"followers_url": "https://api.github.com/users/taishan1994/followers",
"following_url": "https://api.github.com/users/taishan1994/following{/other_user}",
"gists_url": "https://api.github.com/users/taishan1994/gists{/gist_id}",
"starred_url": "https://api.github.com/users/taishan1994/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/taishan1994/subscriptions",
"organizations_url": "https://api.github.com/users/taishan1994/orgs",
"repos_url": "https://api.github.com/users/taishan1994/repos",
"events_url": "https://api.github.com/users/taishan1994/events{/privacy}",
"received_events_url": "https://api.github.com/users/taishan1994/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey! Thanks for opening this issue! Would recommend you to look into the `mistral` model which should support fairly (~16K) context length with linear memory consumption. Also make sure that you did not activate gradient computation!",
"> ### Model description\r\n> I want to test Yi-34B-200k, Although I ran through the model, as the context length increased, OOM appeared, and I wondered how I could test to 200k context length with sufficient GPU resources.\r\n> \r\n> ### Open source status\r\n> * [x] The model implementation is available\r\n> * [x] The model weights are available\r\n> \r\n> ### Provide useful links for the implementation\r\n> _No response_\r\n\r\nI met the same issue. Did anyone else solve that?",
"设置output_scores=False(一般默认是这个)和use_cache=False,应该就不会有OOM",
"> 设置output_scores=False(一般默认是这个)和use_cache=False,应该就不会有OOM\r\n\r\n请问下你测的哪个模型,测到的最大长度是多少额?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,701 | 1,704 | 1,704 | NONE | null | ### Model description
I want to test Yi-34B-200k. Although I was able to run the model, OOM errors appeared as the context length increased, and I would like to know how I could test up to the full 200k context length given sufficient GPU resources.
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
_No response_ | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27755/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27755/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27754 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27754/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27754/comments | https://api.github.com/repos/huggingface/transformers/issues/27754/events | https://github.com/huggingface/transformers/pull/27754 | 2,015,486,694 | PR_kwDOCUB6oc5gnFKZ | 27,754 | update `create_model_card` to properly save peft details when using Trainer with PEFT | {
"login": "pacman100",
"id": 13534540,
"node_id": "MDQ6VXNlcjEzNTM0NTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pacman100",
"html_url": "https://github.com/pacman100",
"followers_url": "https://api.github.com/users/pacman100/followers",
"following_url": "https://api.github.com/users/pacman100/following{/other_user}",
"gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pacman100/subscriptions",
"organizations_url": "https://api.github.com/users/pacman100/orgs",
"repos_url": "https://api.github.com/users/pacman100/repos",
"events_url": "https://api.github.com/users/pacman100/events{/privacy}",
"received_events_url": "https://api.github.com/users/pacman100/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> From my understanding, this seems a little bit hacky to me, because we first create the model card with PEFT, then completely overwrite it with Trainer, then re-add PEFT-related content. It feels like the proper solution would be for the Trainer to update the model card if it already exists. But I understand that this would be more work, so I'd be okay with this more hacky approach.\r\n\r\nYes, Trainer should ideally update if README is already there, but it rewrites everything from the TrainerState and as such appending/updating would be trickier. Open to ideas on making this cleaner.",
"I can't really see a better solution currently either, so what we have here works for now"
] | 1,701 | 1,701 | 1,701 | CONTRIBUTOR | null | # What does this PR do?
1. When using Trainer with PEFT, `model.save_pretrained` in PEFT adds PEFT-specific details to the existing model card, or creates a new model card and adds these details. However, `trainer.create_model_card` is called after the model is saved, and it overwrites the entire file, thereby nullifying the PEFT-related details such as the library name, the quantization used, and the PEFT version.
2. This PR fixes the above issue and thereby adds the PEFT details back to the model card. This will help with better organization on the Hub and with understanding how PEFT is used.
3. Example of the repo with the PR usage: https://huggingface.co/smangrul/mistral_lora_clm_with_added_tokens | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27754/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27754/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27754",
"html_url": "https://github.com/huggingface/transformers/pull/27754",
"diff_url": "https://github.com/huggingface/transformers/pull/27754.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27754.patch",
"merged_at": 1701950762000
} |
https://api.github.com/repos/huggingface/transformers/issues/27753 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27753/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27753/comments | https://api.github.com/repos/huggingface/transformers/issues/27753/events | https://github.com/huggingface/transformers/issues/27753 | 2,015,278,718 | I_kwDOCUB6oc54HrZ- | 27,753 | Pipeline instantiation of model "facebook/nllb-200-distilled-600M" requires source and target language as mandatory | {
"login": "drunkeninja42",
"id": 56391795,
"node_id": "MDQ6VXNlcjU2MzkxNzk1",
"avatar_url": "https://avatars.githubusercontent.com/u/56391795?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/drunkeninja42",
"html_url": "https://github.com/drunkeninja42",
"followers_url": "https://api.github.com/users/drunkeninja42/followers",
"following_url": "https://api.github.com/users/drunkeninja42/following{/other_user}",
"gists_url": "https://api.github.com/users/drunkeninja42/gists{/gist_id}",
"starred_url": "https://api.github.com/users/drunkeninja42/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/drunkeninja42/subscriptions",
"organizations_url": "https://api.github.com/users/drunkeninja42/orgs",
"repos_url": "https://api.github.com/users/drunkeninja42/repos",
"events_url": "https://api.github.com/users/drunkeninja42/events{/privacy}",
"received_events_url": "https://api.github.com/users/drunkeninja42/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
}
] | open | false | null | [] | [
"Hi, for translation pipelines you can do `translation_xx_to_yy`, for example:\r\n\r\n```py\r\nfrom transformers import pipeline\r\n\r\npipe = pipeline(\"translation_en_to_fr\", model=\"facebook/nllb-200-distilled-600M\")\r\npipe(\"Let's go to france and see the eiffel tower\")\r\n```\r\n\r\nNot every translation model requires this, but I think it'd be more inclusive to add `translation_xx_to_yy` for models that require a `src_lang` and `tgt_lang`. WDYT @ArthurZucker?",
"MMmm yes and not, the snippet is automatically generated and cannot account for every use-case. You are right @stevhliu, see [the doc](https://huggingface.co/docs/transformers/v4.35.2/en/main_classes/pipelines#transformers.TranslationPipeline) is very much lacking 😅 \r\n",
"Hey @ArthurZucker @stevhliu !\r\nIf there are some improvements or work required in this issue, I would love to work and contribute to🤗",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hey @drunkeninja42 feel free to open a PR that increase the specific doc in this case if yo feel like it! ",
"Sure @ArthurZucker , will raise the PR for this today, Thanks !\r\n"
] | 1,701 | 1,706 | null | NONE | null | ### System Info
- `transformers` version: 4.35.2
- Platform: Linux-5.15.120+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.0
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0+cu118 (False)
- Tensorflow version (GPU?): 2.14.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.7.5 (cpu)
- Jax version: 0.4.20
- JaxLib version: 0.4.20
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@stevhliu @MKhalusova
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Steps to Reproduce:
1. Go to `https://huggingface.co/facebook/nllb-200-distilled-600M`
2. Click `Use in Transformers` Tag
3. Copy the `# Use a pipeline as a high-level helper` code snippet
4. Run in your notebook
5. Error is displayed asking for `src_lang` and `tgt_lang` as mandatory kwargs.

### Expected behavior
I expect the code snippet in the `Use in Transformers` tag to mention these kwargs in the example, as they are mandatory for this model when running translation tasks.
I will love to contribute to this issue.
Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27753/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27753/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27752 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27752/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27752/comments | https://api.github.com/repos/huggingface/transformers/issues/27752/events | https://github.com/huggingface/transformers/issues/27752 | 2,015,241,051 | I_kwDOCUB6oc54HiNb | 27,752 | TECO - Temporally Consistent Transformers for Video Generation | {
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | open | false | null | [] | [] | 1,701 | 1,701 | null | COLLABORATOR | null | ### Model description
TECO is a vector-quantized latent dynamics video prediction model that learns compressed representations to efficiently condition on long videos of hundreds of frames during both training and generation.
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
Github: https://github.com/wilson1yan/teco
Website: https://wilson1yan.github.io/teco/index.html
Paper: https://arxiv.org/pdf/2210.02396.pdf | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27752/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27752/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27751 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27751/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27751/comments | https://api.github.com/repos/huggingface/transformers/issues/27751/events | https://github.com/huggingface/transformers/issues/27751 | 2,015,080,972 | I_kwDOCUB6oc54G7IM | 27,751 | HF trainer training args: save_only_model does not work together with load_best_model_at_end when using deepspeed | {
"login": "welsh01",
"id": 32550152,
"node_id": "MDQ6VXNlcjMyNTUwMTUy",
"avatar_url": "https://avatars.githubusercontent.com/u/32550152?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/welsh01",
"html_url": "https://github.com/welsh01",
"followers_url": "https://api.github.com/users/welsh01/followers",
"following_url": "https://api.github.com/users/welsh01/following{/other_user}",
"gists_url": "https://api.github.com/users/welsh01/gists{/gist_id}",
"starred_url": "https://api.github.com/users/welsh01/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/welsh01/subscriptions",
"organizations_url": "https://api.github.com/users/welsh01/orgs",
"repos_url": "https://api.github.com/users/welsh01/repos",
"events_url": "https://api.github.com/users/welsh01/events{/privacy}",
"received_events_url": "https://api.github.com/users/welsh01/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, thank you for bringing this to the notice. We will look into this.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Gently pinging @pacman100 and @muellerzr ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"cc @SunMarc as well 😉 ",
"Hello, please see the above PR for the reason why this isn't possible and the above PR will raise an error when such config args are passed."
] | 1,701 | 1,707 | 1,707 | NONE | null | ### System Info
- `transformers` version: 4.36.0.dev0
- Platform: Linux-5.15.0-25-generic-x86_64-with-glibc2.35
- Python version: 3.10.13
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.1
- Accelerate version: 0.24.1
- PyTorch version (GPU?): 2.1.1+cu121 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
### Who can help?
@pacman100
Setup:
Training is run via _examples/pytorch/language-modeling/run_clm.py_ using the HF Trainer together with DeepSpeed. When combining the arguments _"save_only_model"_ and _"load_best_model_at_end"_, the final checkpoint cannot be loaded.
Error message:
```
transformers/integrations/deepspeed.py", line 409, in deepspeed_load_checkpoint
raise ValueError(f"Can't find a valid checkpoint at {checkpoint_path}")
```
I guess DeepSpeed is expecting optimizer and scheduler states that are now missing.
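A simplified sketch of what I believe happens inside `deepspeed_load_checkpoint` (the exact code may differ; this is only to illustrate why the error appears): the loader looks for the `global_step*` folders that hold optimizer/scheduler states, and `--save_only_model` never writes them, so the check fails and the `ValueError` above is raised.
```python
import glob


def deepspeed_load_checkpoint_sketch(deepspeed_engine, checkpoint_path):
    # DeepSpeed resumption expects global_step*/ folders containing optimizer and
    # scheduler states; with --save_only_model those folders are never written.
    checkpoint_dirs = sorted(glob.glob(f"{checkpoint_path}/global_step*"))
    if len(checkpoint_dirs) > 0:
        load_path, _ = deepspeed_engine.load_checkpoint(
            checkpoint_path, load_optimizer_states=True, load_lr_scheduler_states=True
        )
        return load_path
    raise ValueError(f"Can't find a valid checkpoint at {checkpoint_path}")
```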
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
deepspeed --num_gpus=4 run_clm.py \
--deepspeed ~/ds_config_zero2.json \
--model_name_or_path gpt2 \
--dataset_name wikitext \
--dataset_config_name wikitext-2-raw-v1 \
--output_dir ~/res \
--validation_split_percentage 10 \
--num_train_epochs 1 \
--evaluation_strategy epoch \
--save_strategy epoch \
--do_train \
--do_eval \
--logging_first_step \
--save_total_limit 1 \
--logging_steps 10 \
--metric_for_best_model eval_loss \
--greater_is_better False \
--save_safetensors False \
--load_best_model_at_end \
--save_only_model
```
### Expected behavior
The final (in this case best) checkpoint should be loaded successfully; the DeepSpeed checkpoint checks are probably too strict in this case. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27751/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27751/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27750 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27750/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27750/comments | https://api.github.com/repos/huggingface/transformers/issues/27750/events | https://github.com/huggingface/transformers/pull/27750 | 2,015,067,668 | PR_kwDOCUB6oc5glpF6 | 27,750 | Generate: `assisted_decoding` now accepts arbitrary candidate generators | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27750). All of your documentation changes will be reflected on that endpoint.",
"One more place needs to change I think - `generation_mode` is currently set using `_get_generation_mode` , where this is the logic:\r\n\r\n```\r\nif assistant_model is not None:\r\n if generation_mode in (\"greedy_search\", \"sample\"):\r\n generation_mode = GenerationMode.ASSISTED_GENERATION\r\n else:\r\n raise ValueError(\r\n \"You've set `assistant_model`, which triggers assisted generate. Currently, assisted generate \"\r\n \"is only supported with Greedy Search and Sample.\"\r\n )\r\n```\r\n\r\nhow do u suggest this should change to support prompt lookup decoding? @gante ",
"@apoorvumang I'd add an `or` after `if assistant_model is not None`"
] | 1,701 | 1,702 | 1,702 | MEMBER | null | # What does this PR do?
A common trend is starting to pop up: people are experimenting with new strategies to generate candidate sequences, to then run an assisted-generation-like strategy. A key example is the new technique in https://github.com/huggingface/transformers/issues/27722, which is equal to `assisted_decoding` except for the candidate generation part. This technique in particular achieves nice speedups in some settings, and doesn't need an assistant model -- a model-free speedup!
To facilitate experimentation and the addition of new candidate generation techniques, this PR abstracts the candidate generation part of `assisted_decoding` into a new class with a stable API. This was inspired by classes like `LogitsProcessor` or `StoppingCriteria` -- components of `generate` that can easily be replaced. All these changes are backwards compatible! 🤗
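To make the shape of the abstraction concrete, here is a rough sketch of what a custom candidate generator could look like. The class name, method names, and signatures below are only my illustration of the idea (loosely following the prompt-lookup technique from #27722); the new base class added in this PR is the source of truth.
```python
import torch


class NgramLookupCandidateGenerator:
    """Illustrative candidate generator: proposes continuations by re-using n-grams
    already present in the prompt, with no assistant model involved."""

    def __init__(self, num_output_tokens: int = 10, max_matching_ngram_size: int = 3):
        self.num_output_tokens = num_output_tokens
        self.max_matching_ngram_size = max_matching_ngram_size

    def get_candidates(self, input_ids: torch.LongTensor) -> torch.LongTensor:
        # Assumes batch size 1, as assisted generation does.
        seq_len = input_ids.shape[1]
        for ngram_size in range(min(self.max_matching_ngram_size, seq_len - 1), 0, -1):
            suffix = input_ids[0, -ngram_size:]
            # Look for an earlier occurrence of the current suffix n-gram.
            for start in range(seq_len - ngram_size - 1, -1, -1):
                if torch.equal(input_ids[0, start : start + ngram_size], suffix):
                    end = min(start + ngram_size + self.num_output_tokens, seq_len)
                    continuation = input_ids[0, start + ngram_size : end]
                    return torch.cat([input_ids, continuation[None, :]], dim=-1)
        return input_ids  # no match -> propose nothing beyond the current sequence

    def update_candidate_strategy(self, input_ids, scores, num_matches):
        # Hook to adapt how many tokens to propose next time, based on how many of the
        # previous candidates were accepted; a no-op in this sketch.
        pass
```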
Suggested review order:
1. `utils.py`, to see the shape of `assisted_decoding` under the abstracted API
2. `candidate.py`, to see the structure of the new base class (and the specific case of the original assisted generation)
_____________________________________________
The following tests are passing:
1. `RUN_SLOW=1 py.test tests/models/whisper/ -k speculative`
2. `py.test tests/ -k test_assisted` (which catches mixin and integration tests associated with assisted generation)
Happy to add more tests if needed :) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27750/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/27750/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27750",
"html_url": "https://github.com/huggingface/transformers/pull/27750",
"diff_url": "https://github.com/huggingface/transformers/pull/27750.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27750.patch",
"merged_at": 1702373158000
} |
https://api.github.com/repos/huggingface/transformers/issues/27749 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27749/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27749/comments | https://api.github.com/repos/huggingface/transformers/issues/27749/events | https://github.com/huggingface/transformers/issues/27749 | 2,014,982,702 | I_kwDOCUB6oc54GjIu | 27,749 | Learning Rate doesn't anneal properly after resume_from_checkpoint | {
"login": "jmzeng",
"id": 5641698,
"node_id": "MDQ6VXNlcjU2NDE2OTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5641698?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmzeng",
"html_url": "https://github.com/jmzeng",
"followers_url": "https://api.github.com/users/jmzeng/followers",
"following_url": "https://api.github.com/users/jmzeng/following{/other_user}",
"gists_url": "https://api.github.com/users/jmzeng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmzeng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmzeng/subscriptions",
"organizations_url": "https://api.github.com/users/jmzeng/orgs",
"repos_url": "https://api.github.com/users/jmzeng/repos",
"events_url": "https://api.github.com/users/jmzeng/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmzeng/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
] | [
"Moreover, I'm seeing that the epoch variable gets reset to 1.0 at about 0.45 epochs. Would this be a possible bug?\r\n\r\n<img width=\"786\" alt=\"Screenshot 2023-11-28 at 10 52 48 AM\" src=\"https://github.com/huggingface/transformers/assets/5641698/aab54221-f955-48ee-9642-3e5abadaaec6\">\r\n\r\nIn the logs:\r\n```{'loss': 0.1561, 'learning_rate': 1.996214146675922e-05, 'epoch': 0.47}\r\n734 {'loss': 0.1319, 'learning_rate': 1.996214146675922e-05, 'epoch': 0.47}\r\n735 {'loss': 0.2349, 'learning_rate': 1.996214146675922e-05, 'epoch': 0.47}\r\n736 {'loss': 0.3174, 'learning_rate': 1.996214146675922e-05, 'epoch': 0.47}\r\n737 {'loss': 0.1351, 'learning_rate': 1.996214146675922e-05, 'epoch': 0.47}\r\n738 {'loss': 0.1666, 'learning_rate': 1.996214146675922e-05, 'epoch': 0.47}\r\n739 10%|▉ | 1072/11275 [11:21:26<188:55:54, 66.66s/it]\r\n740 {'loss': 0.2105, 'learning_rate': 1.996214146675922e-05, 'epoch': 1.0}\r\n741 {'loss': 0.1975, 'learning_rate': 1.996214146675922e-05, 'epoch': 1.0}\r\n742 {'loss': 0.1663, 'learning_rate': 1.996214146675922e-05, 'epoch': 1.0}\r\n743 {'loss': 0.2243, 'learning_rate': 1.996214146675922e-05, 'epoch': 1.0}\r\n744 {'loss': 0.117, 'learning_rate': 1.996214146675922e-05, 'epoch': 1.0}\r\n745 {'loss': 0.25, 'learning_rate': 1.996214146675922e-05, 'epoch': 1.0}\r\n746 {'loss': 0.1695, 'learning_rate': 1.996214146675922e-05, 'epoch': 1.0}\r\n```",
"cc @muellerzr\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Gently pinging @muellerzr 🤗 "
] | 1,701 | 1,707 | null | NONE | null | ### System Info
I'm resuming a run with HF trainer using resume_from_checkpoint and loading from the checkpoint.
The LR scheduler is set as cosine annealing with warmup. However, when I resumed, the learning rate didn't resemble cosine annealing at all. I'm finetuning a llama2 model with custom code.
See image below:
<img width="786" alt="Screenshot 2023-11-28 at 9 35 17 AM" src="https://github.com/huggingface/transformers/assets/5641698/8328ea10-c284-4ad0-bfc9-9a22e7da8c4a">
The previous run's LR is below, so I was expecting it will continue from there:
<img width="786" alt="Screenshot 2023-11-28 at 9 41 44 AM" src="https://github.com/huggingface/transformers/assets/5641698/e42b17a1-0869-4eca-b3a2-92843cecf3b1">
Transformers info:
- `transformers` version: 4.34.0
- Platform: Linux-5.15.0-84-generic-x86_64-with-glibc2.31
- Python version: 3.10.13
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.4.0
- Accelerate version: 0.22.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes, multinode training
### Who can help?
@muellerzr and @pacman100
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
The steps to reproduce the bug are below:
1. Train a multinode model using the HF Trainer with Llama 2, with cosine annealing and warmup:
learning_rate: 2.0e-5
adam_beta1: 0.9
adam_beta2: 0.95
weight_decay: 0.0
lr_scheduler_type: cosine
warmup_ratio: 0.03
2. Resume the multinode trainer with the default usage of resume_from_checkpoint (a minimal sketch of the setup is shown after this list).
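For reference, a minimal sketch of the setup in Trainer terms; the model and dataset wiring is omitted/commented out here, and the argument values mirror the config above:
```python
from transformers import TrainingArguments, Trainer

# Placeholders -- in my run this is a Llama 2 checkpoint and a custom dataset:
# model = AutoModelForCausalLM.from_pretrained(...)
# train_dataset = ...

args = TrainingArguments(
    output_dir="out",
    learning_rate=2e-5,
    adam_beta1=0.9,
    adam_beta2=0.95,
    weight_decay=0.0,
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
)

# trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
# trainer.train()                              # initial run
# trainer.train(resume_from_checkpoint=True)   # resumed run -- LR curve breaks here
```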
### Expected behavior
It should resume from where it left off in cosine annealing with warmup | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27749/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27749/timeline | reopened | null | null |