| Column | Type | Range / classes |
| --- | --- | --- |
| url | stringlengths | 62-66 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 76-80 |
| comments_url | stringlengths | 71-75 |
| events_url | stringlengths | 69-73 |
| html_url | stringlengths | 50-56 |
| id | int64 | 377M-2.15B |
| node_id | stringlengths | 18-32 |
| number | int64 | 1-29.2k |
| title | stringlengths | 1-487 |
| user | dict | |
| labels | list | |
| state | stringclasses | 2 values |
| locked | bool | 2 classes |
| assignee | dict | |
| assignees | list | |
| comments | sequence | |
| created_at | int64 | 1.54k-1.71k |
| updated_at | int64 | 1.54k-1.71k |
| closed_at | int64 | 1.54k-1.71k |
| author_association | stringclasses | 4 values |
| active_lock_reason | stringclasses | 2 values |
| body | stringlengths | 0-234k |
| reactions | dict | |
| timeline_url | stringlengths | 71-75 |
| state_reason | stringclasses | 3 values |
| draft | bool | 2 classes |
| pull_request | dict | |
https://api.github.com/repos/huggingface/transformers/issues/27344
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27344/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27344/comments
https://api.github.com/repos/huggingface/transformers/issues/27344/events
https://github.com/huggingface/transformers/pull/27344
1,981,545,602
PR_kwDOCUB6oc5e0Ro6
27,344
[WIP] script to fine tune CLIPSeg
{ "login": "rafaelpadilla", "id": 31217453, "node_id": "MDQ6VXNlcjMxMjE3NDUz", "avatar_url": "https://avatars.githubusercontent.com/u/31217453?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rafaelpadilla", "html_url": "https://github.com/rafaelpadilla", "followers_url": "https://api.github.com/users/rafaelpadilla/followers", "following_url": "https://api.github.com/users/rafaelpadilla/following{/other_user}", "gists_url": "https://api.github.com/users/rafaelpadilla/gists{/gist_id}", "starred_url": "https://api.github.com/users/rafaelpadilla/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rafaelpadilla/subscriptions", "organizations_url": "https://api.github.com/users/rafaelpadilla/orgs", "repos_url": "https://api.github.com/users/rafaelpadilla/repos", "events_url": "https://api.github.com/users/rafaelpadilla/events{/privacy}", "received_events_url": "https://api.github.com/users/rafaelpadilla/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27344). All of your documentation changes will be reflected on that endpoint.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,699
1,702
1,702
CONTRIBUTOR
null
# What does this PR do? This PR aims to include a tutorial on how to fine-tune the CLIPSeg model. Fixes #24494 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27344/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27344/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27344", "html_url": "https://github.com/huggingface/transformers/pull/27344", "diff_url": "https://github.com/huggingface/transformers/pull/27344.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27344.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/27343
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27343/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27343/comments
https://api.github.com/repos/huggingface/transformers/issues/27343/events
https://github.com/huggingface/transformers/issues/27343
1,981,500,131
I_kwDOCUB6oc52G0rj
27,343
IDEFICS AttributeError: 'NoneType' object has no attribute 'device' when calling forward with inputs_embeds and image_encoder_embeddings
{ "login": "folbaeni", "id": 46280006, "node_id": "MDQ6VXNlcjQ2MjgwMDA2", "avatar_url": "https://avatars.githubusercontent.com/u/46280006?v=4", "gravatar_id": "", "url": "https://api.github.com/users/folbaeni", "html_url": "https://github.com/folbaeni", "followers_url": "https://api.github.com/users/folbaeni/followers", "following_url": "https://api.github.com/users/folbaeni/following{/other_user}", "gists_url": "https://api.github.com/users/folbaeni/gists{/gist_id}", "starred_url": "https://api.github.com/users/folbaeni/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/folbaeni/subscriptions", "organizations_url": "https://api.github.com/users/folbaeni/orgs", "repos_url": "https://api.github.com/users/folbaeni/repos", "events_url": "https://api.github.com/users/folbaeni/events{/privacy}", "received_events_url": "https://api.github.com/users/folbaeni/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey! Could you share the full reproducer please? 🤗 " ]
1,699
1,699
1,699
CONTRIBUTOR
null
### System Info - `transformers` version: 4.35.0.dev0 - Platform: Linux-6.1.0-10-amd64-x86_64-with-glibc2.36 - Python version: 3.10.8 - Huggingface_hub version: 0.17.3 - Safetensors version: 0.4.0 - Accelerate version: 0.24.0 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @ArthurZucker @younesbelkada @amyeroberts ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Steps to reproduce the behavior: 1. Utilize the forward function with inputs_embeds and image_encoder_embeddings. 2. Execute the code, which leads to the line causing the AttributeError as described above. ### Expected behavior The code should execute without encountering the 'NoneType' object AttributeError and should properly assign the 'image_hidden_states' based on the 'image_encoder_embeddings' and input devices.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27343/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27343/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27342
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27342/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27342/comments
https://api.github.com/repos/huggingface/transformers/issues/27342/events
https://github.com/huggingface/transformers/pull/27342
1,981,231,118
PR_kwDOCUB6oc5ezMDw
27,342
device-agnostic deepspeed testing
{ "login": "statelesshz", "id": 28150734, "node_id": "MDQ6VXNlcjI4MTUwNzM0", "avatar_url": "https://avatars.githubusercontent.com/u/28150734?v=4", "gravatar_id": "", "url": "https://api.github.com/users/statelesshz", "html_url": "https://github.com/statelesshz", "followers_url": "https://api.github.com/users/statelesshz/followers", "following_url": "https://api.github.com/users/statelesshz/following{/other_user}", "gists_url": "https://api.github.com/users/statelesshz/gists{/gist_id}", "starred_url": "https://api.github.com/users/statelesshz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/statelesshz/subscriptions", "organizations_url": "https://api.github.com/users/statelesshz/orgs", "repos_url": "https://api.github.com/users/statelesshz/repos", "events_url": "https://api.github.com/users/statelesshz/events{/privacy}", "received_events_url": "https://api.github.com/users/statelesshz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Verified with Ascend NPU via the following `spec.py` file:\r\n```python\r\nimport torch\r\nimport torch_npu\r\n# !! Further additional imports can be added here !!\r\n# Specify the device name (eg. 'cuda', 'cpu', 'npu')\r\nDEVICE_NAME = 'npu:0'\r\n# Specify device-specific backends to dispatch to.\r\n# If not specified, will fallback to 'default' in 'testing_utils.py`\r\nMANUAL_SEED_FN = torch.npu.manual_seed_all\r\nEMPTY_CACHE_FN = torch.npu.empty_cache\r\nDEVICE_COUNT_FN = torch.npu.device_count\r\n```\r\ntest results:\r\n```text\r\n(ds) [root@node-43 transformers]# RUN_SLOW=1 TRANSFORMERS_TEST_BACKEND=\"torch_npu\" TRANSFORMERS_TEST_DEVICE=\"npu:0\" TRANSFORMERS_TEST_DEVICE_SPEC=\"spec.py\" python -m pytest -v -k \"not bf16\" tests/deepspeed/test_deepspeed.py\r\n=============================================================================================================================== test session starts ================================================================================================================================\r\nplatform linux -- Python 3.8.18, pytest-7.4.3, pluggy-1.3.0 -- /home/miniconda3/envs/ds/bin/python\r\ncachedir: .pytest_cache\r\nrootdir: /home/ds/transformers\r\nconfigfile: setup.cfg\r\ncollected 81 items / 35 deselected / 46 selected\r\n\r\ntests/deepspeed/test_deepspeed.py::CoreIntegrationDeepSpeed::test_init_zero3_fp16 PASSED [ 2%]\r\ntests/deepspeed/test_deepspeed.py::TrainerIntegrationDeepSpeed::test_can_resume_training_errors_zero2_fp16 PASSED [ 4%]\r\ntests/deepspeed/test_deepspeed.py::TrainerIntegrationDeepSpeed::test_can_resume_training_errors_zero3_fp16 PASSED [ 6%]\r\ntests/deepspeed/test_deepspeed.py::TrainerIntegrationDeepSpeed::test_can_resume_training_normal_zero2_fp16_ds_optim_ds_scheduler PASSED [ 8%]\r\ntests/deepspeed/test_deepspeed.py::TrainerIntegrationDeepSpeed::test_can_resume_training_normal_zero2_fp16_ds_optim_hf_scheduler PASSED [ 10%]\r\ntests/deepspeed/test_deepspeed.py::TrainerIntegrationDeepSpeed::test_can_resume_training_normal_zero2_fp16_hf_optim_ds_scheduler PASSED [ 13%]\r\ntests/deepspeed/test_deepspeed.py::TrainerIntegrationDeepSpeed::test_can_resume_training_normal_zero2_fp16_hf_optim_hf_scheduler PASSED [ 15%]\r\ntests/deepspeed/test_deepspeed.py::TrainerIntegrationDeepSpeed::test_can_resume_training_normal_zero3_fp16_ds_optim_ds_scheduler PASSED [ 17%]\r\ntests/deepspeed/test_deepspeed.py::TrainerIntegrationDeepSpeed::test_can_resume_training_normal_zero3_fp16_ds_optim_hf_scheduler PASSED [ 19%]\r\ntests/deepspeed/test_deepspeed.py::TrainerIntegrationDeepSpeed::test_can_resume_training_normal_zero3_fp16_hf_optim_ds_scheduler PASSED [ 21%]\r\ntests/deepspeed/test_deepspeed.py::TrainerIntegrationDeepSpeed::test_can_resume_training_normal_zero3_fp16_hf_optim_hf_scheduler PASSED [ 23%]\r\ntests/deepspeed/test_deepspeed.py::TrainerIntegrationDeepSpeed::test_config_object PASSED [ 26%]\r\ntests/deepspeed/test_deepspeed.py::TrainerIntegrationDeepSpeed::test_ds_scheduler_hf_optimizer PASSED [ 28%]\r\ntests/deepspeed/test_deepspeed.py::TrainerIntegrationDeepSpeed::test_early_get_last_lr_zero2_fp16 PASSED [ 30%]\r\ntests/deepspeed/test_deepspeed.py::TrainerIntegrationDeepSpeed::test_early_get_last_lr_zero3_fp16 PASSED [ 32%]\r\ntests/deepspeed/test_deepspeed.py::TrainerIntegrationDeepSpeed::test_fake_notebook_no_launcher_zero2_fp16 PASSED [ 34%]\r\ntests/deepspeed/test_deepspeed.py::TrainerIntegrationDeepSpeed::test_fake_notebook_no_launcher_zero3_fp16 PASSED [ 
36%]\r\ntests/deepspeed/test_deepspeed.py::TrainerIntegrationDeepSpeed::test_gradient_accumulation_zero2_fp16 PASSED [ 39%]\r\ntests/deepspeed/test_deepspeed.py::TrainerIntegrationDeepSpeed::test_gradient_accumulation_zero3_fp16 PASSED [ 41%]\r\ntests/deepspeed/test_deepspeed.py::TrainerIntegrationDeepSpeed::test_hf_ds_config_mismatch PASSED [ 43%]\r\ntests/deepspeed/test_deepspeed.py::TrainerIntegrationDeepSpeed::test_hf_optimizer_with_offload_zero2_fp16 PASSED [ 45%]\r\ntests/deepspeed/test_deepspeed.py::TrainerIntegrationDeepSpeed::test_hf_optimizer_with_offload_zero3_fp16 PASSED [ 47%]\r\ntests/deepspeed/test_deepspeed.py::TrainerIntegrationDeepSpeed::test_hf_scheduler_ds_optimizer PASSED [ 50%]\r\ntests/deepspeed/test_deepspeed.py::TrainerIntegrationDeepSpeed::test_hf_scheduler_hf_optimizer PASSED [ 52%]\r\ntests/deepspeed/test_deepspeed.py::TrainerIntegrationDeepSpeed::test_hyperparameter_search SKIPPED (test requires optuna) [ 54%]\r\ntests/deepspeed/test_deepspeed.py::TrainerIntegrationDeepSpeed::test_load_best_model_zero2_fp16 PASSED [ 56%]\r\ntests/deepspeed/test_deepspeed.py::TrainerIntegrationDeepSpeed::test_load_best_model_zero3_fp16 PASSED [ 58%]\r\ntests/deepspeed/test_deepspeed.py::TrainerIntegrationDeepSpeed::test_load_state_dict_from_zero_checkpoint_zero2_fp16 PASSED [ 60%]\r\ntests/deepspeed/test_deepspeed.py::TrainerIntegrationDeepSpeed::test_load_state_dict_from_zero_checkpoint_zero3_fp16 PASSED [ 63%]\r\ntests/deepspeed/test_deepspeed.py::TrainerIntegrationDeepSpeed::test_save_checkpoints_zero2_fp16 PASSED [ 65%]\r\ntests/deepspeed/test_deepspeed.py::TrainerIntegrationDeepSpeed::test_save_checkpoints_zero3_fp16 PASSED [ 67%]\r\ntests/deepspeed/test_deepspeed.py::TrainerIntegrationDeepSpeed::test_stage3_nvme_offload SKIPPED (test requires deepspeed async-io) [ 69%]\r\ntests/deepspeed/test_deepspeed.py::TestDeepSpeedWithLauncher::test_basic_distributed_zero2_fp16 PASSED [ 71%]\r\ntests/deepspeed/test_deepspeed.py::TestDeepSpeedWithLauncher::test_basic_distributed_zero3_fp16 PASSED [ 73%]\r\ntests/deepspeed/test_deepspeed.py::TestDeepSpeedWithLauncher::test_clm_from_config_zero3_fp16 PASSED [ 76%]\r\ntests/deepspeed/test_deepspeed.py::TestDeepSpeedWithLauncher::test_clm_zero2_fp16 PASSED [ 78%]\r\ntests/deepspeed/test_deepspeed.py::TestDeepSpeedWithLauncher::test_clm_zero3_fp16 PASSED [ 80%]\r\ntests/deepspeed/test_deepspeed.py::TestDeepSpeedWithLauncher::test_do_eval_no_train PASSED [ 82%]\r\ntests/deepspeed/test_deepspeed.py::TestDeepSpeedWithLauncher::test_fp32_distributed_zero2_fp16 PASSED [ 84%]\r\ntests/deepspeed/test_deepspeed.py::TestDeepSpeedWithLauncher::test_fp32_distributed_zero3_fp16 PASSED [ 86%]\r\ntests/deepspeed/test_deepspeed.py::TestDeepSpeedWithLauncher::test_fp32_non_distributed_zero2_fp16 PASSED [ 89%]\r\ntests/deepspeed/test_deepspeed.py::TestDeepSpeedWithLauncher::test_fp32_non_distributed_zero3_fp16 PASSED [ 91%]\r\ntests/deepspeed/test_deepspeed.py::TestDeepSpeedWithLauncher::test_inference_1_fp16 PASSED [ 93%]\r\ntests/deepspeed/test_deepspeed.py::TestDeepSpeedWithLauncher::test_inference_2_fp32 PASSED [ 95%]\r\ntests/deepspeed/test_deepspeed.py::TestDeepSpeedWithLauncher::test_resume_train_not_from_ds_checkpoint_zero2_fp16 PASSED [ 97%]\r\ntests/deepspeed/test_deepspeed.py::TestDeepSpeedWithLauncher::test_resume_train_not_from_ds_checkpoint_zero3_fp16 PASSED [100%]\r\n\r\n================================================================================================================================= warnings summary 
=================================================================================================================================\r\n../../../miniconda3/envs/ds/lib/python3.8/site-packages/torch_npu/dynamo/__init__.py:18\r\n /home/miniconda3/envs/ds/lib/python3.8/site-packages/torch_npu/dynamo/__init__.py:18: UserWarning: Register eager implementation for the 'npu' backend of dynamo, as torch_npu was not compiled with torchair.\r\n warnings.warn(\r\n\r\n../../../miniconda3/envs/ds/lib/python3.8/site-packages/_pytest/config/__init__.py:1373\r\n /home/miniconda3/envs/ds/lib/python3.8/site-packages/_pytest/config/__init__.py:1373: PytestConfigWarning: Unknown config option: doctest_glob\r\n self._warn_or_fail_if_strict(f\"Unknown config option: {key}\\n\")\r\n\r\ntests/deepspeed/test_deepspeed.py::CoreIntegrationDeepSpeed::test_init_zero3_fp16\r\n /home/miniconda3/envs/ds/lib/python3.8/site-packages/deepspeed-0.12.3+6d33acc2-py3.8.egg/deepspeed/comm/comm.py:163: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead\r\n utils.logger.warn(\"HCCL backend in DeepSpeed not yet implemented\")\r\n\r\ntests/deepspeed/test_deepspeed.py::TrainerIntegrationDeepSpeed::test_can_resume_training_errors_zero2_fp16\r\n /home/miniconda3/envs/ds/lib/python3.8/site-packages/torch/utils/cpp_extension.py:28: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html\r\n from pkg_resources import packaging # type: ignore[attr-defined]\r\n\r\ntests/deepspeed/test_deepspeed.py: 12 warnings\r\n /home/miniconda3/envs/ds/lib/python3.8/site-packages/torch/nn/modules/module.py:1879: UserWarning: Positional args are being deprecated, use kwargs instead. Refer to https://pytorch.org/docs/master/generated/torch.nn.Module.html#torch.nn.Module.state_dict for details.\r\n warnings.warn(\r\n\r\ntests/deepspeed/test_deepspeed.py::TrainerIntegrationDeepSpeed::test_load_best_model_zero2_fp16\r\n /home/ds/transformers/src/transformers/tokenization_utils_base.py:2614: FutureWarning: The `pad_to_max_length` argument is deprecated and will be removed in a future version, use `padding=True` or `padding='longest'` to pad to the longest sequence in the batch, or use `padding='max_length'` to pad to a max length. In this case, you can give a specific length with `max_length` (e.g. `max_length=45`) or leave max_length to None to pad to the maximal input size of the model (e.g. 512 for Bert).\r\n warnings.warn(\r\n\r\ntests/deepspeed/test_deepspeed.py::TrainerIntegrationDeepSpeed::test_load_best_model_zero2_fp16\r\n /home/ds/transformers/src/transformers/models/t5/modeling_t5.py:893: UserWarning: AutoNonVariableTypeMode is deprecated and will be removed in 1.10 release. For kernel implementations please use AutoDispatchBelowADInplaceOrView instead, If you are looking for a user facing API to enable running your inference-only workload, please use c10::InferenceMode. Using AutoDispatchBelowADInplaceOrView in user code is under risk of producing silent wrong result in some edge cases. See Note [AutoDispatchBelowAutograd] for more details. 
(Triggered internally at torch_npu/csrc/aten/common/TensorFactories.cpp:74.)\r\n shifted_input_ids[..., 0] = decoder_start_token_id\r\n\r\n-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html\r\n====================================================================================================== 44 passed, 2 skipped, 35 deselected, 21 warnings in 1176.45s (0:19:36) ======================================================================================================\r\n```\r\n", "@ydshieh please take a look at this commit :-)", "Thanks a lot @statelesshz ! Great there is no failure at all 💯 ", "> Thanks a lot @statelesshz ! Great there is no failure at all 💯\r\n\r\nTo be honest, I just skipped some bf16-related test cases because npu's support for bf16 is not comparable to that of gpu. 😅 ", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27342). All of your documentation changes will be reflected on that endpoint." ]
1,699
1,699
1,699
CONTRIBUTOR
null
# What does this PR do? Part of https://github.com/huggingface/transformers/issues/25654#issuecomment-1783704306 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27342/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27342/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27342", "html_url": "https://github.com/huggingface/transformers/pull/27342", "diff_url": "https://github.com/huggingface/transformers/pull/27342.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27342.patch", "merged_at": 1699529654000 }
https://api.github.com/repos/huggingface/transformers/issues/27341
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27341/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27341/comments
https://api.github.com/repos/huggingface/transformers/issues/27341/events
https://github.com/huggingface/transformers/issues/27341
1,981,229,177
I_kwDOCUB6oc52Fyh5
27,341
Generate: should `softmax` be upcasted to `.float()`?
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[ { "id": 2392046359, "node_id": "MDU6TGFiZWwyMzkyMDQ2MzU5", "url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Second%20Issue", "name": "Good Second Issue", "color": "dd935a", "default": false, "description": "Issues that are more difficult to do than \"Good First\" issues - give it a try if you want!" } ]
closed
false
null
[]
[ "Note: According to this [issue](https://github.com/pytorch/pytorch/pull/103167#issuecomment-1583326602), PyTorch already uses 32 bit accumulation under the hood. \r\n\r\nIf true, we can do the opposite -- remove all manual casts to `torch.float32`", "Hi @gante I would love to take up this issue.", "Hey @nandwalritik 👋 Awesome, let me know how it goes and whether you need further pointers!", "Hi @gante some further pointers will greatly help me to proceed further.", "@nandwalritik \r\n\r\nWe can start with the basics, assuming that what I wrote in [this comment](https://github.com/huggingface/transformers/issues/27341#issuecomment-1798510786) is true. To verify it, you can:\r\n1. generate a random tensor in float16\r\n2. pass it through `softmax` and `softmax(..., dtype=torch.float32)`\r\n3. repeat it a couple of thousand times\r\n\r\nIf the outcome is exactly the same every single time, we can confirm the comment. In that case, we can close this issue.\r\n\r\nIf it is False, then report the average of the maximum absolute error, so we can decide how to proceed :)", "I tried above steps, but `softmax` gives `RuntimeError: \"softmax_lastdim_kernel_impl\" not implemented for 'Half'` this error, which means that PyTorch does not have a kernel implementation for the softmax function on half-precision (float16) tensors.\r\nTo test if its automatically casting to `float32` I tried this\r\n```\r\n>>> tensor = torch.randn(100,100,dtype=torch.float16)\r\n>>> sf_16 = nn.functional.softmax(tensor,dim=1,dtype=torch.float16)\r\n```\r\nbut again it throws out same error.", "I may be able to help out and make a test based on those steps as well. It could be useful to have two separate tests to verify or compare the results.", "Can confirm the error @nandwalritik found. \r\n\r\n`softmax` does not seem to work with float16 tensors unless you force it to run in 32-bit precision.\r\n\r\n```\r\n>>> tensor = torch.randn(5, dtype=torch.float16)\r\n>>> sm_32d = nn.functional.softmax(tensor, dim=-1)\r\n>>> print(sm_32d)\r\nRuntimeError: \"softmax_lastdim_kernel_impl\" not implemented for 'Half'\r\n```\r\n\r\n```\r\n>>> tensor = torch.randn(5, dtype=torch.float16)\r\n>>> sm_32d = nn.functional.softmax(tensor, dim=-1, dtype=torch.float16)\r\n>>> print(sm_32d)\r\nRuntimeError: \"softmax_lastdim_kernel_impl\" not implemented for 'Half'\r\n```\r\n\r\n```\r\n>>> tensor = torch.randn(5, dtype=torch.float16)\r\n>>> sm_32d = nn.functional.softmax(tensor, dim=-1, dtype=torch.float32)\r\n>>> print(sm_32d)\r\ntensor([0.3898, 0.0632, 0.1406, 0.2392, 0.1671])\r\n```\r\n\r\nMaybe that's why softmax is often forced to run in 32-bit FP precision? To accept tensors of any precision, it has to cast to a precision that softmax can use.", "Hey @Tanman2001 @nandwalritik 👋 \r\n\r\nMost FP16 operations are GPU-only, so you have to move the tensors to a CUDA device :) (e.g. `tensor = torch.randn(5, dtype=torch.float16, device=\"cuda\")`)", "Hi @gante I would like take up this issue. ", "Feel free to report any conclusions here 🤗 ", "@gante Created a little test to compare the performance of the `softmax` function with and without the `dtype` parameter declared as float32 on a float16 tensor.\r\n\r\nIt is setup so that each iteration has two loops, where each loop is run 500 times. The first loop generates a random tensor of size 1000 then executes the `softmax` function with `dtype=torch.float32`. The second loop does the same but the `softmax` omits the `dtype` parameter. 
The time it takes each loop to execute the 500 times is recorded in every iteration.\r\n\r\nHere's the results from my machine after 1000 iterations:\r\n```\r\nDefault Times (softmax w/ default dtype param):\r\nMedian: 22.938966751098633\r\nMean: 24.283889055252075\r\nStandard Deviation: 5.126754368188971\r\nMinimum: 16.4642333984375\r\nMaximum: 77.31127738952637\r\n\r\nForced Times (softmax w/ float32 dtype param:)\r\nMedian: 25.928974151611328\r\nMean: 26.8714861869812\r\nStandard Deviation: 5.366920780984994\r\nMinimum: 18.945932388305664\r\nMaximum: 70.31035423278809\r\n```\r\n\r\nAnd I had someone else run it on their machine (which is apparently better than mine):\r\n```\r\nDefault Times (softmax w/ default dtype param): \r\nMedian: 19.004344940185547\r\nMean: 22.11885714530945\r\nStandard Deviation: 11.286693376217634\r\nMinimum: 12.00103759765625\r\nMaximum: 46.010732650756836\r\n\r\nForced Times (softmax w/ float32 dtype param):\r\nMedian: 21.004796028137207\r\nMean: 23.721716165542603\r\nStandard Deviation: 11.330417161274704\r\nMinimum: 11.969327926635742\r\nMaximum: 43.00975799560547\r\n```\r\n\r\nI've run this a few times and these results are typical. The loops with `softmax` forced to `float32` do appear to take slightly longer to execute. The mean and median times for the `float32` forced `softmax` are consistently higher (though not significantly) than the same times for the `softmax` without the `dtype` parameter.\r\n\r\nCurious to see if @nandwalritik and @codeserra can corroborate these results.", "@Tanman2001 Interesting, so we can see that `softmax` is faster without `dtype=torch.float32`. Two follow-up questions:\r\n1. Which version of `torch` were you using?\r\n2. Have you confirmed whether the two values (with and without `dtype=torch.float32`) matched for the same input?", "@gante \r\n\r\n1. My machine has version 2.1.1+cu121. Other machine was 2.1.0+cu118.\r\n2. I have not yet. I may be able to check that sometime this week.", "Hi, followed what @Tanman2001 had described, for setting up 2 loops for described iterations and dtype.\r\n\r\nI am getting below results, i am using torch version 1.12.1\r\n\r\nDefault Times (softmax w/ default dtype param):\r\nMedian: 0.015637\r\nMean: 0.015441\r\nStandard Deviation: 0.006205\r\nMinimum: 0.000000\r\nMaximum: 0.032977\r\n\r\nForced Times (softmax w/ float32 dtype param):\r\nMedian: 0.015635\r\nMean: 0.015082\r\nStandard Deviation: 0.006126\r\nMinimum: 0.000000\r\nMaximum: 0.040869\r\n\r\nI am not seeing that much difference though, the differences i am seeing seems to be due precision change between 2 data types or due to random generator.", "@codeserra \r\n\r\nHow did you record and measure the running time? A 0.0 minimum time does not seem correct.", "@Tanman2001 \r\nI am using below code, i am fairly new at this, if any correction is required please let me know. 
\r\n\r\n```\r\nimport torch\r\nimport time\r\nimport numpy as np\r\n\r\ndevice = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\r\ntorch.manual_seed(42)\r\n\r\nnum_iterations = 1000\r\nnum_runs = 500\r\ntensor_size = 1000\r\n\r\n# Results list\r\ndefault_times = []\r\nforced_times = []\r\n\r\nfor iteration in range(num_iterations):\r\n # Record start time for softmax without dtype\r\n start_time_default = time.time()\r\n\r\n # softmax without dtype\r\n for _ in range(num_runs):\r\n input_tensor = torch.rand(tensor_size, dtype=torch.float32, device=device)\r\n result = torch.nn.functional.softmax(input_tensor.to(torch.float32), dim=0)\r\n end_time_default = time.time()\r\n\r\n # time for default dtype\r\n time_default = end_time_default - start_time_default\r\n default_times.append(time_default)\r\n\r\n start_time_forced = time.time()\r\n\r\n # softmax with float32 dtype \r\n for _ in range(num_runs):\r\n input_tensor = torch.rand(tensor_size, dtype=torch.float32, device=device)\r\n result = torch.nn.functional.softmax(input_tensor, dim=0, dtype=torch.float32)\r\n\r\n end_time_forced = time.time()\r\n \r\n # time for forced time\r\n time_forced = end_time_forced - start_time_forced\r\n forced_times.append(time_forced)\r\n```", "@codeserra (and @gante correct me if I'm wrong)\r\n\r\nThe random input tensors should be `dtype=torch.float16` which would also require the device to be CUDA.\r\n\r\nFor the \"without dtype\" loop I think you are still manually casting the tensor to float32 with the `.to()` call. For that loop I would have had just `input_tensor` as that first parameter.", "@gante \r\n\r\nCorrectness test, assuming this is done correctly, fails.\r\n\r\n```\r\nimport torch\r\nimport torch.nn as nn\r\nimport numpy as np\r\n\r\nmismatchCount = 0\r\nfailedTensors = 0\r\nfor t in range(5):\r\n tensor = torch.randn(1000,dtype=torch.float16,device=\"cuda\")\r\n sm_32d = nn.functional.softmax(tensor, dim=-1, dtype=torch.float32)\r\n sm_def = nn.functional.softmax(tensor, dim=-1)\r\n failFlag = 0\r\n for x in range(1000):\r\n if sm_32d[x].item() != sm_def[x].item():\r\n print(\"Mismatch found. Expected \" + str(sm_32d[x].item()) +\r\n \" in default, got \" + str(sm_def[x].item()))\r\n mismatchCount = mismatchCount + 1\r\n failFlag = 1\r\n if failFlag == 1:\r\n failedTensors = failedTensors + 1\r\n print(\"Tensor \" + str(t) + \" failed.\")\r\n else:\r\n print(\"Tensor \" + str(t) + \" passed.\")\r\n print(\"Current mismatch count is \" + str(mismatchCount))\r\n```\r\n\r\nBased on the output, it seems every value in each tensor differs.\r\n\r\nHere's a small portion of the output this generated for me:\r\n\r\n```\r\nMismatch found. Expected 0.0004018559120595455 in default, got 0.0004019737243652344\r\nMismatch found. Expected 0.000258067884715274 in default, got 0.0002579689025878906\r\nMismatch found. Expected 0.001631039078347385 in default, got 0.0016307830810546875\r\nMismatch found. Expected 0.0002674324787221849 in default, got 0.0002675056457519531\r\nMismatch found. Expected 0.0008042750996537507 in default, got 0.0008044242858886719\r\nMismatch found. Expected 0.0004807825025636703 in default, got 0.00048089027404785156\r\nMismatch found. Expected 0.00018524506594985723 in default, got 0.00018525123596191406\r\nMismatch found. Expected 0.0007350771920755506 in default, got 0.0007352828979492188\r\nMismatch found. Expected 0.0008412283495999873 in default, got 0.0008411407470703125\r\nMismatch found. 
Expected 0.00029043699032627046 in default, got 0.0002903938293457031\r\n```\r\n\r\nAlso of note: the dtype of the default dtype softmax tensor is `float16` while the forced to float32 softmax tensor is `float32`.\r\n\r\nIs there anything I missed here or anything I am misunderstanding about this issue?", "@Tanman2001 Your script has missing a downcast :) `sm_32d` is a `fp32` tensor, while `sm_def` is a `fp16` tensor. If the hypothesis of the internal computations being in `fp32` is correct, then downcasting `sm_32d` after softmax would make it equal to `sm_def` -- which is the case!\r\n\r\n```py\r\nimport torch\r\nimport torch.nn as nn\r\n\r\nmismatchCount = 0\r\nfailedTensors = 0\r\nfor t in range(100):\r\n tensor = torch.randn(1000,dtype=torch.float16,device=\"cuda\")\r\n sm_32d = nn.functional.softmax(tensor, dim=-1, dtype=torch.float32).to(torch.float16)\r\n sm_def = nn.functional.softmax(tensor, dim=-1)\r\n failFlag = 0\r\n for x in range(1000):\r\n if sm_32d[x].item() != sm_def[x].item():\r\n print(\"Mismatch found. Expected \" + str(sm_32d[x].item()) +\r\n \" in default, got \" + str(sm_def[x].item()))\r\n mismatchCount = mismatchCount + 1\r\n failFlag = 1\r\n if failFlag == 1:\r\n failedTensors = failedTensors + 1\r\n print(\"Tensor \" + str(t) + \" failed.\")\r\n else:\r\n print(\"Tensor \" + str(t) + \" passed.\")\r\n print(\"Current mismatch count is \" + str(mismatchCount))\r\n```\r\n\r\nIn that case, we can close this issue, as we now know we don't need to upcast `softmax` :)", "Is there any follow up to this that needs to be done? Should there be another issue opened to modify any softmax calls based on this information?", "@Tanman2001 there is no need to change softmax, based on the information we found :)" ]
1,699
1,707
1,701
MEMBER
null
I am looking for a contributor to do some numerical exploration across models! 💛 ### Context Some operations are notoriously unstable at lower precisions, such as the `softmax`. In fact, in the models' attention layers, we often force the `softmax` operation to run in 32-bit FP precision ([example](https://github.com/huggingface/transformers/blob/90b4adc1f1111f42eada62ea611895646aaee6b6/src/transformers/models/llama/modeling_llama.py#L406)), regardless of the model `dtype`. ### Potential problem In `generate`, we have a `softmax` operation when `do_sample=True`, and we are not upcasting it. Most models return the `logits` in the same `dtype` as the model itself -- a few notable exceptions being `llama` and `mistral`. Does this mean we are losing performance there? ### What we are looking for A numerical exploration of the benefits (some metric) versus downsides (execution speed and memory consumption). If the returns turn out to be positive in favor of upcasting, you will be making a huge contribution to the whole community!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27341/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27341/timeline
completed
null
null
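The thread on issue #27341 above converges on a simple numerical check: run `softmax` natively in fp16 and compare it against an fp32 upcast followed by a downcast. A condensed, hedged version of that check (essentially the snippet posted in the comments) is shown below; it requires a CUDA device because, as the thread notes, fp16 softmax has no CPU kernel.

```python
# Condensed version of the check discussed in issue 27341: does fp16 softmax
# match softmax computed in fp32 and cast back down? (CUDA only: the fp16
# softmax kernel is not implemented on CPU.)
import torch
import torch.nn.functional as F

assert torch.cuda.is_available(), "fp16 softmax needs a CUDA device"

mismatches = 0
for _ in range(1000):
    x = torch.randn(1000, dtype=torch.float16, device="cuda")
    native_fp16 = F.softmax(x, dim=-1)
    upcast_then_down = F.softmax(x, dim=-1, dtype=torch.float32).to(torch.float16)
    # torch.equal demands exact element-wise equality after the downcast.
    if not torch.equal(native_fp16, upcast_then_down):
        mismatches += 1

print(f"tensors with any mismatch: {mismatches} / 1000")
```

The thread's conclusion was that the two paths match exactly, which is why no extra upcast was added to `generate`.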
https://api.github.com/repos/huggingface/transformers/issues/27340
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27340/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27340/comments
https://api.github.com/repos/huggingface/transformers/issues/27340/events
https://github.com/huggingface/transformers/pull/27340
1,981,000,356
PR_kwDOCUB6oc5eyZqj
27,340
Add discriminator to vits
{ "login": "ylacombe", "id": 52246514, "node_id": "MDQ6VXNlcjUyMjQ2NTE0", "avatar_url": "https://avatars.githubusercontent.com/u/52246514?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ylacombe", "html_url": "https://github.com/ylacombe", "followers_url": "https://api.github.com/users/ylacombe/followers", "following_url": "https://api.github.com/users/ylacombe/following{/other_user}", "gists_url": "https://api.github.com/users/ylacombe/gists{/gist_id}", "starred_url": "https://api.github.com/users/ylacombe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ylacombe/subscriptions", "organizations_url": "https://api.github.com/users/ylacombe/orgs", "repos_url": "https://api.github.com/users/ylacombe/repos", "events_url": "https://api.github.com/users/ylacombe/events{/privacy}", "received_events_url": "https://api.github.com/users/ylacombe/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27340). All of your documentation changes will be reflected on that endpoint.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,699
1,704
1,704
COLLABORATOR
null
# What does this PR do? VITS is a TTS model that has been supported in HF for a few months now. MMS-TTS, Meta's family of models covering over a thousand languages, also uses the VITS architecture. VITS is a peculiar model: a GAN, a VAE, and a flow-based model all at the same time. To train it, one needs a discriminator, which this PR adds to Transformers. **This PR aims to add VitsDiscriminator and a dedicated model for training VITS. PR #27244 will then add a training script example.** Main changes: - adding VitsDiscriminator - adding VitsModelForPretraining - allowing multiple speaker_ids to be passed instead of a single one (works if nb_speaker_id = batch_size) - weight_norm functions, to stay aligned with the original code - resizing the speaker embeddings feature - I haven't touched VitsModel except for the handling of multiple speaker ids in the forward method. cc @amyeroberts and @sanchit-gandhi ! This is a separate PR as requested!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27340/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27340/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27340", "html_url": "https://github.com/huggingface/transformers/pull/27340", "diff_url": "https://github.com/huggingface/transformers/pull/27340.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27340.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/27339
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27339/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27339/comments
https://api.github.com/repos/huggingface/transformers/issues/27339/events
https://github.com/huggingface/transformers/pull/27339
1,980,986,600
PR_kwDOCUB6oc5eyWnS
27,339
Fix autoawq docker image
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27339). All of your documentation changes will be reflected on that endpoint." ]
1,699
1,699
1,699
CONTRIBUTOR
null
# What does this PR do? Fixes AWQ-related failing tests in: https://github.com/huggingface/transformers/actions/runs/6779219624/job/18426184991 AutoAWQ recently made a release that ships pre-compiled kernels built for CUDA 12.2, while our VMs support CUDA 11.8. As we were testing everything before that release, we need to install the package so that it retrieves the kernels compiled with CUDA 11.8, as per the installation guidelines in AutoAWQ: https://github.com/casper-hansen/AutoAWQ#install cc @ydshieh @amyeroberts
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27339/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27339/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27339", "html_url": "https://github.com/huggingface/transformers/pull/27339", "diff_url": "https://github.com/huggingface/transformers/pull/27339.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27339.patch", "merged_at": 1699352464000 }
https://api.github.com/repos/huggingface/transformers/issues/27338
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27338/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27338/comments
https://api.github.com/repos/huggingface/transformers/issues/27338/events
https://github.com/huggingface/transformers/pull/27338
1,980,921,791
PR_kwDOCUB6oc5eyIVW
27,338
[`Whisper`] Add conversion script for the tokenizer
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27338). All of your documentation changes will be reflected on that endpoint." ]
1,699
1,699
1,699
COLLABORATOR
null
# What does this PR do? Aligned with #27336, this PR adds the conversion of the tokenizer from `tiktoken` to `transformers`.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27338/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27338/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27338", "html_url": "https://github.com/huggingface/transformers/pull/27338", "diff_url": "https://github.com/huggingface/transformers/pull/27338.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27338.patch", "merged_at": 1699366076000 }
https://api.github.com/repos/huggingface/transformers/issues/27337
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27337/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27337/comments
https://api.github.com/repos/huggingface/transformers/issues/27337/events
https://github.com/huggingface/transformers/pull/27337
1,980,882,516
PR_kwDOCUB6oc5ex_yC
27,337
moving example of benchmarking to legacy dir
{ "login": "statelesshz", "id": 28150734, "node_id": "MDQ6VXNlcjI4MTUwNzM0", "avatar_url": "https://avatars.githubusercontent.com/u/28150734?v=4", "gravatar_id": "", "url": "https://api.github.com/users/statelesshz", "html_url": "https://github.com/statelesshz", "followers_url": "https://api.github.com/users/statelesshz/followers", "following_url": "https://api.github.com/users/statelesshz/following{/other_user}", "gists_url": "https://api.github.com/users/statelesshz/gists{/gist_id}", "starred_url": "https://api.github.com/users/statelesshz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/statelesshz/subscriptions", "organizations_url": "https://api.github.com/users/statelesshz/orgs", "repos_url": "https://api.github.com/users/statelesshz/repos", "events_url": "https://api.github.com/users/statelesshz/events{/privacy}", "received_events_url": "https://api.github.com/users/statelesshz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Agree! Wdyt @ArthurZucker ", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27337). All of your documentation changes will be reflected on that endpoint." ]
1,699
1,699
1,699
CONTRIBUTOR
null
# What does this PR do? Since the benchmarking tools were deprecated by https://github.com/huggingface/transformers/pull/15848, I think it is more appropriate to put the corresponding examples in the legacy directory :-). WDYT @patrickvonplaten @LysandreJik
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27337/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27337/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27337", "html_url": "https://github.com/huggingface/transformers/pull/27337", "diff_url": "https://github.com/huggingface/transformers/pull/27337.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27337.patch", "merged_at": 1699432057000 }
https://api.github.com/repos/huggingface/transformers/issues/27336
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27336/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27336/comments
https://api.github.com/repos/huggingface/transformers/issues/27336/events
https://github.com/huggingface/transformers/pull/27336
1,980,735,428
PR_kwDOCUB6oc5exgAV
27,336
[Whisper] Add `large-v3` version support
{ "login": "flyingleafe", "id": 1803963, "node_id": "MDQ6VXNlcjE4MDM5NjM=", "avatar_url": "https://avatars.githubusercontent.com/u/1803963?v=4", "gravatar_id": "", "url": "https://api.github.com/users/flyingleafe", "html_url": "https://github.com/flyingleafe", "followers_url": "https://api.github.com/users/flyingleafe/followers", "following_url": "https://api.github.com/users/flyingleafe/following{/other_user}", "gists_url": "https://api.github.com/users/flyingleafe/gists{/gist_id}", "starred_url": "https://api.github.com/users/flyingleafe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/flyingleafe/subscriptions", "organizations_url": "https://api.github.com/users/flyingleafe/orgs", "repos_url": "https://api.github.com/users/flyingleafe/repos", "events_url": "https://api.github.com/users/flyingleafe/events{/privacy}", "received_events_url": "https://api.github.com/users/flyingleafe/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks, this can be extremely helpful!", "@ArthurZucker Thanks! Did not see the download fixing PR, and you were quite fast with tokenizer support, congrats)\r\nExcept for the tokenizer, the feature extractor parameters should also be fetched and exported, esp. given that `v3` uses a different number of melbanks. I can handle that in this PR, if you do not yet have it already implemented somewhere locally.", "Yes for sure! ", "@ArthurZucker Added feature extractor export.\r\nI reused the pre-computed mel filters from `openai/whisper` repository, doing so required slight changes in `WhisperFeatureExtractor` logic. I anticipate that auto-computed filters should be equivalent to ones saved in `openai/whisper`, but I am not 100% sure, so I think this is a more reliable way to obtain 100% functional equivalence.", "Nice, just merged #27338 can you rebase? ", "@ArthurZucker merged, can instead rebase/forcepush if that's preferable.", "Merging should be fine, reviewing now ! ", "@ArthurZucker removed everything related to downloading the pre-computed filters, they are indeed equivalent to the constructed ones (`np.allclose == True`). ", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27336). All of your documentation changes will be reflected on that endpoint.", "@ArthurZucker @sanchit-gandhi Your comment for the full preprocessor export is addressed. Since the preprocessor has the tokenizer as its constituent part, I renamed the `--convert_tokenizer` option to `--convert_preprocessor`.\r\n\r\nI also took the liberty of removing additional parameters of `--whisper_version` and `--multilingual`, since the actual number of supported languages is [derivable from the vocabulary size](https://github.com/openai/whisper/blob/fcfeaf1b61994c071bba62da47d7846933576ac9/whisper/model.py#L271), which is the part of OpenAI model checkpoint.\r\n\r\n@sanchit-gandhi I implemented the fetching of generation config from HF based on the number of languages supported as you have suggested, but it is kind of a chicken-and-egg situation. The alignment heads can be hardcoded into the dictionary [as OpenAI does that](https://github.com/openai/whisper/blob/fcfeaf1b61994c071bba62da47d7846933576ac9/whisper/__init__.py#L34), and the other parameters are either derived from the tokenizer or hardcoded as well. 
The only setting I don't quite understand how to derive is the set of suppressed tokens, if you give me a hint on that, I can remove the dependency on downloading extra stuff from HF completely.", "@sanchit-gandhi \r\nBasically what I did is setting the alignment head appropriately if the user provided Whisper model version instead of a local checkpoint, and not setting them (with a warning) otherwise.\r\nIt could be possible to also detect if the local checkpoint is equivalent to the official one by checking the hash, but that is probably a non-issue, I cannot think of a genuine use case when the user has the OpenAI checkpoint saved locally but is unable/unwilling to simply re-download it.", "@sanchit-gandhi Your point is valid - why do extra work if we are downloading generation configs from HF hub anyway.\r\nRemoved all logic related to that, simply preserving the alignment heads in the config if the original checkpoint is downloaded.", "@sanchit-gandhi People [complain](https://github.com/guillaumekln/faster-whisper/pull/548#issuecomment-1807779706) in the downstream community projects that they expect tokenizer files in fast format (`tokenizer.json`) to be also present in the HF checkpoint.\r\n\r\nI added a couple of lines here for conversion and export of fast tokenizer as well. Only you and your colleagues can add that to the official checkpoint though.", "@sanchit-gandhi bump, is that good for merge?", "@sanchit-gandhi Your last suggestion has been done three days ago, let's merge if good to go", "Thanks for bearing with both of us 😉 " ]
1,699
1,705
1,700
CONTRIBUTOR
null
# What does this PR do? Adds the ability to download and convert the fresh `large-v3` version of Whisper (https://github.com/openai/whisper/pull/1761/files). Closes #27331. The usage of `_download` method in `convert_openai_to_hf.py` turned out to be broken, that was fixed. I also plan to add the processor (feature extractor + tokenizer) automatic file export today and take care that subtle changes in language tag tokenization are supported - hence the draft status. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. 
- TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27336/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27336/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27336", "html_url": "https://github.com/huggingface/transformers/pull/27336", "diff_url": "https://github.com/huggingface/transformers/pull/27336.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27336.patch", "merged_at": 1700498209000 }
https://api.github.com/repos/huggingface/transformers/issues/27335
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27335/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27335/comments
https://api.github.com/repos/huggingface/transformers/issues/27335/events
https://github.com/huggingface/transformers/pull/27335
1,980,681,676
PR_kwDOCUB6oc5exUOr
27,335
OpenAI to HF: Add large-v3 to conversion script
{ "login": "jstoone", "id": 1711456, "node_id": "MDQ6VXNlcjE3MTE0NTY=", "avatar_url": "https://avatars.githubusercontent.com/u/1711456?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jstoone", "html_url": "https://github.com/jstoone", "followers_url": "https://api.github.com/users/jstoone/followers", "following_url": "https://api.github.com/users/jstoone/following{/other_user}", "gists_url": "https://api.github.com/users/jstoone/gists{/gist_id}", "starred_url": "https://api.github.com/users/jstoone/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jstoone/subscriptions", "organizations_url": "https://api.github.com/users/jstoone/orgs", "repos_url": "https://api.github.com/users/jstoone/repos", "events_url": "https://api.github.com/users/jstoone/events{/privacy}", "received_events_url": "https://api.github.com/users/jstoone/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I've converted PR to draft, as I'm currently getting the following exception:\r\n```\r\nTraceback (most recent call last):\r\n File \"/Users/jstoone/Sites/huggingface/transformers/src/transformers/models/whisper/convert_openai_to_hf.py\", line 185, in <module>\r\n convert_openai_whisper_to_tfms(args.checkpoint_path, args.pytorch_dump_folder_path)\r\n File \"/Users/jstoone/Sites/huggingface/transformers/src/transformers/models/whisper/convert_openai_to_hf.py\", line 137, in convert_openai_whisper_to_tfms\r\n dimensions = original_checkpoint[\"dims\"]\r\n ~~~~~~~~~~~~~~~~~~~^^^^^^^^\r\nTypeError: byte indices must be integers or slices, not str\r\n```", "Hi @jstoone, thanks for opening this PR and contributing to the library! \r\n\r\nThere's some other PRs relating to this\r\n* #26834\r\n* #27336\r\n* #27338 \r\n\r\nI believe #26834 tackles the issue being seen here. As #27336 is a more complete update - handling the addition of cantonese - then that will be the PR to be merged in. \r\n\r\ncc @ArthurZucker for reference. ", "@amyeroberts That's amazing! The community moves so darn fast, it's amazing to see. I'll go ahead and close this one then. Thanks!", "@amyeroberts\r\n> As https://github.com/huggingface/transformers/pull/27336 is a more complete update - handling the addition of cantonese - then that will be the PR to be merged in.\r\n\r\nMaybe I'm not understanding you, but the PR appears to only remove mention of Cantonese", "@gerrynjenny The diff in the [open PR](https://github.com/huggingface/transformers/pull/27336) removes instructions in the docs mentioning that you have to specify 100 languages. This is because it's now inferred from the checkpoint when converting. The PR previously included logic which added cantonese as a language in the tokenizer mapping. However that has already been merged in with #27338" ]
1,699
1,699
1,699
NONE
null
# What does this PR do? It adds the new large-v3 to the script which converts OpenAI model to HF. The URL is copied from the PR releasing large-v3: https://github.com/openai/whisper/pull/1761 I can't find any tests related to the conversion script, so I've gone ahead and skipped that. Let me know, if I've missed something. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sanchit-gandhi Do you know what has changed, since the model checkpoint seems to be a byte array instead of a directory? <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
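A hedged illustration of one way to cope with the symptom described in this thread (the downloaded checkpoint arriving as raw bytes rather than an already-loaded dict). The `"dims"` key comes from the traceback in the comments above; the `"model_state_dict"` key and the helper name are assumptions, and this is not necessarily the fix that was merged:

```python
import io

import torch


def load_openai_checkpoint(checkpoint):
    """Accept either an already-loaded checkpoint dict or the raw bytes of a .pt file."""
    if isinstance(checkpoint, (bytes, bytearray)):
        # torch.load can read from any file-like object, so wrap the bytes.
        checkpoint = torch.load(io.BytesIO(checkpoint), map_location="cpu")
    dimensions = checkpoint["dims"]              # model hyper-parameters
    state_dict = checkpoint["model_state_dict"]  # assumed key for the weights
    return dimensions, state_dict
```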
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27335/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27335/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27335", "html_url": "https://github.com/huggingface/transformers/pull/27335", "diff_url": "https://github.com/huggingface/transformers/pull/27335.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27335.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/27334
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27334/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27334/comments
https://api.github.com/repos/huggingface/transformers/issues/27334/events
https://github.com/huggingface/transformers/pull/27334
1,980,571,030
PR_kwDOCUB6oc5ew8Hr
27,334
translate big_models.md and performance.md to chinese
{ "login": "jiaqiw09", "id": 60021713, "node_id": "MDQ6VXNlcjYwMDIxNzEz", "avatar_url": "https://avatars.githubusercontent.com/u/60021713?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jiaqiw09", "html_url": "https://github.com/jiaqiw09", "followers_url": "https://api.github.com/users/jiaqiw09/followers", "following_url": "https://api.github.com/users/jiaqiw09/following{/other_user}", "gists_url": "https://api.github.com/users/jiaqiw09/gists{/gist_id}", "starred_url": "https://api.github.com/users/jiaqiw09/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jiaqiw09/subscriptions", "organizations_url": "https://api.github.com/users/jiaqiw09/orgs", "repos_url": "https://api.github.com/users/jiaqiw09/repos", "events_url": "https://api.github.com/users/jiaqiw09/events{/privacy}", "received_events_url": "https://api.github.com/users/jiaqiw09/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@stevhliu\r\n\r\nHi, I will translate `Performance and scalability` part next.\r\n\r\nFor the possible merge conflict, I will fix it later after another pr merged. \r\n\r\nBest", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27334). All of your documentation changes will be reflected on that endpoint.", "@stevhliu\r\n\r\nHi, I have fixed redirect problem.\r\n\r\nBest" ]
1,699
1,699
1,699
CONTRIBUTOR
null
# What does this PR do? Part of #26803 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? _not necessary_ ## Who can review? @stevhliu
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27334/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27334/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27334", "html_url": "https://github.com/huggingface/transformers/pull/27334", "diff_url": "https://github.com/huggingface/transformers/pull/27334.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27334.patch", "merged_at": 1699462126000 }
https://api.github.com/repos/huggingface/transformers/issues/27333
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27333/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27333/comments
https://api.github.com/repos/huggingface/transformers/issues/27333/events
https://github.com/huggingface/transformers/pull/27333
1,980,539,859
PR_kwDOCUB6oc5ew1kb
27,333
[fix] bug about label padding in data collator for seq2seq
{ "login": "cs-wangchong", "id": 17706003, "node_id": "MDQ6VXNlcjE3NzA2MDAz", "avatar_url": "https://avatars.githubusercontent.com/u/17706003?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cs-wangchong", "html_url": "https://github.com/cs-wangchong", "followers_url": "https://api.github.com/users/cs-wangchong/followers", "following_url": "https://api.github.com/users/cs-wangchong/following{/other_user}", "gists_url": "https://api.github.com/users/cs-wangchong/gists{/gist_id}", "starred_url": "https://api.github.com/users/cs-wangchong/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cs-wangchong/subscriptions", "organizations_url": "https://api.github.com/users/cs-wangchong/orgs", "repos_url": "https://api.github.com/users/cs-wangchong/repos", "events_url": "https://api.github.com/users/cs-wangchong/events{/privacy}", "received_events_url": "https://api.github.com/users/cs-wangchong/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,699
1,702
1,702
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> When I use `DataCollatorForSeq2Seq` with `Trainer` to fine-tune a model, a batch_size mismatch error occurs at the beginning of the second epoch. After reading the implementation of `DataCollatorForSeq2Seq.__call__`, I figured out that the error is caused by the label padding logic. The `feature` dict is modified during the first epoch, leading to the batch_size mismatch error in the following epochs. I fix this by creating a copy of the `feature` dict instead of modifying it directly. Error message: ``` File "workspace/train.py", line 115, in train:22<10:44:27, 1.87it/s] trainer.train() File ".../lib/python3.9/site-packages/transformers/trainer.py", line 1591, in train return inner_training_loop( File ".../lib/python3.9/site-packages/transformers/trainer.py", line 1892, in _inner_training_loop tr_loss_step = self.training_step(model, inputs) File ".../lib/python3.9/site-packages/transformers/trainer.py", line 2776, in training_step loss = self.compute_loss(model, inputs) File ".../lib/python3.9/site-packages/transformers/trainer.py", line 2801, in compute_loss outputs = model(**inputs) File ".../lib/python3.9/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File ".../lib/python3.9/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) File ".../lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1109, in forward loss = loss_fct(shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1)) File ".../lib/python3.9/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File ".../lib/python3.9/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) File ".../lib/python3.9/site-packages/torch/nn/modules/loss.py", line 1179, in forward return F.cross_entropy(input, target, weight=self.weight, File ".../lib/python3.9/site-packages/torch/nn/functional.py", line 3053, in cross_entropy return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing) ValueError: Expected input batch_size (5856) to match target batch_size (6368). ``` ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)?
Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Library: - tokenizers: @ArthurZucker - trainer: @muellerzr
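A minimal sketch of the idea described in the body above — pad the labels on per-batch copies so the cached feature dicts are never mutated across epochs. The helper name is hypothetical, list-valued `labels` are assumed, and this is not the PR's actual diff:

```python
import copy


def pad_labels_on_copies(features, label_pad_token_id=-100):
    """Pad `labels` to the longest sequence in the batch without mutating the inputs."""
    features = [copy.deepcopy(feature) for feature in features]  # work on copies, not the originals
    max_label_length = max(len(feature["labels"]) for feature in features)
    for feature in features:
        remainder = [label_pad_token_id] * (max_label_length - len(feature["labels"]))
        feature["labels"] = feature["labels"] + remainder
    return features


# The cached examples keep their original, un-padded labels for the next epoch.
batch = [{"labels": [1, 2, 3]}, {"labels": [4]}]
padded = pad_labels_on_copies(batch)
assert batch[1]["labels"] == [4]                 # original left untouched
assert padded[1]["labels"] == [4, -100, -100]    # padded copy used for the forward pass
```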
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27333/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27333/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27333", "html_url": "https://github.com/huggingface/transformers/pull/27333", "diff_url": "https://github.com/huggingface/transformers/pull/27333.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27333.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/27332
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27332/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27332/comments
https://api.github.com/repos/huggingface/transformers/issues/27332/events
https://github.com/huggingface/transformers/issues/27332
1,980,460,013
I_kwDOCUB6oc52C2vt
27,332
Early stopping can not save best model at last
{ "login": "ILG2021", "id": 93691919, "node_id": "U_kgDOBZWgDw", "avatar_url": "https://avatars.githubusercontent.com/u/93691919?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ILG2021", "html_url": "https://github.com/ILG2021", "followers_url": "https://api.github.com/users/ILG2021/followers", "following_url": "https://api.github.com/users/ILG2021/following{/other_user}", "gists_url": "https://api.github.com/users/ILG2021/gists{/gist_id}", "starred_url": "https://api.github.com/users/ILG2021/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ILG2021/subscriptions", "organizations_url": "https://api.github.com/users/ILG2021/orgs", "repos_url": "https://api.github.com/users/ILG2021/repos", "events_url": "https://api.github.com/users/ILG2021/events{/privacy}", "received_events_url": "https://api.github.com/users/ILG2021/received_events", "type": "User", "site_admin": false }
[]
open
false
{ "login": "muellerzr", "id": 7831895, "node_id": "MDQ6VXNlcjc4MzE4OTU=", "avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muellerzr", "html_url": "https://github.com/muellerzr", "followers_url": "https://api.github.com/users/muellerzr/followers", "following_url": "https://api.github.com/users/muellerzr/following{/other_user}", "gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}", "starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions", "organizations_url": "https://api.github.com/users/muellerzr/orgs", "repos_url": "https://api.github.com/users/muellerzr/repos", "events_url": "https://api.github.com/users/muellerzr/events{/privacy}", "received_events_url": "https://api.github.com/users/muellerzr/received_events", "type": "User", "site_admin": false }
[ { "login": "muellerzr", "id": 7831895, "node_id": "MDQ6VXNlcjc4MzE4OTU=", "avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muellerzr", "html_url": "https://github.com/muellerzr", "followers_url": "https://api.github.com/users/muellerzr/followers", "following_url": "https://api.github.com/users/muellerzr/following{/other_user}", "gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}", "starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions", "organizations_url": "https://api.github.com/users/muellerzr/orgs", "repos_url": "https://api.github.com/users/muellerzr/repos", "events_url": "https://api.github.com/users/muellerzr/events{/privacy}", "received_events_url": "https://api.github.com/users/muellerzr/received_events", "type": "User", "site_admin": false } ]
[ "cc @muellerzr ", "Hi @ILG2021, what is in `trainer.args`?" ]
1,699
1,707
null
NONE
null
### System Info - `transformers` version: 4.35.0 - Platform: Linux-5.15.0-88-generic-x86_64-with-glibc2.31 - Python version: 3.10.11 - Huggingface_hub version: 0.17.3 - Safetensors version: 0.4.0 - Accelerate version: 0.24.1 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @sanchit-gandhi ### Information - [x] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I fine-tune Whisper with transformers and set up early stopping in the trainer. Early stopping works, but the trainer cannot find the best model at the end. The code is: ``` trainer = Seq2SeqTrainer( args=training_args, model=model, train_dataset=common_voice["train"], eval_dataset=common_voice["test"], data_collator=data_collator, compute_metrics=compute_metrics, tokenizer=processor.feature_extractor, callbacks=[EarlyStoppingCallback(early_stopping_patience=3)] ) ``` The error reported is: `Could not locate the best model at whisper-large-v2-1.1.8/checkpoint-800/pytorch_model.bin, if you are running a distributed training on multiple nodes, you should activate `--save_on_each_node`.` I have checked the checkpoint folders and there is no pytorch_model.bin in them, so why does the trainer try to find pytorch_model.bin? ![image](https://github.com/huggingface/transformers/assets/93691919/496abca7-9f4f-41de-9eac-6c3d12f60024) ### Expected behavior The best model should be saved at the end of training.
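For context, `EarlyStoppingCallback` relies on best-model tracking being enabled in the training arguments. The sketch below shows the usual prerequisites; the values are illustrative and the output directory is hypothetical, and whether this configuration resolves the missing `pytorch_model.bin` error above is not confirmed here:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-finetune",        # hypothetical path
    evaluation_strategy="steps",
    eval_steps=200,
    save_strategy="steps",                # must line up with the evaluation strategy
    save_steps=200,
    load_best_model_at_end=True,          # required by EarlyStoppingCallback
    metric_for_best_model="wer",
    greater_is_better=False,              # lower WER is better
)
```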
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27332/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27332/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/27331
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27331/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27331/comments
https://api.github.com/repos/huggingface/transformers/issues/27331/events
https://github.com/huggingface/transformers/issues/27331
1,980,420,148
I_kwDOCUB6oc52CtA0
27,331
Add OpenAI Whisper Large-v3 weights
{ "login": "gau-nernst", "id": 26946864, "node_id": "MDQ6VXNlcjI2OTQ2ODY0", "avatar_url": "https://avatars.githubusercontent.com/u/26946864?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gau-nernst", "html_url": "https://github.com/gau-nernst", "followers_url": "https://api.github.com/users/gau-nernst/followers", "following_url": "https://api.github.com/users/gau-nernst/following{/other_user}", "gists_url": "https://api.github.com/users/gau-nernst/gists{/gist_id}", "starred_url": "https://api.github.com/users/gau-nernst/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gau-nernst/subscriptions", "organizations_url": "https://api.github.com/users/gau-nernst/orgs", "repos_url": "https://api.github.com/users/gau-nernst/repos", "events_url": "https://api.github.com/users/gau-nernst/events{/privacy}", "received_events_url": "https://api.github.com/users/gau-nernst/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @sanchit-gandhi ", "I saw that the weights are already ported here: https://huggingface.co/openai/whisper-large-v3, and the announcements made by the team. Awesome work!\r\nAre there any reasons why this issue is not closed?", " @gau-nernst As it's your issue, you can close it if you think it's resolved. #27336 hasn't been merged yet - so full v3 compatibility hasn't been shipped - depends what your \"solved\" criteria are :) " ]
1,699
1,700
1,700
CONTRIBUTOR
null
### Feature request OpenAI released Whisper Large-v3. https://github.com/openai/whisper/discussions/1762 I haven't looked into it closely, but it seems the only difference is using a 128-bin mel spectrogram instead of an 80-bin one, so the weight conversion should be the same. ### Motivation NA ### Your contribution NA
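If the only architectural change really is the move from 80 to 128 mel bins, the relevant knobs in `transformers` would presumably be the ones below. This is a sketch under that assumption; it ignores the tokenizer/language changes discussed in the related PRs:

```python
from transformers import WhisperConfig, WhisperFeatureExtractor

# Start from the large-v2 configuration and bump the mel-bin count.
config = WhisperConfig.from_pretrained("openai/whisper-large-v2")
config.num_mel_bins = 128

# The feature extractor has to produce matching 128-bin log-mel features.
feature_extractor = WhisperFeatureExtractor(feature_size=128)
```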
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27331/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27331/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27330
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27330/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27330/comments
https://api.github.com/repos/huggingface/transformers/issues/27330/events
https://github.com/huggingface/transformers/pull/27330
1,979,997,157
PR_kwDOCUB6oc5eu_yO
27,330
Fix FA2 import + deprecation cycle
{ "login": "SunMarc", "id": 57196510, "node_id": "MDQ6VXNlcjU3MTk2NTEw", "avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SunMarc", "html_url": "https://github.com/SunMarc", "followers_url": "https://api.github.com/users/SunMarc/followers", "following_url": "https://api.github.com/users/SunMarc/following{/other_user}", "gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}", "starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions", "organizations_url": "https://api.github.com/users/SunMarc/orgs", "repos_url": "https://api.github.com/users/SunMarc/repos", "events_url": "https://api.github.com/users/SunMarc/events{/privacy}", "received_events_url": "https://api.github.com/users/SunMarc/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@SunMarc Merging so this can be part of a patch release. " ]
1,699
1,699
1,699
MEMBER
null
# What does this PR do? This PR puts back `is_flash_attn_available` and makes it go through a deprecation cycle. It was removed in this [PR](#26785). Since this method was never private, some remote modeling files used it. Fixes #27319
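A generic shape for such a deprecation shim — warn, then delegate to the replacement helper. This only illustrates the pattern and is not necessarily the code that was merged:

```python
import warnings

from transformers.utils import is_flash_attn_2_available


def is_flash_attn_available():
    warnings.warn(
        "is_flash_attn_available is deprecated; use is_flash_attn_2_available instead.",
        FutureWarning,
    )
    return is_flash_attn_2_available()
```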
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27330/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27330/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27330", "html_url": "https://github.com/huggingface/transformers/pull/27330", "diff_url": "https://github.com/huggingface/transformers/pull/27330.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27330.patch", "merged_at": 1699953630000 }
https://api.github.com/repos/huggingface/transformers/issues/27329
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27329/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27329/comments
https://api.github.com/repos/huggingface/transformers/issues/27329/events
https://github.com/huggingface/transformers/issues/27329
1,979,904,689
I_kwDOCUB6oc52AvKx
27,329
Should I be getting more speedup/memory reduction from FlashAttention2 with Mistral?
{ "login": "cassianlewis", "id": 131266258, "node_id": "U_kgDOB9L20g", "avatar_url": "https://avatars.githubusercontent.com/u/131266258?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cassianlewis", "html_url": "https://github.com/cassianlewis", "followers_url": "https://api.github.com/users/cassianlewis/followers", "following_url": "https://api.github.com/users/cassianlewis/following{/other_user}", "gists_url": "https://api.github.com/users/cassianlewis/gists{/gist_id}", "starred_url": "https://api.github.com/users/cassianlewis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cassianlewis/subscriptions", "organizations_url": "https://api.github.com/users/cassianlewis/orgs", "repos_url": "https://api.github.com/users/cassianlewis/repos", "events_url": "https://api.github.com/users/cassianlewis/events{/privacy}", "received_events_url": "https://api.github.com/users/cassianlewis/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @cassianlewis \r\nThanks a lot for the extensive benchmark here! \r\nMany things in your setup can add an overhead here, I think we should test it without `use_double_quant=True` and make sure to use the same dtype for `compute_dtype` and `torch_dtype`. For better efficiency, I would try with `torch_dtype=torch.float16` and `bnb_4bit_compute_dtype=torch.float16`. Moreover, note that the input context length has a big impact in the final latency. Please take a look at the experiments I made here: https://github.com/huggingface/transformers/pull/26464#issuecomment-1743273513 and let me know if you have more questions.", "Hi @younesbelkada, thanks for the reply.\r\nI did actually do testing with fp16 and the results were largely the same:\r\n```\r\nbnb_config = BitsAndBytesConfig(\r\n load_in_4bit=True,\r\n bnb_4bit_compute_dtype=torch.float16\r\n)\r\n\r\n# load base LLM model and tokenizer\r\nmodel = AutoModelForCausalLM.from_pretrained(model_id, cache_dir = 'm', torch_dtype=torch.float16, \r\n quantization_config=bnb_config, use_flash_attention_2 = True)\r\n```\r\n\r\n### Results\r\nSlight improvements in time/memory. FYI the input is ~2k tokens (but no padding) so this may benefit from longer input sequences.\r\n![flash_fp16](https://github.com/huggingface/transformers/assets/131266258/30da13b7-242c-4212-a4f8-09b8dae80d6e)\r\n\r\nI then tested it with `max_new_tokens = 1` (ie just looking at prefill) and saw a much larger discrepancy:\r\n\r\n![flash_s](https://github.com/huggingface/transformers/assets/131266258/84ae9be9-faa6-416f-90e0-e323a55634e6)\r\n\r\nThis is more in line with what you posted in https://github.com/huggingface/transformers/pull/26464#issuecomment-1743273513\r\n\r\nSo it looks like, for decoding at least, the speedups are fairly minimal for the kind of input sequence lengths/batch sizes I'm using.\r\n\r\n", "Thanks a lot for this benchmark ! Yes I think this result is pretty much inline with my findings. \r\nFlash Attention is great for prefill indeed (which is proven by one of your plots), although for generation it is not 'as fast' as prefill you can note that you can fit larger batch size with the same memory budget (~10 for native vs ~16 for FA2). I think FA-2 + HF transformers really shines in the context of training / fine-tuning because you can fit much larger sequence length / batch size, hence improve efficiency of your training setup.\r\nIf we want to increase generation throughput one needs to use static KV cache + flash decoding " ]
1,699
1,699
1,699
NONE
null
### System Info transformers: 4.35.0 python: 3.9.13 ### Who can help? @SunMarc @younesbelkada @gant ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ## Setup model ``` model_id = "mistralai/Mistral-7B-Instruct-v0.1" bnb_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_use_double_quant=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype=torch.bfloat16 ) # load base LLM model and tokenizer model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, quantization_config=bnb_config, use_flash_attention_2 = True) ``` ## Run code for different batch sizes ``` results = [] for n in range(1, 25): print(f'Processing {n} examples') tokenized_prompt = tokenizer([context]*n, return_tensors="pt") length = len(tokenized_prompt['input_ids'][0])+1 print(length) t0 = time() with torch.no_grad(): output = model.generate( inputs = tokenized_prompt['input_ids'], max_new_tokens = 400, repetition_penalty = 1.2 ) t1 = time() time_taken = t1 - t0 mem_usage = memory() new_token_length = len(output[0]) - length tokens_per_second = new_token_length * n / time_taken time_per_batch = time_taken/n print('Time taken = ', time_taken) print(f'Tokens/s = {tokens_per_second}') gc.collect() torch.cuda.empty_cache() results.append({'batch_size': n, 'time_taken': time_taken, 'tokens_per_second': tokens_per_second, 'memory_usage': mem_usage, 'time_per_batch':time_per_batch}) ``` ### Expected behavior ## Results Very little speedup/memory improvement: ![flash](https://github.com/huggingface/transformers/assets/131266258/2b722a0b-67d4-4a58-be21-a8eab9cc2f09) ### Profiling With FA2: <img width="1255" alt="Screenshot 2023-11-06 at 18 22 46" src="https://github.com/huggingface/transformers/assets/131266258/ab75b997-9225-495f-9e9d-f86162039edc"> Without FA2 <img width="1248" alt="Screenshot 2023-11-06 at 18 16 58" src="https://github.com/huggingface/transformers/assets/131266258/96a84c7a-2f37-48f7-8682-bf256ab2490a"> Would expect better performance given these
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27329/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27329/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27328
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27328/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27328/comments
https://api.github.com/repos/huggingface/transformers/issues/27328/events
https://github.com/huggingface/transformers/pull/27328
1,979,824,130
PR_kwDOCUB6oc5euZTO
27,328
Allow `# Ignore copy`
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27328). All of your documentation changes will be reflected on that endpoint.", "## observed code (where there is `#Copied from`)\r\n#### (There are 2 `# ignore copied` blocks: one is not in the target code, another one is but having different content)\r\n\r\n```python\r\n----------------------------------------\r\nclass RobertaBertDummyModel:\r\n----------------------------------------\r\n def __init__(self, a=1, b=2):\r\n self.a = a\r\n self.b = b\r\n----------------------------------------\r\n\r\n----------------------------------------\r\n # ignore copied\r\n def only_in_roberta_to_be_ignored(self, c):\r\n return 3\r\n----------------------------------------\r\n\r\n----------------------------------------\r\n # Copied from transformers.models.dummy_gpt2.modeling_dummy_gpt2.GPT2DummyModel.forward\r\n def forward(self, c):\r\n return 1\r\n----------------------------------------\r\n\r\n----------------------------------------\r\n def only_in_roberta_not_ignored(self, c):\r\n return 2\r\n----------------------------------------\r\n\r\n----------------------------------------\r\n def existing_common(self, c):\r\n return 4\r\n----------------------------------------\r\n\r\n----------------------------------------\r\n def existing_diff_not_ignored(self, c):\r\n return 5\r\n----------------------------------------\r\n\r\n----------------------------------------\r\n # ignore copied\r\n def existing_diff_to_be_ignored(self, c):\r\n return 6\r\n----------------------------------------\r\n```\r\n\r\n## target code (where is specified by `#Copied from`)\r\n```python\r\n----------------------------------------\r\nclass BertDummyModel:\r\n----------------------------------------\r\n def __init__(self, a=1, b=2):\r\n self.a = a\r\n self.b = b\r\n----------------------------------------\r\n\r\n----------------------------------------\r\n # Copied from transformers.models.dummy_gpt2.modeling_dummy_gpt2.GPT2DummyModel.forward\r\n def forward(self, c):\r\n return 1\r\n----------------------------------------\r\n\r\n----------------------------------------\r\n def only_in_bert(self, c):\r\n return 7\r\n----------------------------------------\r\n\r\n----------------------------------------\r\n def existing_common(self, c):\r\n return 4\r\n----------------------------------------\r\n\r\n----------------------------------------\r\n def existing_diff_not_ignored(self, c):\r\n return 8\r\n----------------------------------------\r\n\r\n----------------------------------------\r\n def existing_diff_to_be_ignored(self, c):\r\n return 9\r\n----------------------------------------\r\n```\r\n\r\n<details>\r\n <summary>Intermediate results</summary>\r\n\r\n## replaced target code (by patterns)\r\n#### (just `BertDummyModel` -> `RobertaBertDummyModel`)\r\n```python\r\n----------------------------------------\r\nclass RobertaBertDummyModel:\r\n----------------------------------------\r\n def __init__(self, a=1, b=2):\r\n self.a = a\r\n self.b = b\r\n----------------------------------------\r\n\r\n----------------------------------------\r\n # Copied from transformers.models.dummy_gpt2.modeling_dummy_gpt2.GPT2DummyModel.forward\r\n def forward(self, c):\r\n return 1\r\n----------------------------------------\r\n\r\n----------------------------------------\r\n def only_in_bert(self, c):\r\n return 7\r\n----------------------------------------\r\n\r\n----------------------------------------\r\n def existing_common(self, c):\r\n 
return 4\r\n----------------------------------------\r\n\r\n----------------------------------------\r\n def existing_diff_not_ignored(self, c):\r\n return 8\r\n----------------------------------------\r\n\r\n----------------------------------------\r\n def existing_diff_to_be_ignored(self, c):\r\n return 9\r\n----------------------------------------\r\n```\r\n\r\n\r\n\r\n## target code taking into account `# ignore copied` from observed code\r\n#### (target code use to write before blackify)\r\n#### (the 2 `# ignore copied` blocks in the observed code are now in the target code)\r\n```python\r\n----------------------------------------\r\nclass RobertaBertDummyModel:\r\n----------------------------------------\r\n def __init__(self, a=1, b=2):\r\n self.a = a\r\n self.b = b\r\n----------------------------------------\r\n\r\n----------------------------------------\r\n # Copied from transformers.models.dummy_gpt2.modeling_dummy_gpt2.GPT2DummyModel.forward\r\n def forward(self, c):\r\n return 1\r\n----------------------------------------\r\n\r\n----------------------------------------\r\n def only_in_bert(self, c):\r\n return 7\r\n----------------------------------------\r\n\r\n----------------------------------------\r\n def existing_common(self, c):\r\n return 4\r\n----------------------------------------\r\n\r\n----------------------------------------\r\n def existing_diff_not_ignored(self, c):\r\n return 8\r\n----------------------------------------\r\n\r\n----------------------------------------\r\n # ignore copied\r\n def existing_diff_to_be_ignored(self, c):\r\n return 6\r\n----------------------------------------\r\n # ignore copied\r\n def only_in_roberta_to_be_ignored(self, c):\r\n return 3\r\n----------------------------------------\r\n```\r\n\r\n## target code use to compare (remove `# ignore copied` and empty blocks)\r\n\r\n```python\r\nclass RobertaBertDummyModel:\r\n def __init__(self, a=1, b=2):\r\n self.a = a\r\n self.b = b\r\n # Copied from transformers.models.dummy_gpt2.modeling_dummy_gpt2.GPT2DummyModel.forward\r\n def forward(self, c):\r\n return 1\r\n def only_in_bert(self, c):\r\n return 7\r\n def existing_common(self, c):\r\n return 4\r\n def existing_diff_not_ignored(self, c):\r\n return 8\r\n```\r\n\r\n## observed code use to compare (remove `# ignore copied` and empty blocks)\r\n\r\n```python\r\nclass RobertaBertDummyModel:\r\n def __init__(self, a=1, b=2):\r\n self.a = a\r\n self.b = b\r\n # Copied from transformers.models.dummy_gpt2.modeling_dummy_gpt2.GPT2DummyModel.forward\r\n def forward(self, c):\r\n return 1\r\n def only_in_roberta_not_ignored(self, c):\r\n return 2\r\n def existing_common(self, c):\r\n return 4\r\n def existing_diff_not_ignored(self, c):\r\n return 5\r\n```\r\n</details>\r\n\r\n## target code use to overwrite\r\n\r\n```python\r\nclass RobertaBertDummyModel:\r\n def __init__(self, a=1, b=2):\r\n self.a = a\r\n self.b = b\r\n\r\n # Copied from transformers.models.dummy_gpt2.modeling_dummy_gpt2.GPT2DummyModel.forward\r\n def forward(self, c):\r\n return 1\r\n\r\n def only_in_bert(self, c):\r\n return 7\r\n\r\n def existing_common(self, c):\r\n return 4\r\n\r\n def existing_diff_not_ignored(self, c):\r\n return 8\r\n\r\n # ignore copied\r\n def existing_diff_to_be_ignored(self, c):\r\n return 6\r\n\r\n # ignore copied\r\n def only_in_roberta_to_be_ignored(self, c):\r\n return 3\r\n```", "Rebased on `main` 🎉 ", "I'll review tomorrow! ", "> I'll review tomorrow!\r\n\r\nI have a meeting with Lysandre to explain this PR. 
We can discuss all together, so might be easier for the review.", "Following the discussion in our last meeting, I tried to implemented each line with a certain level of indent (that is not in an inner function or class) as a single block.\r\n\r\nHowever, I realized this is too error prone, like\r\n\r\n```python\r\nclass RobertaTokenizationTest:\r\n tokenizer_class = RobertaTokenizer\r\n rust_tokenizer_class = RobertaTokenizerFast\r\n test_rust_tokenizer = True\r\n from_pretrained_kwargs = {\"cls_token\": \"<s>\"}\r\n\r\n def setUp(self):\r\n super().setUp()\r\n```\r\nvs\r\n```python\r\nclass LongformerTokenizationTest\r\n # ignore copied\r\n tokenizer_class = LongformerTokenizer\r\n test_slow_tokenizer = True\r\n rust_tokenizer_class = LongformerTokenizerFast\r\n test_rust_tokenizer = True\r\n\r\n def setUp(self):\r\n super().setUp()\r\n```\r\n\r\nThose blocks (each line before the `setUp`) have **no well defined name**, and there is no reliable way to map the blocks between the source and target files. For this simple case, we might say just arrange a bit the files and it would work. But it's just a guess that happens to work in a special case.\r\n\r\nI suggest to treat the block before any inner class/func/method **as a single block**. This is sill not 100% theoretically reliable, but it will work until users are writing code in strange format.\r\n\r\nWDYT?", "> I'll review again after the last changes!\r\n\r\nIt's not ready, as we need to agree what to do regarding our last discussion offline.", "@ArthurZucker Updated the tests [here](https://github.com/huggingface/transformers/pull/27328/commits/3a5b22075cb126cc0e16c8382bc00a858e21e91d) as suggested", "Feel free to merge! " ]
1,699
1,701
1,701
COLLABORATOR
null
# What does this PR do? Allow `# ignore copied` as suggested in https://github.com/huggingface/transformers/pull/26713#discussion_r1354625364 The runtime of `check_copies` is the same as before. See the comment below for an example.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27328/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27328/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27328", "html_url": "https://github.com/huggingface/transformers/pull/27328", "diff_url": "https://github.com/huggingface/transformers/pull/27328.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27328.patch", "merged_at": 1701939609000 }
https://api.github.com/repos/huggingface/transformers/issues/27327
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27327/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27327/comments
https://api.github.com/repos/huggingface/transformers/issues/27327/events
https://github.com/huggingface/transformers/pull/27327
1,979,766,039
PR_kwDOCUB6oc5euMLs
27,327
[docs] fixed links with 404
{ "login": "MKhalusova", "id": 1065417, "node_id": "MDQ6VXNlcjEwNjU0MTc=", "avatar_url": "https://avatars.githubusercontent.com/u/1065417?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MKhalusova", "html_url": "https://github.com/MKhalusova", "followers_url": "https://api.github.com/users/MKhalusova/followers", "following_url": "https://api.github.com/users/MKhalusova/following{/other_user}", "gists_url": "https://api.github.com/users/MKhalusova/gists{/gist_id}", "starred_url": "https://api.github.com/users/MKhalusova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MKhalusova/subscriptions", "organizations_url": "https://api.github.com/users/MKhalusova/orgs", "repos_url": "https://api.github.com/users/MKhalusova/repos", "events_url": "https://api.github.com/users/MKhalusova/events{/privacy}", "received_events_url": "https://api.github.com/users/MKhalusova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,699
1,699
1,699
CONTRIBUTOR
null
The PR fixes some links that were resulting in 404 for various reasons: typos, external doc (e.g. for `flax.linen.Module`) moved, etc.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27327/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27327/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27327", "html_url": "https://github.com/huggingface/transformers/pull/27327", "diff_url": "https://github.com/huggingface/transformers/pull/27327.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27327.patch", "merged_at": 1699299903000 }
https://api.github.com/repos/huggingface/transformers/issues/27326
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27326/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27326/comments
https://api.github.com/repos/huggingface/transformers/issues/27326/events
https://github.com/huggingface/transformers/pull/27326
1,979,704,919
PR_kwDOCUB6oc5et-kz
27,326
storing & logging gradient norm in trainer
{ "login": "shijie-wu", "id": 2987758, "node_id": "MDQ6VXNlcjI5ODc3NTg=", "avatar_url": "https://avatars.githubusercontent.com/u/2987758?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shijie-wu", "html_url": "https://github.com/shijie-wu", "followers_url": "https://api.github.com/users/shijie-wu/followers", "following_url": "https://api.github.com/users/shijie-wu/following{/other_user}", "gists_url": "https://api.github.com/users/shijie-wu/gists{/gist_id}", "starred_url": "https://api.github.com/users/shijie-wu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shijie-wu/subscriptions", "organizations_url": "https://api.github.com/users/shijie-wu/orgs", "repos_url": "https://api.github.com/users/shijie-wu/repos", "events_url": "https://api.github.com/users/shijie-wu/events{/privacy}", "received_events_url": "https://api.github.com/users/shijie-wu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @muellerzr ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Thank you for the work on this, @shijie-wu! \r\n\r\nIt may seem like a little PR to some, but this would be a huge step to bring `transformers` closer to parity with projects like `gpt-neox` for large-scale training.", "Gentle ping @shijie-wu :)", "Found that `self.accelerator.clip_grad_norm_` will return `None` if we are using DeepSpeed with Trainer. In DeepSpeed we should use `model.get_global_grad_norm()` to get grad_norm:\r\n```python\r\n_grad_norm = self.accelerator.clip_grad_norm_(\r\n model.parameters(),\r\n args.max_grad_norm,\r\n)\r\nif self.accelerator.distributed_type == DistributedType.DEEPSPEED:\r\n grad_norm = model.get_global_grad_norm()\r\nelse:\r\n grad_norm = _grad_norm.item() if _grad_norm is not None else None\r\n\r\n```", "sorry for the delay! PTAL @muellerzr @mjbommar ", "Gentle ping @muellerzr @mjbommar :)", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27326). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "cc @amyeroberts for final review :) " ]
1,699
1,708
1,708
CONTRIBUTOR
null
# What does this PR do? Report gradient norm during training - Fixes #26143 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @muellerzr @pacman100
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27326/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27326/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27326", "html_url": "https://github.com/huggingface/transformers/pull/27326", "diff_url": "https://github.com/huggingface/transformers/pull/27326.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27326.patch", "merged_at": 1708369661000 }
https://api.github.com/repos/huggingface/transformers/issues/27325
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27325/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27325/comments
https://api.github.com/repos/huggingface/transformers/issues/27325/events
https://github.com/huggingface/transformers/pull/27325
1,979,703,404
PR_kwDOCUB6oc5et-O6
27,325
Modify group_sub_entities in TokenClassification Pipeline to support label with "-"
{ "login": "eshoyuan", "id": 57313880, "node_id": "MDQ6VXNlcjU3MzEzODgw", "avatar_url": "https://avatars.githubusercontent.com/u/57313880?v=4", "gravatar_id": "", "url": "https://api.github.com/users/eshoyuan", "html_url": "https://github.com/eshoyuan", "followers_url": "https://api.github.com/users/eshoyuan/followers", "following_url": "https://api.github.com/users/eshoyuan/following{/other_user}", "gists_url": "https://api.github.com/users/eshoyuan/gists{/gist_id}", "starred_url": "https://api.github.com/users/eshoyuan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eshoyuan/subscriptions", "organizations_url": "https://api.github.com/users/eshoyuan/orgs", "repos_url": "https://api.github.com/users/eshoyuan/repos", "events_url": "https://api.github.com/users/eshoyuan/events{/privacy}", "received_events_url": "https://api.github.com/users/eshoyuan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27325). All of your documentation changes will be reflected on that endpoint." ]
1,699
1,701
1,701
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This PR provides a bug fix for the `group_sub_entities` function within the NER pipeline. Previously, the function split entities on every dash (`-`), which led to incorrect parsing of entities that inherently contain a dash in their names. The code change ensures that only the first dash is considered for splitting, preserving the integrity of the entity names. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? - pipelines: @Narsil <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
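A minimal sketch of the behavioural change described in the body above, using a hypothetical BIO-style label that contains a dash in its name; the helper is illustrative and not the pipeline's actual code:

```python
def entity_type(entity_name: str) -> str:
    """Strip a leading B-/I- prefix while keeping dashes inside the label name."""
    if entity_name.startswith(("B-", "I-")):
        # Split only on the first dash, so multi-dash labels stay intact.
        return entity_name.split("-", 1)[-1]
    return entity_name


assert entity_type("B-GENE-PROTEIN") == "GENE-PROTEIN"  # hypothetical label; splitting on every dash would break it apart
assert entity_type("O") == "O"
```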
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27325/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27325/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27325", "html_url": "https://github.com/huggingface/transformers/pull/27325", "diff_url": "https://github.com/huggingface/transformers/pull/27325.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27325.patch", "merged_at": 1701098747000 }
https://api.github.com/repos/huggingface/transformers/issues/27324
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27324/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27324/comments
https://api.github.com/repos/huggingface/transformers/issues/27324/events
https://github.com/huggingface/transformers/pull/27324
1,979,661,588
PR_kwDOCUB6oc5et09f
27,324
Enhance Text Grouping with Dynamic Padding for Variable Sequence Lengths
{ "login": "itsmenick212", "id": 52716575, "node_id": "MDQ6VXNlcjUyNzE2NTc1", "avatar_url": "https://avatars.githubusercontent.com/u/52716575?v=4", "gravatar_id": "", "url": "https://api.github.com/users/itsmenick212", "html_url": "https://github.com/itsmenick212", "followers_url": "https://api.github.com/users/itsmenick212/followers", "following_url": "https://api.github.com/users/itsmenick212/following{/other_user}", "gists_url": "https://api.github.com/users/itsmenick212/gists{/gist_id}", "starred_url": "https://api.github.com/users/itsmenick212/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/itsmenick212/subscriptions", "organizations_url": "https://api.github.com/users/itsmenick212/orgs", "repos_url": "https://api.github.com/users/itsmenick212/repos", "events_url": "https://api.github.com/users/itsmenick212/events{/privacy}", "received_events_url": "https://api.github.com/users/itsmenick212/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,699
1,702
1,702
NONE
null
This commit introduces an enhancement to the `group_texts` function, allowing it to dynamically pad sequences within each batch to match the length of the longest sequence. This change ensures that batches fed into the model have uniform sequence lengths, improving model training stability and performance. # What does this PR do? ## Summary Implement dynamic padding in the `group_texts` function to handle batches with variable sequence lengths, improving model performance and training efficiency. ## Motivation Previously, the `group_texts` function would group texts into blocks of a specified size without ensuring that each sequence within a batch was of the same length. This could lead to inefficiencies and complications during model training as machine learning models typically expect inputs of uniform size. ## Changes The `group_texts` function now includes a padding step that dynamically adjusts the length of sequences within a batch. The sequences are padded with the tokenizer's padding token to match the length of the longest sequence in the batch. The specific changes include: - Computing the max sequence length in each batch post-chunking. - Padding shorter sequences with the `tokenizer.pad_token_id`. - Adjusting the function's return to include padded `input_ids` and `labels`. ## Impact This change will: - Ensure that all batches have sequences of uniform length, which is crucial for many machine learning models, particularly when using hardware accelerators like GPUs. - Potentially increase the data throughput and efficiency during training, as uniform sequence lengths can be more easily parallelized. - Improve the ease of use for the `group_texts` function, as it now internally manages variable sequence lengths, abstracting this complexity from the user. ## Tests - Updated existing tests to account for the new padding behavior. - Added new tests to specifically check for correct padding behavior with sequences of varying lengths within the same batch. ## Usage Example ```python block_size = 128 # The desired sequence length after grouping padded_datasets = tokenized_datasets.map( lambda examples: group_texts(examples, block_size, tokenizer), batched=True, num_proc=data_args.preprocessing_num_workers, load_from_cache_file=not data_args.overwrite_cache, desc=f"Grouping and padding texts in chunks of {block_size}", ) ```
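A rough sketch of how the padding step described above could look (illustrative only; it assumes labels are padded with -100 so padded positions are ignored by the loss, which may differ from the actual PR diff):

```python
def group_texts(examples, block_size, tokenizer):
    # Concatenate all texts, then split into chunks of at most `block_size` tokens.
    concatenated = sum(examples["input_ids"], [])
    chunks = [concatenated[i : i + block_size] for i in range(0, len(concatenated), block_size)]

    # Dynamic padding: pad every chunk in the batch to the longest chunk.
    max_len = max(len(chunk) for chunk in chunks)
    pad_id = tokenizer.pad_token_id
    input_ids = [chunk + [pad_id] * (max_len - len(chunk)) for chunk in chunks]
    labels = [chunk + [-100] * (max_len - len(chunk)) for chunk in chunks]
    return {"input_ids": input_ids, "labels": labels}
```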
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27324/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27324/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27324", "html_url": "https://github.com/huggingface/transformers/pull/27324", "diff_url": "https://github.com/huggingface/transformers/pull/27324.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27324.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/27323
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27323/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27323/comments
https://api.github.com/repos/huggingface/transformers/issues/27323/events
https://github.com/huggingface/transformers/pull/27323
1,979,653,291
PR_kwDOCUB6oc5etzJm
27,323
Fix `Kosmos2Processor` batch mode
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "All tests (even slow) pass!", "_The documentation is not available anymore as the PR was closed or merged._" ]
1,699
1,699
1,699
COLLABORATOR
null
# What does this PR do? Fix `Kosmos2Processor` batch mode as there is a bug, see comment in the code change. Fix one issue about batch mode opened in #27301
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27323/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27323/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27323", "html_url": "https://github.com/huggingface/transformers/pull/27323", "diff_url": "https://github.com/huggingface/transformers/pull/27323.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27323.patch", "merged_at": 1699293951000 }
https://api.github.com/repos/huggingface/transformers/issues/27322
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27322/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27322/comments
https://api.github.com/repos/huggingface/transformers/issues/27322/events
https://github.com/huggingface/transformers/pull/27322
1,979,559,213
PR_kwDOCUB6oc5eteZ4
27,322
[Whisper] Block language/task args for English-only
{ "login": "sanchit-gandhi", "id": 93869735, "node_id": "U_kgDOBZhWpw", "avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sanchit-gandhi", "html_url": "https://github.com/sanchit-gandhi", "followers_url": "https://api.github.com/users/sanchit-gandhi/followers", "following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}", "gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}", "starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions", "organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs", "repos_url": "https://api.github.com/users/sanchit-gandhi/repos", "events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}", "received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks for taking care of this so swiftly! 🤗 ", "_The documentation is not available anymore as the PR was closed or merged._" ]
1,699
1,699
1,699
CONTRIBUTOR
null
# What does this PR do? The `language`/`task` args should only be forwarded to the multilingual Whisper models. Passing these args to the English-only models gives the wrong error message currently: ``` ValueError: The generation config is outdated and is thus not compatible with the `language` argument to `generate`. Either set the language using the `forced_decoder_ids` in the model config, or update the generation config as per the instructions https://github.com/huggingface/transformers/issues/25084#issuecomment-1664398224 ``` This PR blocks these args for the English-only models with an appropriate error message. cc @Vaibhavs10
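A short sketch of the intended usage split (checkpoint names assumed from the Hub; the exact error text is defined by this PR):

```python
from transformers import WhisperForConditionalGeneration

# Multilingual checkpoint: `language`/`task` are valid generate() arguments.
multilingual = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")
# multilingual.generate(input_features, language="french", task="transcribe")

# English-only checkpoint: these arguments are meaningless, so after this PR
# passing them should raise an informative error instead of the message quoted above.
english_only = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny.en")
# english_only.generate(input_features)                      # OK
# english_only.generate(input_features, language="english")  # raises ValueError
```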
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27322/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27322/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27322", "html_url": "https://github.com/huggingface/transformers/pull/27322", "diff_url": "https://github.com/huggingface/transformers/pull/27322.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27322.patch", "merged_at": 1699351464000 }
https://api.github.com/repos/huggingface/transformers/issues/27321
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27321/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27321/comments
https://api.github.com/repos/huggingface/transformers/issues/27321/events
https://github.com/huggingface/transformers/issues/27321
1,979,516,675
I_kwDOCUB6oc51_QcD
27,321
AWQ quantization example in colab failures
{ "login": "poedator", "id": 24738311, "node_id": "MDQ6VXNlcjI0NzM4MzEx", "avatar_url": "https://avatars.githubusercontent.com/u/24738311?v=4", "gravatar_id": "", "url": "https://api.github.com/users/poedator", "html_url": "https://github.com/poedator", "followers_url": "https://api.github.com/users/poedator/followers", "following_url": "https://api.github.com/users/poedator/following{/other_user}", "gists_url": "https://api.github.com/users/poedator/gists{/gist_id}", "starred_url": "https://api.github.com/users/poedator/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/poedator/subscriptions", "organizations_url": "https://api.github.com/users/poedator/orgs", "repos_url": "https://api.github.com/users/poedator/repos", "events_url": "https://api.github.com/users/poedator/events{/privacy}", "received_events_url": "https://api.github.com/users/poedator/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks @poedator for reporting, everything should be back to normal now. \r\nWe tested it with autoawq==0.1.4 which did not had the strong requirement for CUDA>=12.0, that has been introduced in the most recent release: https://github.com/casper-hansen/AutoAWQ/releases/tag/v0.1.6 . To install autoawq with CUDA < 12.0 I had to call `pip install https://github.com/casper-hansen/AutoAWQ/releases/download/v0.1.6/autoawq-0.1.6+cu118-cp310-cp310-linux_x86_64.whl` (cc @casper-hansen in case I am mistaken somewhere). \r\nTo retrive the config from the resulting model you need to check the attribute `model.model.config`, as done per the line:\r\n```python\r\n# the pretrained transformers model is stored in the model attribute + we need to pass a dict\r\nmodel.model.config.quantization_config = quantization_config\r\n```\r\nThe point 2 is temporarly solved if you call `lower()` to the version string, until https://github.com/huggingface/transformers/pull/27320 gets merged where it should perform str to enum conversion.\r\nAgain thanks for reporting, on my end everything looks resolved, I'll leave this issue open but if you think that everything has been addressed feel free to close the issue! ", "@younesbelkada You are correct. PyPi only allows one release file, so I chose to update to the latest configuration of torch 2.1.0 and CUDA 12.1.1 that PyTorch offers. Extra wheels are made available with torch 2.0.1 and CUDA 11.8.0 for compatibility on the GitHub release itself.\r\n\r\nI updated the installation section to reflect the above:\r\nhttps://github.com/casper-hansen/AutoAWQ#install", "Ok makes sense, therefore I believe there is no fix to upstream on autoawq side, the only fix is on the notebook, which is done!", "@casper-hansen , @younesbelkada - thank you for the quick response!" ]
1,699
1,703
1,699
CONTRIBUTOR
null
### System Info colab ### Who can help? @younesbelkada @SunMarc ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction The page https://huggingface.co/docs/transformers/main_classes/quantization refers to a colab example for AWQ quantization. This example fails: 1) [optional] After the 4.35.0 release it fails with the error `ImportError: libcudart.so.12: cannot open shared object file: No such file or directory` during `from awq import AutoAWQForCausalLM`. I thought that it should work now without git+https installs with 4.35. 2) `quantization_config = AwqConfig(...)` fails because it expects `AWQLinearVersion.GEMM` and gets the string "GEMM" instead. See my other issue today. 3) The resulting model has no config at all, which looks odd to me. It has `model.quant_config` though. ### Expected behavior The colab example should run end-to-end without errors.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27321/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27321/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27320
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27320/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27320/comments
https://api.github.com/repos/huggingface/transformers/issues/27320/events
https://github.com/huggingface/transformers/pull/27320
1,979,506,849
PR_kwDOCUB6oc5etS-w
27,320
[`Quantization`] Add str to enum conversion for AWQ
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@amyeroberts yes correct there is no backward compatibility to handle, it is more about users that can easily make the typo (\"GEMM\" vs \"gemm\") and lead to an error that is hard to understand for them. \r\nFor quantizing a model one needs to call (through autoawq package):\r\n\r\n```python\r\nfrom awq import AutoAWQForCausalLM\r\nfrom transformers import AutoTokenizer\r\n\r\nmodel_path = \"facebook/opt-125m\"\r\nquant_path = \"opt-125m-awq\"\r\nquant_config = {\"zero_point\": True, \"q_group_size\": 128, \"w_bit\": 4, \"version\":\"GEMM\"}\r\n\r\n# Load model\r\nmodel = AutoAWQForCausalLM.from_pretrained(model_path)\r\ntokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)\r\n\r\n# Quantize\r\nmodel.quantize(tokenizer, quant_config=quant_config)\r\n```\r\n\r\nThen convert `quantization_config` to the transformers format like this:\r\n\r\n```python\r\nfrom transformers import AwqConfig, AutoConfig\r\nfrom huggingface_hub import HfApi\r\n\r\n# modify the config file so that it is compatible with transformers integration\r\nquantization_config = AwqConfig(\r\n bits=quant_config[\"w_bit\"],\r\n group_size=quant_config[\"q_group_size\"],\r\n zero_point=quant_config[\"zero_point\"],\r\n version=quant_config[\"version\"].lower(),\r\n).to_dict()\r\n\r\n# the pretrained transformers model is stored in the model attribute + we need to pass a dict\r\nmodel.model.config.quantization_config = quantization_config\r\n# a second solution would be to use Autoconfig and push to hub (what we do at llm-awq)\r\n\r\n\r\n# save model weights\r\nmodel.save_quantized(quant_path)\r\ntokenizer.save_pretrained(quant_path)\r\n```\r\n\r\nnote the `version=quant_config[\"version\"].lower()` here. This PR is more about the user experience and smooth compatibility between autoawq and transformers. The changes proposed in this PR are very simple and IMO would be nice to add it to avoid these easy usecases.\r\n", "@younesbelkada Just to make sure I've understood, from the example above it seems that it's necessary for the value to be uppercase when creating the quant config? \r\n\r\ni.e. `quant_config = {\"version\": \"GEMM\"}` is correct but `quant_config = {\"version\": \"gemm\"}` is not? ", "@amyeroberts when using external tools such as AutoAWQ or LLm-awq to quantize your LLM yes! However currently users need to make it lowercase before saving it locally / pushing it on the Hub in order to use it with transformers", "@younesbelkada Failing CI runs should now be resolved on main. You should be able to rebase and merge :) ", "@younesbelkada - the test should (🤞 ) now be passing. A full run of all the tests with a fix / skipping tests has passed and been merged to main ", "Thank you veyr much @amyeroberts @ydshieh @muellerzr for fixing the CI issues! " ]
1,699
1,699
1,699
CONTRIBUTOR
null
# What does this PR do? Fixes https://github.com/huggingface/transformers/issues/27318 Some repositories that use the old conversion script have the `version` field set to `GEMM` whereas it should be `gemm`. Although the proper fix would be to update the config file accordingly (e.g. https://huggingface.co/marcsun13/Llama-2-13B-AWQ/discussions/1), I propose to add a small str-to-enum conversion logic to cover this scenario as well and avoid errors that are hard for users to understand. Also added a test for it. cc @amyeroberts
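One possible shape for such a conversion (a sketch only, not necessarily the exact code in this PR):

```python
from enum import Enum


class AWQLinearVersion(str, Enum):
    GEMM = "gemm"
    GEMV = "gemv"

    @classmethod
    def from_str(cls, version: str) -> "AWQLinearVersion":
        # Normalise config values such as "GEMM" or "Gemm" to the enum member.
        version = version.lower()
        if version == "gemm":
            return cls.GEMM
        if version == "gemv":
            return cls.GEMV
        raise ValueError(f"Unknown AWQ version: {version}")


# In the config's post_init, a string value could then be converted up front:
# if isinstance(self.version, str):
#     self.version = AWQLinearVersion.from_str(self.version)
print(AWQLinearVersion.from_str("GEMM"))  # AWQLinearVersion.GEMM
```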
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27320/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27320/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27320", "html_url": "https://github.com/huggingface/transformers/pull/27320", "diff_url": "https://github.com/huggingface/transformers/pull/27320.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27320.patch", "merged_at": 1699620300000 }
https://api.github.com/repos/huggingface/transformers/issues/27319
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27319/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27319/comments
https://api.github.com/repos/huggingface/transformers/issues/27319/events
https://github.com/huggingface/transformers/issues/27319
1,979,503,155
I_kwDOCUB6oc51_NIz
27,319
ImportError: cannot import name 'is_flash_attn_available' from 'transformers.utils' (~/lib/python3.10/site-packages/transformers/utils/__init__.py)
{ "login": "Rajmehta123", "id": 22636443, "node_id": "MDQ6VXNlcjIyNjM2NDQz", "avatar_url": "https://avatars.githubusercontent.com/u/22636443?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rajmehta123", "html_url": "https://github.com/Rajmehta123", "followers_url": "https://api.github.com/users/Rajmehta123/followers", "following_url": "https://api.github.com/users/Rajmehta123/following{/other_user}", "gists_url": "https://api.github.com/users/Rajmehta123/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rajmehta123/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rajmehta123/subscriptions", "organizations_url": "https://api.github.com/users/Rajmehta123/orgs", "repos_url": "https://api.github.com/users/Rajmehta123/repos", "events_url": "https://api.github.com/users/Rajmehta123/events{/privacy}", "received_events_url": "https://api.github.com/users/Rajmehta123/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @Rajmehta123 ! Thanks for reporting this issue. This happens because in the latest version of transformers, we removed `is_flash_attn_available()` in favor of `is_flash_attn_2_available()`. See related [PR](https://github.com/huggingface/transformers/pull/26785). I guess the remote [code](https://huggingface.co/01-ai/Yi-6B-200K/blob/96eff7e8c7871e95d59d13497923800e084ea7b0/modeling_yi.py#L21) that you are executing is using `is_flash_attn_available`, hence the error. I've opened an PR to put back the function and make it go through a deprecation cycle ! ", "Hi @SunMarc , I have an error as well but its on the flip side\r\n\r\nfrom transformers import pipeline\r\nclassifier = pipeline('summarization')\r\n\r\nRuntimeError: Failed to import transformers.models.bart.modeling_bart because of the following error (look up to see its traceback):\r\ncannot import name 'is_flash_attn_2_available' from 'transformers.utils'\r\n\r\nTransformers: Version: 4.35.0\r\nPython: 3.11.4", "Hi @regineshalom, i'm unable to reproduce your error on my local setup. If you can reproduce the error in a colab and send it to me, that would be great. ", "ImportError: cannot import name 'is_flash_attn_available' from 'transformers.utils'\r\n答:`pip install transformers==4.34.1`,transformers的版本必须是4.34,不能是4.31、4.32、4.33\r\n\r\n![image](https://github.com/OrionStarAI/Orion/assets/32784059/49b8b467-072a-4661-918b-7a68d4673199)" ]
1,699
1,706
1,699
NONE
null
### System Info transformers: 4.35.0 python: 3.10.13 Platform: Linux ### Who can help? @Narsil @ArthurZucker @SunMarc ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline import torch model = AutoModelForCausalLM.from_pretrained("01-ai/Yi-6B-200K", trust_remote_code=True, torch_dtype=torch.bfloat16, device_map="auto") tokenizer = AutoTokenizer.from_pretrained("01-ai/Yi-6B-200K", trust_remote_code=True) pipe = pipeline("text-generation", model=model, tokenizer=tokenizer) This works fine with the 4.34.1 transformers version. ### Expected behavior The model gets loaded correctly BUT I got this error: ImportError: cannot import name 'is_flash_attn_available' from 'transformers.utils' (~/lib/python3.10/site-packages/transformers/utils/__init__.py) This works fine with the 4.34.1 transformers version.
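One defensive pattern remote modeling code could use to stay compatible with both 4.34 and 4.35 (a sketch; it assumes one of the two helpers exists in the installed version):

```python
# transformers 4.35 renamed the helper, so try the new name first and fall back.
try:
    from transformers.utils import is_flash_attn_2_available as flash_attn_available
except ImportError:
    from transformers.utils import is_flash_attn_available as flash_attn_available

print("flash attention usable:", flash_attn_available())
```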
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27319/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27319/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27318
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27318/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27318/comments
https://api.github.com/repos/huggingface/transformers/issues/27318/events
https://github.com/huggingface/transformers/issues/27318
1,979,478,212
I_kwDOCUB6oc51_HDE
27,318
AWQ model error when `AwqConfig['version']` is a string
{ "login": "poedator", "id": 24738311, "node_id": "MDQ6VXNlcjI0NzM4MzEx", "avatar_url": "https://avatars.githubusercontent.com/u/24738311?v=4", "gravatar_id": "", "url": "https://api.github.com/users/poedator", "html_url": "https://github.com/poedator", "followers_url": "https://api.github.com/users/poedator/followers", "following_url": "https://api.github.com/users/poedator/following{/other_user}", "gists_url": "https://api.github.com/users/poedator/gists{/gist_id}", "starred_url": "https://api.github.com/users/poedator/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/poedator/subscriptions", "organizations_url": "https://api.github.com/users/poedator/orgs", "repos_url": "https://api.github.com/users/poedator/repos", "events_url": "https://api.github.com/users/poedator/events{/privacy}", "received_events_url": "https://api.github.com/users/poedator/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi the normal Cuda load as seen below\r\n\r\n**from transformers import AutoModelForCausalLM, AutoTokenizer\r\nimport torch\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"quant_autoawq\")\r\nmodel = AutoModelForCausalLM.from_pretrained(\"quant_autoawq\", device_map=\"cuda:0\")****\r\n\r\nis taking up too much GPU memory\r\n\r\nIt doesn't work like Auto AWQ package as seen below. \r\n\r\n**from awq import AutoAWQForCausalLM\r\nfrom transformers import AutoTokenizer, TextStreamer\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"quant_autoawq\", trust_remote_code=True)\r\nmodel = AutoAWQForCausalLM.from_quantized(\"quant_autoawq\", fuse_layers=True)**\r\n\r\n\r\nWhat parameters do we need to pass ?\r\n\r\nThanks\r\n", "Hi @choochtech thanks for your message, would you mind posting a new ticket for your issue? it seems unrelated to the issue posted by @poedator !\r\n", "@younesbelkada thanks" ]
1,699
1,699
1,699
CONTRIBUTOR
null
### System Info A100 Cuda 12.2 transformers 4.36.0.dev (Nov4) ### Who can help? @younesbelkada @SunMarc ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Load some AWQ model, like `"marcsun13/Llama-2-13B-AWQ"` - it has `AwqConfig['version']` as the string "GEMM". Later, in `utils/quantization_config.py::AwqConfig.post_init`, there is this check: ``` if self.version not in [AWQLinearVersion.GEMM, AWQLinearVersion.GEMV]: raise ValueError( f"Only supported versions are in [AWQLinearVersion.GEMM, AWQLinearVersion.GEMV] - not recognized version {self.version}" ) ``` Apparently it fails because `version` is still the string "GEMM". Please consider adding some string -> enum conversion during init, or rewriting this check when loading the model. ### Expected behavior Normal model loading.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27318/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27318/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27317
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27317/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27317/comments
https://api.github.com/repos/huggingface/transformers/issues/27317/events
https://github.com/huggingface/transformers/issues/27317
1,979,456,875
I_kwDOCUB6oc51_B1r
27,317
audio pipeline support for initial_prompt?
{ "login": "silvacarl2", "id": 4220915, "node_id": "MDQ6VXNlcjQyMjA5MTU=", "avatar_url": "https://avatars.githubusercontent.com/u/4220915?v=4", "gravatar_id": "", "url": "https://api.github.com/users/silvacarl2", "html_url": "https://github.com/silvacarl2", "followers_url": "https://api.github.com/users/silvacarl2/followers", "following_url": "https://api.github.com/users/silvacarl2/following{/other_user}", "gists_url": "https://api.github.com/users/silvacarl2/gists{/gist_id}", "starred_url": "https://api.github.com/users/silvacarl2/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/silvacarl2/subscriptions", "organizations_url": "https://api.github.com/users/silvacarl2/orgs", "repos_url": "https://api.github.com/users/silvacarl2/repos", "events_url": "https://api.github.com/users/silvacarl2/events{/privacy}", "received_events_url": "https://api.github.com/users/silvacarl2/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "cc @sanchit-gandhi ", "let me know if i am mistaken, but we cannot find initial_prompt in audio pipeline.", "Hey @silvacarl2! The `pipeline` is a high-level helper function that is compatible with **all** audio models in the Transformers library. It's not meant to be fully comprehensive, but rather an easy API for all audio models over the common set of features (e.g. audio inputs, text outputs, batching, etc).\r\n\r\nOn the other hand, the `initial_prompt` is Whisper-specific argument. For example, there's no concept of `initial_prompt` for other speech models, like Wav2Vec2. Hence, it's not currently supported in the pipeline.\r\n\r\nGiven the prominence of Whisper as the current most popular model for ASR, I would be in favour of making an exception, and allowing the `initial_prompt` as a valid arg for the `pipeline`. WDYT here @ArthurZucker @Narsil @ylacombe?", "you are a genius! if that is possible, it would be great!!!!!!!!!!!!!!\r\n\r\njust to be clear, whisper is the coolest thing we have seen in a long time, and i am sure everyone else thinks the same.", "Well the pipeline is already pretty bloated with whisper stuff so no problem for me either. It's already supported (edit because what we don't support is the conditioning I think?)", "I agree with you all, whisper is special enough. And I also agree it should be seen as an exception.", "Any news for this? How can we force this initial_prompt first while waiting for the new release?", "You can use the model outside of the pipeline for now ", "Does use the model outside the pipeline also use a transformer? Sorry I am beginner in it. ", "Hi, I have this code that as you suggest to use the model outside of the pipeline. But why in only transcribe the first portion of the audio. It seems only transcribe first 3 seconds of the audio but the audio is almost 30 minutes long. What am I doing wrong? Thank you\r\n\r\n```python\r\n\r\nfrom transformers import WhisperProcessor, WhisperForConditionalGeneration\r\nimport librosa\r\nimport torch\r\ndevice = \"cuda\" if torch.cuda.is_available() else \"cpu\"\r\n\r\n\r\n# Load the Whisper processor and model\r\nprocessor = WhisperProcessor.from_pretrained(\"openai/whisper-tiny\")\r\nmodel = WhisperForConditionalGeneration.from_pretrained(\"openai/whisper-tiny\").to(device)\r\n\r\n# Load the audio file using librosa\r\nfile_path = \"路人y2mate.com - 大陸年輕人薪資街頭調查 聽到這個數字新疆正妹直呼荒謬扯淡不可能中天朋友圈 CtiNews userjf1ro3lv8o_144p.wav\"\r\naudio_data, _ = librosa.load(file_path, sr=16000) # Adjust the sample rate if needed\r\n\r\n# Process the audio and generate output\r\ninput_features = processor.feature_extractor(audio_data, return_tensors=\"pt\").input_features.to(device)\r\nprint(input_features.shape)\r\n# Without prompt\r\n# output_without_prompt = model.generate(input_features)\r\n# print(processor.decode(output_without_prompt[0]))\r\n\r\n# With prompt\r\nprompt_ids = processor.get_prompt_ids(\"你好,這是一個中文範例\")\r\n\r\n# generate token ids by running model forward sequentially\r\npredicted_ids = model.generate(input_features,max_length=100, prompt_ids=prompt_ids,language=\"Chinese\")\r\n\r\n# post-process token ids to text\r\ntranscription = processor.tokenizer.batch_decode(predicted_ids, skip_special_tokens=True)\r\nprint(transcription)](url)\r\n```", "That is expected, you need to iterate over the audio because the whipser Processor (and model) only accepts 30sec long inputs. Inviting you to read the documentations and tutorials related to this to get the hang of it! 
🤗 ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "@sanchit-gandhi @Narsil , Hi i have done the changes for this. And raised a PR (https://github.com/huggingface/transformers/pull/28556). Please let me know, if anything else is needed. Here is an example to use it. \r\n\r\n``` python\r\nfrom transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline\r\nfrom datasets import load_dataset\r\n\r\n\r\ndevice = \"cuda:0\" if torch.cuda.is_available() else \"cpu\"\r\ntorch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32\r\n\r\nmodel_id = \"openai/whisper-small\"\r\nmodel = AutoModelForSpeechSeq2Seq.from_pretrained(\r\n model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True\r\n)\r\nmodel.to(device)\r\n\r\nprocessor = AutoProcessor.from_pretrained(model_id)\r\n\r\npipe = pipeline(\r\n \"automatic-speech-recognition\",\r\n model=model,\r\n tokenizer=processor.tokenizer,\r\n feature_extractor=processor.feature_extractor,\r\n max_new_tokens=128,\r\n chunk_length_s=15,\r\n batch_size=16,\r\n torch_dtype=torch_dtype,\r\n device=device,\r\n processor=processor\r\n)\r\n\r\ndataset = load_dataset(\"distil-whisper/librispeech_long\", \"clean\", split=\"validation\")\r\nsample = dataset[0][\"audio\"]\r\n\r\n# including timestamp\r\nprint(pipe(audio, initial_prompt = \"Biswajit, Whisper\", return_timestamps=True))\r\n\r\n# without timestamp\r\nprint(pipe(audio, initial_prompt = \"Biswajit, Whisper\"))\r\n```", "THIS IS AWESOME!!!!!!!!!!!!!!!!!! WILL CHECK IT OUT!!!!!!!!!!!!!!!!!!\r\n\r\nwill this also work with large-v2 and large-v3?", "@silvacarl2 yes i have added that implementation to pipeline. yet PR needs approval. For the time being you can use my repository to try it out. `pip install git+https://github.com/Biswajit2902/transformers.git` for installation.\r\n\r\nyes it will work for any whisper model." ]
1,699
1,707
null
NONE
null
### System Info Is there a parameter somewhere for audio pipeline support of initial_prompt? Like this: https://github.com/openai/whisper/discussions/963 $ whisper --help optional arguments: --initial_prompt INITIAL_PROMPT optional text to provide as a prompt for the first window. (default: None) $ whisper-ctranslate2 --help optional arguments: --initial_prompt INITIAL_PROMPT optional text to provide as a prompt for the first window. (default: None) ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction audio pipeline - initial_prompt parameter ### Expected behavior audio pipeline - initial_prompt parameter
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27317/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27317/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/27316
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27316/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27316/comments
https://api.github.com/repos/huggingface/transformers/issues/27316/events
https://github.com/huggingface/transformers/pull/27316
1,979,357,119
PR_kwDOCUB6oc5esyHS
27,316
Fix Falcon tokenizer loading in pipeline
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "This turned out to be a more involved fix than I thought, so I'm going to put in a quick hack at @ArthurZucker's suggestion to fix it for Falcon, and come back to it later when the Keras 3 Situation is less urgent!", "Some unrelated tests are red even after rebasing, will merge after that's resolved!" ]
1,699
1,699
1,699
MEMBER
null
Previously, pipelines checked `config.json` for info on the model tokenizer. However, the information they want is often not there, and is instead stored in `tokenizer_config.json`. This PR changes the pipeline code to more aggressively try `AutoTokenizer.from_pretrained()` even if `model.config` doesn't contain tokenizer information. This PR is quite experimental for now, so don't review/merge yet!
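A simplified illustration of the fallback behaviour described above (not the actual pipeline code; the checkpoint name is just a placeholder):

```python
from transformers import AutoConfig, AutoTokenizer

model_name = "tiiuae/falcon-7b"  # placeholder checkpoint

config = AutoConfig.from_pretrained(model_name)
print("tokenizer_class in config.json:", getattr(config, "tokenizer_class", None))

# config.json may carry no tokenizer information at all, but tokenizer_config.json
# often has everything needed, so try AutoTokenizer regardless and only give up
# if loading genuinely fails.
try:
    tokenizer = AutoTokenizer.from_pretrained(model_name)
except (OSError, ValueError):
    tokenizer = None

print(type(tokenizer).__name__ if tokenizer is not None else "no tokenizer found")
```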
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27316/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/transformers/issues/27316/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27316", "html_url": "https://github.com/huggingface/transformers/pull/27316", "diff_url": "https://github.com/huggingface/transformers/pull/27316.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27316.patch", "merged_at": 1699894919000 }
https://api.github.com/repos/huggingface/transformers/issues/27315
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27315/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27315/comments
https://api.github.com/repos/huggingface/transformers/issues/27315/events
https://github.com/huggingface/transformers/pull/27315
1,979,267,354
PR_kwDOCUB6oc5eseF-
27,315
[`Llama + Mistral`] Add attention dropout
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,699
1,699
1,699
COLLABORATOR
null
# What does this PR do? fixes #26616 by adding the `attention_dropout` attribute to the config and the logic in the code.
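Assuming the new config field lands as described, enabling it could look like this (a sketch with arbitrary small sizes):

```python
from transformers import LlamaConfig, MistralConfig

# Hypothetical tiny configs with the new attention_dropout field set.
llama_cfg = LlamaConfig(num_hidden_layers=2, attention_dropout=0.1)
mistral_cfg = MistralConfig(num_hidden_layers=2, attention_dropout=0.1)
print(llama_cfg.attention_dropout, mistral_cfg.attention_dropout)
```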
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27315/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27315/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27315", "html_url": "https://github.com/huggingface/transformers/pull/27315", "diff_url": "https://github.com/huggingface/transformers/pull/27315.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27315.patch", "merged_at": 1699883508000 }
https://api.github.com/repos/huggingface/transformers/issues/27314
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27314/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27314/comments
https://api.github.com/repos/huggingface/transformers/issues/27314/events
https://github.com/huggingface/transformers/pull/27314
1,979,140,359
PR_kwDOCUB6oc5esCGg
27,314
Fix id_tensor_storage in case the tensor is a view of an other tensor
{ "login": "fxmarty", "id": 9808326, "node_id": "MDQ6VXNlcjk4MDgzMjY=", "avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fxmarty", "html_url": "https://github.com/fxmarty", "followers_url": "https://api.github.com/users/fxmarty/followers", "following_url": "https://api.github.com/users/fxmarty/following{/other_user}", "gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}", "starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions", "organizations_url": "https://api.github.com/users/fxmarty/orgs", "repos_url": "https://api.github.com/users/fxmarty/repos", "events_url": "https://api.github.com/users/fxmarty/events{/privacy}", "received_events_url": "https://api.github.com/users/fxmarty/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27314). All of your documentation changes will be reflected on that endpoint.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,699
1,701
1,701
COLLABORATOR
null
As per the title, and as discussed offline with @LysandreJik. Currently, `safe_serialization=True` may unexpectedly erase tensors from the state_dict due to this issue. A release or patch release including https://github.com/huggingface/safetensors/pull/379 will be needed for this PR to work, to avoid this error with the current latest release of safetensors: ``` E RuntimeError: E Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: ``` here: https://github.com/huggingface/safetensors/blob/96061e97bb7fc4ea6cdd1f79f58701efc4710d22/bindings/python/py_src/safetensors/torch.py#L467
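A small demonstration of why views make storage-based tensor identity tricky (assumes PyTorch >= 2.0 for `untyped_storage`):

```python
import torch

base = torch.arange(6, dtype=torch.float32)
view = base[2:]  # a view into the same underlying storage, at a different offset

# The two tensors start at different addresses...
print(base.data_ptr() == view.data_ptr())  # False

# ...but share a single storage, which is what shared-tensor detection must track.
print(base.untyped_storage().data_ptr() == view.untyped_storage().data_ptr())  # True
```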
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27314/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27314/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27314", "html_url": "https://github.com/huggingface/transformers/pull/27314", "diff_url": "https://github.com/huggingface/transformers/pull/27314.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27314.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/27313
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27313/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27313/comments
https://api.github.com/repos/huggingface/transformers/issues/27313/events
https://github.com/huggingface/transformers/pull/27313
1,979,065,776
PR_kwDOCUB6oc5erynL
27,313
[`PretrainedTokenizer`] add some of the most important functions to the doc
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,699
1,699
1,699
COLLABORATOR
null
# What does this PR do? Fixes #27132 by adding `add_tokens` and `add_special_tokens` to the documentation
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27313/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27313/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27313", "html_url": "https://github.com/huggingface/transformers/pull/27313", "diff_url": "https://github.com/huggingface/transformers/pull/27313.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27313.patch", "merged_at": 1699279860000 }
https://api.github.com/repos/huggingface/transformers/issues/27312
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27312/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27312/comments
https://api.github.com/repos/huggingface/transformers/issues/27312/events
https://github.com/huggingface/transformers/issues/27312
1,978,865,742
I_kwDOCUB6oc518xhO
27,312
Want to add a new model
{ "login": "ReCag", "id": 150021192, "node_id": "U_kgDOCPEkSA", "avatar_url": "https://avatars.githubusercontent.com/u/150021192?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ReCag", "html_url": "https://github.com/ReCag", "followers_url": "https://api.github.com/users/ReCag/followers", "following_url": "https://api.github.com/users/ReCag/following{/other_user}", "gists_url": "https://api.github.com/users/ReCag/gists{/gist_id}", "starred_url": "https://api.github.com/users/ReCag/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ReCag/subscriptions", "organizations_url": "https://api.github.com/users/ReCag/orgs", "repos_url": "https://api.github.com/users/ReCag/repos", "events_url": "https://api.github.com/users/ReCag/events{/privacy}", "received_events_url": "https://api.github.com/users/ReCag/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
closed
false
null
[]
[ "Closing as this is a duplicate of #27308" ]
1,699
1,699
1,699
NONE
null
### Model description ReCag will be a generative model ### Open source status - [X] The model implementation is available - [ ] The model weights are available ### Provide useful links for the implementation _No response_
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27312/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27312/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27311
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27311/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27311/comments
https://api.github.com/repos/huggingface/transformers/issues/27311/events
https://github.com/huggingface/transformers/issues/27311
1,978,836,007
I_kwDOCUB6oc518qQn
27,311
TypeError: ResNet.__init__() got an unexpected keyword argument 'out_indices'
{ "login": "wuxiaolianggit", "id": 34123600, "node_id": "MDQ6VXNlcjM0MTIzNjAw", "avatar_url": "https://avatars.githubusercontent.com/u/34123600?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wuxiaolianggit", "html_url": "https://github.com/wuxiaolianggit", "followers_url": "https://api.github.com/users/wuxiaolianggit/followers", "following_url": "https://api.github.com/users/wuxiaolianggit/following{/other_user}", "gists_url": "https://api.github.com/users/wuxiaolianggit/gists{/gist_id}", "starred_url": "https://api.github.com/users/wuxiaolianggit/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wuxiaolianggit/subscriptions", "organizations_url": "https://api.github.com/users/wuxiaolianggit/orgs", "repos_url": "https://api.github.com/users/wuxiaolianggit/repos", "events_url": "https://api.github.com/users/wuxiaolianggit/events{/privacy}", "received_events_url": "https://api.github.com/users/wuxiaolianggit/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @wuxiaolianggit, thanks for raising this issue. \r\n\r\nSo that we can help you, could you make sure to fill out the [bug report template](https://github.com/huggingface/transformers/blob/main/.github/ISSUE_TEMPLATE/bug-report.yml) and include: \r\n\r\n* Minimal code reproducer\r\n* The running environment: run `transformers-cli env` in the terminal and copy-paste the output", "Okay, thank you very much for your reply@amyeroberts ", "I also encountered this error when I tried to load the `\"timm/vit_small_patch14_dinov2.lvd142m\"` model from Timm, this is how I called `timm.create_model`:\r\n```python\r\ntimm.create_model(\r\n model_name=\"timm/vit_small_patch14_dinov2.lvd142m\",\r\n features_only=False,\r\n pretrained=True,\r\n in_chans=3,\r\n out_indices=(2, 5, 8, 11), # vit-s has total 12 blocks, I want those four level of features to be used as fpn for VitDet purpose\r\n norm_layer=FrozenBatchNorm2d,\r\n )\r\n```\r\n\r\nThe `transformers-cli env` command return \r\n```bash\r\n- `transformers` version: 4.35.0\r\n- Platform: Linux-5.4.0-166-generic-x86_64-with-glibc2.10\r\n- Python version: 3.8.18\r\n- Huggingface_hub version: 0.19.1\r\n- Safetensors version: 0.4.0\r\n- Accelerate version: not installed\r\n- Accelerate config: not found\r\n- PyTorch version (GPU?): 1.11.0 (True)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?: yes\r\n- Using distributed or parallel set-up in script?: no\r\n```\r\n\r\nThe error I got is :\r\n```bash\r\n__init__() got an unexpected keyword argument 'out_indices'\r\n```", "@Goooyi - the error encountered is coming from the `timm` library and is unreleated to `transformers`. It's coming from the fact [VisionTransfomer](https://github.com/huggingface/pytorch-image-models/blob/ef72c3cd470dd67836eebf95ec567199c890a6a2/timm/models/vision_transformer.py#L389) doesn't accept `out_indices` as an input argument. If you think this is an error or a feature that should be supported I'd suggest opening a new issue in that repo. ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,699
1,702
1,702
NONE
null
Hello, may I ask a question? How can I solve this problem: “TypeError: ResNet.__init__() got an unexpected keyword argument 'out_indices'”? @vanpelt @tmm1 @pvl @tmc @arfon
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27311/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27311/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27310
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27310/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27310/comments
https://api.github.com/repos/huggingface/transformers/issues/27310/events
https://github.com/huggingface/transformers/issues/27310
1,978,762,758
I_kwDOCUB6oc518YYG
27,310
Evaluation with HuggingFace Whisper
{ "login": "qnxgy921", "id": 35495714, "node_id": "MDQ6VXNlcjM1NDk1NzE0", "avatar_url": "https://avatars.githubusercontent.com/u/35495714?v=4", "gravatar_id": "", "url": "https://api.github.com/users/qnxgy921", "html_url": "https://github.com/qnxgy921", "followers_url": "https://api.github.com/users/qnxgy921/followers", "following_url": "https://api.github.com/users/qnxgy921/following{/other_user}", "gists_url": "https://api.github.com/users/qnxgy921/gists{/gist_id}", "starred_url": "https://api.github.com/users/qnxgy921/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/qnxgy921/subscriptions", "organizations_url": "https://api.github.com/users/qnxgy921/orgs", "repos_url": "https://api.github.com/users/qnxgy921/repos", "events_url": "https://api.github.com/users/qnxgy921/events{/privacy}", "received_events_url": "https://api.github.com/users/qnxgy921/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @qnxgy921, could you possibly share the audio sample you're using so we can run the example end-to-end? Currently, it uses a local audio file, so we cannot reproduce the repeating prompt phenomenon. Happy to advise on how to fix the parameters here!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,699
1,704
1,704
NONE
null
### System Info Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points. - `transformers` version: 4.35.0 - Platform: Linux-5.15.0-88-generic-x86_64-with-glibc2.31 - Python version: 3.10.12 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - Accelerate version: 0.25.0.dev0 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1+cu118 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): 0.7.0 (gpu) - Jax version: 0.4.13 - JaxLib version: 0.4.13 - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help? @sanchit-gandhi @gante ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I was using the example evaluation code mentioned in [to](https://huggingface.co/openai/whisper-large) evaluating my own model. I also used the datasets to load my own test dataset. The source code looks like below: ``` dss = load_dataset("audiofolder", data_dir="./uatdata", split="train", cache_dir="my/datasets") for ds in dss: prompt_ids = prossor.get_prompt_ids(prompts, return_tensors="pt") input_features = processor(audio["array"], sampling_rate=audio["sampling_rate"], return_tensors="pt").input_features # generate token ids with torch.no_grad(): predicted_ids = model.generate(input_features.to("cuda"), prompt_ids=prompt_ids) # decode token ids to text transcription = processor.decode(predicted_ids[0], skip_special_tokens=False) print(transcription) ``` ### Expected behavior 1. The torch size of prompt_ids is around 200. I printed the prediction result. I found that the prompt_id would repeated with the next audio until until they occupy the results of all generations like: ``` TKOL2910_segment0.wav <|startofprev|> submitted cystectomy cyst urethral tumourappearance corpusprecedure colon straddle skin subcutis adjacent bilateral consistency macroscopically rectal medial uterus marginsresection inferior peritoneal bladder dye perforation intestine urinary multifocal synchronous lymph distal vulval biopsy myometrium extensive ovarian nipple invasion unifocal vaginal focality lateral radical tumour rectum fallopian procedure posterior tumournumber ovary identify length tme mesenteric THBSO tumoursite ureteral serosal resection node perivesical parametrial apr circumference margin completeness proximal organ cervical grossly longitudinal mesorectum dentate gross radial ureter anterior breast involved superior uterine axillary transverse CA ostial endometrial bisected anastomosis intestinal mesoappendiceal mucosa mesoappendix isthmic bicornually reflection ulcerated vascular pedicle donut<|startoftranscript|><|notimestamps|>22AB3872 K040 skin left thigh<|endoftext|> TKOL2910_segment1.wav <|startofprev|> submitted cystectomy cyst urethral tumourappearance corpusprecedure colon straddle skin subcutis adjacent bilateral consistency macroscopically rectal medial uterus marginsresection inferior peritoneal bladder dye perforation intestine urinary multifocal synchronous lymph distal vulval biopsy myometrium extensive ovarian nipple invasion unifocal vaginal focality lateral radical tumour rectum fallopian procedure posterior tumournumber ovary identify length tme mesenteric THBSO tumoursite ureteral serosal resection node perivesical parametrial apr circumference margin completeness proximal organ cervical grossly 
longitudinal mesorectum dentate gross radial ureter anterior breast involved superior uterine axillary transverse CA ostial endometrial bisected anastomosis intestinal mesoappendiceal mucosa mesoappendix isthmic bicornually reflection ulcerated vascular pedicle donut<|startoftranscript|> submitted cystectomy cyst urethral tumourappearance corpusprecedure colon straddle skin subcutis adjacent bilateral consistency macroscopically rectal medial uterus marginsresection inferior peritoneal bladder dye perforation intestine urinary multifocal synchronous lymph distal vulval biopsy myometrium extensive ovarian nipple invasion unifocal vaginal focality lateral radical tumour rectum fallopian procedure posterior tumournumber ovary identify length tme mesenteric THBSO tumoursite ureteral serosal resection node perivesical parametrial apr circumference margin completeness proximal organ cervical grossly longitudinal mesorectum dentate gross radial ureter anterior breast involved superior uterine axillary transverse CA ostial endometrial bisected anastomosis intestinal mesoappendiceal mucosa mesoappendix isthmic bicornually reflection ulcerated vascular pedicle donut<|startoftranscript|><|notimestamps|> vesular nodular nodular nodular nodular nodular nodular nodular nodular nodular nodular nodular nodular nodular nodular nodular nodular nodular nodular nodular nodular nodular nodular nodular nodular nodular nodular nodular nodular nodular nodular nodular nod ``` Is there anything wrong with my code? 2. Besides that, I also found that when I set `skip_special_tokens=True` when decode, the prompt_ids would not be masked. ``` transcription = processor.decode(predicted_ids[0], skip_special_tokens=True) print(transcription) TKOL2910_segment0.wav submitted cystectomy cyst urethral tumourappearance corpusprecedure colon straddle skin subcutis adjacent bilateral consistency macroscopically rectal medial uterus marginsresection inferior peritoneal bladder dye perforation intestine urinary multifocal synchronous lymph distal vulval biopsy myometrium extensive ovarian nipple invasion unifocal vaginal focality lateral radical tumour rectum fallopian procedure posterior tumournumber ovary identify length tme mesenteric THBSO tumoursite ureteral serosal resection node perivesical parametrial apr circumference margin completeness proximal organ cervical grossly longitudinal mesorectum dentate gross radial ureter anterior breast involved superior uterine axillary transverse CA ostial endometrial bisected anastomosis intestinal mesoappendiceal mucosa mesoappendix isthmic bicornually reflection ulcerated vascular pedicle donut22AB3872 K040 skin left thigh ``` When I used transformers 4.30.2 , the result should look like: ``` TKOL2910_segment0.wav 22AB3872 K040 skin left thigh ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27310/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27310/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27309
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27309/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27309/comments
https://api.github.com/repos/huggingface/transformers/issues/27309/events
https://github.com/huggingface/transformers/issues/27309
1,978,762,621
I_kwDOCUB6oc518YV9
27,309
model.add_adapter() does not work correctly, raising the following error from check_peft_version: [PEFT is not installed. Please install it with `pip install peft`]
{ "login": "aghris", "id": 56536283, "node_id": "MDQ6VXNlcjU2NTM2Mjgz", "avatar_url": "https://avatars.githubusercontent.com/u/56536283?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aghris", "html_url": "https://github.com/aghris", "followers_url": "https://api.github.com/users/aghris/followers", "following_url": "https://api.github.com/users/aghris/following{/other_user}", "gists_url": "https://api.github.com/users/aghris/gists{/gist_id}", "starred_url": "https://api.github.com/users/aghris/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aghris/subscriptions", "organizations_url": "https://api.github.com/users/aghris/orgs", "repos_url": "https://api.github.com/users/aghris/repos", "events_url": "https://api.github.com/users/aghris/events{/privacy}", "received_events_url": "https://api.github.com/users/aghris/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @pacman100 @younesbelkada ", "you may have a broken installation of peft, can you try to re-run your script by uninstall peft and re-installing it again?\r\n```bash\r\npip uninstall peft && pip install -U peft\r\n```", "Thank you for your response. However, I have implemented that solution previously but encountered the same issue. Additionally, I have conducted tests on both the Kaggle platform and Google Colab to verify the results.", "Hi @aghris \r\nHmmm I tried again with an environment having PEFT freshly installed and did not managed to repro - I ran the same script as yours and just replaced the checkpoint to a smaller checkpoint:\r\n```python\r\nfrom transformers import AutoTokenizer, MistralForSequenceClassification\r\nfrom peft import PeftModel, get_peft_model, LoraConfig, TaskType, prepare_model_for_kbit_training\r\nfrom transformers import BitsAndBytesConfig\r\nimport torch\r\n\r\npeft_config = LoraConfig(\r\n r=64,\r\n lora_alpha=16,\r\n lora_dropout=0.1,\r\n bias=\"none\",\r\n task_type=TaskType.SEQ_CLS,\r\n inference_mode=False,\r\n target_modules=[\"q_proj\", \"k_proj\", \"v_proj\", \"o_proj\"]\r\n)\r\nbnb_config = BitsAndBytesConfig(\r\n load_in_4bit=True,\r\n bnb_4bit_quant_type=\"nf4\",\r\n bnb_4bit_use_double_quant=True,\r\n bnb_4bit_compute_dtype=torch.bfloat16\r\n)\r\n\r\nmodel_name = \"hf-internal-testing/tiny-random-MistralForCausalLM\"\r\nmodel = MistralForSequenceClassification.from_pretrained(\r\n model_name,\r\n num_labels=3,\r\n quantization_config=bnb_config,\r\n device_map=\"auto\"\r\n)\r\n\r\nmodel.gradient_checkpointing_enable()\r\nmodel= prepare_model_for_kbit_training(model)\r\npeft_config.init_lora_weights = False\r\nmodel.add_adapter(peft_config, adapter_name='adapter')\r\n```\r\nA conflict could also happen if you have a `peft/` directory somewhere in your working directory ", "Could you please try it in a sample google colab notebook?\r\n\r\nThank you in advance", "Hi @aghris\r\nI just made this notebook: https://colab.research.google.com/drive/16S4U9v9oBJYVtCm_-wuWi6vBGDfyOsy5?usp=sharing and it seems to work fine on my end ", "Thank you @younesbelkada for your cooperative spirit and the time spent helping me. It seems the issue was with the compatibility of certain accelerators in Kaggle and Google Colab environments—specifically, the script functions well with V100/T4 GPUs and TPUs, but encounters problems with P100 and A100 GPUs\r\n", "Thanks a lot @aghris for double checking ! And no worries\r\nOut of curiosity I ran my snippet on an A100 and it worked fine - really not sure about the issue here :/ ", "iIt is quite intriguing. I found out that several users within the community utilizing get_peft_model along with a customized peft_config, bypassing add_adapter method. I try it and it works correctly.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,699
1,702
1,702
NONE
null
### System Info - `transformers` version: 4.35.0 - Platform: Linux-5.15.120+-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.17.3 - Safetensors version: 0.4.0 - Accelerate version: 0.24.1 - Accelerate config: not found - PyTorch version (GPU?): 2.1.0+cu121 (True) - Tensorflow version (GPU?): 2.14.0 (True) - Flax version (CPU?/GPU?/TPU?): 0.7.4 (gpu) - Jax version: 0.4.16 - JaxLib version: 0.4.16 - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? I am attempting to fine-tune a fully quantized LLM model. So, I need to attach trainable adapters to enhance its performance. However, during this process, I encountered this error "PEFT is not installed. Please install it with pip install peft," that was triggered by the check_peft_version() function. In an effort to diagnose the issue, I employed two diagnostic functions from the transformers.utils.import_utils module: is_peft_available and _is_package_available. Interestingly, **_is_package_available("peft")** returns **True** while, **is_peft_available()** returns **False**. Code details: Model: "mistralai/Mistral-7B-v0.1" Quantization: BitsAndBytes Peft_ adapter: Lora Additionally, I have followed the example provided on the HuggingFace platform, available at their documentation site (https://huggingface.co/docs/transformers/main/peft) but, I encountered the same issue. ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```python from transformers import AutoTokenizer, MistralForSequenceClassification from peft import PeftModel, get_peft_model, LoraConfig, TaskType, prepare_model_for_kbit_training from transformers import BitsAndBytesConfig import torch peft_config = LoraConfig( r=64, lora_alpha=16, lora_dropout=0.1, bias="none", task_type=TaskType.SEQ_CLS, inference_mode=False, target_modules=["q_proj", "k_proj", "v_proj", "o_proj"] ) bnb_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_quant_type="nf4", bnb_4bit_use_double_quant=True, bnb_4bit_compute_dtype=torch.bfloat16 ) model_name = "mistralai/Mistral-7B-v0.1" model = MistralForSequenceClassification.from_pretrained( model_name, num_labels=3, quantization_config=bnb_config, device_map="auto" ) model.gradient_checkpointing_enable() model= prepare_model_for_kbit_training(model) peft_config.init_lora_weights = False model.add_adapter(peft_config, adapter_name='adapter') ``` ### Expected behavior **_is_package_available("peft")** and **is_peft_available()** imported from the transformers.utils.import_utils module both have to return the same value either True or False.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27309/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27309/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27308
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27308/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27308/comments
https://api.github.com/repos/huggingface/transformers/issues/27308/events
https://github.com/huggingface/transformers/issues/27308
1,978,762,289
I_kwDOCUB6oc518YQx
27,308
Add ReCag model
{ "login": "Ranitbag007", "id": 133197492, "node_id": "U_kgDOB_ButA", "avatar_url": "https://avatars.githubusercontent.com/u/133197492?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Ranitbag007", "html_url": "https://github.com/Ranitbag007", "followers_url": "https://api.github.com/users/Ranitbag007/followers", "following_url": "https://api.github.com/users/Ranitbag007/following{/other_user}", "gists_url": "https://api.github.com/users/Ranitbag007/gists{/gist_id}", "starred_url": "https://api.github.com/users/Ranitbag007/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ranitbag007/subscriptions", "organizations_url": "https://api.github.com/users/Ranitbag007/orgs", "repos_url": "https://api.github.com/users/Ranitbag007/repos", "events_url": "https://api.github.com/users/Ranitbag007/events{/privacy}", "received_events_url": "https://api.github.com/users/Ranitbag007/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
open
false
null
[]
[ "Hi @Ranitbag007, thanks for opening this new model request! \r\n\r\nCould you update the issue description to include any relevant links to the model, such as the paper and any relevant repos?\r\n\r\nWe have recently been trying to push for `model on the hub` and have as much support as we can there. It will also be easier to integrate it! For anyone which wishes to tackle adding this model - here is a [tutorial](https://huggingface.co/docs/transformers/custom_models). ", "I have built a model architecture and tokenizer using sentencepiece . I want to integrate the model to Huggingface . for further training and implemetntation.", "@Ranitbag007 - that's great! This means there isn't much work at all to be able to make the model usable in the transformers library. The tutorial on [adding custom models](https://huggingface.co/docs/transformers/custom_models) should contain all necessary information on how to add the model to the hub. Let us know if anything isn't clear or you need help! ", "Hi. https://github.com/Ranitbag007/Recag this is the code for tokenizer and model architecture can you help me to add this model into huggingface?\r\n", "@amyeroberts ", "@Ranitbag007 - the [linked to tutorial](https://huggingface.co/docs/transformers/custom_models) should tell you all the steps needed for adding the model to the hub. Here is a more general overview of adding models to transformers: https://huggingface.co/docs/transformers/v4.35.0/en/add_new_model - note in this case you don't need to open a PR in this library, instead you can create a model repo and add the code there directly on the hub!", "Do I need to occur qany changement in my codes?\r\n", "@amyeroberts I have registered my custom model using AutoModelForCausalLM.register(CustomAIConfig, CustomAI) but I am unable to call the model . It shows ImportError: cannot import name 'CustomAI' from 'transformers' (/usr/local/lib/python3.10/dist-packages/transformers/__init__.py) . Can you help me in this?", "Without seeing the modeling code I won't be able to help - could you link to it on its repo on the hub? " ]
1,699
1,699
null
NONE
null
### Model description I would like to add a new model, ReCag, to Transformers. ### Open source status - [X] The model implementation is available - [ ] The model weights are available ### Provide useful links for the implementation _No response_
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27308/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27308/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/27307
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27307/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27307/comments
https://api.github.com/repos/huggingface/transformers/issues/27307/events
https://github.com/huggingface/transformers/pull/27307
1,978,735,719
PR_kwDOCUB6oc5eqp3o
27,307
Fix daily CI image build
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,699
1,699
1,699
COLLABORATOR
null
# What does this PR do? The daily CI image build is currently failing due to `intel_extension_for_pytorch`: the version suffix, as in `intel_extension_for_pytorch==1.11.0+cpu`, stopped working 4 days ago, so I removed it. I also updated the version to `2.1.0`.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27307/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27307/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27307", "html_url": "https://github.com/huggingface/transformers/pull/27307", "diff_url": "https://github.com/huggingface/transformers/pull/27307.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27307.patch", "merged_at": 1699266443000 }
https://api.github.com/repos/huggingface/transformers/issues/27306
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27306/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27306/comments
https://api.github.com/repos/huggingface/transformers/issues/27306/events
https://github.com/huggingface/transformers/pull/27306
1,978,731,139
PR_kwDOCUB6oc5eqo49
27,306
Update doctest workflow file
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[ { "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false } ]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,699
1,699
1,699
COLLABORATOR
null
# What does this PR do? @glegendre01 mentioned this to me: due to a change in the AWS CI runner settings, this CI could not find a runner to run on without this update.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27306/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27306/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27306", "html_url": "https://github.com/huggingface/transformers/pull/27306", "diff_url": "https://github.com/huggingface/transformers/pull/27306.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27306.patch", "merged_at": 1699266469000 }
https://api.github.com/repos/huggingface/transformers/issues/27305
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27305/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27305/comments
https://api.github.com/repos/huggingface/transformers/issues/27305/events
https://github.com/huggingface/transformers/issues/27305
1,978,710,314
I_kwDOCUB6oc518Lkq
27,305
SAM discrepancies between automatic mask generation and bbox/point guided
{ "login": "rb-synth", "id": 135021519, "node_id": "U_kgDOCAxDzw", "avatar_url": "https://avatars.githubusercontent.com/u/135021519?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rb-synth", "html_url": "https://github.com/rb-synth", "followers_url": "https://api.github.com/users/rb-synth/followers", "following_url": "https://api.github.com/users/rb-synth/following{/other_user}", "gists_url": "https://api.github.com/users/rb-synth/gists{/gist_id}", "starred_url": "https://api.github.com/users/rb-synth/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rb-synth/subscriptions", "organizations_url": "https://api.github.com/users/rb-synth/orgs", "repos_url": "https://api.github.com/users/rb-synth/repos", "events_url": "https://api.github.com/users/rb-synth/events{/privacy}", "received_events_url": "https://api.github.com/users/rb-synth/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @rb-synth \r\nThanks for the issue and your interest for using SAM ! \r\nDo you observe the same behaviour when using the original repository? I think that this is expected as you always need to provide at least a single 2D point or a bounding box in order for the model to predict accurately segmentation masks. \r\n\r\nPlease follow this notebook shared in the documentation: https://github.com/huggingface/notebooks/blob/main/examples/segment_anything.ipynb for using SAM\r\n\r\nFor automatically retrieving all potential masks given an image, one should use the `automatic-mask-generation` pipeline which should lead to identical results than original repository - please check out this notebook on how to use it: https://github.com/huggingface/notebooks/blob/main/examples/automatic_mask_generation.ipynb \r\n\r\nLet us know if you have more questions or if anything is unclear", "Right, I see! Could we not embed the generator inside the `__call__` function so that if no mask/boundingbox/points are given, it will fall back to this?", "Hi @rb-synth \r\nHmm I am afraid this would be quite breaking for users that already use SAM for fine-tuning or use SAM for raw inference without any input (image only) :/ ", "Please refer to the documentation and go over the resources shared in the overview section: https://huggingface.co/docs/transformers/v4.35.0/en/model_doc/sam#overview to understand all different type of usage (fine-tuning, AMG, pure inference ...)", "Ok, thanks for the help. I want to enable the case where the user may give input, in which case I use `SamModel`, but fall back to `MaskGenerationPipeline` when no extra inputs are given. I don't want to have two models saved in memory, so I tried the following:\r\n\r\n```python\r\nfrom PIL import Image\r\nimport requests\r\nfrom transformers import MaskGenerationPipeline, SamModel\r\nimport torch\r\n\r\nmodel = SamModel.from_pretrained(\"facebook/sam-vit-huge\", low_cpu_mem_usage=True, torch_dtype=torch.float16)\r\nmodel.to(\"cuda\")\r\ngenerator = MaskGenerationPipeline(model=model)\r\n\r\nimg_url = \"https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg\"\r\nraw_image = Image.open(requests.get(img_url, stream=True).raw).convert(\"RGB\")\r\n\r\noutputs = generator(raw_image, points_per_batch=64)\r\n```\r\nBut here it seems that the image processor does not get set\r\n![image](https://github.com/huggingface/transformers/assets/135021519/1ac2e5c4-72b1-4c56-9bbd-051d522e392d)\r\n\r\n\r\nAny advice on how to simultaneously run `SamModel` and `MaskGenerationPipeline` with only one model in memory?", "Working with \r\n\r\n```python\r\nfrom PIL import Image\r\nimport requests\r\nfrom transformers import MaskGenerationPipeline, SamModel, SamImageProcessor\r\nimport torch\r\n\r\nmodel_name = \"facebook/sam-vit-huge\"\r\ndevice = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\r\n\r\nimage_processor = SamImageProcessor.from_pretrained(model_name)\r\nmodel = SamModel.from_pretrained(model_name, low_cpu_mem_usage=True)\r\nmodel.to(device)\r\ngenerator = MaskGenerationPipeline(model=model, image_processor=image_processor, device=device)\r\n\r\nimg_url = \"https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg\"\r\nraw_image = Image.open(requests.get(img_url, stream=True).raw).convert(\"RGB\")\r\n\r\nwith torch.no_grad():\r\n outputs = generator(raw_image, points_per_batch=64)\r\n```", "Now I'm noticing that ``MaskGenerationPipeline`` 
exposes args such as ``pred_iou_thresh``, but ``SamModel`` doesn't. How can I get the same behaviour from ``SamModel``?\r\n\r\nI guess globally, I'm confused by what appears to be a lack of symmetry between the classes.", "@rb-synth \r\nthe automatic mask generation pipeline is quite complex, please have a look at the source code here: https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/mask_generation.py#L221 , I am personally not in favor of adding the automatic mask generation pipeline inside the model forward itself. The AMG pipeline itself uses the standalone model to get the raw mask and process them (not easily!) for every randomly generated points, hence, that entire processing logic should live inside an appropriate pipeline that abstracts everything for the users. ", "Ok, perhaps I'm fundamentally misunderstanding something, I imagined the process of AMG to be identical as the normal forward pass with points sampled at regular intervals. Which is why I was surprised that the implementations are different. I appreciate the for AMG it needs to be wrapped into mini-batches to save time, of course, but I suppose the same could be done for many manually requested points. \r\n\r\nThat aside, how can I apply `pred_iou_thresh` and `nms` to results coming out from `SamModel`?", "Any updates on how to harmonise the automatic mask generation with the rest of the SAM code base?", "Hi @rb-synth \r\nSorry for getting back late on this issue, unfortunately currently there is no plan on harmonising the model's forward to comply with AMG pipeline because of the reasons stated in https://github.com/huggingface/transformers/issues/27305#issuecomment-1794770681 . If you think that you can still make it through a PR, I'd be happy to review it and merge it if it makes sense but thinking about it IMO there is no way we can effectively do this currently", "okay, thanks for the update and thanks for the effort!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,699
1,707
1,707
NONE
null
### System Info - `transformers` version: 4.35.0 - Platform: Linux-5.15.0-1043-aws-x86_64-with-glibc2.31 - Python version: 3.10.13 - Huggingface_hub version: 0.17.3 - Safetensors version: 0.4.0 - Accelerate version: 0.24.1 - Accelerate config: not found - PyTorch version (GPU?): 2.1.0+cu121 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help? @amyeroberts cc @ArthurZucker @younesbelkada ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Use example from SAM REAMDE. Remove the coordinate to ask SAM to predict all objects in image. ```python import torch from PIL import Image import requests from transformers import SamModel, SamProcessor device = "cuda" if torch.cuda.is_available() else "cpu" model = SamModel.from_pretrained("facebook/sam-vit-huge").to(device) processor = SamProcessor.from_pretrained("facebook/sam-vit-huge") img_url = "https://huggingface.co/ybelkada/segment-anything/resolve/main/assets/car.png" raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB") # input_points = [[[450, 600]]] # 2D location of a window in the image inputs = processor(raw_image, return_tensors="pt").to(device) with torch.no_grad(): outputs = model(**inputs) masks = processor.image_processor.post_process_masks( outputs.pred_masks.cpu(), inputs["original_sizes"].cpu(), inputs["reshaped_input_sizes"].cpu() ) masks = [Image.fromarray(m.numpy()) for m in masks[0][0]] scores = outputs.iou_scores for i, m in enumerate(masks): m.save(f"mask_{i}.png") ``` ![image](https://github.com/huggingface/transformers/assets/135021519/0ae178df-90a6-4c0f-8360-8d39d9b09224) ![image](https://github.com/huggingface/transformers/assets/135021519/47e1238e-63d8-4843-9a9c-1cbdd490e9aa) ![image](https://github.com/huggingface/transformers/assets/135021519/1c8ec9b8-ae20-4a70-a14d-a439c3da22f7) ### Expected behavior Should segment regions
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27305/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27305/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27304
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27304/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27304/comments
https://api.github.com/repos/huggingface/transformers/issues/27304/events
https://github.com/huggingface/transformers/issues/27304
1,978,548,835
I_kwDOCUB6oc517kJj
27,304
DirectML quantized models
{ "login": "PHIL-GIBSON-1990", "id": 21168288, "node_id": "MDQ6VXNlcjIxMTY4Mjg4", "avatar_url": "https://avatars.githubusercontent.com/u/21168288?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PHIL-GIBSON-1990", "html_url": "https://github.com/PHIL-GIBSON-1990", "followers_url": "https://api.github.com/users/PHIL-GIBSON-1990/followers", "following_url": "https://api.github.com/users/PHIL-GIBSON-1990/following{/other_user}", "gists_url": "https://api.github.com/users/PHIL-GIBSON-1990/gists{/gist_id}", "starred_url": "https://api.github.com/users/PHIL-GIBSON-1990/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PHIL-GIBSON-1990/subscriptions", "organizations_url": "https://api.github.com/users/PHIL-GIBSON-1990/orgs", "repos_url": "https://api.github.com/users/PHIL-GIBSON-1990/repos", "events_url": "https://api.github.com/users/PHIL-GIBSON-1990/events{/privacy}", "received_events_url": "https://api.github.com/users/PHIL-GIBSON-1990/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @PHIL-GIBSON-1990, thanks for raising this issue! \r\n\r\nI'm not familiar with DirectML, or our support for it - cc @SunMarc as this error seems to be triggered with [GPTQ quantization](https://github.com/huggingface/transformers/blob/eef7ea98c31a333bacdc7ae7a2372bde772be8e4/src/transformers/modeling_utils.py#L2801). \r\n\r\nSo that we can best help, could you provide a minimal code snippet for us to reproduce the error on our end? In particular, how the model is being loaded and used with torch_directml. ", "Hi @PHIL-GIBSON-1990, thanks for reporting. This is indeed strange. It shouldn't be related to gptq quantization in particular. The error that you got happened because of this condition with transformers 4.34.1: \r\n ```py\r\nif not torch.cuda.is_available():\r\n raise RuntimeError(\"GPU is required to quantize or run quantize model.\")\r\n```\r\nCan you check what `torch.cuda.is_available()` returns ? ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,699
1,702
1,702
NONE
null
### System Info - `transformers` version: 4.34.1 - Platform: Windows-10-10.0.19045-SP0 - Python version: 3.10.6 - Huggingface_hub version: 0.17.3 - Safetensors version: 0.4.0 - Accelerate version: 0.24.1 - Accelerate config: not found - PyTorch version (GPU?): 2.0.0+cpu (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: RTX 3090 - Using distributed or parallel set-up in script?: no idea what this is ### Who can help? _No response_ ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction 1. setup pytorch with directml 2. load quantized model The following error occurs: Traceback (most recent call last): File "H:\PyTorch\pyTorchTest.py", line 20, in <module> model = AutoModelForCausalLM.from_pretrained("TheBloke/Hermes-Trismegistus-Mistral-7B-GPTQ") File "C:\Users\xxxx\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\auto\auto_factory.py", line 565, in from_pretrained return model_class.from_pretrained( File "C:\Users\xxxx\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\modeling_utils.py", line 2702, in from_pretrained raise RuntimeError("GPU is required to quantize or run quantize model.") RuntimeError: GPU is required to quantize or run quantize model. Note: I have my GPU set to be the default torch device, and when running non-quantized models the GPU is used. This is the code used to set the device: `import torch ` `import torch_directml ` `dml = torch_directml.device(0)` `torch.set_default_device(dml) ` ### Expected behavior It should work
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27304/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27304/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27303
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27303/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27303/comments
https://api.github.com/repos/huggingface/transformers/issues/27303/events
https://github.com/huggingface/transformers/issues/27303
1,978,412,872
I_kwDOCUB6oc517C9I
27,303
KV cache optimization with paged attention
{ "login": "liangan1", "id": 46986936, "node_id": "MDQ6VXNlcjQ2OTg2OTM2", "avatar_url": "https://avatars.githubusercontent.com/u/46986936?v=4", "gravatar_id": "", "url": "https://api.github.com/users/liangan1", "html_url": "https://github.com/liangan1", "followers_url": "https://api.github.com/users/liangan1/followers", "following_url": "https://api.github.com/users/liangan1/following{/other_user}", "gists_url": "https://api.github.com/users/liangan1/gists{/gist_id}", "starred_url": "https://api.github.com/users/liangan1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/liangan1/subscriptions", "organizations_url": "https://api.github.com/users/liangan1/orgs", "repos_url": "https://api.github.com/users/liangan1/repos", "events_url": "https://api.github.com/users/liangan1/events{/privacy}", "received_events_url": "https://api.github.com/users/liangan1/received_events", "type": "User", "site_admin": false }
[ { "id": 2796628563, "node_id": "MDU6TGFiZWwyNzk2NjI4NTYz", "url": "https://api.github.com/repos/huggingface/transformers/labels/WIP", "name": "WIP", "color": "234C99", "default": false, "description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress" } ]
open
false
null
[]
[ "cc @gante (I think this is closest to your work - sorry if wrong! ) ", "@jgong5\r\n", "Hi @liangan1 👋 \r\n\r\nWe are close to introducing a new cache abstraction (https://github.com/huggingface/transformers/pull/26681). I believe that, after this PR is merged, adding paged attention would become directly applicable on top of it :)\r\n\r\nWould you be interested in adding it to `transformers`?", "> Hi @liangan1 👋\r\n> \r\n> We are close to introducing a new cache abstraction (#26681). I believe that, after this PR is merged, adding paged attention would become directly applicable on top of it :)\r\n> \r\n> Would you be interested in adding it to `transformers`?\r\n\r\nSure. We are pleasure to contribute more kv_cache related optimizations. ", "Awesome, I will let you know when the cache abstraction is ready!", "Thanks. ", "@liangan1 the cache abstraction will be merged today, so you can start working on top of it. Happy to provide pointers and suggestions! 🙌 ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,699
1,704
null
NONE
null
### Feature request Paged attention has already been enabled by several serving engines, e.g., [vllm](https://github.com/vllm-project/vllm), [tensorrt-llm](https://github.com/NVIDIA/TensorRT-LLM/blob/release/0.5.0/tensorrt_llm/runtime/kv_cache_manager.py) ### Motivation The KV cache is used to reduce computation in the decoder layers, but it also brings memory overhead: for example, when we use beam search, the kv_cache has to be reordered according to the latest beam indices, and the current key/value has to be concatenated with the kv_cache in the attention layer so that the scaled dot product covers the entire context. When the sequence is very long, this memory overhead becomes a performance bottleneck. ### Your contribution No PR yet
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27303/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27303/timeline
reopened
null
null
https://api.github.com/repos/huggingface/transformers/issues/27302
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27302/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27302/comments
https://api.github.com/repos/huggingface/transformers/issues/27302/events
https://github.com/huggingface/transformers/issues/27302
1,978,370,685
I_kwDOCUB6oc5164p9
27,302
Issue with Whisper model
{ "login": "lem21h", "id": 668165, "node_id": "MDQ6VXNlcjY2ODE2NQ==", "avatar_url": "https://avatars.githubusercontent.com/u/668165?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lem21h", "html_url": "https://github.com/lem21h", "followers_url": "https://api.github.com/users/lem21h/followers", "following_url": "https://api.github.com/users/lem21h/following{/other_user}", "gists_url": "https://api.github.com/users/lem21h/gists{/gist_id}", "starred_url": "https://api.github.com/users/lem21h/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lem21h/subscriptions", "organizations_url": "https://api.github.com/users/lem21h/orgs", "repos_url": "https://api.github.com/users/lem21h/repos", "events_url": "https://api.github.com/users/lem21h/events{/privacy}", "received_events_url": "https://api.github.com/users/lem21h/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "After a bit of digging the Whisper AI builds a map of special tokens:\r\n```\r\nadditional_tokens = dict(\r\n zip(\r\n tokenizer.tokenizer.additional_special_tokens,\r\n tokenizer.tokenizer.additional_special_tokens_ids,\r\n )\r\n )\r\n```\r\n \r\non version 4.30.2 the EN gets ID 50259\r\nand on version 4.35.0 the EN gets ID 50349", "Hey! I am not sure I follow, where is the snippet taken from? What you are showing seems to use `openai-whisper` which now uses `tiktoken` rather `tokenizers` as a backend so not sure we can do much here. \r\n", "hi, thank you for the reply.\r\n\r\nWell whisper.ai in version 1.0.0.1 (from dec 2022) still using GPT2 tokenizer.\r\nThe issue is that when the model is loaded, then tokenizer should return the ID of the language (in my case english). It is done by checking those `additional_special_tokens` and matched position against `additional_special_tokens_ids`\r\n\r\nIn version 4.30.2 of tokenizers, the value for english is returned 50259 and in version 4.35.0 it is out of sudden 50349.\r\n\r\nI'm not saying that what they did in Whisper AI is ok, but just curious how ID of EN tokenizer has different ID between those two versions for the same model.\r\nThat is the whole issue about :)", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,699
1,702
1,702
NONE
null
### System Info After upgrading from version 4.30.2 to 4.35.0 the Whisper model stopped working. The issue is with the following piece of the code: ``` if tokenizer is None: tokenizer = get_tokenizer(model.is_multilingual) if tokenizer.language is None or tokenizer.language_token not in tokenizer.sot_sequence: raise ValueError(f"This model doesn't have language tokens so it can't perform lang id") ``` For version 4.30.2 the `tokenizer.language` returns 50259 for version 4.35.0 the `tokenizer.language` returns 50349 ### Who can help? @ArthurZucker ### Reproduction ``` model = whisper.load_model("medium") tokenizer = get_tokenizer(model.is_multilingual) print(tokenizer.language_token) print(tokenizer.language_token not in tokenizer.sot_sequence) ``` For 4.30.2 the returned values are: 50259 False For 4.35.0 the returned values are: 50349 True ### Expected behavior I would expect that this will work the same. Also why the same model out of sudden changes the language token value ??
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27302/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27302/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27301
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27301/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27301/comments
https://api.github.com/repos/huggingface/transformers/issues/27301/events
https://github.com/huggingface/transformers/issues/27301
1,978,228,011
I_kwDOCUB6oc516V0r
27,301
Kosmos2 device_map and batch processing issues.
{ "login": "rabiulcste", "id": 7416164, "node_id": "MDQ6VXNlcjc0MTYxNjQ=", "avatar_url": "https://avatars.githubusercontent.com/u/7416164?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabiulcste", "html_url": "https://github.com/rabiulcste", "followers_url": "https://api.github.com/users/rabiulcste/followers", "following_url": "https://api.github.com/users/rabiulcste/following{/other_user}", "gists_url": "https://api.github.com/users/rabiulcste/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabiulcste/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabiulcste/subscriptions", "organizations_url": "https://api.github.com/users/rabiulcste/orgs", "repos_url": "https://api.github.com/users/rabiulcste/repos", "events_url": "https://api.github.com/users/rabiulcste/events{/privacy}", "received_events_url": "https://api.github.com/users/rabiulcste/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[ { "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false } ]
[ "cc @ydshieh ", "Hi @rabiulcste \r\n\r\nThank you (again) for opening this issue. I will take a look. In order to ease the process, could you provide all the necessary imports and variable definition.\r\n\r\nThe goal is to have a self-complete code snippet that could be run, so we can debug it straightforward.", "> Hi @rabiulcste\r\n> \r\n> Thank you (again) for opening this issue. I will take a look. In order to ease the process, could you provide all the necessary imports and variable definition.\r\n> \r\n> The goal is to have a self-complete code snippet that could be run, so we can debug it straightforward.\r\n\r\nThanks for the prompt response! I've now updated it with a complete script. ", "Very nice @rabiulcste I appreciated a lot - will take a look!", "@rabiulcste Regarding batch issue, it is fixed in #27323.", "\r\n@ydshieh thanks so much for the fix! any thoughts on the multi-GPU issue?\r\n\r\n", "@rabiulcste taking a look today for auto map", "@rabiulcste Fixed! " ]
1,699
1,699
1,699
NONE
null
### System Info - `transformers` version: 4.36.0.dev0 - Platform: Linux-4.15.0-213-generic-x86_64-with-glibc2.27 - Python version: 3.10.11 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.2 - Accelerate version: 0.21.0 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: RTX8000 - Using distributed or parallel set-up in script?: No ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction 1. Batch processing raises batch size mismatch errors. ``` from transformers import AutoProcessor, Kosmos2ForConditionalGeneration from PIL import Image import requests def initialize_grounding_model(model_name: str): print(f"Initializing {model_name} model") model = Kosmos2ForConditionalGeneration.from_pretrained(model_name, device_map="auto") processor = AutoProcessor.from_pretrained(model_name) processor.tokenizer.padding_side = "left" return model, processor def test_kosmos2_batch(model, processor, batching=False): print("=========================================") print(f"Running test with Batching = {batching}") print("=========================================") prompt = "<grounding>An image of" url = "https://huggingface.co/microsoft/kosmos-2-patch14-224/resolve/main/snowman.png" image = Image.open(requests.get(url, stream=True).raw).convert("RGB") if batching: prompts = [ "<grounding>An image of", "<grounding>A photo of", "<grounding>Describe this photo in details", "<grounding>What is in this photo?", "<grounding>What is this a picture of?", "<grounding>What is the image about?", ] batch_images = [image] * 6 else: prompts = prompt batch_images = image print(f"Input: {prompts}") print(f"Image: {batch_images}") inputs = processor(text=prompts, images=batch_images, padding=True, return_tensors="pt") device = model.device print(f"Running on device: {device}") generated_ids = model.generate( pixel_values=inputs["pixel_values"].to(device), input_ids=inputs["input_ids"].to(device), attention_mask=inputs["attention_mask"].to(device), image_embeds=None, image_embeds_position_mask=inputs["image_embeds_position_mask"].to(device), use_cache=True, max_new_tokens=64, ) generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True) processed_text = [ processor.post_process_generation(out, cleanup_and_extract=False) for out in generated_text ] print(f"Output: {processed_text}") if __name__ == "__main__": model_name = "microsoft/kosmos-2-patch14-224" model, processor = initialize_grounding_model(model_name) print("Model initialized successfully!") test_kosmos2_batch(model, processor, batching=False) test_kosmos2_batch(model, processor, batching=True) ``` ``` Traceback (most recent call last): File "/home/mila/r/rabiul.awal/.venv/dgx-machines/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 748, in convert_to_tensors tensor = as_tensor(value) File "/home/mila/r/rabiul.awal/.venv/dgx-machines/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 720, in as_tensor return torch.tensor(value) ValueError: expected sequence of length 74 at dim 1 (got 75) The above exception was the direct cause of the following exception: 
Traceback (most recent call last): File "/home/mila/r/rabiul.awal/vqazero-private/local_files/test_kosmos2.py", line 64, in <module> test_kosmos2_batch(model, processor, batching=True) File "/home/mila/r/rabiul.awal/vqazero-private/local_files/test_kosmos2.py", line 39, in test_kosmos2_batch inputs = processor(text=prompts, images=batch_images, padding=True, return_tensors="pt") File "/home/mila/r/rabiul.awal/.venv/dgx-machines/lib/python3.10/site-packages/transformers/models/kosmos2/processing_kosmos2.py", line 257, in __call__ BatchEncoding( File "/home/mila/r/rabiul.awal/.venv/dgx-machines/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 223, in __init__ self.convert_to_tensors(tensor_type=tensor_type, prepend_batch_axis=prepend_batch_axis) File "/home/mila/r/rabiul.awal/.venv/dgx-machines/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 764, in convert_to_tensors raise ValueError( ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length. Perhaps your features (`input_ids` in this case) have excessive nesting (inputs type `list` where type `int` is expected). ``` 2. To reproduce the device_map issue please run the same command with 2 GPUs. It will raise the device allocation issue. ### Expected behaviour I want the device_map to assign available GPUs automatically. Also, this is the only model failing in newer versions of Transformers. I've used it fine previously! - fix device map or .cuda() for multiple GPUs - fix batch processing as expected
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27301/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27301/timeline
completed
null
null
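A minimal workaround sketch for the Kosmos-2 batching failure reported in the record above: run the processor on each (prompt, image) pair individually, then left-pad the per-sample tensors to a common length before stacking. The `left_pad` / `build_batch` helpers and the zero pad-token fallback are illustrative assumptions, not part of the processor's official batching API.
```python
import torch
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("microsoft/kosmos-2-patch14-224")
pad_id = processor.tokenizer.pad_token_id
if pad_id is None:  # assumption: fall back to 0 if no pad token is defined
    pad_id = 0


def left_pad(tensors, pad_value):
    # Left-pad 1D tensors to the longest length so prompts stay right-aligned for generation.
    max_len = max(t.shape[-1] for t in tensors)
    return torch.stack(
        [torch.cat([torch.full((max_len - t.shape[-1],), pad_value, dtype=t.dtype), t]) for t in tensors]
    )


def build_batch(prompts, images):
    # Run the processor per sample to avoid the ragged-length tensor error, then batch manually.
    singles = [processor(text=p, images=im, return_tensors="pt") for p, im in zip(prompts, images)]
    return {
        "pixel_values": torch.cat([s["pixel_values"] for s in singles]),
        "input_ids": left_pad([s["input_ids"][0] for s in singles], pad_id),
        "attention_mask": left_pad([s["attention_mask"][0] for s in singles], 0),
        "image_embeds_position_mask": left_pad([s["image_embeds_position_mask"][0] for s in singles], 0),
    }
```
The resulting dictionary can be passed to `model.generate(...)` the same way the single-sample inputs are in the reproduction script above.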
https://api.github.com/repos/huggingface/transformers/issues/27300
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27300/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27300/comments
https://api.github.com/repos/huggingface/transformers/issues/27300/events
https://github.com/huggingface/transformers/issues/27300
1,978,134,929
I_kwDOCUB6oc515_GR
27,300
There is a bug when using the neftune_noise_alpha feature in transformers v4.35.0.
{ "login": "CaC033", "id": 34328295, "node_id": "MDQ6VXNlcjM0MzI4Mjk1", "avatar_url": "https://avatars.githubusercontent.com/u/34328295?v=4", "gravatar_id": "", "url": "https://api.github.com/users/CaC033", "html_url": "https://github.com/CaC033", "followers_url": "https://api.github.com/users/CaC033/followers", "following_url": "https://api.github.com/users/CaC033/following{/other_user}", "gists_url": "https://api.github.com/users/CaC033/gists{/gist_id}", "starred_url": "https://api.github.com/users/CaC033/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/CaC033/subscriptions", "organizations_url": "https://api.github.com/users/CaC033/orgs", "repos_url": "https://api.github.com/users/CaC033/repos", "events_url": "https://api.github.com/users/CaC033/events{/privacy}", "received_events_url": "https://api.github.com/users/CaC033/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @muellerz @pacman100 ", "Hello @amyeroberts , tagging @younesbelkada would added the feature. Untagging myself.", "The error is \r\n\r\n```bash\r\nTypeError: _set_gradient_checkpointing() got an unexpected keyword argument 'enable'\r\n```\r\n\r\nThis is because you are using a model that uses code on the Hub feature that used the old gradient checkpointing logic that has been deprecated. We do not support it again thanks to https://github.com/huggingface/transformers/pull/27610. Please refer to my comment in https://github.com/huggingface/transformers/pull/27610#issue-2002672467 for more details. \r\n\r\nClosing this issue as the issue is unrelated to Neftune, the issue is also fixed if you switch to transformers main\r\n\r\n```bash\r\npip install -U git+https://github.com/huggingface/transformers.git\r\n```" ]
1,699
1,700
1,700
NONE
null
### System Info - `transformers` version: 4.35.0 - Platform: Linux-4.19.91-014.kangaroo.alios7.x86_64-x86_64-with-glibc2.17 - Python version: 3.8.13 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - Accelerate version: 0.21.0 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction trainer = Trainer( model=model, args=training_args, train_dataset=lm_datasets["train"], eval_dataset=lm_datasets["validation"], tokenizer=tokenizer, data_collator=default_data_collator, neftune_noise_alpha=0.1, ) ### Expected behavior [INFO|modeling_utils.py:3118] 2023-11-06 09:56:03,877 >> loading weights file /mnt/workspace/peipao/jichunengli/Qwen-14B-Chat/model.safetensors.index.json [INFO|configuration_utils.py:791] 2023-11-06 09:56:03,879 >> Generate config GenerationConfig {} Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████| 15/15 [00:18<00:00, 1.24s/it] Traceback (most recent call last): File "/mnt/workspace/peipao/jichunengli/test_nefqwen_hf/ds_train_huggingface_llama.py", line 325, in <module> main() File "/mnt/workspace/peipao/jichunengli/test_nefqwen_hf/ds_train_huggingface_llama.py", line 290, in main model.gradient_checkpointing_enable() File "/opt/conda/lib/python3.8/site-packages/transformers/modeling_utils.py", line 1872, in gradient_checkpointing_enable self._set_gradient_checkpointing(enable=True, gradient_checkpointing_func=gradient_checkpointing_func) TypeError: _set_gradient_checkpointing() got an unexpected keyword argument 'enable' Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████| 15/15 [00:20<00:00, 1.34s/it] Traceback (most recent call last): File "/mnt/workspace/peipao/jichunengli/test_nefqwen_hf/ds_train_huggingface_llama.py", line 325, in <module> main() File "/mnt/workspace/peipao/jichunengli/test_nefqwen_hf/ds_train_huggingface_llama.py", line 290, in main model.gradient_checkpointing_enable() File "/opt/conda/lib/python3.8/site-packages/transformers/modeling_utils.py", line 1872, in gradient_checkpointing_enable self._set_gradient_checkpointing(enable=True, gradient_checkpointing_func=gradient_checkpointing_func) TypeError: _set_gradient_checkpointing() got an unexpected keyword argument 'enable' Loading checkpoint shards: 80%|███████████████████████████████████████████████████████████████████████████████████████▏ | 12/15 [00:23<00:04, 1.59s/it]WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 2378 closing signal SIGTERM WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 2379 closing signal SIGTERM WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 2382 closing signal SIGTERM WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 2383 closing signal SIGTERM WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 2384 closing signal SIGTERM 
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 2385 closing signal SIGTERM ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 2 (pid: 2380) of binary: /opt/conda/bin/python3.8 Traceback (most recent call last): File "/opt/conda/bin/torchrun", line 8, in <module> sys.exit(main()) File "/opt/conda/lib/python3.8/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper return f(*args, **kwargs) File "/opt/conda/lib/python3.8/site-packages/torch/distributed/run.py", line 794, in main run(args) File "/opt/conda/lib/python3.8/site-packages/torch/distributed/run.py", line 785, in run elastic_launch( File "/opt/conda/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 134, in __call__ return launch_agent(self._config, self._entrypoint, list(args)) File "/opt/conda/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 250, in launch_agent raise ChildFailedError( torch.distributed.elastic.multiprocessing.errors.ChildFailedError: ============================================================ /mnt/workspace/peipao/jichunengli/test_nefqwen_hf/ds_train_huggingface_llama.py FAILED ------------------------------------------------------------
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27300/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/transformers/issues/27300/timeline
completed
null
null
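A hedged sketch of the signature mismatch behind the traceback in the record above: recent versions of `gradient_checkpointing_enable()` call the hook with `enable=` and `gradient_checkpointing_func=` keywords, while older remote (`trust_remote_code`) modeling files still define the legacy `(module, value)` form; upgrading `transformers` or the Hub modeling file resolves it. The class names below are purely illustrative, not the upstream implementation.
```python
import torch.utils.checkpoint


class LegacyRemoteModel:
    # Legacy hook shape found in older Hub modeling files; calling it with
    # enable=/gradient_checkpointing_func= keywords raises the TypeError shown above.
    def _set_gradient_checkpointing(self, module, value=False):
        if hasattr(module, "gradient_checkpointing"):
            module.gradient_checkpointing = value


class UpdatedRemoteModel:
    # New-style hook accepting the keywords passed by newer transformers versions
    # (a sketch only -- the real logic lives in PreTrainedModel).
    def _set_gradient_checkpointing(self, enable=True, gradient_checkpointing_func=torch.utils.checkpoint.checkpoint):
        self.gradient_checkpointing = enable
        self._gradient_checkpointing_func = gradient_checkpointing_func
```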
https://api.github.com/repos/huggingface/transformers/issues/27299
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27299/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27299/comments
https://api.github.com/repos/huggingface/transformers/issues/27299/events
https://github.com/huggingface/transformers/issues/27299
1,978,032,618
I_kwDOCUB6oc515mHq
27,299
DeepSpeed integration: support batch sizes smaller than the number of GPUs/ranks
{ "login": "IMbackK", "id": 13803414, "node_id": "MDQ6VXNlcjEzODAzNDE0", "avatar_url": "https://avatars.githubusercontent.com/u/13803414?v=4", "gravatar_id": "", "url": "https://api.github.com/users/IMbackK", "html_url": "https://github.com/IMbackK", "followers_url": "https://api.github.com/users/IMbackK/followers", "following_url": "https://api.github.com/users/IMbackK/following{/other_user}", "gists_url": "https://api.github.com/users/IMbackK/gists{/gist_id}", "starred_url": "https://api.github.com/users/IMbackK/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/IMbackK/subscriptions", "organizations_url": "https://api.github.com/users/IMbackK/orgs", "repos_url": "https://api.github.com/users/IMbackK/repos", "events_url": "https://api.github.com/users/IMbackK/events{/privacy}", "received_events_url": "https://api.github.com/users/IMbackK/received_events", "type": "User", "site_admin": false }
[ { "id": 2648621985, "node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request", "name": "Feature request", "color": "FBCA04", "default": false, "description": "Request for a new feature" }, { "id": 2659267025, "node_id": "MDU6TGFiZWwyNjU5MjY3MDI1", "url": "https://api.github.com/repos/huggingface/transformers/labels/DeepSpeed", "name": "DeepSpeed", "color": "4D34F7", "default": false, "description": "" } ]
open
false
null
[]
[ "cc @muellerzr @pacman100 ", "Hello, this isn't possible. DeepSpeed is a Data Parallel paradigm, meaning each process/rank should get at least one sample. \r\n\r\n> As far as i know there is no fundamental requirement with zero style shading for there to be one batch per GPU.\r\n\r\nData Parallel by definition requires at least one sample per process/GPU. Could you state where the above claim is \r\nmentioned?" ]
1,699
1,700
null
NONE
null
### Feature request Currently, when training with the Trainer and the DeepSpeed ZeRO integration, the minimum total effective batch size is one sample per GPU/rank; on an 8-GPU machine the minimum effective batch size is therefore 8. As far as I know, there is no fundamental requirement with ZeRO-style sharding for there to be one sample per GPU. ### Motivation Requiring one sample per GPU may be undesirable, for instance when there is insufficient memory.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27299/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27299/timeline
null
null
null
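To make the constraint discussed in the record above concrete: under a data-parallel scheme such as DeepSpeed ZeRO, every rank consumes at least one sample per step, so the smallest effective batch equals the number of ranks. A tiny illustrative calculation (the function name is ours, not a library API):
```python
def effective_batch_size(micro_batch_per_gpu: int, grad_accum_steps: int, num_gpus: int) -> int:
    # Effective batch = samples per GPU per step * accumulation steps * data-parallel ranks.
    return micro_batch_per_gpu * grad_accum_steps * num_gpus


# Smallest possible configuration on an 8-GPU machine: one sample per rank, no accumulation.
assert effective_batch_size(1, 1, 8) == 8
```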
https://api.github.com/repos/huggingface/transformers/issues/27298
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27298/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27298/comments
https://api.github.com/repos/huggingface/transformers/issues/27298/events
https://github.com/huggingface/transformers/issues/27298
1,977,984,216
I_kwDOCUB6oc515aTY
27,298
cannot import name 'LlamaConfig' from 'transformers'
{ "login": "Ahmed-Roushdy", "id": 68569076, "node_id": "MDQ6VXNlcjY4NTY5MDc2", "avatar_url": "https://avatars.githubusercontent.com/u/68569076?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Ahmed-Roushdy", "html_url": "https://github.com/Ahmed-Roushdy", "followers_url": "https://api.github.com/users/Ahmed-Roushdy/followers", "following_url": "https://api.github.com/users/Ahmed-Roushdy/following{/other_user}", "gists_url": "https://api.github.com/users/Ahmed-Roushdy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Ahmed-Roushdy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ahmed-Roushdy/subscriptions", "organizations_url": "https://api.github.com/users/Ahmed-Roushdy/orgs", "repos_url": "https://api.github.com/users/Ahmed-Roushdy/repos", "events_url": "https://api.github.com/users/Ahmed-Roushdy/events{/privacy}", "received_events_url": "https://api.github.com/users/Ahmed-Roushdy/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @Ahmed-Roushdy !\r\nI see from the details above that:\r\n\r\n> adapter-transformers version: 3.2.1\r\n\r\nCan you try to install the latest `transformers` package instead? `pip install -U transformers`", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,699
1,702
1,702
NONE
null
### System Info - `adapter-transformers` version: 3.2.1 - Platform: Linux-5.8.0-59-generic-x86_64-with-glibc2.17 - Python version: 3.8.16 - Huggingface_hub version: 0.12.1 - PyTorch version (GPU?): 1.11.0+cu113 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @SunMarc @younesbelkada ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ``` >>> from transformers import LlamaConfig Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: cannot import name 'LlamaConfig' from 'transformers' (/home/aelkordy/.local/lib/python3.8/site-packages/transformers/__init__.py) ``` ### Expected behavior To be able to import the `LlamaConfig` class.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27298/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27298/timeline
completed
null
null
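A quick sanity check matching the suggestion in the thread above: the environment resolves `transformers` through an `adapter-transformers` 3.2.1 install, which appears to be based on an older (pre-Llama) release, so upgrading with `pip install -U transformers` and re-trying the import should succeed. A minimal check, assuming a recent release is installed:
```python
import transformers
from transformers import LlamaConfig

print(transformers.__version__)   # expect a recent release, not a 4.26-era fork
print(LlamaConfig().model_type)   # prints "llama" once the class is importable
```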
https://api.github.com/repos/huggingface/transformers/issues/27297
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27297/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27297/comments
https://api.github.com/repos/huggingface/transformers/issues/27297/events
https://github.com/huggingface/transformers/issues/27297
1,977,965,674
I_kwDOCUB6oc515Vxq
27,297
Unable to uninstall transformers package
{ "login": "Ahmed-Roushdy", "id": 68569076, "node_id": "MDQ6VXNlcjY4NTY5MDc2", "avatar_url": "https://avatars.githubusercontent.com/u/68569076?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Ahmed-Roushdy", "html_url": "https://github.com/Ahmed-Roushdy", "followers_url": "https://api.github.com/users/Ahmed-Roushdy/followers", "following_url": "https://api.github.com/users/Ahmed-Roushdy/following{/other_user}", "gists_url": "https://api.github.com/users/Ahmed-Roushdy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Ahmed-Roushdy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ahmed-Roushdy/subscriptions", "organizations_url": "https://api.github.com/users/Ahmed-Roushdy/orgs", "repos_url": "https://api.github.com/users/Ahmed-Roushdy/repos", "events_url": "https://api.github.com/users/Ahmed-Roushdy/events{/privacy}", "received_events_url": "https://api.github.com/users/Ahmed-Roushdy/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @Ahmed-Roushdy, thanks for opening an issue. \r\n\r\nFrom the error, it looks like this isn't a transformers library but due to not having write access for running pip in the environment. As such, this isn't something we're able to help with. If you installed `transformers` using `sudo`, you should try uninstalling with `sudo` too. " ]
1,699
1,699
1,699
NONE
null
### System Info Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points. - `adapter-transformers` version: 3.2.1 - Platform: Linux-5.8.0-59-generic-x86_64-with-glibc2.17 - Python version: 3.8.16 - Huggingface_hub version: 0.12.1 - PyTorch version (GPU?): 1.11.0+cu113 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @ArthurZucke ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ``` pip uninstall transformers Uninstalling transformers-4.26.1: Would remove: /opt/miniconda3/bin/transformers-cli /opt/miniconda3/lib/python3.8/site-packages/transformers-4.26.1.dist-info/* /opt/miniconda3/lib/python3.8/site-packages/transformers/* Proceed (Y/n)? Y ERROR: Exception: Traceback (most recent call last): File "/opt/miniconda3/lib/python3.8/shutil.py", line 791, in move os.rename(src, real_dst) PermissionError: [Errno 1] Operation not permitted: '/opt/miniconda3/bin/transformers-cli' -> '/tmp/pip-uninstall-hvzzdqd1/transformers-cli' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/opt/miniconda3/lib/python3.8/site-packages/pip/_internal/cli/base_command.py", line 169, in exc_logging_wrapper status = run_func(*args) File "/opt/miniconda3/lib/python3.8/site-packages/pip/_internal/commands/uninstall.py", line 105, in run uninstall_pathset = req.uninstall( File "/opt/miniconda3/lib/python3.8/site-packages/pip/_internal/req/req_install.py", line 680, in uninstall uninstalled_pathset.remove(auto_confirm, verbose) File "/opt/miniconda3/lib/python3.8/site-packages/pip/_internal/req/req_uninstall.py", line 381, in remove moved.stash(path) File "/opt/miniconda3/lib/python3.8/site-packages/pip/_internal/req/req_uninstall.py", line 272, in stash renames(path, new_path) File "/opt/miniconda3/lib/python3.8/site-packages/pip/_internal/utils/misc.py", line 313, in renames shutil.move(old, new) File "/opt/miniconda3/lib/python3.8/shutil.py", line 812, in move os.unlink(src) PermissionError: [Errno 1] Operation not permitted: '/opt/miniconda3/bin/transformers-cli' ``` ### Expected behavior to successfully uninstall the package
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27297/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27297/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27296
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27296/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27296/comments
https://api.github.com/repos/huggingface/transformers/issues/27296/events
https://github.com/huggingface/transformers/pull/27296
1,977,933,606
PR_kwDOCUB6oc5en-a2
27,296
Fix VideoMAEForPreTraining dtype error
{ "login": "ikergarcia1996", "id": 18737249, "node_id": "MDQ6VXNlcjE4NzM3MjQ5", "avatar_url": "https://avatars.githubusercontent.com/u/18737249?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ikergarcia1996", "html_url": "https://github.com/ikergarcia1996", "followers_url": "https://api.github.com/users/ikergarcia1996/followers", "following_url": "https://api.github.com/users/ikergarcia1996/following{/other_user}", "gists_url": "https://api.github.com/users/ikergarcia1996/gists{/gist_id}", "starred_url": "https://api.github.com/users/ikergarcia1996/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ikergarcia1996/subscriptions", "organizations_url": "https://api.github.com/users/ikergarcia1996/orgs", "repos_url": "https://api.github.com/users/ikergarcia1996/repos", "events_url": "https://api.github.com/users/ikergarcia1996/events{/privacy}", "received_events_url": "https://api.github.com/users/ikergarcia1996/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @amyeroberts \r\n\r\nAfter further investigation into the issue, I discovered that the pixel_values are of the correct dtype. However, in L851, the MEAN and STD values are loaded as float32. Consequently, in L853, where `frames = pixel_values * std + mean`, frames is converted to float32. This causes problems in the subsequent logic. By ensuring that std and mean are loaded with the same dtype as pixel_values, this unwanted conversion is avoided, resolving the issue.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27296). All of your documentation changes will be reflected on that endpoint." ]
1,699
1,699
1,699
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> It is not possible to train VideoMAEForPreTraining with bfloat16, because the labels are always stored as float32. This code snippet triggers the error. ```python from transformers import AutoImageProcessor, VideoMAEForPreTraining import numpy as np import torch num_frames = 16 video = list(np.random.randint(0, 256, (num_frames, 3, 224, 224))) image_processor = AutoImageProcessor.from_pretrained("MCG-NJU/videomae-base") model = VideoMAEForPreTraining.from_pretrained("MCG-NJU/videomae-base",torch_dtype=torch.bfloat16).to("cuda") pixel_values = image_processor(video, return_tensors="pt").pixel_values num_patches_per_frame = (model.config.image_size // model.config.patch_size) ** 2 seq_length = (num_frames // model.config.tubelet_size) * num_patches_per_frame bool_masked_pos = torch.randint(0, 2, (1, seq_length)).bool() outputs = model(pixel_values.to(device=model.device,dtype=model.dtype), bool_masked_pos=bool_masked_pos) loss = outputs.loss loss.backward() ``` Full TraceBack ```bash RuntimeError Traceback (most recent call last) Cell In[1], line 20 17 outputs = model(pixel_values.to(device=model.device,dtype=model.dtype), bool_masked_pos=bool_masked_pos) 18 loss = outputs.loss ---> 20 loss.backward() File ~/miniconda3/envs/transformers/lib/python3.10/site-packages/torch/_tensor.py:492, in Tensor.backward(self, gradient, retain_graph, create_graph, inputs) 482 if has_torch_function_unary(self): 483 return handle_torch_function( 484 Tensor.backward, 485 (self,), (...) 490 inputs=inputs, 491 ) --> 492 torch.autograd.backward( 493 self, gradient, retain_graph, create_graph, inputs=inputs 494 ) File ~/miniconda3/envs/transformers/lib/python3.10/site-packages/torch/autograd/__init__.py:251, in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables, inputs) 246 retain_graph = create_graph 248 # The reason we repeat the same comment below is that 249 # some Python versions print out the first line of a multi-line function 250 # calls in the traceback and some print out the last line --> 251 Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass 252 tensors, 253 grad_tensors_, 254 retain_graph, 255 create_graph, 256 inputs, 257 allow_unreachable=True, 258 accumulate_grad=True, 259 ) RuntimeError: Found dtype Float but expected BFloat16 ``` The problem is that when computing the loss, the labels are in `float32` therefore, the returned loss is also in `float32`. ``` logits: torch.bfloat16 labels: torch.float32 loss: torch.float32 ``` This small change, fixes the issue and allows training VideoMAEForPreTraining model with bfloat16 dtype. 
## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case: #27295 - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->@amyeroberts
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27296/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27296/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27296", "html_url": "https://github.com/huggingface/transformers/pull/27296", "diff_url": "https://github.com/huggingface/transformers/pull/27296.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27296.patch", "merged_at": 1699291206000 }
https://api.github.com/repos/huggingface/transformers/issues/27295
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27295/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27295/comments
https://api.github.com/repos/huggingface/transformers/issues/27295/events
https://github.com/huggingface/transformers/issues/27295
1,977,933,384
I_kwDOCUB6oc515N5I
27,295
VideoMAEForPreTraining cannot be trained with bfloat16
{ "login": "ikergarcia1996", "id": 18737249, "node_id": "MDQ6VXNlcjE4NzM3MjQ5", "avatar_url": "https://avatars.githubusercontent.com/u/18737249?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ikergarcia1996", "html_url": "https://github.com/ikergarcia1996", "followers_url": "https://api.github.com/users/ikergarcia1996/followers", "following_url": "https://api.github.com/users/ikergarcia1996/following{/other_user}", "gists_url": "https://api.github.com/users/ikergarcia1996/gists{/gist_id}", "starred_url": "https://api.github.com/users/ikergarcia1996/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ikergarcia1996/subscriptions", "organizations_url": "https://api.github.com/users/ikergarcia1996/orgs", "repos_url": "https://api.github.com/users/ikergarcia1996/repos", "events_url": "https://api.github.com/users/ikergarcia1996/events{/privacy}", "received_events_url": "https://api.github.com/users/ikergarcia1996/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @ikergarcia1996 thanks for reporting and opening a PR! \r\n\r\nI've started a review on the PR around implementation specifics and I think once merged that should resolve the issue.", "Fixed #27296" ]
1,699
1,699
1,699
CONTRIBUTOR
null
### System Info - `transformers` version: 4.35.0 - Platform: Linux-6.5.6-76060506-generic-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - Accelerate version: 0.24.1 - Accelerate config: not found - PyTorch version (GPU?): 2.2.0.dev20230907 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help? @amyeroberts ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction It is not possible to train VideoMAEForPreTraining with bfloat16, because the labels are always stored as float32. This code snippet triggers the error. ```python from transformers import AutoImageProcessor, VideoMAEForPreTraining import numpy as np import torch num_frames = 16 video = list(np.random.randint(0, 256, (num_frames, 3, 224, 224))) image_processor = AutoImageProcessor.from_pretrained("MCG-NJU/videomae-base") model = VideoMAEForPreTraining.from_pretrained("MCG-NJU/videomae-base",torch_dtype=torch.bfloat16).to("cuda") pixel_values = image_processor(video, return_tensors="pt").pixel_values num_patches_per_frame = (model.config.image_size // model.config.patch_size) ** 2 seq_length = (num_frames // model.config.tubelet_size) * num_patches_per_frame bool_masked_pos = torch.randint(0, 2, (1, seq_length)).bool() outputs = model(pixel_values.to(device=model.device,dtype=model.dtype), bool_masked_pos=bool_masked_pos) loss = outputs.loss loss.backward() ``` Full TraceBack ```bash RuntimeError Traceback (most recent call last) Cell In[1], line 20 17 outputs = model(pixel_values.to(device=model.device,dtype=model.dtype), bool_masked_pos=bool_masked_pos) 18 loss = outputs.loss ---> 20 loss.backward() File ~/miniconda3/envs/transformers/lib/python3.10/site-packages/torch/_tensor.py:492, in Tensor.backward(self, gradient, retain_graph, create_graph, inputs) 482 if has_torch_function_unary(self): 483 return handle_torch_function( 484 Tensor.backward, 485 (self,), (...) 490 inputs=inputs, 491 ) --> 492 torch.autograd.backward( 493 self, gradient, retain_graph, create_graph, inputs=inputs 494 ) File ~/miniconda3/envs/transformers/lib/python3.10/site-packages/torch/autograd/__init__.py:251, in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables, inputs) 246 retain_graph = create_graph 248 # The reason we repeat the same comment below is that 249 # some Python versions print out the first line of a multi-line function 250 # calls in the traceback and some print out the last line --> 251 Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass 252 tensors, 253 grad_tensors_, 254 retain_graph, 255 create_graph, 256 inputs, 257 allow_unreachable=True, 258 accumulate_grad=True, 259 ) RuntimeError: Found dtype Float but expected BFloat16 ``` The problem is that when computing the loss, the labels are in `float32` therefore, the returned loss is also in `float32`. ``` logits: torch.bfloat16 labels: torch.float32 loss: torch.float32 ``` ### Expected behavior Labels should be converted to the same dtype as the logits. This PR #27296 fixes the error. Altough I am not 100% sure that is the best way to handle the problem.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27295/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27295/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27294
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27294/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27294/comments
https://api.github.com/repos/huggingface/transformers/issues/27294/events
https://github.com/huggingface/transformers/issues/27294
1,977,914,522
I_kwDOCUB6oc515JSa
27,294
Large alloc warnings when running the Flax speech recognition example script.
{ "login": "pphuc25", "id": 81808312, "node_id": "MDQ6VXNlcjgxODA4MzEy", "avatar_url": "https://avatars.githubusercontent.com/u/81808312?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pphuc25", "html_url": "https://github.com/pphuc25", "followers_url": "https://api.github.com/users/pphuc25/followers", "following_url": "https://api.github.com/users/pphuc25/following{/other_user}", "gists_url": "https://api.github.com/users/pphuc25/gists{/gist_id}", "starred_url": "https://api.github.com/users/pphuc25/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pphuc25/subscriptions", "organizations_url": "https://api.github.com/users/pphuc25/orgs", "repos_url": "https://api.github.com/users/pphuc25/repos", "events_url": "https://api.github.com/users/pphuc25/events{/privacy}", "received_events_url": "https://api.github.com/users/pphuc25/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @sanchit-gandhi ", "Hey @pphuc25 - you can ignore this warning. It doesn't mean that training will be slower or that there's anything wrong with the script. _c.f._ https://stackoverflow.com/a/52399665. You can change the `tcmalloc` threshold if you want to hide these warnings, e.g. as per https://github.com/huggingface/distil-whisper/blob/914dcdf3919552d5a3826a9d5db99b059ddcc16e/training/flax/distillation_scripts/run_distillation_32_2_timestamped.sh#L3", "with my expriments, this make the training really slow on TPU, so I think it's really affect\r\nThe way I fix is reinstall library with jax[tpu]", "That's a different issue @pphuc25! You **definitely** want to be installing the TPU version of jax / jax-lib on TPU, otherwise you'll have no utilisation of the accelerator cores. This is unrelated to the `tcmalloc` allocation threshold. It simply requires you to install the correct version of JAX:\r\n1. Official instructions: https://jax.readthedocs.io/en/latest/installation.html#pip-installation-google-cloud-tpu\r\n2. More detailed instructions (possibly outdated): https://github.com/huggingface/transformers/tree/main/examples/research_projects/jax-projects#tpu-vm", "Oh, I see, thank you for supported" ]
1,699
1,701
1,701
CONTRIBUTOR
null
### System Info When I run the training code, a "large alloc" warning appears. I do not understand why this happens, and it makes the training progress really slow. I am currently running on a TPU v3-8. The warning looks like this: ``` tcmalloc: large alloc 13261799424 bytes == 0x53d90000 @ 0x7efe22acb680 0x7efe22aec824 0x7efe22aecb8a 0x7efd7854d2dc 0x7efd7343b7e5 0x7efd734462b0 0x7efd734521f7 0x7efd75b16a37 0x7efd733dc26a 0x7efd733dd56e 0x7efd730a3e04 0x7efd73077cbf 0x5f6939 0x5f7506 0x50b8d3 0x570556 0x5697da 0x5f6ec3 0x5f60b2 0x56ccfc 0x5697da 0x5f6ec3 0x59d21e 0x5f62ae 0x56ccfc 0x5697da 0x5f6ec3 0x5f60b2 0x56ccfc 0x5697da 0x5f6ec3 ``` ### Reproduction 1. Use the code in [run_flax_speech_recognition_seq2seq.py](https://github.com/huggingface/transformers/blob/main/examples/flax/speech-recognition/run_flax_speech_recognition_seq2seq.py) 2. Install the libraries ``` datasets[audio]>=2.14.0 jax>=0.3.6 jaxlib>=0.3.6 flax>=0.4.1 optax>=0.0.8 torch>=1.9.0 jiwer evaluate git+https://github.com/huggingface/transformers.git tokenizers accelerate soundfile>=0.12.1 audiomentations ``` 3. Run the command ``` python3 src/jax/train.py \ --model_name_or_path="openai/whisper-tiny" \ --dataset_name="mozilla-foundation/common_voice_12_0" \ --dataset_config_name="vi" \ --language="vi" \ --train_split_name="train+validation" \ --eval_split_name="test" \ --output_dir="./whisper-tiny-vi-flax-example" \ --per_device_train_batch_size="16" \ --per_device_eval_batch_size="8" \ --num_train_epochs="10" \ --learning_rate="1e-4" \ --accumulate_gradient_steps 2 \ --warmup_steps="500" \ --logging_steps="25" \ --generation_max_length="30" \ --preprocessing_num_workers="10" \ --dataloader_num_workers="10" \ --max_duration_in_seconds="10" \ --text_column_name="sentence" \ --overwrite_output_dir \ --do_train \ --do_eval \ --predict_with_generate \ --push_to_hub \ --use_auth_token ``` ### Expected behavior It would be nice to get an explanation of why this happens and a way to fix it.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27294/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27294/timeline
completed
null
null
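Two checks that follow from the resolution in the record above: the `tcmalloc` reports themselves are harmless and can be silenced by raising the report threshold, while the actual slowdown came from running a non-TPU JAX build. A minimal verification sketch (the threshold value is arbitrary, and the environment variable is normally exported in the shell before Python starts, so setting it here is for reference only):
```python
import os

# Raises the allocation size above which tcmalloc prints "large alloc" reports.
# Normally exported in the launch script before the process starts.
os.environ.setdefault("TCMALLOC_LARGE_ALLOC_REPORT_THRESHOLD", str(10**12))

import jax

print(jax.default_backend())  # expect "tpu" on a TPU VM (requires the jax[tpu] install)
print(jax.device_count())     # expect 8 on a v3-8
```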
https://api.github.com/repos/huggingface/transformers/issues/27293
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27293/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27293/comments
https://api.github.com/repos/huggingface/transformers/issues/27293/events
https://github.com/huggingface/transformers/issues/27293
1,977,875,244
I_kwDOCUB6oc514_ss
27,293
Shared tensors not correctly saved.
{ "login": "Luodian", "id": 15847405, "node_id": "MDQ6VXNlcjE1ODQ3NDA1", "avatar_url": "https://avatars.githubusercontent.com/u/15847405?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Luodian", "html_url": "https://github.com/Luodian", "followers_url": "https://api.github.com/users/Luodian/followers", "following_url": "https://api.github.com/users/Luodian/following{/other_user}", "gists_url": "https://api.github.com/users/Luodian/gists{/gist_id}", "starred_url": "https://api.github.com/users/Luodian/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Luodian/subscriptions", "organizations_url": "https://api.github.com/users/Luodian/orgs", "repos_url": "https://api.github.com/users/Luodian/repos", "events_url": "https://api.github.com/users/Luodian/events{/privacy}", "received_events_url": "https://api.github.com/users/Luodian/received_events", "type": "User", "site_admin": false }
[]
open
false
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[ { "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false } ]
[ "Thanks for reporting! I can reproduce, I'm fixing.", "Hmmm actually, it seems like it was just a mistake on my end, I cannot reproduce after trying again.\r\n\r\nIf you load the fuyu model, save it, and reload it once again, do you have an error? Does it only happen after fine-tuning?", "> Hmmm actually, it seems like it was just a mistake on my end, I cannot reproduce after trying again.\r\n> \r\n> If you load the fuyu model, save it, and reload it once again, do you have an error? Does it only happen after fine-tuning?\r\n\r\nohh I see, you dont have it on `transformers==4.36.0`?? I had this issue on two of my instances. Let's me figure out the details.\r\n\r\nI think the problem may comes from if you try to use `accelerator` + `deepspeed zero3` to wrap the model?\r\n\r\nThe error happens at my script in this line, you can take a look at the model specific configs if that could provide more contexts!\r\n\r\nhttps://github.com/Luodian/Otter/blob/ca69589b7e4475c9e87836de30e7fc91bbee74b6/pipeline/train/instruction_following.py#L523", "Thanks for sharing! I'm trying to reproduce", "But if reproducing it is difficult for your side, I could share you more information and my possible guess when I'm more available!\r\n\r\nNow at least I could run my all code with `4.35.0`. And I think this issue would also remind other users to prevent it somehow.", "Understood, it's likely it indeed comes from the safe serialization then.\r\nDo you have a command I can run using Otter? I'd like to dive in and see what may fail, I see you have different ways of saving the checkpoint:\r\n\r\nhttps://github.com/Luodian/Otter/blob/ca69589b7e4475c9e87836de30e7fc91bbee74b6/pipeline/train/train_utils.py#L229-L262", "Here's a minimal one:\r\n\r\n```\r\naccelerate launch --config_file=./pipeline/accelerate_configs/accelerate_config_zero2.yaml \\\r\n --num_processes=1 \\\r\n pipeline/train/instruction_following.py \\\r\n --pretrained_model_name_or_path=adept/fuyu-8b \\\r\n --training_data_yaml=./Demo_Data.yaml \\\r\n --model_name=fuyu \\\r\n --instruction_format=fuyu \\\r\n --batch_size=1 \\\r\n --gradient_accumulation_steps=2 \\\r\n --num_epochs=3 \\\r\n --external_save_dir=./checkpoints \\\r\n --run_name=Fuyu_Save_Tester \\\r\n --wandb_project=Fuyu \\\r\n --workers=${WORKERS} \\\r\n --lr_scheduler=cosine \\\r\n --learning_rate=1e-5 \\\r\n --warmup_steps_ratio=0.03 \\\r\n --save_hf_model \\\r\n --max_seq_len=1024 \\\r\n --logging_steps=1000 \\\r\n --keep_symbols \\\r\n --save_ckpt_each_epoch \\\r\n --dynamic_resolution \\\r\n --with_task_description\r\n```\r\n\r\nThe data could be set at `Demo_Data.yaml` and the files are downloaded from:\r\n`instruction.json` file at [here](https://entuedu-my.sharepoint.com/:f:/g/personal/libo0013_e_ntu_edu_sg/Eo9bgNV5cjtEswfA-HfjNNABiKsjDzSWAl5QYAlRZPiuZA?e=nNUhJH) and the `images.parquet` file at [here](https://entuedu-my.sharepoint.com/:f:/g/personal/libo0013_e_ntu_edu_sg/EmwHqgRtYtBNryTcFmrGWCgBjvWQMo1XeCN250WuM2_51Q?e=sCymXx).", "> Understood, it's likely it indeed comes from the safe serialization then. Do you have a command I can run using Otter? I'd like to dive in and see what may fail, I see you have different ways of saving the checkpoint:\r\n> \r\n> https://github.com/Luodian/Otter/blob/ca69589b7e4475c9e87836de30e7fc91bbee74b6/pipeline/train/train_utils.py#L229-L262\r\n\r\nbasically the errors comes from \r\n`--save_hf_model`", "Ok, let me try this. Are you tying weights yourself that weren't originally tied in the fuyu model?", "> Ok, let me try this. 
Are you tying weights yourself that weren't originally tied in the fuyu model?\r\n\r\nWhat does this mean sorry? I didnt try other models, I think Fuyu also didnt have specifically manipulation of model saving process. It calls the `save_pretrained` from `modeling_utils.py` I guess?", "Ok that works no problem :)\r\nI was just making sure you weren't tying some weights yourself within the model, as this might go wrong on reload.\r\n\r\nI'm currently debugging your script, will report here.\r\n\r\nEdit: handed it out to the fantastic @muellerzr ", "oh yes, the model weights are directly from `adept/fuyu-8b`. \r\n\r\nBut we have our implementations inside `modeling_persimmon.py`, this is the base LLM of Fuyu. It mainly about the throughput optimization (improved 4x) and integration of flash attention and fused operators. \r\n\r\nDoes them count for the error? I think both the following versions would err with `4.36.0`.\r\n\r\n1. One of my instance has flash attention so it calls our version of `modeling_persimmon.py`. \r\n2. Another instance didnt have flash attention so it calls transformers `modeling_persimmon.py`\r\n\r\nThe logic is:\r\n```python\r\ntry:\r\n from .modeling_persimmon import PersimmonForCausalLM\r\n\r\n print(\"Using local PersimmonForCausalLM with Flash Attention\")\r\nexcept ImportError:\r\n from transformers import PersimmonForCausalLM\r\n\r\n print(\"Using transformers PersimmonForCausalLM without Flash Attention\")\r\n```", "Working on trying to reproduce this :) ", "Successfully could reproduce, a minimal repr is below:\r\n\r\n```python\r\nimport torch\r\nfrom accelerate import Accelerator\r\nfrom accelerate.utils import DeepSpeedPlugin, HfDeepSpeedConfig\r\nfrom transformers import AutoModelForCausalLM\r\nfrom transformers.modeling_utils import unwrap_model\r\n\r\ntransformers_config = HfDeepSpeedConfig({\r\n \"train_micro_batch_size_per_gpu\": 2,\r\n \"gradient_accumulation_steps\": 2,\r\n \"gradient_clipping\": 1.0,\r\n \"offload_optimizer_device\": None,\r\n \"offload_param_device\": None,\r\n \"zero3_init_flag\": False,\r\n \"zero_optimization\": {\r\n \"stage\": 2,\r\n },\r\n})\r\n\r\nplugin = DeepSpeedPlugin(transformers_config)\r\n\r\naccelerator = Accelerator(deepspeed_plugin=plugin)\r\n\r\nmodel_name = \"bert-base-cased\"\r\nmodel = AutoModelForCausalLM.from_pretrained(model_name)\r\n\r\nopt = torch.optim.Adam(model.parameters(), lr=1e-5)\r\n\r\nmodel, opt = accelerator._prepare_deepspeed(model, opt)\r\n\r\nstate_dict = accelerator.get_state_dict(model)\r\n\r\nmodel = unwrap_model(model)\r\nmodel.save_pretrained(\r\n \"testing_fuyu_8b\",\r\n state_dict=state_dict,\r\n safe_serialization=True\r\n)\r\n```", "@Luodian can you try again when `num_processes >1`? I couldn't reproduce it. \r\n\r\nI can only reproduce your main example here because currently Accelerate doesn't really support single-GPU deepspeed", "sorry Im little busy these days, I may report it later, but may not very soon.", "I have the same issue, “Removed shared tensor”. Transformers 4.35.2, using deepspeed on 1 gpu. Following the comments here, I disabled deepspeed and now it is saving correctly. \r\n\r\nI imagine if you are getting this error, you are running deepspeed on a 1 gpu machine. ", "But I truly had this issue when using 2 or 8 GPUs with deepspeed zero3.\n🧐", "Agree. I got the same issue when I just ran it on my 8gpu instance with deepspeed. 
I even downgraded to 4.35.0 and still have the same issue.\r\n\r\nbasically my code saves a bert module in one folder, and saves the overall model in another folder. I hypothesize that when saving with safetensors, if it noticed that you are saving duplicate weights and biases, it saves the full thing once and when you try re-saving it, it will remove the shared modules (to save on disk space, I guess). In my case, it was removing all of my layers except for the tail Embedding layer.\r\n\r\nluckily for me, setting safe_serialization = False fixed it for me. I hope you can figure out how to fix yours too @Luodian ", "Bu the way, in case it matters, I am using deepspeed zero stage 0, but for Trainer it only began to use dp16 and gradient checkpointing and stuff when I pass the deepspeed config (even though it is stage 0)", "> Agree. I got the same issue when I just ran it on my 8gpu instance with deepspeed. I even downgraded to 4.35.0 and still have the same issue.\r\n> \r\n> basically my code saves a bert module in one folder, and saves the overall model in another folder. I hypothesize that when saving with safetensors, if it noticed that you are saving duplicate weights and biases, it saves the full thing once and when you try re-saving it, it will remove the shared modules (to save on disk space, I guess). In my case, it was removing all of my layers except for the tail Embedding layer.\r\n> \r\n> luckily for me, setting safe_serialization = False fixed it for me. I hope you can figure out how to fix yours too @Luodian\r\n\r\nyes, I use `4.35.1` and `safe_serialization=False` solved my issue. And I am also gonna fix in this version until this issue be full addressed (in deepspeed zero0/1/2/3, and multi-gpus).", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Encountered the error while using:\r\n```\r\naccelerate=0.25.0\r\ntransformers=4.36.2\r\n```\r\nSingle gpu, not using deepspeed. Accelerate config:\r\n```yaml\r\ncompute_environment: LOCAL_MACHINE\r\ndebug: false\r\ndistributed_type: 'NO'\r\ndowncast_bf16: 'no'\r\ndynamo_config:\r\n dynamo_backend: INDUCTOR\r\ngpu_ids: '0'\r\nmachine_rank: 0\r\nmain_training_function: main\r\nmixed_precision: bf16\r\nnum_machines: 1\r\nnum_processes: 1\r\nrdzv_backend: static\r\nsame_network: true\r\ntpu_env: []\r\ntpu_use_cluster: false\r\ntpu_use_sudo: false\r\nuse_cpu: false\r\n```\r\n\r\nWhen calling `accelerator.save_state(dir)` to save a flan-t5-small model, I get:\r\n```\r\nRemoved shared tensor {'decoder.embed_tokens.weight', 'encoder.embed_tokens.weight'} while saving. This should be OK, but check by verifying that you don't receive any warning while reloading\r\n```\r\nand then when I reload the model `accelerator.load_state(dir)`, I get:\r\n```\r\nRuntimeError: Error(s) in loading state_dict for T5ForConditionalGeneration:\r\n Missing key(s) in state_dict: \"encoder.embed_tokens.weight\", \"decoder.embed_tokens.weight\".\r\n```\r\nCalling `accelerator.save_state(dir, safe_serialization=False)` works, but doesn't solve the underlying problem. Calling `accelerate.save_state(dir)` and then `accelerate.load_state(dir)` shouldn't throw an error. Why is `safe_serialization` removing these two shared tensors? 
Not sure what is the best solution, but this should be automatically handled.", "Hi @GabPrato, \r\nI had the same issue with Accelerate while using for single GPU. \r\nUsing, `safe_serialization=False` in accelerate.save_state() resolved it. ", "i have the same issue", "cc @muellerzr @pacman100 ", "Hi all, please explain more about how you're using `Accelerator.save_state()` here please? We don't expose that part of the API in the `Trainer`, so *how* that is being called could be the root of our issue (as the before error fully and completely passes)\r\n\r\nAs well as full and complete code", "I tried with every latest version of the packages, and the reproducer runs for me.\r\n\r\nHappy to help whenever there's a reproducer !", "I run the same code with the following codes and got the issue.\r\nhttps://github.com/huggingface/trl/issues/1121", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,699
1,708
null
NONE
null
### System Info - `transformers` version: 4.36.0.dev0 - Platform: Linux-4.19.0-25-cloud-amd64-x86_64-with-glibc2.28 - Python version: 3.9.17 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.2 - Accelerate version: 0.24.1 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1+cu118 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: 8*A100 - Using distributed or parallel set-up in script?: accelerate + deepspeed zero3 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I am finetuning Fuyu-8B and found the code for calling `model.save_pretrained` method would run into error after upgrading to `4.36.0`. The error shows: ``` Removed shared tensor {'language_model.model.layers.12.self_attn.dense.weight', 'language_model.model.layers.22.self_attn.k_layernorm.weight', 'language_model.model.layers.24.mlp.dense_h_to_4h.bias', 'language_model.model.layers.15.mlp.dense_h_to_4h.weight', 'language_model.model.layers.22.input_layernorm.weight', 'language_model.model.layers.25.self_attn.q_layernorm.weight', 'language_model.model.layers.8.self_attn.query_key_value.bias', 'language_model.model.layers.33.mlp.dense_4h_to_h.bias', 'language_model.model.layers.6.post_attention_layernorm.weight', 'language_model.model.layers.30.self_attn.query_key_value.weight', 'language_model.model.layers.5.self_attn.query_key_value.weight', 'language_model.model.layers.10.mlp.dense_h_to_4h.bias', 'language_model.model.layers.5.post_attention_layernorm.weight', 'language_model.model.layers.15.mlp.dense_4h_to_h.bias', 'language_model.model.layers.2.self_attn.query_key_value.bias', 'language_model.model.layers.4.input_layernorm.bias', 'language_model.model.layers.25.self_attn.k_layernorm.weight', 'language_model.model.layers.29.self_attn.query_key_value.weight', 'language_model.model.layers.13.self_attn.query_key_value.bias', 'language_model.lm_head.weight', 'language_model.model.layers.6.mlp.dense_h_to_4h.weight', 'language_model.model.layers.13.mlp.dense_4h_to_h.weight', 'language_model.model.layers.14.mlp.dense_h_to_4h.weight', 'language_model.model.layers.31.mlp.dense_h_to_4h.weight', 'language_model.model.layers.32.input_layernorm.weight', 'language_model.model.layers.19.mlp.dense_4h_to_h.bias', 'language_model.model.layers.24.self_attn.dense.bias', 'language_model.model.layers.5.self_attn.query_key_value.bias', 'language_model.model.layers.7.mlp.dense_4h_to_h.bias', 'language_model.model.layers.10.self_attn.query_key_value.bias', 'language_model.model.layers.18.mlp.dense_h_to_4h.weight', 'language_model.model.layers.29.post_attention_layernorm.bias', 'language_model.model.layers.11.self_attn.dense.weight', 'language_model.model.layers.28.self_attn.query_key_value.weight', 'language_model.model.layers.14.mlp.dense_4h_to_h.weight', 'language_model.model.layers.15.mlp.dense_4h_to_h.weight', 'language_model.model.layers.35.mlp.dense_4h_to_h.weight', 'language_model.model.layers.17.post_attention_layernorm.bias', 'language_model.model.layers.23.mlp.dense_h_to_4h.bias', 'language_model.model.layers.15.mlp.dense_h_to_4h.bias', 'language_model.model.final_layernorm.weight', 'language_model.model.layers.6.mlp.dense_4h_to_h.weight', 
'language_model.model.layers.29.input_layernorm.weight', 'language_model.model.layers.13.self_attn.q_layernorm.bias', 'language_model.model.layers.6.self_attn.dense.weight', 'language_model.model.layers.22.self_attn.query_key_value.weight', 'language_model.model.layers.35.post_attention_layernorm.bias', 'language_model.model.layers.23.self_attn.dense.bias', 'language_model.model.layers.16.self_attn.k_layernorm.weight', 'language_model.model.layers.32.self_attn.dense.weight', 'language_model.model.layers.25.self_attn.dense.bias', 'language_model.model.layers.9.self_attn.query_key_value.bias', 'language_model.model.layers.25.self_attn.k_layernorm.bias', 'language_model.model.layers.3.mlp.dense_h_to_4h.weight', 'language_model.model.layers.21.self_attn.q_layernorm.weight', 'language_model.model.layers.32.post_attention_layernorm.bias', 'language_model.model.layers.33.self_attn.q_layernorm.weight', 'language_model.model.layers.2.post_attention_layernorm.bias', 'language_model.model.layers.20.mlp.dense_4h_to_h.bias', 'language_model.model.layers.4.self_attn.k_layernorm.bias', 'language_model.model.layers.29.mlp.dense_4h_to_h.weight', 'language_model.model.layers.32.self_attn.dense.bias', 'language_model.model.layers.8.mlp.dense_h_to_4h.weight', 'language_model.model.layers.34.self_attn.query_key_value.bias', 'language_model.model.layers.35.self_attn.k_layernorm.bias', 'language_model.model.layers.4.post_attention_layernorm.bias', 'language_model.model.layers.28.mlp.dense_4h_to_h.bias', 'language_model.model.layers.8.self_attn.q_layernorm.bias', 'language_model.model.layers.32.self_attn.k_layernorm.weight', 'language_model.model.layers.28.self_attn.dense.weight', 'language_model.model.layers.31.mlp.dense_4h_to_h.bias', 'language_model.model.layers.0.mlp.dense_4h_to_h.weight', 'language_model.model.layers.11.mlp.dense_h_to_4h.weight', 'language_model.model.layers.29.mlp.dense_4h_to_h.bias', 'language_model.model.layers.19.mlp.dense_h_to_4h.weight', 'language_model.model.layers.12.post_attention_layernorm.weight', 'language_model.model.layers.7.self_attn.query_key_value.weight', 'language_model.model.layers.13.input_layernorm.weight', 'language_model.model.layers.31.mlp.dense_h_to_4h.bias', 'language_model.model.layers.0.self_attn.k_layernorm.bias', 'language_model.model.layers.34.self_attn.q_layernorm.bias', 'language_model.model.layers.1.self_attn.k_layernorm.weight', 'language_model.model.layers.35.self_attn.q_layernorm.weight', 'language_model.model.layers.29.self_attn.k_layernorm.bias', 'language_model.model.layers.34.mlp.dense_4h_to_h.weight', 'language_model.model.layers.30.mlp.dense_h_to_4h.bias', 'language_model.model.layers.0.input_layernorm.bias', 'language_model.model.layers.18.self_attn.query_key_value.weight', 'language_model.model.layers.1.mlp.dense_h_to_4h.bias', 'language_model.model.layers.26.mlp.dense_h_to_4h.weight', 'language_model.model.layers.8.post_attention_layernorm.weight', 'language_model.model.layers.18.self_attn.dense.bias', 'language_model.model.layers.30.mlp.dense_4h_to_h.bias', 'language_model.model.layers.7.mlp.dense_h_to_4h.bias', 'language_model.model.layers.31.self_attn.dense.weight', 'language_model.model.layers.9.self_attn.query_key_value.weight', 'language_model.model.layers.12.input_layernorm.bias', 'language_model.model.layers.14.self_attn.q_layernorm.weight', 'language_model.model.layers.28.self_attn.dense.bias', 'language_model.model.layers.6.self_attn.q_layernorm.bias', 'language_model.model.layers.30.self_attn.query_key_value.bias', 
'language_model.model.layers.11.self_attn.q_layernorm.weight', 'language_model.model.layers.33.self_attn.dense.bias', 'language_model.model.layers.14.mlp.dense_h_to_4h.bias', 'language_model.model.layers.14.mlp.dense_4h_to_h.bias', 'language_model.model.layers.12.mlp.dense_h_to_4h.weight', 'language_model.model.layers.10.self_attn.dense.weight', 'language_model.model.layers.5.self_attn.k_layernorm.weight', 'language_model.model.layers.33.mlp.dense_h_to_4h.weight', 'language_model.model.layers.17.mlp.dense_4h_to_h.weight', 'language_model.model.layers.19.self_attn.dense.bias', 'language_model.model.layers.4.mlp.dense_4h_to_h.bias', 'language_model.model.layers.19.self_attn.query_key_value.weight', 'language_model.model.layers.8.input_layernorm.bias', 'language_model.model.layers.6.self_attn.k_layernorm.bias', 'language_model.model.layers.31.self_attn.dense.bias', 'language_model.model.layers.25.self_attn.query_key_value.bias', 'language_model.model.layers.34.self_attn.q_layernorm.weight', 'language_model.model.layers.7.input_layernorm.bias', 'language_model.model.layers.2.self_attn.k_layernorm.bias', 'language_model.model.layers.29.self_attn.q_layernorm.bias', 'language_model.model.layers.16.self_attn.query_key_value.bias', 'language_model.model.layers.35.mlp.dense_h_to_4h.weight', 'language_model.model.layers.35.post_attention_layernorm.weight', 'language_model.model.layers.1.self_attn.dense.weight', 'language_model.model.layers.4.mlp.dense_h_to_4h.bias', 'language_model.model.layers.15.input_layernorm.bias', 'language_model.model.layers.4.post_attention_layernorm.weight', 'language_model.model.layers.14.input_layernorm.weight', 'language_model.model.layers.22.mlp.dense_4h_to_h.bias', 'language_model.model.layers.11.input_layernorm.weight', 'language_model.model.layers.27.self_attn.k_layernorm.bias', 'language_model.model.layers.18.mlp.dense_4h_to_h.bias', 'language_model.model.layers.25.mlp.dense_h_to_4h.bias', 'language_model.model.layers.32.input_layernorm.bias', 'language_model.model.layers.10.mlp.dense_h_to_4h.weight', 'language_model.model.layers.14.self_attn.k_layernorm.weight', 'language_model.model.layers.8.post_attention_layernorm.bias', 'language_model.model.layers.27.self_attn.dense.bias', 'language_model.model.layers.21.self_attn.k_layernorm.weight', 'language_model.model.layers.27.self_attn.q_layernorm.weight', 'language_model.model.layers.30.self_attn.dense.weight', 'language_model.model.layers.23.mlp.dense_4h_to_h.bias', 'language_model.model.layers.18.post_attention_layernorm.weight', 'language_model.model.layers.22.self_attn.q_layernorm.weight', 'language_model.model.layers.13.self_attn.dense.bias', 'language_model.model.layers.14.self_attn.query_key_value.bias', 'language_model.model.layers.10.self_attn.k_layernorm.bias', 'language_model.model.layers.34.input_layernorm.bias', 'language_model.model.layers.3.post_attention_layernorm.bias', 'language_model.model.layers.5.input_layernorm.weight', 'language_model.model.layers.8.self_attn.query_key_value.weight', 'language_model.model.layers.27.post_attention_layernorm.bias', 'language_model.model.layers.28.mlp.dense_h_to_4h.weight', 'language_model.model.layers.28.self_attn.q_layernorm.weight', 'language_model.model.layers.5.mlp.dense_4h_to_h.weight', 'language_model.model.layers.19.self_attn.dense.weight', 'language_model.model.layers.21.input_layernorm.weight', 'language_model.model.layers.14.post_attention_layernorm.bias', 'language_model.model.layers.35.self_attn.query_key_value.bias', 
'language_model.model.layers.10.mlp.dense_4h_to_h.weight', 'language_model.model.layers.17.self_attn.q_layernorm.bias', 'language_model.model.layers.25.input_layernorm.bias', 'language_model.model.layers.34.self_attn.dense.weight', 'language_model.model.layers.34.input_layernorm.weight', 'language_model.model.layers.5.self_attn.k_layernorm.bias', 'language_model.model.layers.2.mlp.dense_4h_to_h.weight', 'language_model.model.layers.11.self_attn.dense.bias', 'language_model.model.layers.17.mlp.dense_4h_to_h.bias', 'language_model.model.layers.13.mlp.dense_4h_to_h.bias', 'language_model.model.layers.21.self_attn.query_key_value.weight', 'language_model.model.lay 207 Saved checkpoint at epoch 1. ``` I try to set the `safe_serialization=False`, the warning disappear but the saved pytorch_model.bin only 2MB, comparing to around 18GB originally (using 4.35.0). ### Expected behavior See above
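For readers hitting the same near-empty `pytorch_model.bin`, a plausible explanation is that under DeepSpeed ZeRO-3 the parameters live as shards on each rank, so saving the wrapped model directly can write almost nothing. Below is a minimal sketch of the usual workaround — consolidating the state dict through `accelerate` before calling `save_pretrained`. The model id, output directory, and the assumption that the training loop goes through an `Accelerator` are illustrative, not taken from the reporter's actual script.

```python
# Sketch only: gather full weights from ZeRO-3 shards before saving.
# "gpt2" stands in for the real (much larger) Fuyu checkpoint, and
# "checkpoint-dir" is a placeholder output path.
from accelerate import Accelerator
from transformers import AutoModelForCausalLM

accelerator = Accelerator()
model = accelerator.prepare(AutoModelForCausalLM.from_pretrained("gpt2"))

# ... training loop ...

accelerator.wait_for_everyone()
unwrapped = accelerator.unwrap_model(model)
unwrapped.save_pretrained(
    "checkpoint-dir",
    is_main_process=accelerator.is_main_process,
    save_function=accelerator.save,
    state_dict=accelerator.get_state_dict(model),  # consolidates the sharded ZeRO-3 weights
)
```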
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27293/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27293/timeline
reopened
null
null
https://api.github.com/repos/huggingface/transformers/issues/27292
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27292/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27292/comments
https://api.github.com/repos/huggingface/transformers/issues/27292/events
https://github.com/huggingface/transformers/issues/27292
1,977,819,033
I_kwDOCUB6oc514x-Z
27,292
Error inferencing on 2x A100 GPUs
{ "login": "RonanKMcGovern", "id": 78278410, "node_id": "MDQ6VXNlcjc4Mjc4NDEw", "avatar_url": "https://avatars.githubusercontent.com/u/78278410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/RonanKMcGovern", "html_url": "https://github.com/RonanKMcGovern", "followers_url": "https://api.github.com/users/RonanKMcGovern/followers", "following_url": "https://api.github.com/users/RonanKMcGovern/following{/other_user}", "gists_url": "https://api.github.com/users/RonanKMcGovern/gists{/gist_id}", "starred_url": "https://api.github.com/users/RonanKMcGovern/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RonanKMcGovern/subscriptions", "organizations_url": "https://api.github.com/users/RonanKMcGovern/orgs", "repos_url": "https://api.github.com/users/RonanKMcGovern/repos", "events_url": "https://api.github.com/users/RonanKMcGovern/events{/privacy}", "received_events_url": "https://api.github.com/users/RonanKMcGovern/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @RonanKMcGovern \r\nThanks for the issue! :D \r\nOne thing that is a bit unusual here is that you are forcing the whole model to fit on a single GPU (through `device_map={\"\": \"cuda:0\"}`) for a 40B model in bfloat16. I don't think such a model should fit in a single A100 GPU for generation. Since you force the model to be loaded on a single GPU with `{\"\": \"cuda:0\"}` it might led to something wrong. \r\nCan you share more details about your GPU hardware (40GB or 80GB RAM?) and how you ran your script with `\"auto\"` for the device map", "Hi @younesbelkada .\r\n\r\n- The A100s are 80 GB.\r\n- You're right the model won't fit on one, so doing '{\"\":\"cuda:0\"}' doesn't make sense. I simply misunderstood what that did. In any case, I first had the error with 'auto' and then tried '{\"\":\"cuda:0\"}' (which in hindsight is wrong).\r\n\r\nWhen running with auto, I used the same script just with:\r\n```\r\ndevice_map='auto'\r\n```\r\n\r\nwhen loading the model", "@younesbelkada was it possible to replicate?", "Hi @younesbelkada , moving to the last stable transformers release seems to have solved the issue:\r\n```\r\n- `transformers` version: 4.35.0\r\n- Platform: Linux-5.4.0-153-generic-x86_64-with-glibc2.35\r\n- Python version: 3.10.12\r\n- Huggingface_hub version: 0.17.3\r\n- Safetensors version: 0.4.0\r\n- Accelerate version: 0.25.0.dev0\r\n- Accelerate config: \tnot found\r\n- PyTorch version (GPU?): 2.1.0+cu118 (True)\r\n```\r\nBTW, the same repro above with the dev version is now giving:\r\n```\r\nThe attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.\r\nSetting `pad_token_id` to `eos_token_id`:11 for open-end generation.\r\n---------------------------------------------------------------------------\r\nRuntimeError Traceback (most recent call last)\r\nCell In[3], line 11\r\n 9 # Basic test of model generation\r\n 10 prompt = \"The quick brown fox\"\r\n---> 11 generated_text = generate_simple(model, tokenizer, prompt)\r\n 12 print(generated_text)\r\n\r\nCell In[3], line 6, in generate_simple(model, tokenizer, prompt)\r\n 4 def generate_simple(model, tokenizer, prompt):\r\n 5 inputs = tokenizer.encode(prompt, return_tensors=\"pt\").to(\"cuda\")\r\n----> 6 outputs = model.generate(inputs, max_length=50)\r\n 7 return tokenizer.decode(outputs[0], skip_special_tokens=True)\r\n\r\nFile /usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py:115, in context_decorator.<locals>.decorate_context(*args, **kwargs)\r\n 112 @functools.wraps(func)\r\n 113 def decorate_context(*args, **kwargs):\r\n 114 with ctx_factory():\r\n--> 115 return func(*args, **kwargs)\r\n\r\nFile /usr/local/lib/python3.10/dist-packages/transformers/generation/utils.py:1753, in GenerationMixin.generate(self, inputs, generation_config, logits_processor, stopping_criteria, prefix_allowed_tokens_fn, synced_gpus, assistant_model, streamer, negative_prompt_ids, negative_prompt_attention_mask, **kwargs)\r\n 1736 return self.assisted_decoding(\r\n 1737 input_ids,\r\n 1738 assistant_model=assistant_model,\r\n (...)\r\n 1749 **model_kwargs,\r\n 1750 )\r\n 1751 if generation_mode == GenerationMode.GREEDY_SEARCH:\r\n 1752 # 11. 
run greedy search\r\n-> 1753 return self.greedy_search(\r\n 1754 input_ids,\r\n 1755 logits_processor=logits_processor,\r\n 1756 stopping_criteria=stopping_criteria,\r\n 1757 pad_token_id=generation_config.pad_token_id,\r\n 1758 eos_token_id=generation_config.eos_token_id,\r\n 1759 output_scores=generation_config.output_scores,\r\n 1760 return_dict_in_generate=generation_config.return_dict_in_generate,\r\n 1761 synced_gpus=synced_gpus,\r\n 1762 streamer=streamer,\r\n 1763 **model_kwargs,\r\n 1764 )\r\n 1766 elif generation_mode == GenerationMode.CONTRASTIVE_SEARCH:\r\n 1767 if not model_kwargs[\"use_cache\"]:\r\n\r\nFile /usr/local/lib/python3.10/dist-packages/transformers/generation/utils.py:2614, in GenerationMixin.greedy_search(self, input_ids, logits_processor, stopping_criteria, max_length, pad_token_id, eos_token_id, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, synced_gpus, streamer, **model_kwargs)\r\n 2611 model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs)\r\n 2613 # forward pass to get next token\r\n-> 2614 outputs = self(\r\n 2615 **model_inputs,\r\n 2616 return_dict=True,\r\n 2617 output_attentions=output_attentions,\r\n 2618 output_hidden_states=output_hidden_states,\r\n 2619 )\r\n 2621 if synced_gpus and this_peer_finished:\r\n 2622 continue # don't waste resources running the code we don't need\r\n\r\nFile /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1518, in Module._wrapped_call_impl(self, *args, **kwargs)\r\n 1516 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]\r\n 1517 else:\r\n-> 1518 return self._call_impl(*args, **kwargs)\r\n\r\nFile /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1527, in Module._call_impl(self, *args, **kwargs)\r\n 1522 # If we don't have any hooks, we want to skip the rest of the logic in\r\n 1523 # this function, and just call forward.\r\n 1524 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks\r\n 1525 or _global_backward_pre_hooks or _global_backward_hooks\r\n 1526 or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1527 return forward_call(*args, **kwargs)\r\n 1529 try:\r\n 1530 result = None\r\n\r\nFile /usr/local/lib/python3.10/dist-packages/accelerate/hooks.py:164, in add_hook_to_module.<locals>.new_forward(module, *args, **kwargs)\r\n 162 output = module._old_forward(*args, **kwargs)\r\n 163 else:\r\n--> 164 output = module._old_forward(*args, **kwargs)\r\n 165 return module._hf_hook.post_forward(module, output)\r\n\r\nFile /usr/local/lib/python3.10/dist-packages/transformers/models/falcon/modeling_falcon.py:1175, in FalconForCausalLM.forward(self, input_ids, past_key_values, attention_mask, position_ids, head_mask, inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict)\r\n 1166 r\"\"\"\r\n 1167 labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):\r\n 1168 Labels for language modeling. Note that the labels **are shifted** inside the model, i.e. 
you can set\r\n 1169 `labels = input_ids` Indices are selected in `[-100, 0, ..., config.vocab_size]` All labels set to `-100`\r\n 1170 are ignored (masked), the loss is only computed for labels in `[0, ..., config.vocab_size]`\r\n 1171 \"\"\"\r\n 1173 return_dict = return_dict if return_dict is not None else self.config.use_return_dict\r\n-> 1175 transformer_outputs = self.transformer(\r\n 1176 input_ids,\r\n 1177 past_key_values=past_key_values,\r\n 1178 attention_mask=attention_mask,\r\n 1179 position_ids=position_ids,\r\n 1180 head_mask=head_mask,\r\n 1181 inputs_embeds=inputs_embeds,\r\n 1182 use_cache=use_cache,\r\n 1183 output_attentions=output_attentions,\r\n 1184 output_hidden_states=output_hidden_states,\r\n 1185 return_dict=return_dict,\r\n 1186 )\r\n 1187 hidden_states = transformer_outputs[0]\r\n 1189 lm_logits = self.lm_head(hidden_states)\r\n\r\nFile /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1518, in Module._wrapped_call_impl(self, *args, **kwargs)\r\n 1516 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]\r\n 1517 else:\r\n-> 1518 return self._call_impl(*args, **kwargs)\r\n\r\nFile /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1527, in Module._call_impl(self, *args, **kwargs)\r\n 1522 # If we don't have any hooks, we want to skip the rest of the logic in\r\n 1523 # this function, and just call forward.\r\n 1524 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks\r\n 1525 or _global_backward_pre_hooks or _global_backward_hooks\r\n 1526 or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1527 return forward_call(*args, **kwargs)\r\n 1529 try:\r\n 1530 result = None\r\n\r\nFile /usr/local/lib/python3.10/dist-packages/transformers/models/falcon/modeling_falcon.py:1054, in FalconModel.forward(self, input_ids, past_key_values, attention_mask, position_ids, head_mask, inputs_embeds, use_cache, output_attentions, output_hidden_states, return_dict)\r\n 1042 outputs = self._gradient_checkpointing_func(\r\n 1043 block.__call__,\r\n 1044 hidden_states,\r\n (...)\r\n 1051 output_attentions,\r\n 1052 )\r\n 1053 else:\r\n-> 1054 outputs = block(\r\n 1055 hidden_states,\r\n 1056 layer_past=layer_past,\r\n 1057 attention_mask=attention_mask,\r\n 1058 position_ids=position_ids,\r\n 1059 head_mask=head_mask[i],\r\n 1060 use_cache=use_cache,\r\n 1061 output_attentions=output_attentions,\r\n 1062 alibi=alibi,\r\n 1063 )\r\n 1065 hidden_states = outputs[0]\r\n 1066 if use_cache is True:\r\n\r\nFile /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1518, in Module._wrapped_call_impl(self, *args, **kwargs)\r\n 1516 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]\r\n 1517 else:\r\n-> 1518 return self._call_impl(*args, **kwargs)\r\n\r\nFile /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1527, in Module._call_impl(self, *args, **kwargs)\r\n 1522 # If we don't have any hooks, we want to skip the rest of the logic in\r\n 1523 # this function, and just call forward.\r\n 1524 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks\r\n 1525 or _global_backward_pre_hooks or _global_backward_hooks\r\n 1526 or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1527 return forward_call(*args, **kwargs)\r\n 1529 try:\r\n 1530 result = None\r\n\r\nFile /usr/local/lib/python3.10/dist-packages/accelerate/hooks.py:164, in add_hook_to_module.<locals>.new_forward(module, 
*args, **kwargs)\r\n 162 output = module._old_forward(*args, **kwargs)\r\n 163 else:\r\n--> 164 output = module._old_forward(*args, **kwargs)\r\n 165 return module._hf_hook.post_forward(module, output)\r\n\r\nFile /usr/local/lib/python3.10/dist-packages/transformers/models/falcon/modeling_falcon.py:766, in FalconDecoderLayer.forward(self, hidden_states, alibi, attention_mask, position_ids, layer_past, head_mask, use_cache, output_attentions, **kwargs)\r\n 763 attention_layernorm_out = self.input_layernorm(hidden_states)\r\n 765 # Self attention.\r\n--> 766 attn_outputs = self.self_attention(\r\n 767 attention_layernorm_out,\r\n 768 layer_past=layer_past,\r\n 769 attention_mask=attention_mask,\r\n 770 position_ids=position_ids,\r\n 771 alibi=alibi,\r\n 772 head_mask=head_mask,\r\n 773 use_cache=use_cache,\r\n 774 output_attentions=output_attentions,\r\n 775 **kwargs,\r\n 776 )\r\n 778 attention_output = attn_outputs[0]\r\n 780 if not self.config.new_decoder_architecture:\r\n\r\nFile /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1518, in Module._wrapped_call_impl(self, *args, **kwargs)\r\n 1516 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]\r\n 1517 else:\r\n-> 1518 return self._call_impl(*args, **kwargs)\r\n\r\nFile /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1527, in Module._call_impl(self, *args, **kwargs)\r\n 1522 # If we don't have any hooks, we want to skip the rest of the logic in\r\n 1523 # this function, and just call forward.\r\n 1524 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks\r\n 1525 or _global_backward_pre_hooks or _global_backward_hooks\r\n 1526 or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1527 return forward_call(*args, **kwargs)\r\n 1529 try:\r\n 1530 result = None\r\n\r\nFile /usr/local/lib/python3.10/dist-packages/accelerate/hooks.py:164, in add_hook_to_module.<locals>.new_forward(module, *args, **kwargs)\r\n 162 output = module._old_forward(*args, **kwargs)\r\n 163 else:\r\n--> 164 output = module._old_forward(*args, **kwargs)\r\n 165 return module._hf_hook.post_forward(module, output)\r\n\r\nFile /usr/local/lib/python3.10/dist-packages/transformers/models/falcon/modeling_falcon.py:593, in FalconFlashAttention2.forward(self, hidden_states, alibi, attention_mask, position_ids, layer_past, head_mask, use_cache, output_attentions, **kwargs)\r\n 590 key_layer = key_layer.to(target_dtype)\r\n 591 value_layer = value_layer.to(target_dtype)\r\n--> 593 attn_output = self._flash_attention_forward(\r\n 594 query_layer, key_layer, value_layer, attention_mask, query_length, dropout=attn_dropout\r\n 595 )\r\n 597 attn_weights = attn_output.reshape(batch_size, query_length, self.num_heads * self.head_dim)\r\n 598 attn_output = self.dense(attn_weights)\r\n\r\nFile /usr/local/lib/python3.10/dist-packages/transformers/models/falcon/modeling_falcon.py:653, in FalconFlashAttention2._flash_attention_forward(self, query_states, key_states, value_states, attention_mask, query_length, dropout, softmax_scale)\r\n 651 attn_output = pad_input(attn_output_unpad, indices_q, batch_size, query_length)\r\n 652 else:\r\n--> 653 attn_output = flash_attn_func(\r\n 654 query_states, key_states, value_states, dropout, softmax_scale=softmax_scale, causal=self.is_causal\r\n 655 )\r\n 657 return attn_output\r\n\r\nFile /usr/local/lib/python3.10/dist-packages/flash_attn/flash_attn_interface.py:705, in flash_attn_func(q, k, v, dropout_p, softmax_scale, causal, 
window_size, return_attn_probs)\r\n 652 def flash_attn_func(\r\n 653 q,\r\n 654 k,\r\n (...)\r\n 660 return_attn_probs=False,\r\n 661 ):\r\n 662 \"\"\"dropout_p should be set to 0.0 during evaluation\r\n 663 Supports multi-query and grouped-query attention (MQA/GQA) by passing in KV with fewer heads\r\n 664 than Q. Note that the number of heads in Q must be divisible by the number of heads in KV.\r\n (...)\r\n 703 pattern (negative means that location was dropped, nonnegative means it was kept).\r\n 704 \"\"\"\r\n--> 705 return FlashAttnFunc.apply(\r\n 706 q, k, v, dropout_p, softmax_scale, causal, window_size, return_attn_probs\r\n 707 )\r\n\r\nFile /usr/local/lib/python3.10/dist-packages/torch/autograd/function.py:539, in Function.apply(cls, *args, **kwargs)\r\n 536 if not torch._C._are_functorch_transforms_active():\r\n 537 # See NOTE: [functorch vjp and autograd interaction]\r\n 538 args = _functorch.utils.unwrap_dead_wrappers(args)\r\n--> 539 return super().apply(*args, **kwargs) # type: ignore[misc]\r\n 541 if cls.setup_context == _SingleLevelFunction.setup_context:\r\n 542 raise RuntimeError(\r\n 543 \"In order to use an autograd.Function with functorch transforms \"\r\n 544 \"(vmap, grad, jvp, jacrev, ...), it must override the setup_context \"\r\n 545 \"staticmethod. For more details, please see \"\r\n 546 \"https://pytorch.org/docs/master/notes/extending.func.html\"\r\n 547 )\r\n\r\nFile /usr/local/lib/python3.10/dist-packages/flash_attn/flash_attn_interface.py:434, in FlashAttnFunc.forward(ctx, q, k, v, dropout_p, softmax_scale, causal, window_size, return_softmax)\r\n 432 if softmax_scale is None:\r\n 433 softmax_scale = q.shape[-1] ** (-0.5)\r\n--> 434 out, q, k, v, out_padded, softmax_lse, S_dmask, rng_state = _flash_attn_forward(\r\n 435 q,\r\n 436 k,\r\n 437 v,\r\n 438 dropout_p,\r\n 439 softmax_scale,\r\n 440 causal=causal,\r\n 441 window_size=window_size,\r\n 442 return_softmax=return_softmax and dropout_p > 0,\r\n 443 )\r\n 444 ctx.save_for_backward(q, k, v, out_padded, softmax_lse, rng_state)\r\n 445 ctx.dropout_p = dropout_p\r\n\r\nFile /usr/local/lib/python3.10/dist-packages/flash_attn/flash_attn_interface.py:47, in _flash_attn_forward(q, k, v, dropout_p, softmax_scale, causal, window_size, return_softmax)\r\n 45 maybe_contiguous = lambda x: x.contiguous() if x.stride(-1) != 1 else x\r\n 46 q, k, v = [maybe_contiguous(x) for x in (q, k, v)]\r\n---> 47 out, q, k, v, out_padded, softmax_lse, S_dmask, rng_state = flash_attn_cuda.fwd(\r\n 48 q,\r\n 49 k,\r\n 50 v,\r\n 51 None,\r\n 52 dropout_p,\r\n 53 softmax_scale,\r\n 54 causal,\r\n 55 window_size[0],\r\n 56 window_size[1],\r\n 57 return_softmax,\r\n 58 None,\r\n 59 )\r\n 60 return out, q, k, v, out_padded, softmax_lse, S_dmask, rng_state\r\n\r\nRuntimeError: Number of heads in key/value must divide number of heads in query\r\n```", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,699
1,702
1,702
NONE
null
### System Info I'm getting this issue with 2x A100s for inference (or training too), but I don't get the issue with 4x A6000s. *Environment* ``` - `transformers` version: 4.36.0.dev0 - Platform: Linux-5.4.0-149-generic-x86_64-with-glibc2.35 - Python version: 3.10.6 - Huggingface_hub version: 0.18.0 - Safetensors version: 0.4.0 - Accelerate version: 0.25.0.dev0 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1+cu118 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes, 2x A100s. - Using distributed or parallel set-up in script?: Model split across gpus. ``` ### Who can help? @ArthurZucker @younesbelkada ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction *Reproduction:* ``` from transformers import AutoTokenizer, AutoModelForCausalLM import torch model_id = "tiiuae/falcon-40B" model = AutoModelForCausalLM.from_pretrained( model_id, device_map={"":"cuda:0"}, # I also tried 'auto' # trust_remote_code=False, torch_dtype=torch.bfloat16, use_flash_attention_2=True, # works with Llama models and reduces memory reqs) import torch import gc tokenizer = AutoTokenizer.from_pretrained(model_id,use_fast=True) def generate_simple(model, tokenizer, prompt): inputs = tokenizer.encode(prompt, return_tensors="pt").to("cuda") outputs = model.generate(inputs, max_length=50) return tokenizer.decode(outputs[0], skip_special_tokens=True) # Basic test of model generation prompt = "The quick brown fox" generated_text = generate_simple(model, tokenizer, prompt) print(generated_text) ``` *Error* ``` The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results. Setting `pad_token_id` to `eos_token_id`:11 for open-end generation. --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) Cell In[9], line 13 11 # Basic test of model generation 12 prompt = "The quick brown fox" ---> 13 generated_text = generate_simple(model, tokenizer, prompt) 14 print(generated_text) 16 # Clear GPU cache and run garbage collection Cell In[9], line 8, in generate_simple(model, tokenizer, prompt) 6 def generate_simple(model, tokenizer, prompt): 7 inputs = tokenizer.encode(prompt, return_tensors="pt").to("cuda") ----> 8 outputs = model.generate(inputs, max_length=50) 9 return tokenizer.decode(outputs[0], skip_special_tokens=True) File /usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py:115, in context_decorator.<locals>.decorate_context(*args, **kwargs) 112 @functools.wraps(func) 113 def decorate_context(*args, **kwargs): 114 with ctx_factory(): --> 115 return func(*args, **kwargs) File /usr/local/lib/python3.10/dist-packages/transformers/generation/utils.py:1753, in GenerationMixin.generate(self, inputs, generation_config, logits_processor, stopping_criteria, prefix_allowed_tokens_fn, synced_gpus, assistant_model, streamer, negative_prompt_ids, negative_prompt_attention_mask, **kwargs) 1736 return self.assisted_decoding( 1737 input_ids, 1738 assistant_model=assistant_model, (...) 1749 **model_kwargs, 1750 ) 1751 if generation_mode == GenerationMode.GREEDY_SEARCH: 1752 # 11. 
run greedy search -> 1753 return self.greedy_search( 1754 input_ids, 1755 logits_processor=logits_processor, 1756 stopping_criteria=stopping_criteria, 1757 pad_token_id=generation_config.pad_token_id, 1758 eos_token_id=generation_config.eos_token_id, 1759 output_scores=generation_config.output_scores, 1760 return_dict_in_generate=generation_config.return_dict_in_generate, 1761 synced_gpus=synced_gpus, 1762 streamer=streamer, 1763 **model_kwargs, 1764 ) 1766 elif generation_mode == GenerationMode.CONTRASTIVE_SEARCH: 1767 if not model_kwargs["use_cache"]: File /usr/local/lib/python3.10/dist-packages/transformers/generation/utils.py:2614, in GenerationMixin.greedy_search(self, input_ids, logits_processor, stopping_criteria, max_length, pad_token_id, eos_token_id, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, synced_gpus, streamer, **model_kwargs) 2611 model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs) 2613 # forward pass to get next token -> 2614 outputs = self( 2615 **model_inputs, 2616 return_dict=True, 2617 output_attentions=output_attentions, 2618 output_hidden_states=output_hidden_states, 2619 ) 2621 if synced_gpus and this_peer_finished: 2622 continue # don't waste resources running the code we don't need File /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs) 1496 # If we don't have any hooks, we want to skip the rest of the logic in 1497 # this function, and just call forward. 1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks 1499 or _global_backward_pre_hooks or _global_backward_hooks 1500 or _global_forward_hooks or _global_forward_pre_hooks): -> 1501 return forward_call(*args, **kwargs) 1502 # Do not call functions when jit is used 1503 full_backward_hooks, non_full_backward_hooks = [], [] File /usr/local/lib/python3.10/dist-packages/transformers/models/falcon/modeling_falcon.py:1246, in FalconForCausalLM.forward(self, input_ids, past_key_values, attention_mask, position_ids, head_mask, inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict) 1237 r""" 1238 labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): 1239 Labels for language modeling. Note that the labels **are shifted** inside the model, i.e. you can set 1240 `labels = input_ids` Indices are selected in `[-100, 0, ..., config.vocab_size]` All labels set to `-100` 1241 are ignored (masked), the loss is only computed for labels in `[0, ..., config.vocab_size]` 1242 """ 1244 return_dict = return_dict if return_dict is not None else self.config.use_return_dict -> 1246 transformer_outputs = self.transformer( 1247 input_ids, 1248 past_key_values=past_key_values, 1249 attention_mask=attention_mask, 1250 position_ids=position_ids, 1251 head_mask=head_mask, 1252 inputs_embeds=inputs_embeds, 1253 use_cache=use_cache, 1254 output_attentions=output_attentions, 1255 output_hidden_states=output_hidden_states, 1256 return_dict=return_dict, 1257 ) 1258 hidden_states = transformer_outputs[0] 1260 lm_logits = self.lm_head(hidden_states) File /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs) 1496 # If we don't have any hooks, we want to skip the rest of the logic in 1497 # this function, and just call forward. 
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks 1499 or _global_backward_pre_hooks or _global_backward_hooks 1500 or _global_forward_hooks or _global_forward_pre_hooks): -> 1501 return forward_call(*args, **kwargs) 1502 # Do not call functions when jit is used 1503 full_backward_hooks, non_full_backward_hooks = [], [] File /usr/local/lib/python3.10/dist-packages/transformers/models/falcon/modeling_falcon.py:1122, in FalconModel.forward(self, input_ids, past_key_values, attention_mask, position_ids, head_mask, inputs_embeds, use_cache, output_attentions, output_hidden_states, return_dict) 1110 outputs = self._gradient_checkpointing_func( 1111 block.__call__, 1112 hidden_states, (...) 1119 output_attentions, 1120 ) 1121 else: -> 1122 outputs = block( 1123 hidden_states, 1124 layer_past=layer_past, 1125 attention_mask=attention_mask, 1126 position_ids=position_ids, 1127 head_mask=head_mask[i], 1128 use_cache=use_cache, 1129 output_attentions=output_attentions, 1130 alibi=alibi, 1131 ) 1133 hidden_states = outputs[0] 1134 if use_cache is True: File /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs) 1496 # If we don't have any hooks, we want to skip the rest of the logic in 1497 # this function, and just call forward. 1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks 1499 or _global_backward_pre_hooks or _global_backward_hooks 1500 or _global_forward_hooks or _global_forward_pre_hooks): -> 1501 return forward_call(*args, **kwargs) 1502 # Do not call functions when jit is used 1503 full_backward_hooks, non_full_backward_hooks = [], [] File /usr/local/lib/python3.10/dist-packages/transformers/models/falcon/modeling_falcon.py:796, in FalconDecoderLayer.forward(self, hidden_states, alibi, attention_mask, position_ids, layer_past, head_mask, use_cache, output_attentions, **kwargs) 793 attention_layernorm_out = self.input_layernorm(hidden_states) 795 # Self attention. --> 796 attn_outputs = self.self_attention( 797 attention_layernorm_out, 798 layer_past=layer_past, 799 attention_mask=attention_mask, 800 position_ids=position_ids, 801 alibi=alibi, 802 head_mask=head_mask, 803 use_cache=use_cache, 804 output_attentions=output_attentions, 805 **kwargs, 806 ) 808 attention_output = attn_outputs[0] 810 if not self.config.new_decoder_architecture: File /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs) 1496 # If we don't have any hooks, we want to skip the rest of the logic in 1497 # this function, and just call forward. 
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks 1499 or _global_backward_pre_hooks or _global_backward_hooks 1500 or _global_forward_hooks or _global_forward_pre_hooks): -> 1501 return forward_call(*args, **kwargs) 1502 # Do not call functions when jit is used 1503 full_backward_hooks, non_full_backward_hooks = [], [] File /usr/local/lib/python3.10/dist-packages/transformers/models/falcon/modeling_falcon.py:559, in FalconFlashAttention2.forward(self, hidden_states, alibi, attention_mask, position_ids, layer_past, head_mask, use_cache, output_attentions, **kwargs) 556 # overwrite attention_mask with padding_mask 557 attention_mask = kwargs.pop("padding_mask") --> 559 fused_qkv = self.query_key_value(hidden_states) # [batch_size, seq_length, 3 x hidden_size] 560 num_kv_heads = self.num_heads if self.new_decoder_architecture else self.num_kv_heads 561 # 3 x [batch_size, seq_length, num_heads, head_dim] File /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs) 1496 # If we don't have any hooks, we want to skip the rest of the logic in 1497 # this function, and just call forward. 1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks 1499 or _global_backward_pre_hooks or _global_backward_hooks 1500 or _global_forward_hooks or _global_forward_pre_hooks): -> 1501 return forward_call(*args, **kwargs) 1502 # Do not call functions when jit is used 1503 full_backward_hooks, non_full_backward_hooks = [], [] File /usr/local/lib/python3.10/dist-packages/transformers/models/falcon/modeling_falcon.py:68, in FalconLinear.forward(self, input) 67 def forward(self, input: torch.Tensor) -> torch.Tensor: ---> 68 hidden_states = input @ self.weight.T 69 if self.bias is None: 70 return hidden_states RuntimeError: CUDA error: CUBLAS_STATUS_NOT_INITIALIZED when calling `cublasCreate(handle)` ``` ### Expected behavior Inference (and training) should work fine, just as is the case with 1x A100 or 6x A6000s.
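As the comments for this issue note, forcing the whole 40B model onto `cuda:0` cannot work on a single 80 GB card. A hedged sketch of the two-GPU loading pattern the discussion points toward is shown here; the `max_memory` values are illustrative head-room figures, and `use_flash_attention_2` is left out since the dev-version flash-attention path is where the later errors appeared in this thread.

```python
# Sketch only: let accelerate shard Falcon-40B across two 80GB A100s instead of
# forcing the whole model onto cuda:0. max_memory values are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tiiuae/falcon-40B"
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",                    # shard layers across both GPUs
    max_memory={0: "75GiB", 1: "75GiB"},  # leave head-room on each 80GB card
    torch_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
inputs = tokenizer("The quick brown fox", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_length=50)[0], skip_special_tokens=True))
```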
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27292/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27292/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27291
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27291/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27291/comments
https://api.github.com/repos/huggingface/transformers/issues/27291/events
https://github.com/huggingface/transformers/pull/27291
1,977,739,921
PR_kwDOCUB6oc5enYzX
27,291
Translate the English tokenizer_summary.md to Chinese
{ "login": "ZouJiu1", "id": 34758215, "node_id": "MDQ6VXNlcjM0NzU4MjE1", "avatar_url": "https://avatars.githubusercontent.com/u/34758215?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ZouJiu1", "html_url": "https://github.com/ZouJiu1", "followers_url": "https://api.github.com/users/ZouJiu1/followers", "following_url": "https://api.github.com/users/ZouJiu1/following{/other_user}", "gists_url": "https://api.github.com/users/ZouJiu1/gists{/gist_id}", "starred_url": "https://api.github.com/users/ZouJiu1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ZouJiu1/subscriptions", "organizations_url": "https://api.github.com/users/ZouJiu1/orgs", "repos_url": "https://api.github.com/users/ZouJiu1/repos", "events_url": "https://api.github.com/users/ZouJiu1/events{/privacy}", "received_events_url": "https://api.github.com/users/ZouJiu1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "> Thanks for the translation! Remember to add it to [`source/zh/_toctree.yml`](https://github.com/huggingface/transformers/blob/main/docs/source/zh/_toctree.yml) so it gets built :)\r\n\r\nI add the file to the _toctree.yml file.\r\n\r\n", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27291). All of your documentation changes will be reflected on that endpoint." ]
1,699
1,699
1,699
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Revise translate the en tokenizer_summary.md to Chinese ## Before submitting - [✔] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [✔ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ✔] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ✔] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @stevhliu and @MKhalusova Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27291/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27291/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27291", "html_url": "https://github.com/huggingface/transformers/pull/27291", "diff_url": "https://github.com/huggingface/transformers/pull/27291.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27291.patch", "merged_at": 1699399911000 }
https://api.github.com/repos/huggingface/transformers/issues/27290
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27290/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27290/comments
https://api.github.com/repos/huggingface/transformers/issues/27290/events
https://github.com/huggingface/transformers/issues/27290
1,977,707,617
I_kwDOCUB6oc514Wxh
27,290
Bart vs T5 inference time
{ "login": "hadifar", "id": 7101287, "node_id": "MDQ6VXNlcjcxMDEyODc=", "avatar_url": "https://avatars.githubusercontent.com/u/7101287?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hadifar", "html_url": "https://github.com/hadifar", "followers_url": "https://api.github.com/users/hadifar/followers", "following_url": "https://api.github.com/users/hadifar/following{/other_user}", "gists_url": "https://api.github.com/users/hadifar/gists{/gist_id}", "starred_url": "https://api.github.com/users/hadifar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hadifar/subscriptions", "organizations_url": "https://api.github.com/users/hadifar/orgs", "repos_url": "https://api.github.com/users/hadifar/repos", "events_url": "https://api.github.com/users/hadifar/events{/privacy}", "received_events_url": "https://api.github.com/users/hadifar/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@hadifar can i work on this issue ? i think we can use quantization to reduce the size of the BART-base model and potentially improve its inference speed.", "Hey 🤗 You are running a summurization task, which depends a lot on how well the model performs because the generation time is bounded by how much text is generated. Longer does not necessarily means better performances. You should compare the forward of the model to get better estimates of the latency you can get. ", "@ArthurZucker @hadifar Absolutely, , Comparing the forward pass time for both models would indeed provide a more accurate measure of their inference speed. Additionally, consider the hardware on which the models are running, as different hardware configurations can impact performance.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,699
1,702
1,702
NONE
null
### System Info ``` - `transformers` version: 4.35.0 - Platform: Linux-5.15.120+-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.17.3 - Safetensors version: 0.4.0 - Accelerate version: not installed - Accelerate config: not found - PyTorch version (GPU?): 2.1.0+cu118 (False) - Tensorflow version (GPU?): 2.14.0 (False) - Flax version (CPU?/GPU?/TPU?): 0.7.4 (cpu) - Jax version: 0.4.16 - JaxLib version: 0.4.16 - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ``` ### Who can help? @ArthurZucker @younesbelkada @nar ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Here is the colab: https://colab.research.google.com/drive/1h_Ejm1YiyT7JOYMS-HrMkx6uNmiaiVbY?usp=sharing ### Expected behavior I expect similar inference times for Bart-base and t5-base. However, t5-base is 3x faster. As far as I know, BART has a larger context window but overall it is a smaller model (almost 2/3 of the parameters of t5).
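A minimal sketch of the forward-only comparison the maintainer recommends in the comments (timing `generate` conflates latency with how much text each model decides to produce). The sample text, run count, and CPU-only timing below are arbitrary choices for illustration, not taken from the linked Colab.

```python
# Sketch: compare one forward pass per model instead of full generation,
# so differing output lengths don't dominate the measurement.
import time
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

def time_forward(model_id: str, text: str, n_runs: int = 20) -> float:
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_id).eval()
    inputs = tokenizer(text, return_tensors="pt")
    decoder_input_ids = torch.tensor([[model.config.decoder_start_token_id]])
    with torch.no_grad():
        start = time.perf_counter()
        for _ in range(n_runs):
            model(**inputs, decoder_input_ids=decoder_input_ids)
    return (time.perf_counter() - start) / n_runs

sample = "The quick brown fox jumps over the lazy dog. " * 20
for model_id in ("facebook/bart-base", "t5-base"):
    print(model_id, f"{time_forward(model_id, sample):.4f}s per forward pass")
```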
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27290/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27290/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27289
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27289/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27289/comments
https://api.github.com/repos/huggingface/transformers/issues/27289/events
https://github.com/huggingface/transformers/issues/27289
1,977,653,461
I_kwDOCUB6oc514JjV
27,289
[Efficiency] Decoding can be made faster by not converting special tokens to ids for each token.
{ "login": "ganeshpatelQB", "id": 139111971, "node_id": "U_kgDOCEquIw", "avatar_url": "https://avatars.githubusercontent.com/u/139111971?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ganeshpatelQB", "html_url": "https://github.com/ganeshpatelQB", "followers_url": "https://api.github.com/users/ganeshpatelQB/followers", "following_url": "https://api.github.com/users/ganeshpatelQB/following{/other_user}", "gists_url": "https://api.github.com/users/ganeshpatelQB/gists{/gist_id}", "starred_url": "https://api.github.com/users/ganeshpatelQB/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ganeshpatelQB/subscriptions", "organizations_url": "https://api.github.com/users/ganeshpatelQB/orgs", "repos_url": "https://api.github.com/users/ganeshpatelQB/repos", "events_url": "https://api.github.com/users/ganeshpatelQB/events{/privacy}", "received_events_url": "https://api.github.com/users/ganeshpatelQB/received_events", "type": "User", "site_admin": false }
[ { "id": 2796628563, "node_id": "MDU6TGFiZWwyNzk2NjI4NTYz", "url": "https://api.github.com/repos/huggingface/transformers/labels/WIP", "name": "WIP", "color": "234C99", "default": false, "description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress" } ]
open
false
null
[]
[ "Very good catch! I'll open a pr for this. Affect both `convert_ids_to_tokens` and decode. 🤗 I need to do some benchmarking as I suspect this does won't have a huge impact but will give it a shot. I plan to benchmark our full calls to make sure we don't have things similar to this else where", "My initial tests did not show any impact with NLLB and whisper which have the most amount of added tokens, but I'll try to optimize and benchmark in a near futur! " ]
1,699
1,704
null
NONE
null
### System Info - `transformers` version: 4.29.0.dev0 - Platform: macOS-14.0-arm64-arm-64bit - Python version: 3.11.4 - Huggingface_hub version: 0.13.3 - Safetensors version: not installed - PyTorch version (GPU?): 2.0.0 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @ArthurZucker ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction The following function is being called for each token while using decoding function. ```python from transformers import T5Tokenizer tokenizer = T5Tokenizer.from_pretrained(TOKENIZER_PATH) beams = tokenizer.batch_decode( outputs, skip_special_tokens=True ) ``` ```python @property def all_special_ids(self) -> List[int]: """ `List[int]`: List the ids of the special tokens(`'<unk>'`, `'<cls>'`, etc.) mapped to class attributes. """ all_toks = self.all_special_tokens all_ids = self.convert_tokens_to_ids(all_toks) return all_ids ``` ### Expected behavior all_special_ids should not be called for each token while decoding at the time of inferencing.
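A small illustration of the cost being described: `all_special_ids` rebuilds the id list from `all_special_tokens` on every access, so per-token checks during decoding pay that conversion repeatedly. Computing the set once on the caller's side shows the intended one-time lookup; the actual fix would live inside the tokenizer, as discussed in the comments. Here `"t5-base"` is only a stand-in for the unspecified `TOKENIZER_PATH` above.

```python
# Sketch: precompute the special-token ids once instead of re-deriving them per token.
# "t5-base" is a stand-in for the unspecified TOKENIZER_PATH in the report.
from transformers import T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
special_ids = set(tokenizer.all_special_ids)  # one-time conversion

def strip_special(ids):
    return [i for i in ids if i not in special_ids]

ids = tokenizer("translate English to German: hello", return_tensors="pt").input_ids[0].tolist()
print(tokenizer.decode(strip_special(ids)))
```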
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27289/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27289/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/27288
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27288/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27288/comments
https://api.github.com/repos/huggingface/transformers/issues/27288/events
https://github.com/huggingface/transformers/pull/27288
1,977,577,865
PR_kwDOCUB6oc5em5r1
27,288
Remove a redundant variable.
{ "login": "hi-sushanta", "id": 93595990, "node_id": "U_kgDOBZQpVg", "avatar_url": "https://avatars.githubusercontent.com/u/93595990?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hi-sushanta", "html_url": "https://github.com/hi-sushanta", "followers_url": "https://api.github.com/users/hi-sushanta/followers", "following_url": "https://api.github.com/users/hi-sushanta/following{/other_user}", "gists_url": "https://api.github.com/users/hi-sushanta/gists{/gist_id}", "starred_url": "https://api.github.com/users/hi-sushanta/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hi-sushanta/subscriptions", "organizations_url": "https://api.github.com/users/hi-sushanta/orgs", "repos_url": "https://api.github.com/users/hi-sushanta/repos", "events_url": "https://api.github.com/users/hi-sushanta/events{/privacy}", "received_events_url": "https://api.github.com/users/hi-sushanta/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27288). All of your documentation changes will be reflected on that endpoint." ]
1,699
1,699
1,699
CONTRIBUTOR
null
Remove the redundant variable from the feature_extraction.py file. Here's what it looks like now!: ![Screenshot (12)](https://github.com/huggingface/transformers/assets/93595990/931e2f09-08c7-413e-9204-3af856a3df22) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27288/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27288/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27288", "html_url": "https://github.com/huggingface/transformers/pull/27288", "diff_url": "https://github.com/huggingface/transformers/pull/27288.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27288.patch", "merged_at": 1699372668000 }
https://api.github.com/repos/huggingface/transformers/issues/27287
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27287/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27287/comments
https://api.github.com/repos/huggingface/transformers/issues/27287/events
https://github.com/huggingface/transformers/pull/27287
1,977,574,749
PR_kwDOCUB6oc5em5Fj
27,287
Fix device issue
{ "login": "grahamannett", "id": 7343667, "node_id": "MDQ6VXNlcjczNDM2Njc=", "avatar_url": "https://avatars.githubusercontent.com/u/7343667?v=4", "gravatar_id": "", "url": "https://api.github.com/users/grahamannett", "html_url": "https://github.com/grahamannett", "followers_url": "https://api.github.com/users/grahamannett/followers", "following_url": "https://api.github.com/users/grahamannett/following{/other_user}", "gists_url": "https://api.github.com/users/grahamannett/gists{/gist_id}", "starred_url": "https://api.github.com/users/grahamannett/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/grahamannett/subscriptions", "organizations_url": "https://api.github.com/users/grahamannett/orgs", "repos_url": "https://api.github.com/users/grahamannett/repos", "events_url": "https://api.github.com/users/grahamannett/events{/privacy}", "received_events_url": "https://api.github.com/users/grahamannett/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "this might need to be split into 3 lines as I edited the file online so no format/linter", "Hey! I'll ping @molbap as he recently re-worked that part! Thanks for reporting\r\n", "Hi @grahamannett, thanks! indeed this would cause errors, ideally it would be good to ensure all tensors are on the same device for that step. Before that step, the output of `FuyuProcessor` does send inputs to devices correctly, but at this stage the indexing can break it.\r\nCan you try, before the actual batch loop, something like this:\r\n\r\n```python\r\n device = continuous_embeddings[0].device\r\n word_embeddings = word_embeddings.to(device)\r\n image_patch_input_indices = image_patch_input_indices.to(device)\r\n```\r\nThis is assuming `continuous_embeddings` is the largest tensor.\r\n", "@molbap nah that does not work for me and continuous embeddings will most likely not be the largest tensor (especially if training), e.g.\r\n\r\n```python\r\n(Pdb) word_embeddings.shape\r\ntorch.Size([1, 1130, 4096])\r\n(Pdb) image_patch_input_indices.shape\r\ntorch.Size([1, 1140])\r\n(Pdb) continuous_embeddings[0].shape\r\ntorch.Size([792, 4096])\r\n```", "Believe there is also an issue with src_indices and dst_indices shapes if you are using multiple variable length inputs. Not really related to this issue but if anyone runs into it the easiest way is to change the raise ValueError to something like\r\n```\r\nif src_indices.shape[0] > continuous_embeddings[batch_idx].shape[0]:\r\n src_indices = src_indices[: continuous_embeddings[batch_idx].shape[0]]\r\n dst_indices = dst_indices[: src_indices.shape[0]]\r\n```\r\n\r\nIt probably is not necessary if the gather_continuous_embeddings is rewritten and would probably be quicker overall if it was (get rid of the for loop and unnecessary tensor allocations), but also given how large this model is and my resources it seems like very little of the actual time spent in forward/backward is related to this.", "@ArthurZucker @molbap it might be worth refactoring a fair amount of the fuyu related stuff. Ive ended up having to monkey patch various issues on both the model and processor. \r\n\r\nI have a list of what I've done to the model and processor but not sure if some of them are me just not understanding. Latest one is if you feed a bbox into the processor where the value of one of the ints isnt in the vocab, for instance make a bbox at 1007, 1008, 1009, 1010 (i think the vocab ends at 999 but images are 1080x1200 or something?). ", "For the bbox related issue, what happens specifically in your last example, raises a `KeyError`? I assume it's related to the original bounding box conversion function in here https://github.com/huggingface/transformers/blob/b54993aa94f744598129db32d7e1ec4e7c299099/src/transformers/models/fuyu/processing_fuyu.py#L175C1-L208C1, any ideas @pcuenca since I think you worked on that part?\r\n\r\n A note @grahamannett if you have found other issues, feel free to open a PR/Issue with a description + code reproduction separately for each, would be helpful!", "@grahamannett @molbap re: the bounding box problem, that's a good question! I had the same doubt when I ported the code, but I did not see any special code to handle those cases in the original repo, so I assumed all tokens had been added to the vocabulary. @grahamannett do you have a short snippet to reproduce?", "@pcuenca can you point me to where you even ported the code from? 
I don't really see anything similar in https://github.com/persimmon-ai-labs/adept-inference and fine-tuning with the current processor my results aren't good at all so would like to see what the original looked like.", "Hi @grahamannett! Your original PR is laser-focused on a very specific problem (thanks for that! 🙌). To avoid derailing it, I'd suggest you open a new issue where we can discuss about the bounding box processing method and your observation that not all possible coordinates appear to be backed by single tokens.\r\n\r\nI'll let @molbap work with you in assessing the best way to solve the problem your PR is trying to address.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,699
1,704
1,704
NONE
null
# What does this PR do? If using 48GB-or-smaller GPUs, where the model must be split across devices, the tensors are on different devices at this point. The issue occurs when using device_map="auto", as seen in the examples: https://huggingface.co/adept/fuyu-8b/discussions/44#6544c5e6ee7bbb5952bdebfb You get this error ``` output_embeddings[batch_idx, dst_indices] = continuous_embeddings[batch_idx][src_indices] ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^ RuntimeError: indices should be either on cpu or on the same device as the indexed tensor ``` This seems to fix it when running on 48GB cards, but I think there may still be other issues that are only noticeable on 24GB cards. ## Who can review? Probably @ArthurZucker, as I believe I saw them working on this before.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27287/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/transformers/issues/27287/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27287", "html_url": "https://github.com/huggingface/transformers/pull/27287", "diff_url": "https://github.com/huggingface/transformers/pull/27287.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27287.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/27286
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27286/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27286/comments
https://api.github.com/repos/huggingface/transformers/issues/27286/events
https://github.com/huggingface/transformers/issues/27286
1,977,567,931
I_kwDOCUB6oc5130q7
27,286
run_glue.py on stsb works on bert-base-uncased but fails on a finetuned BERT ckpt
{ "login": "BiEchi", "id": 60613238, "node_id": "MDQ6VXNlcjYwNjEzMjM4", "avatar_url": "https://avatars.githubusercontent.com/u/60613238?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BiEchi", "html_url": "https://github.com/BiEchi", "followers_url": "https://api.github.com/users/BiEchi/followers", "following_url": "https://api.github.com/users/BiEchi/following{/other_user}", "gists_url": "https://api.github.com/users/BiEchi/gists{/gist_id}", "starred_url": "https://api.github.com/users/BiEchi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BiEchi/subscriptions", "organizations_url": "https://api.github.com/users/BiEchi/orgs", "repos_url": "https://api.github.com/users/BiEchi/repos", "events_url": "https://api.github.com/users/BiEchi/events{/privacy}", "received_events_url": "https://api.github.com/users/BiEchi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I'm closing this issue as adding `problem_type='regression'` solves this issue." ]
1,699
1,699
1,699
NONE
null
### System Info - `transformers` version: 4.32.0.dev0 - Platform: Linux-3.10.0-1160.31.1.el7.x86_64-x86_64-with-glibc2.17 - Python version: 3.9.0 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - Accelerate version: 0.21.0 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1+cu117 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help? @ArthurZucker and @younesbelkada ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Working example: ``` transformers/examples/pytorch/text-classification/run_glue.py \ --model_name_or_path "bert-base-uncased" \ --task_name "stsb" \ --save_strategy no \ --do_train \ --do_eval \ --fp16 \ --ddp_timeout 180000 \ --max_seq_length 512 \ --per_device_train_batch_size 32 \ --per_device_eval_batch_size 32 \ --learning_rate 2e-5 \ --num_train_epochs 3 \ --output_dir $OUTPUT_DONTCARE_DIR \ --ignore_mismatched_sizes \ --overwrite_output_dir ``` Then I finetune this on MNLI and save it to $OUTPUT_MNLI_DIR. Then it fails using: ``` transformers/examples/pytorch/text-classification/run_glue.py \ --model_name_or_path $OUTPUT_MNLI_DIR \ --task_name "stsb" \ --save_strategy no \ --do_train \ --do_eval \ --fp16 \ --ddp_timeout 180000 \ --max_seq_length 512 \ --per_device_train_batch_size 32 \ --per_device_eval_batch_size 32 \ --learning_rate 2e-5 \ --num_train_epochs 3 \ --output_dir $OUTPUT_DONTCARE_DIR \ --ignore_mismatched_sizes \ --overwrite_output_dir ``` With error: ``` File "/global/scratch/users/asap7772/projects/crate4text/transformers/examples/pytorch/text-classification/run_glue.py", line 629, in <module> main() File "/global/scratch/users/asap7772/projects/crate4text/transformers/examples/pytorch/text-classification/run_glue.py", line 537, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/global/scratch/users/asap7772/projects/crate4text/transformers/src/transformers/trainer.py", line 1532, in train return inner_training_loop( File "/global/scratch/users/asap7772/projects/crate4text/transformers/src/transformers/trainer.py", line 1805, in _inner_training_loop tr_loss_step = self.training_step(model, inputs) File "/global/scratch/users/asap7772/projects/crate4text/transformers/src/transformers/trainer.py", line 2648, in training_step loss = self.compute_loss(model, inputs) File "/global/scratch/users/asap7772/projects/crate4text/transformers/src/transformers/trainer.py", line 2673, in compute_loss outputs = model(**inputs) File "/global/scratch/users/asap7772/miniconda3/envs/crate/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/global/scratch/users/asap7772/miniconda3/envs/crate/lib/python3.9/site-packages/torch/nn/parallel/distributed.py", line 1156, in forward output = self._run_ddp_forward(*inputs, **kwargs) File "/global/scratch/users/asap7772/miniconda3/envs/crate/lib/python3.9/site-packages/torch/nn/parallel/distributed.py", line 1110, in _run_ddp_forward return module_to_run(*inputs[0], **kwargs[0]) # type: ignore[index] File 
"/global/scratch/users/asap7772/miniconda3/envs/crate/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/global/scratch/users/asap7772/miniconda3/envs/crate/lib/python3.9/site-packages/accelerate/utils/operations.py", line 581, in forward return model_forward(*args, **kwargs) File "/global/scratch/users/asap7772/miniconda3/envs/crate/lib/python3.9/site-packages/accelerate/utils/operations.py", line 569, in __call__ return convert_to_fp32(self.model_forward(*args, **kwargs)) File "/global/scratch/users/asap7772/miniconda3/envs/crate/lib/python3.9/site-packages/torch/amp/autocast_mode.py", line 14, in decorate_autocast return func(*args, **kwargs) File "/global/scratch/users/asap7772/projects/crate4text/transformers/src/transformers/models/bert/modeling_bert.py", line 1597, in forward loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1)) File "/global/scratch/users/asap7772/miniconda3/envs/crate/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/global/scratch/users/asap7772/miniconda3/envs/crate/lib/python3.9/site-packages/torch/nn/modules/loss.py", line 1174, in forward return F.cross_entropy(input, target, weight=self.weight, File "/global/scratch/users/asap7772/miniconda3/envs/crate/lib/python3.9/site-packages/torch/nn/functional.py", line 3029, in cross_entropy return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing) RuntimeError: "nll_loss_forward_reduce_cuda_kernel_2d_index" not implemented for 'Float' ``` ### Expected behavior It should run without an error.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27286/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27286/timeline
completed
null
null
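A short sketch of the fix noted in the comments of issue 27286 above, i.e. passing `problem_type='regression'`. The checkpoint path is a placeholder for the MNLI-finetuned model (`$OUTPUT_MNLI_DIR`); the exact flags the reporter used are not shown in the thread, so treat this as one plausible way to apply the workaround rather than the reporter's actual command.

```python
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "path/to/mnli-finetuned-checkpoint",  # placeholder for $OUTPUT_MNLI_DIR
    num_labels=1,                         # STS-B is scored with a single regression value
    problem_type="regression",            # use MSELoss on float labels instead of CrossEntropyLoss
    ignore_mismatched_sizes=True,         # the 3-label MNLI head is re-initialized for the new task
)
```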
https://api.github.com/repos/huggingface/transformers/issues/27285
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27285/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27285/comments
https://api.github.com/repos/huggingface/transformers/issues/27285/events
https://github.com/huggingface/transformers/issues/27285
1,977,465,272
I_kwDOCUB6oc513bm4
27,285
Implement Cross Attention in LLAMA Model
{ "login": "eitamar-saraf", "id": 36766320, "node_id": "MDQ6VXNlcjM2NzY2MzIw", "avatar_url": "https://avatars.githubusercontent.com/u/36766320?v=4", "gravatar_id": "", "url": "https://api.github.com/users/eitamar-saraf", "html_url": "https://github.com/eitamar-saraf", "followers_url": "https://api.github.com/users/eitamar-saraf/followers", "following_url": "https://api.github.com/users/eitamar-saraf/following{/other_user}", "gists_url": "https://api.github.com/users/eitamar-saraf/gists{/gist_id}", "starred_url": "https://api.github.com/users/eitamar-saraf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eitamar-saraf/subscriptions", "organizations_url": "https://api.github.com/users/eitamar-saraf/orgs", "repos_url": "https://api.github.com/users/eitamar-saraf/repos", "events_url": "https://api.github.com/users/eitamar-saraf/events{/privacy}", "received_events_url": "https://api.github.com/users/eitamar-saraf/received_events", "type": "User", "site_admin": false }
[ { "id": 2648621985, "node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request", "name": "Feature request", "color": "FBCA04", "default": false, "description": "Request for a new feature" } ]
open
false
null
[]
[ "@amyeroberts @eitamar-saraf can i work on this ? i think Preparing separate embeddings for different modalities and Modifying query, key, and value matrices to attend to tokens might work ", "@amyeroberts can i work on this \r\n", "@shankarsharma8089 One thing to note is that Llama is a decoder-only model - which explains why it is implemented like so in in our modeling files. In general, we try to avoid changes which will complicate our forward passes or the model implementation. I don't know if there's any precedence for adapting existing models like this in the library cc @ArthurZucker who knows the LMs better than I do! ", "We do have this precedence, for example GPT-2. Even for encoder-only like Bert, we also allow to make it work as decoder, or even accept cross attention.\r\n\r\nHowever, at this moment, not sure if we would like to do such stuff: if it bring a lot of extra value + considering we have other priorities.", "Hey all! 🤗 \r\nWe do have a precedent for a few models, because we tried to support them in the `EncodeDecoderModel`, which required this. \r\n\r\nAs both @amyeroberts and @ydshieh pointed out, **this adds extra burden to the code** and unless the community really needs that features and we find it impactful ( meaning someone successfully trained this kind of models), we'd rather not change the code of transformers for a specific custom usage.\r\n\r\nI recommend you to simply [share it on the hub](https://huggingface.co/docs/transformers/custom_models), and add the link as a [`Llama ressources`](https://huggingface.co/docs/transformers/main/model_doc/llama#resources). It's not simple because the ROPE has to be adapted and there is no real theory behind it. " ]
1,699
1,700
null
NONE
null
### Feature request The current implementation of the LLAMA model in the Hugging Face Transformers repository supports self-attention layers as per the standard design of transformer models. I propose the addition of an option to use several or all attention layers as cross-attention layers instead of self-attention layers. Cross-attention layers are crucial for tasks where the model needs to attend to different inputs other than its own output (e.g., encoder-decoder tasks in translation, image-captioning, etc.). The option to use cross-attention would enhance the LLAMA model's capabilities for a broader range of applications. https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/modeling_llama.py ### Motivation My motivation for this proposal stems from the need to apply the LLAMA model to tasks that inherently require cross-modal attention mechanisms. The current limitation of self-attention only restricts its applicability. While self-attention mechanisms are effective for a range of tasks, the flexibility of cross-attention layers could extend the model's utility, allowing researchers and developers to tackle a wider variety of problems. ### Your contribution I am willing to assist in the implementation of this feature. While I am not an expert in decoder-only architecture, with the right guidance, I can help. I look forward to discussing this further with the maintainers of the repository. Thank you for considering my proposal.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27285/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 2 }
https://api.github.com/repos/huggingface/transformers/issues/27285/timeline
null
null
null
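For the feature request above (issue 27285), the maintainers suggest sharing such a variant as custom code on the Hub rather than changing `modeling_llama.py`. Purely as an illustration of what a cross-attention block computes, here is a generic, non-Llama sketch; it ignores rotary position embeddings, which the thread points out is the genuinely hard part, and every name in it is made up for the example.

```python
import torch
import torch.nn as nn


class SimpleCrossAttentionBlock(nn.Module):
    """Decoder hidden states attend to another modality's encoder outputs (illustrative only)."""

    def __init__(self, hidden_size: int, num_heads: int):
        super().__init__()
        self.norm = nn.LayerNorm(hidden_size)
        self.attn = nn.MultiheadAttention(hidden_size, num_heads, batch_first=True)

    def forward(self, hidden_states, encoder_states):
        # Queries come from the decoder stream; keys and values come from the encoder stream.
        residual = hidden_states
        attn_out, _ = self.attn(
            query=self.norm(hidden_states),
            key=encoder_states,
            value=encoder_states,
            need_weights=False,
        )
        return residual + attn_out


block = SimpleCrossAttentionBlock(hidden_size=64, num_heads=4)
decoder_states = torch.randn(2, 10, 64)  # (batch, target_len, hidden)
encoder_states = torch.randn(2, 7, 64)   # (batch, source_len, hidden)
print(block(decoder_states, encoder_states).shape)  # torch.Size([2, 10, 64])
```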
https://api.github.com/repos/huggingface/transformers/issues/27284
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27284/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27284/comments
https://api.github.com/repos/huggingface/transformers/issues/27284/events
https://github.com/huggingface/transformers/issues/27284
1,977,463,185
I_kwDOCUB6oc513bGR
27,284
Implement Cross Attention in LLAMA Model
{ "login": "eitamarSaraf", "id": 134390603, "node_id": "U_kgDOCAKjSw", "avatar_url": "https://avatars.githubusercontent.com/u/134390603?v=4", "gravatar_id": "", "url": "https://api.github.com/users/eitamarSaraf", "html_url": "https://github.com/eitamarSaraf", "followers_url": "https://api.github.com/users/eitamarSaraf/followers", "following_url": "https://api.github.com/users/eitamarSaraf/following{/other_user}", "gists_url": "https://api.github.com/users/eitamarSaraf/gists{/gist_id}", "starred_url": "https://api.github.com/users/eitamarSaraf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eitamarSaraf/subscriptions", "organizations_url": "https://api.github.com/users/eitamarSaraf/orgs", "repos_url": "https://api.github.com/users/eitamarSaraf/repos", "events_url": "https://api.github.com/users/eitamarSaraf/events{/privacy}", "received_events_url": "https://api.github.com/users/eitamarSaraf/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,699
1,706
1,699
NONE
null
### Feature request The current implementation of the LLAMA model in the Hugging Face Transformers repository supports self-attention layers as per the standard design of transformer models. I propose the addition of an option to use several or all attention layers as cross-attention layers instead of self-attention layers. Cross-attention layers are crucial for tasks where the model needs to attend to different inputs other than its own output (e.g., encoder-decoder tasks in translation, image-captioning, etc.). The option to use cross-attention would enhance the LLAMA model's capabilities for a broader range of applications. https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/modeling_llama.py ### Motivation My motivation for this proposal stems from the need to apply the LLAMA model to tasks that inherently require cross-modal attention mechanisms. The current limitation of self-attention only restricts its applicability. While self-attention mechanisms are effective for a range of tasks, the flexibility of cross-attention layers could extend the model's utility, allowing researchers and developers to tackle a wider variety of problems. ### Your contribution I am willing to assist in the implementation of this feature. While I am not an expert in decoder-only architecture, I believe with the right guidance, I can help. I look forward to discussing this further with the maintainers of the repository. Thank you for considering my proposal.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27284/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27284/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27283
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27283/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27283/comments
https://api.github.com/repos/huggingface/transformers/issues/27283/events
https://github.com/huggingface/transformers/pull/27283
1,977,361,739
PR_kwDOCUB6oc5emO86
27,283
translate model_sharing.md and llm_tutorial.md to chinese
{ "login": "jiaqiw09", "id": 60021713, "node_id": "MDQ6VXNlcjYwMDIxNzEz", "avatar_url": "https://avatars.githubusercontent.com/u/60021713?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jiaqiw09", "html_url": "https://github.com/jiaqiw09", "followers_url": "https://api.github.com/users/jiaqiw09/followers", "following_url": "https://api.github.com/users/jiaqiw09/following{/other_user}", "gists_url": "https://api.github.com/users/jiaqiw09/gists{/gist_id}", "starred_url": "https://api.github.com/users/jiaqiw09/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jiaqiw09/subscriptions", "organizations_url": "https://api.github.com/users/jiaqiw09/orgs", "repos_url": "https://api.github.com/users/jiaqiw09/repos", "events_url": "https://api.github.com/users/jiaqiw09/events{/privacy}", "received_events_url": "https://api.github.com/users/jiaqiw09/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@stevhliu\r\n\r\nHi,\r\n\r\nHere are last two files of Tutorials section. And I have a problem to translate `moving parts`in llm_tutorial.md.\r\nI think `moving parts` means \"there are many things that must all happen for success. But only one thing must go wrong for failure.\" And it's very hard to be translated as it is not direct by words. Do you think it's reasonable to translate it to \"components are complex and closely related \"\r\n\r\nBest", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27283). All of your documentation changes will be reflected on that endpoint.", "@stevhliu hi, I just fix the review and keep \"moving parts\" translation.\r\n\r\nBest" ]
1,699
1,699
1,699
CONTRIBUTOR
null
# What does this PR do? Part of #26803 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? _not necessary_ ## Who can review? @stevhliu
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27283/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27283/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27283", "html_url": "https://github.com/huggingface/transformers/pull/27283", "diff_url": "https://github.com/huggingface/transformers/pull/27283.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27283.patch", "merged_at": 1699400073000 }
https://api.github.com/repos/huggingface/transformers/issues/27282
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27282/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27282/comments
https://api.github.com/repos/huggingface/transformers/issues/27282/events
https://github.com/huggingface/transformers/pull/27282
1,977,308,856
PR_kwDOCUB6oc5emD7V
27,282
make torch.load a bit safer
{ "login": "julien-c", "id": 326577, "node_id": "MDQ6VXNlcjMyNjU3Nw==", "avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4", "gravatar_id": "", "url": "https://api.github.com/users/julien-c", "html_url": "https://github.com/julien-c", "followers_url": "https://api.github.com/users/julien-c/followers", "following_url": "https://api.github.com/users/julien-c/following{/other_user}", "gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}", "starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/julien-c/subscriptions", "organizations_url": "https://api.github.com/users/julien-c/orgs", "repos_url": "https://api.github.com/users/julien-c/repos", "events_url": "https://api.github.com/users/julien-c/events{/privacy}", "received_events_url": "https://api.github.com/users/julien-c/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27282). All of your documentation changes will be reflected on that endpoint.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "cc @LysandreJik is this stale or has it been done elsewhere since?", "Thank you! :)", "yay 1 more commit on the GOAT of codebases!!! happy:)", "Hi @julien-c, thanks for your work!\r\n\r\nI was building from `main` to use some features not distributed, but I found that `from_pretrained` no longer worked, and it might have something to do with this pr.\r\n\r\nThe code is as simple as this:\r\n```\r\nfrom transformers import AutoModelForCausalLM\r\nmodel = AutoModelForCausalLM.from_pretrained(\"facebook/opt-125m\")\r\n```\r\nAnd it raises\r\n```\r\nTraceback (most recent call last):\r\n File \"***/lib/python3.9/site-packages/transformers/modeling_utils.py\", line 520, in load_state_dict\r\n return torch.load(checkpoint_file, map_location=map_location, weights_only=True)\r\n File \"***/lib/python3.9/site-packages/torch/serialization.py\", line 607, in load\r\n return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)\r\n File \"***/lib/python3.9/site-packages/torch/serialization.py\", line 880, in _load\r\n unpickler = UnpicklerWrapper(data_file, **pickle_load_args)\r\nTypeError: 'weights_only' is an invalid keyword argument for Unpickler()\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"***/lib/python3.9/site-packages/transformers/modeling_utils.py\", line 524, in load_state_dict\r\n if f.read(7) == \"version\":\r\n File \"***/lib/python3.9/codecs.py\", line 322, in decode\r\n (result, consumed) = self._buffer_decode(data, self.errors, final)\r\nUnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 64: invalid start byte\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File ***, line 3, in <module>\r\n model = AutoModelForCausalLM.from_pretrained(\r\n File \"***/lib/python3.9/site-packages/transformers/models/auto/auto_factory.py\", line 566, in from_pretrained\r\n return model_class.from_pretrained(\r\n File \"***/lib/python3.9/site-packages/transformers/modeling_utils.py\", line 3430, in from_pretrained\r\n state_dict = load_state_dict(resolved_archive_file)\r\n File \"***/lib/python3.9/site-packages/transformers/modeling_utils.py\", line 536, in load_state_dict\r\n raise OSError(\r\nOSError: Unable to load weights from pytorch checkpoint file for '/home/***/.cache/huggingface/hub/models--facebook--opt-125m/snapshots/27dcfa74d334bc871f3234de431e71c6eeba5dd6/pytorch_model.bin' at '/home/***/.cache/huggingface/hub/models--facebook--opt-125m/snapshots/27dcfa74d334bc871f3234de431e71c6eeba5dd6/pytorch_model.bin'. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.\r\n```\r\n\r\nThe `TypeError` seems to be related to incompatible `pytorch`. I'm using `1.10.1+cu111`. I wonder maybe it's better to fallback to the original implementation in case of error and emit a one-time warning? \r\n\r\n~~As for `UnicodeDecodeError`, I don't really know where it comes from. 
It might be related to `codec` or `pickle`, but I'm not sure. I'm using `Python 3.9.18`. I also tried deleting the cache and download again, but it still didn't work.~~\r\n\r\n~~I also checked `from_tf=True` in the error, but it seems that `tensorflow` is required (which I don't have), so I think this shouldn't be the problem. After all, everything worked fine with `transformers 4.36.2` previously.~~\r\n\r\nFixing the `TypeError` shall eliminate the other errors.\r\n\r\nThank you for your time! If you need any help from me, feel free to ask.", "torch 1.10 is quite old, is there any way you'd be able to upgrade to a more recent torch?", "Sure, but the point is, `transformers` claims to support torch 1.10 [in its deps](https://github.com/huggingface/transformers/blob/932ad8af7a333875a36a9a2007d2601510b1f601/setup.py#L178C5-L178C28), but `weights_only` wasn't added to `torch.load` until 1.13 (see [here](https://pytorch.org/docs/1.12/generated/torch.load.html)). It might be better if either the deps are updated, or backward support is added?", "yep! cc @LysandreJik ", "#28207 will fix this 🤗 ", "> but `weights_only` wasn't added to `torch.load` until 1.13 (see [here](https://pytorch.org/docs/1.12/generated/torch.load.html))\r\n\r\nhttps://github.com/huggingface/transformers/pull/28207 only removes 1.10, but for torch 1.11 and torch 1.12, secure pickling should still be buggy. You can take a moment to compare https://pytorch.org/docs/1.12/generated/torch.load.html and https://pytorch.org/docs/1.13/generated/torch.load.html.", "You're correct @hjenryin, we're taking a look at fixing this before the next release. Thanks for the report" ]
1,699
1,705
1,702
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27282/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27282/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27282", "html_url": "https://github.com/huggingface/transformers/pull/27282", "diff_url": "https://github.com/huggingface/transformers/pull/27282.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27282.patch", "merged_at": 1702652478000 }
https://api.github.com/repos/huggingface/transformers/issues/27281
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27281/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27281/comments
https://api.github.com/repos/huggingface/transformers/issues/27281/events
https://github.com/huggingface/transformers/pull/27281
1,977,185,722
PR_kwDOCUB6oc5elrdX
27,281
Update sequence_classification.md
{ "login": "akshayvkt", "id": 64036106, "node_id": "MDQ6VXNlcjY0MDM2MTA2", "avatar_url": "https://avatars.githubusercontent.com/u/64036106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/akshayvkt", "html_url": "https://github.com/akshayvkt", "followers_url": "https://api.github.com/users/akshayvkt/followers", "following_url": "https://api.github.com/users/akshayvkt/following{/other_user}", "gists_url": "https://api.github.com/users/akshayvkt/gists{/gist_id}", "starred_url": "https://api.github.com/users/akshayvkt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/akshayvkt/subscriptions", "organizations_url": "https://api.github.com/users/akshayvkt/orgs", "repos_url": "https://api.github.com/users/akshayvkt/repos", "events_url": "https://api.github.com/users/akshayvkt/events{/privacy}", "received_events_url": "https://api.github.com/users/akshayvkt/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27281). All of your documentation changes will be reflected on that endpoint." ]
1,699
1,699
1,699
CONTRIBUTOR
null
# What does this PR do? Resolve the ImportError raised when running the [Text classification](https://huggingface.co/docs/transformers/tasks/sequence_classification) notebook. I'm adding accelerate as one of the libraries to install, as the model would otherwise error out with the error below when we attempt to run the Trainer. `ImportError: Using the `Trainer` with `PyTorch` requires `accelerate>=0.20.1`: Please run `pip install transformers[torch]` or `pip install accelerate -U` ` ![image](https://github.com/huggingface/transformers/assets/64036106/e2c4a27b-73a4-41df-a8a0-8b3d1eafe9a5) Further context: 1. I've tried this across different environments, so I believe that the environment is not the issue. 2. I had the latest transformers library version running. 3. Typically, even after installing accelerate and importing it, the issue wasn't resolved until I restarted the notebook and tried again. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @ArthurZucker and @younesbelkada Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27281/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27281/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27281", "html_url": "https://github.com/huggingface/transformers/pull/27281", "diff_url": "https://github.com/huggingface/transformers/pull/27281.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27281.patch", "merged_at": 1699280509000 }
https://api.github.com/repos/huggingface/transformers/issues/27280
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27280/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27280/comments
https://api.github.com/repos/huggingface/transformers/issues/27280/events
https://github.com/huggingface/transformers/pull/27280
1,977,152,547
PR_kwDOCUB6oc5elkqc
27,280
enable memory tracker metrics for npu
{ "login": "statelesshz", "id": 28150734, "node_id": "MDQ6VXNlcjI4MTUwNzM0", "avatar_url": "https://avatars.githubusercontent.com/u/28150734?v=4", "gravatar_id": "", "url": "https://api.github.com/users/statelesshz", "html_url": "https://github.com/statelesshz", "followers_url": "https://api.github.com/users/statelesshz/followers", "following_url": "https://api.github.com/users/statelesshz/following{/other_user}", "gists_url": "https://api.github.com/users/statelesshz/gists{/gist_id}", "starred_url": "https://api.github.com/users/statelesshz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/statelesshz/subscriptions", "organizations_url": "https://api.github.com/users/statelesshz/orgs", "repos_url": "https://api.github.com/users/statelesshz/repos", "events_url": "https://api.github.com/users/statelesshz/events{/privacy}", "received_events_url": "https://api.github.com/users/statelesshz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Verified with the following test case.\r\n```python\r\n# spec.py\r\nimport torch\r\nimport torch_npu\r\n# User can add additional imports here\r\n\r\n# Specify the device name (eg. 'cuda', 'cpu')\r\nDEVICE_NAME = 'npu:0'\r\n\r\n# Specify device-specific backends to dispatch to.\r\n# If not specified (i.e., `None`) will fallback to 'default' in 'testing_utils.py`\r\nMANUAL_SEED_FN = torch.npu.manual_seed\r\nEMPTY_CACHE_FN = torch.npu.empty_cache\r\nDEVICE_COUNT_FN = torch.npu.device_count\r\n```\r\n\r\n```\r\n(mem) [root@localhost mem]# RUN_SLOW=1 TRANSFORMERS_TEST_BACKEND=\"torch_npu\" TRANSFORMERS_TEST_DEVICE=\"npu:0\" TRANSFORMERS_TEST_DEVICE_SPEC=\"spec.py\" python -m pytest -sv tests/trainer/test_trainer.py::TrainerIntegrationTest::test_mem_metrics\r\n============================================================================================================ test session starts ============================================================================================================\r\nplatform linux -- Python 3.8.18, pytest-7.4.3, pluggy-1.3.0 -- /root/miniconda3/envs/mem/bin/python\r\ncachedir: .pytest_cache\r\nrootdir: /home/w00668292/mem\r\nconfigfile: setup.cfg\r\ncollected 1 item\r\n\r\n 0%| | 0/24 [00:00<?, ?it/s]Could not estimate the number of tokens of the input, floating-point operations will not be computed\r\n{'train_runtime': 0.9881, 'train_samples_per_second': 194.31, 'train_steps_per_second': 24.289, 'train_loss': 10.069827397664389, 'init_mem_cpu_alloc_delta': 0, 'init_mem_gpu_alloc_delta': 1024, 'init_mem_cpu_peaked_delta': 0, 'init_mem_gpu_peaked_delta': 0, 'train_mem_cpu_alloc_delta': 151519232, 'train_mem_gpu_alloc_delta': 4096, 'train_mem_cpu_peaked_delta': 0, 'train_mem_gpu_peaked_delta': 4608, 'before_init_mem_cpu': 1883963392, 'before_init_mem_gpu': 0, 'epoch': 3.0}\r\n100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 24/24 [00:01<00:00, 16.16it/s]\r\n100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 8/8 [00:00<00:00, 78.41it/s]\r\n100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 8/8 [00:00<00:00, 525.37it/s]\r\n 0%| | 0/24 [00:00<?, ?it/s]Could not estimate the number of tokens of the input, floating-point operations will not be computed\r\n{'train_runtime': 0.0769, 'train_samples_per_second': 2496.571, 'train_steps_per_second': 312.071, 'train_loss': 10.069827397664389, 'epoch': 3.0}\r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 24/24 [00:00<00:00, 312.19it/s]\r\n100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 8/8 [00:00<00:00, 1256.67it/s]\r\n100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 8/8 [00:00<00:00, 
1005.98it/s]\r\nPASSED\r\n\r\n============================================================================================================= warnings summary ==============================================================================================================\r\n../../../root/miniconda3/envs/mem/lib/python3.8/site-packages/_pytest/config/__init__.py:1373\r\n /root/miniconda3/envs/mem/lib/python3.8/site-packages/_pytest/config/__init__.py:1373: PytestConfigWarning: Unknown config option: doctest_glob\r\n\r\n self._warn_or_fail_if_strict(f\"Unknown config option: {key}\\n\")\r\n\r\n-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html\r\n======================================================================================================= 1 passed, 1 warning in 19.46s =======================================================================================================\r\n(mem) [root@localhost mem]#\r\n```", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27280). All of your documentation changes will be reflected on that endpoint." ]
1,699
1,699
1,699
CONTRIBUTOR
null
# What does this PR do? Enables the `Trainer` memory tracker metrics on NPU devices, as per the title. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. cc @amyeroberts
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27280/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27280/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27280", "html_url": "https://github.com/huggingface/transformers/pull/27280", "diff_url": "https://github.com/huggingface/transformers/pull/27280.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27280.patch", "merged_at": 1699278262000 }
https://api.github.com/repos/huggingface/transformers/issues/27279
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27279/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27279/comments
https://api.github.com/repos/huggingface/transformers/issues/27279/events
https://github.com/huggingface/transformers/issues/27279
1,977,149,033
I_kwDOCUB6oc512OZp
27,279
Different handling of added tokens between fast and slow LlamaTokenizer
{ "login": "cg123", "id": 397199, "node_id": "MDQ6VXNlcjM5NzE5OQ==", "avatar_url": "https://avatars.githubusercontent.com/u/397199?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cg123", "html_url": "https://github.com/cg123", "followers_url": "https://api.github.com/users/cg123/followers", "following_url": "https://api.github.com/users/cg123/following{/other_user}", "gists_url": "https://api.github.com/users/cg123/gists{/gist_id}", "starred_url": "https://api.github.com/users/cg123/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cg123/subscriptions", "organizations_url": "https://api.github.com/users/cg123/orgs", "repos_url": "https://api.github.com/users/cg123/repos", "events_url": "https://api.github.com/users/cg123/events{/privacy}", "received_events_url": "https://api.github.com/users/cg123/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "guess it's the same issue as this one: https://github.com/huggingface/transformers/issues/27132", "Yep looks pretty similar. This nevered worked for any tokenizers before 😉 ", "so if I understand the answer in the other thread correctly, using `normalized=False` would yield consistent results for fast and slow tokenizer?\r\n\r\nlike this for @cg123's case: \r\n```\r\ntokenizer.add_tokens(AddedToken(\"<|special|>\", normalized=False))\r\ntokenizer.add_tokens(AddedToken(\"<|honk|>\", normalized=False))\r\n```\r\n", "Yes, if you are using `transformers>=4.34` and `tokenizers>=0.14.0` and the flag `legacy` is set to `False` in the slow tokenizer ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,699
1,702
1,702
NONE
null
### System Info - `transformers` version: 4.35.0 - Platform: Linux-5.15.0-67-generic-x86_64-with-glibc2.31 - Python version: 3.10.11 - Huggingface_hub version: 0.17.3 - Safetensors version: 0.3.3 - Accelerate version: 0.23.0 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? @ArthurZucker ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ```python tok_slow = transformers.AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1", use_fast=False) tok_slow.add_tokens(["<|special|>", "<|honk|>"]) >>> tok_slow("<|special|>test<|honk|><|special|>\n<|honk|>") {'input_ids': [1, 32000, 1369, 32001, 32000, 28705, 13, 32001], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1]} tok_fast = transformers.AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1", use_fast=True) tok_fast.add_tokens(["<|special|>", "<|honk|>"]) >>> tok_fast("<|special|>test<|honk|><|special|>\n<|honk|>") {'input_ids': [1, 32000, 1613, 28789, 28766, 21133, 28729, 28766, 3409, 28766, 14908, 28766, 28767, 13, 28789, 28766, 21133, 28729, 28766, 28767], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]} ``` ### Expected behavior I would expect the tokenizers to handle the added tokens in the same way regardless of use_fast. The behavior of the slow tokenizer is what I was expecting.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27279/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27279/timeline
completed
null
null
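A sketch of the workaround confirmed in the comments of issue 27279 above: register the new tokens with `normalized=False` so the fast and slow tokenizers split them the same way (with `transformers>=4.34`, `tokenizers>=0.14.0`, and `legacy=False` on the slow tokenizer). The example strings are the ones from the report; the checkpoint download is assumed to be available.

```python
from transformers import AddedToken, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1", use_fast=True)
tokenizer.add_tokens([
    AddedToken("<|special|>", normalized=False),  # keep the added token out of the normalizer
    AddedToken("<|honk|>", normalized=False),
])
print(tokenizer("<|special|>test<|honk|><|special|>\n<|honk|>")["input_ids"])
```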
https://api.github.com/repos/huggingface/transformers/issues/27278
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27278/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27278/comments
https://api.github.com/repos/huggingface/transformers/issues/27278/events
https://github.com/huggingface/transformers/issues/27278
1,977,053,371
I_kwDOCUB6oc5113C7
27,278
Transformers version 4.35.0 is breaking PEFT 0.0.6 -- IndexError
{ "login": "alexsherstinsky", "id": 339166, "node_id": "MDQ6VXNlcjMzOTE2Ng==", "avatar_url": "https://avatars.githubusercontent.com/u/339166?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alexsherstinsky", "html_url": "https://github.com/alexsherstinsky", "followers_url": "https://api.github.com/users/alexsherstinsky/followers", "following_url": "https://api.github.com/users/alexsherstinsky/following{/other_user}", "gists_url": "https://api.github.com/users/alexsherstinsky/gists{/gist_id}", "starred_url": "https://api.github.com/users/alexsherstinsky/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alexsherstinsky/subscriptions", "organizations_url": "https://api.github.com/users/alexsherstinsky/orgs", "repos_url": "https://api.github.com/users/alexsherstinsky/repos", "events_url": "https://api.github.com/users/alexsherstinsky/events{/privacy}", "received_events_url": "https://api.github.com/users/alexsherstinsky/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @younesbelkada ", "This should be fixed in https://github.com/huggingface/peft/pull/1084 \r\nWould it makes sense to have a patch release on PEFT side @BenjaminBossan @pacman100 ?", "Thank you very much for jumping on this so fast! If you decide to make a patch, please notify herein -- as I am watching this issue. Thanks again!", "We just made a patch release on PEFT side: https://github.com/huggingface/peft/releases/tag/v0.6.1 that should resolve the issue! let us know if all goes well by installing `peft==0.6.1` ", "@younesbelkada Thank you so much -- going to try it now! Will close the issue once tests pass in our repository. Appreciated!", "The fix for both `transformers` and `peft` has been confirm. Thank you all!", "Great ! Thanks @alexsherstinsky !!" ]
1,699
1,699
1,699
NONE
null
### System Info Hello, With the Transformers version 4.35.0 and PEFT version 0.6.0 -- when running a model with `input_ids` and `attention_mask` as arguments, the IndexError exception occurs in "peft/tuners/adaption_prompt/utils.py:45": ``` model(input_ids=model_inputs, attention_mask=attention_masks).get("logits") ``` raises `"IndexError: tuple index out of range"`. Thank you. ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction An LLM with the `adaptation_prompt` PEFT strategy, executing the `forward` method. ### Expected behavior Running without exceptions.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27278/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27278/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27277
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27277/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27277/comments
https://api.github.com/repos/huggingface/transformers/issues/27277/events
https://github.com/huggingface/transformers/issues/27277
1,976,949,541
I_kwDOCUB6oc511dsl
27,277
Question: Resources to implement random & greedy sampling, beam search?
{ "login": "EricLBuehler", "id": 65165915, "node_id": "MDQ6VXNlcjY1MTY1OTE1", "avatar_url": "https://avatars.githubusercontent.com/u/65165915?v=4", "gravatar_id": "", "url": "https://api.github.com/users/EricLBuehler", "html_url": "https://github.com/EricLBuehler", "followers_url": "https://api.github.com/users/EricLBuehler/followers", "following_url": "https://api.github.com/users/EricLBuehler/following{/other_user}", "gists_url": "https://api.github.com/users/EricLBuehler/gists{/gist_id}", "starred_url": "https://api.github.com/users/EricLBuehler/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/EricLBuehler/subscriptions", "organizations_url": "https://api.github.com/users/EricLBuehler/orgs", "repos_url": "https://api.github.com/users/EricLBuehler/repos", "events_url": "https://api.github.com/users/EricLBuehler/events{/privacy}", "received_events_url": "https://api.github.com/users/EricLBuehler/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @EricLBuehler - thanks for raising an issue! \r\n\r\nI can see that the issue is closed but for anyone in future visiting this thread, details on customising generation strategies can be found here: https://huggingface.co/docs/transformers/generation_strategies" ]
1,699
1,699
1,699
NONE
null
Hello everybody, I want to implement random sampling, greedy sampling, and beam search, but cannot seem to find any useful resources on how to implement them. Could you please provide some resources or code examples (perhaps from this repo) on how to implement the sampling strategies above?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27277/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27277/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27276
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27276/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27276/comments
https://api.github.com/repos/huggingface/transformers/issues/27276/events
https://github.com/huggingface/transformers/pull/27276
1,976,903,748
PR_kwDOCUB6oc5ekvri
27,276
Minor type annotation fix
{ "login": "vwxyzjn", "id": 5555347, "node_id": "MDQ6VXNlcjU1NTUzNDc=", "avatar_url": "https://avatars.githubusercontent.com/u/5555347?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vwxyzjn", "html_url": "https://github.com/vwxyzjn", "followers_url": "https://api.github.com/users/vwxyzjn/followers", "following_url": "https://api.github.com/users/vwxyzjn/following{/other_user}", "gists_url": "https://api.github.com/users/vwxyzjn/gists{/gist_id}", "starred_url": "https://api.github.com/users/vwxyzjn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vwxyzjn/subscriptions", "organizations_url": "https://api.github.com/users/vwxyzjn/orgs", "repos_url": "https://api.github.com/users/vwxyzjn/repos", "events_url": "https://api.github.com/users/vwxyzjn/events{/privacy}", "received_events_url": "https://api.github.com/users/vwxyzjn/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hi @amyeroberts thanks foe the comment!\r\n\r\n```\r\nFAILED examples/pytorch/test_accelerate_examples.py::ExamplesTestsNoTrainer::test_run_glue_no_trainer - AssertionError: 0.6666666666666666 not greater than or equal to 0.75\r\nFAILED examples/pytorch/test_pytorch_examples.py::ExamplesTests::test_run_ner - AssertionError: 0.6674960851669312 not less than 0.5\r\n```\r\n\r\nLooks like the CI got the error above. Any thoughts?", "The good news is that these failures appear to be unrelated to your PR as I've seen them in other PRs today. The bad news is that we don't know what the issue is yet, looking into it now 🔍 ", "There's a PR open to resolve the failing tests: https://github.com/huggingface/accelerate/pull/2126\r\n\r\nOnce that's merged, we'll need to update the CI runners. I'll ping here once that's done and we can rebase and then (hopefully) merge", "Great! Thanks @amyeroberts!", "@vwxyzjn Could you rebase on main? This should resolve the currently failing tests.", "@amyeroberts thanks! I rebased but still seems to have some test case errors... 👀", "@vwxyzjn Could you try rebasing again? There were a few (more!) failures because of new package releases which should now be resolved. Sorry to ask you to again. ", "@vwxyzjn Thanks again for this contribution and apologies for the issues with the CI ", "Thanks @amyeroberts for helping with the PR!" ]
1,699
1,700
1,699
CONTRIBUTOR
null
# What does this PR do? Very minor type annotation fix. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27276/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27276/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27276", "html_url": "https://github.com/huggingface/transformers/pull/27276", "diff_url": "https://github.com/huggingface/transformers/pull/27276.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27276.patch", "merged_at": 1699988962000 }
https://api.github.com/repos/huggingface/transformers/issues/27275
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27275/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27275/comments
https://api.github.com/repos/huggingface/transformers/issues/27275/events
https://github.com/huggingface/transformers/pull/27275
1,976,900,834
PR_kwDOCUB6oc5ekvD3
27,275
VSCode pylance auto-completion for `HfArgumentParser` (limited support)
{ "login": "vwxyzjn", "id": 5555347, "node_id": "MDQ6VXNlcjU1NTUzNDc=", "avatar_url": "https://avatars.githubusercontent.com/u/5555347?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vwxyzjn", "html_url": "https://github.com/vwxyzjn", "followers_url": "https://api.github.com/users/vwxyzjn/followers", "following_url": "https://api.github.com/users/vwxyzjn/following{/other_user}", "gists_url": "https://api.github.com/users/vwxyzjn/gists{/gist_id}", "starred_url": "https://api.github.com/users/vwxyzjn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vwxyzjn/subscriptions", "organizations_url": "https://api.github.com/users/vwxyzjn/orgs", "repos_url": "https://api.github.com/users/vwxyzjn/repos", "events_url": "https://api.github.com/users/vwxyzjn/events{/privacy}", "received_events_url": "https://api.github.com/users/vwxyzjn/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27275). All of your documentation changes will be reflected on that endpoint.", "Hello!\r\n\r\nAs some alternative suggestions, `typing.Generic` does make specificity in `parse_args_into_dataclasses()` possible.\r\n\r\n(1) [This implementation](https://gist.github.com/brentyi/a003d480f25c7aeebd531ab88e75dac3) gets us:\r\n\r\n![image](https://github.com/huggingface/transformers/assets/6992947/71a8ada5-ae58-4227-b5fa-f08434da13a9)\r\n\r\n(2) `typing.overload` can also help, although there are some limitations from Python's type system[^1]. [This implementation](https://gist.github.com/brentyi/67963d83f7e278ec0a4b4eb1f8988e52) gets us:\r\n\r\n![image](https://github.com/huggingface/transformers/assets/6992947/33a7491a-5cb1-404d-adae-d508d051f2cb)\r\n\r\n---\r\n\r\nAs an FYI, `tyro` should correctly resolve the types of:\r\n\r\n```python\r\nimport dataclasses\r\nimport tyro\r\n\r\[email protected]\r\nclass TrainArgs:\r\n lr: float = 3e-4\r\n\r\[email protected]\r\nclass RewardConfig:\r\n weight: float = 0.01\r\n\r\n# This currently prefixes arguments with `--0.` for TrainArgs, and `--1.` for RewardConfig. Could be made configurable.\r\ntrain, reward = tyro.cli(tuple[TrainArgs, RewardConfig])\r\n```\r\n\r\nA similar API in HfArgumentParser could make the types much cleaner, but comes with all of the obvious downsides of a breaking change.\r\n\r\n[^1]: As in the shared code, a maximum number of dataclasses needs to be hardcoded. This might be solved in the future with better variadic generic support in the Python type system: https://github.com/python/mypy/issues/16394\r\n\r\n\r\n", "@brentyi thanks so much for the detailed suggestions! I personally like \"(2) typing.overload can also help\" and set a maximum to something like 10, but the code will look quite hacky... Would like some input from `transformers` maintainers", "@vwxyzjn @brentyi Thanks for opening this PR and the work on improving the codebase! \r\n\r\nAs a general note, the type annotations in the library are not intended to be complete or fully compatible with type checkers e.g. running mypy will throw a bunch of errors. They are there as general documentation to guide the user. \r\n\r\nSo, in this case, `DataClass` is a more descriptive annotation as a return type than `T`, even though they both effectively represent \"Any\". ", "Hi @amyeroberts thanks for the comment! I agree that type annotation do not necessarily need to work with mypy. The main thing I was thinking is the developer experience with auto-completion. 
\r\n\r\n\r\n## With @brentyi's option 1, we would still get the descriptive API like before\r\n\r\n```\r\ndef parse_args_into_dataclasses(self) -> Tuple[DataclassT, ...]:\r\n```\r\n\r\nbut by default it's unable to recognize which dataclass type it is, so for usage like `HfArgumentParser((ModelArguments, RewardModelArguments, EvaluationArguments))`, the auto-completion can cause some confusion (in the screen shot below, pylance thought the `train`'s type is either `TrainArgs` or `RewardConfig`)\r\n\r\n\r\n<img width=\"663\" alt=\"image\" src=\"https://github.com/huggingface/transformers/assets/5555347/4c8d5d08-bb48-4354-ac56-7c07d843fe92\">\r\n\r\n\r\n\r\n## With @brentyi's option 2\r\n\r\nOur internal code would become uglier like \r\n\r\n\r\n```\r\n @overload\r\n def parse_args_into_dataclasses(self: HfArgumentParser[T1, None, None, None]) -> T1:\r\n return self._parse_args_into_dataclasses()\r\n\r\n @overload\r\n def parse_args_into_dataclasses(\r\n self: HfArgumentParser[T1, T2, None, None]\r\n ) -> Tuple[T1, T2]:\r\n return self._parse_args_into_dataclasses()\r\n\r\n @overload\r\n def parse_args_into_dataclasses(\r\n self: HfArgumentParser[T1, T2, T3, None]\r\n ) -> Tuple[T1, T2, T3]:\r\n return self._parse_args_into_dataclasses()\r\n\r\n @overload\r\n def parse_args_into_dataclasses(\r\n self: HfArgumentParser[T1, T2, T3, T4]\r\n ) -> Tuple[T1, T2, T3, T4]:\r\n return self._parse_args_into_dataclasses()\r\n\r\n def parse_args_into_dataclasses(self) -> Any:\r\n return self._parse_args_into_dataclasses()\r\n```\r\n\r\nbut the user will have a more seamless experience (pylance would correctly recognize `train`'s type is `TrainArgs`)\r\n\r\n\r\n<img width=\"598\" alt=\"image\" src=\"https://github.com/huggingface/transformers/assets/5555347/db11a772-ba65-4956-9d88-a0dc63ce9355\">\r\n\r\n\r\nWould you be in favor of either options? 1st option would require the least amount of change of course. Both options are non-breaking and just empower pylance's auto-completion in different ways.\r\n\r\n", "As a user of the library I'd personally appreciate the option 2 approach, it's nice when completions work! But I also agree it adds maintenance burden.\r\n\r\nIf folks would prefer to avoid `typing.Generic`, an option 3 is to just swap the\r\n```python\r\nDataClass = NewType(\"DataClass\", Any)\r\nDataClassType = NewType(\"DataClassType\", Any)\r\n```\r\nfor\r\n```python\r\nDataClass = Any\r\nDataClassType = Any # or type / typing.Type\r\n```\r\nThis will make the assertions in the PR description useful for correct autocompletion; `NewType` has static analysis implications (in this case, preventing type narrowing) + usage connotations that IMO would be nice to avoid here.", "I completely understand the motivation behind this. However adding additional code for type checking (in particular overloads) is something which has been proposed and rejected before. The primary reason for this is that we don't formally support type checking and we don't want to add additional code we need to maintain in order to support it. For example: [this comment](https://github.com/huggingface/transformers/issues/23980#issuecomment-1576991446) and [releated PR](https://github.com/huggingface/transformers/pull/24035), or [this comment](https://github.com/huggingface/transformers/pull/26125#issuecomment-1729007149). \r\n\r\n", "This issue has been automatically marked as stale because it has not had recent activity. 
If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,699
1,702
1,702
CONTRIBUTOR
null
# Problem description This PR empowers limited support of VSCode auto-completion for `HfArgumentParser`. Currently, the return type of `parse_args_into_dataclasses` is `DataClass = NewType("DataClass", Any)`, which limits pylance's ability to infer types even if we specifically assert a dataclass type fo the args. <img width="672" alt="image" src="https://github.com/huggingface/transformers/assets/5555347/6c8cf0c2-40c7-4436-bdfb-e08a74ec4882"> # What does this PR do? This PR modifies the return type to be `List[TypeVar("T")]`. So when **we assert the args to be of a certain type** (i.e., `assert isinstance(args, RewardConfig)`), auto-completion works as expected. <img width="595" alt="image" src="https://github.com/huggingface/transformers/assets/5555347/5d6de0e9-7f71-4291-a49e-0081f89a6de9"> ## Alternatives considered Some `argparse` libraries such as [tyro](https://github.com/brentyi/tyro) can automatically infer the type of the args, but it doesn't seem to work with the current `HfArgumentParser` paradigm for two reasons: 1. The inferred type seems only to support one type, so the inferred type is `args2: RewardConfig | Config2`, so we can't parse multiple dataclasses in `parse_args_into_dataclasses` and infer type correctly. <img width="601" alt="image" src="https://github.com/huggingface/transformers/assets/5555347/de2a3f98-0014-48f3-b986-c56c7f73a602"> 2. Automatic type detection also requires us to change the workflow: we need to do `parse_args_into_dataclasses([RewardConfig, Config2])` instead of `parse_args_into_dataclasses()` <img width="428" alt="image" src="https://github.com/huggingface/transformers/assets/5555347/b0fbd3cb-f1c8-448d-bb25-1e9743f333fa"> <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
CC @muellerzr, @pacman100, @lewtun, @brentyi <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27275/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27275/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27275", "html_url": "https://github.com/huggingface/transformers/pull/27275", "diff_url": "https://github.com/huggingface/transformers/pull/27275.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27275.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/27274
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27274/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27274/comments
https://api.github.com/repos/huggingface/transformers/issues/27274/events
https://github.com/huggingface/transformers/pull/27274
1,976,695,752
PR_kwDOCUB6oc5ekCo7
27,274
Track the number of tokens seen to metrics
{ "login": "muellerzr", "id": 7831895, "node_id": "MDQ6VXNlcjc4MzE4OTU=", "avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muellerzr", "html_url": "https://github.com/muellerzr", "followers_url": "https://api.github.com/users/muellerzr/followers", "following_url": "https://api.github.com/users/muellerzr/following{/other_user}", "gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}", "starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions", "organizations_url": "https://api.github.com/users/muellerzr/orgs", "repos_url": "https://api.github.com/users/muellerzr/repos", "events_url": "https://api.github.com/users/muellerzr/events{/privacy}", "received_events_url": "https://api.github.com/users/muellerzr/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@amyeroberts question: these failures are all because they cannot find the `main_input_name` in the batch. Does this mean some of the models are wrong?\r\n\r\nFor instance, one of the failures in the examples script is from `SpeechEncoderDecoderModel`, which states `main_input_name` is `\"inputs\"`, however when running the script it's using `\"input_values\":\r\n\r\n```python\r\nself = {'input_values': tensor([[-0.0177, -0.0188, -0.0202, ..., -0.0032, -0.0068, -0.0039],\r\n [-0.0196, -0.0556, -0.0...-100, -100, -100, -100, -100, -100, -100, -100, -100,\r\n -100, -100, -100, -100, -100, -100, -100, -100, -100]])}\r\n```", "@muellerzr I think we don't have to worry about that if it's behind a flag in the training args. This way - if someone want to measure it then they have to make sure that `main_input_name` is properly set, but it won't break trainer for models which are currently compatible but don't have this correctly set. ", "@amyeroberts failing tests seem to be unrelated", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27274). All of your documentation changes will be reflected on that endpoint." ]
1,699
1,699
1,699
CONTRIBUTOR
null
# What does this PR do? This PR adds `num_tokens_seen` to the `TrainerState` allowing users to know how many tokens were passed in an individual batch. Uses `gather` to ensure that in DDP this can be known as well. Fixes https://github.com/huggingface/transformers/issues/27027 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @pacman100 @amyeroberts
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27274/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27274/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27274", "html_url": "https://github.com/huggingface/transformers/pull/27274", "diff_url": "https://github.com/huggingface/transformers/pull/27274.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27274.patch", "merged_at": 1699993864000 }
https://api.github.com/repos/huggingface/transformers/issues/27273
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27273/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27273/comments
https://api.github.com/repos/huggingface/transformers/issues/27273/events
https://github.com/huggingface/transformers/issues/27273
1,976,627,971
I_kwDOCUB6oc510PMD
27,273
Installing pydantic>=2.4.0 would cause error and fail to load model
{ "login": "Luodian", "id": 15847405, "node_id": "MDQ6VXNlcjE1ODQ3NDA1", "avatar_url": "https://avatars.githubusercontent.com/u/15847405?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Luodian", "html_url": "https://github.com/Luodian", "followers_url": "https://api.github.com/users/Luodian/followers", "following_url": "https://api.github.com/users/Luodian/following{/other_user}", "gists_url": "https://api.github.com/users/Luodian/gists{/gist_id}", "starred_url": "https://api.github.com/users/Luodian/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Luodian/subscriptions", "organizations_url": "https://api.github.com/users/Luodian/orgs", "repos_url": "https://api.github.com/users/Luodian/repos", "events_url": "https://api.github.com/users/Luodian/events{/privacy}", "received_events_url": "https://api.github.com/users/Luodian/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @Luodian, thanks for raising this issue! \r\n\r\nI looks like we might have to pin pydantic in our setup.py - WDYT @ydshieh?", "Hi @Luodian \r\n\r\nWe do have `\"pydantic<2\",` in out `setup.py`\r\n\r\nRelated https://github.com/huggingface/transformers/pull/24596\r\n\r\nNot sure how much we can do with `pydantic 2` so far.", "oh know that, thanks! I think the `pydantic>2` comes from upgrading `gradio`.", "Eventually, if `pydantic>=2` is used by many libraries, we might consider to update the requirement (as long as not so many things breaking 😄 )", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,699
1,702
1,702
NONE
null
### System Info The error would be as ``` Failed to import transformers.generation.utils because of the following error (look up to see its traceback): 'FieldInfo' object has no attribute 'required' ``` I spent a lot of time to spot it's the version change from the installation of upgrading `gradio` and it caused `pydantic==1.10.7` to `pydantic==2.4.2`. I downgraded to 1.10.7 then could run smoothly as usual. ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```python !pip install pydantic==2.4.2 from transformers import FuyuForCausalLM, AutoTokenizer, FuyuProcessor, FuyuImageProcessor ``` ### Expected behavior ```sh Failed to import transformers.generation.utils because of the following error (look up to see its traceback): 'FieldInfo' object has no attribute 'required' ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27273/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27273/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27272
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27272/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27272/comments
https://api.github.com/repos/huggingface/transformers/issues/27272/events
https://github.com/huggingface/transformers/pull/27272
1,976,505,409
PR_kwDOCUB6oc5ejbM2
27,272
Remove an unexpected argument for FlaxResNetBasicLayerCollection
{ "login": "pingzhili", "id": 55396526, "node_id": "MDQ6VXNlcjU1Mzk2NTI2", "avatar_url": "https://avatars.githubusercontent.com/u/55396526?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pingzhili", "html_url": "https://github.com/pingzhili", "followers_url": "https://api.github.com/users/pingzhili/followers", "following_url": "https://api.github.com/users/pingzhili/following{/other_user}", "gists_url": "https://api.github.com/users/pingzhili/gists{/gist_id}", "starred_url": "https://api.github.com/users/pingzhili/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pingzhili/subscriptions", "organizations_url": "https://api.github.com/users/pingzhili/orgs", "repos_url": "https://api.github.com/users/pingzhili/repos", "events_url": "https://api.github.com/users/pingzhili/events{/privacy}", "received_events_url": "https://api.github.com/users/pingzhili/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27272). All of your documentation changes will be reflected on that endpoint." ]
1,699
1,699
1,699
CONTRIBUTOR
null
# What does this PR do? Remove an unexpected argument `activation` which shouldn't be passed to FlaxResNetBasicLayerCollection. Fixes #27257 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @amyeroberts Thanks!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27272/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27272/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27272", "html_url": "https://github.com/huggingface/transformers/pull/27272", "diff_url": "https://github.com/huggingface/transformers/pull/27272.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27272.patch", "merged_at": 1699272963000 }
https://api.github.com/repos/huggingface/transformers/issues/27271
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27271/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27271/comments
https://api.github.com/repos/huggingface/transformers/issues/27271/events
https://github.com/huggingface/transformers/pull/27271
1,976,484,179
PR_kwDOCUB6oc5ejWoR
27,271
FIx Bark batching feature
{ "login": "ylacombe", "id": 52246514, "node_id": "MDQ6VXNlcjUyMjQ2NTE0", "avatar_url": "https://avatars.githubusercontent.com/u/52246514?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ylacombe", "html_url": "https://github.com/ylacombe", "followers_url": "https://api.github.com/users/ylacombe/followers", "following_url": "https://api.github.com/users/ylacombe/following{/other_user}", "gists_url": "https://api.github.com/users/ylacombe/gists{/gist_id}", "starred_url": "https://api.github.com/users/ylacombe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ylacombe/subscriptions", "organizations_url": "https://api.github.com/users/ylacombe/orgs", "repos_url": "https://api.github.com/users/ylacombe/repos", "events_url": "https://api.github.com/users/ylacombe/events{/privacy}", "received_events_url": "https://api.github.com/users/ylacombe/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks for your reviews here! merging! ", "This was very helpful. Thanks!" ]
1,699
1,706
1,699
COLLABORATOR
null
# What does this PR do? This PR aims to fix batching with Bark: > Currently, when several samples are transmitted to Bark at the same time, i.e. during batch generation, the generated audios all have the same duration, which extends the length of the shortest audios to the longest audio in the batch. > This problem can be solved by keeping track of sample lengths as Bark.generate is called up, and then outputting the audio lengths with the generated audios at the end of the call. This fix can be enabled with an additional bolean parameter (`return_output_lengths`, same naming than in #25943) so I've kept backward compatibility. I also made sure that generated batched outputs were almost the same than without batching and listened to generated audio qualitatively! cc @sanchit-gandhi and @amyeroberts, could you take a look ? Many thanks! Fixes #25861 and #26673
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27271/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27271/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27271", "html_url": "https://github.com/huggingface/transformers/pull/27271", "diff_url": "https://github.com/huggingface/transformers/pull/27271.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27271.patch", "merged_at": 1699381921000 }
https://api.github.com/repos/huggingface/transformers/issues/27270
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27270/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27270/comments
https://api.github.com/repos/huggingface/transformers/issues/27270/events
https://github.com/huggingface/transformers/pull/27270
1,976,393,220
PR_kwDOCUB6oc5ejC3I
27,270
Generate: add DeepMind's Speculative sampling in assisted_generation
{ "login": "domgri", "id": 47460259, "node_id": "MDQ6VXNlcjQ3NDYwMjU5", "avatar_url": "https://avatars.githubusercontent.com/u/47460259?v=4", "gravatar_id": "", "url": "https://api.github.com/users/domgri", "html_url": "https://github.com/domgri", "followers_url": "https://api.github.com/users/domgri/followers", "following_url": "https://api.github.com/users/domgri/following{/other_user}", "gists_url": "https://api.github.com/users/domgri/gists{/gist_id}", "starred_url": "https://api.github.com/users/domgri/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/domgri/subscriptions", "organizations_url": "https://api.github.com/users/domgri/orgs", "repos_url": "https://api.github.com/users/domgri/repos", "events_url": "https://api.github.com/users/domgri/events{/privacy}", "received_events_url": "https://api.github.com/users/domgri/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @gante ", "Hey @domgri 👋 \r\n\r\nThank you for opening the PR! Let me know when you'd like a review 💪 ", "Sure, absolutely. Sorry for not responding sooner, got some unexpected workload, hope to comeback to finish implementation in a week🤞", "Hey, so I gave a couple of tries to finish implementation, although with little to no success 😕. \r\n\r\nA couple of takeaways that might be useful for anyone continuing or trying to work on this implementation:\r\n* [Initial PR](https://github.com/huggingface/transformers/pull/27270/commits/dfc2612d7d270c276b5d2e53f71682aa2c415b3d) might be useful for overall vision, how this feature could be implemented (TODOs how potential places for modifications).\r\n* Sampling cases ([1](https://github.com/huggingface/transformers/pull/27270/files#diff-26783ca033d92b4ce2e01eb691dbf5b05dd972b0cbfdc69fc726cf77a9dcb011R4579-R4581), [2](https://github.com/huggingface/transformers/pull/27270/files#diff-26783ca033d92b4ce2e01eb691dbf5b05dd972b0cbfdc69fc726cf77a9dcb011R4579-R4581)) could be improved with something more sophisticated (and possibly already existing functionalities).\r\n* From second iteration, main model `model_inputs.input_ids` would not match up with `candidate_input_ids` (usually would be shorter and containing only several last tokens from `candidate_input_ids`. I suspect something with cache and/or `**candidate_kwargs` had an effect on that, though, could not figure out exactly how and what. https://github.com/huggingface/transformers/blob/85fde09c97213bf7e8625f83096bb2a9e183f987/src/transformers/generation/utils.py#L4646\r\n* `tmp_result = tmp_max / tmp_max_sum` was returning array of nan rather instead of 0's. Possibly related to [max_fn](https://github.com/huggingface/transformers/pull/27270/files#diff-26783ca033d92b4ce2e01eb691dbf5b05dd972b0cbfdc69fc726cf77a9dcb011R4568-R4571) implementation that migh be faulty.\r\n\r\nI will close this PR since I am out of capacity right now to continue working on it. Feel free to use this PR as an inspiration for actual implementation. Thanks for enthusiastic welcome @amyeroberts @gante, my apologies for not really delivering much value and hope to see someone else step up and contribute more meaningfully 😊.", "@domgri no worries! Thank you for giving it a shot 🤗 " ]
1,699
1,700
1,700
NONE
null
# What does this PR do? Implements #27186. Still a draft, work in progress. Implementation inspired from original [paper](https://arxiv.org/abs/2302.01318) and these[[1](https://github.com/feifeibear/LLMSpeculativeSampling/tree/main)][[2](https://github.com/jaymody/speculative-sampling)] existing implementations. Next steps: * Solve raised Todos in code `TODO for speculative decoding:` * Verify implementation * Then adhere to possible changes and fix them Possible changes of PR: * modifies implementation of `assisted_generation` with `do_sample=True` * will possibly affect [this](https://huggingface.co/blog/assisted-generation) blog post * will possibly affect `assisted_generation` documentation ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [+-] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. -> #27186 - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27270/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27270/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27270", "html_url": "https://github.com/huggingface/transformers/pull/27270", "diff_url": "https://github.com/huggingface/transformers/pull/27270.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27270.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/27269
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27269/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27269/comments
https://api.github.com/repos/huggingface/transformers/issues/27269/events
https://github.com/huggingface/transformers/pull/27269
1,976,295,430
PR_kwDOCUB6oc5eiui8
27,269
translate autoclass_tutorial to chinese
{ "login": "jiaqiw09", "id": 60021713, "node_id": "MDQ6VXNlcjYwMDIxNzEz", "avatar_url": "https://avatars.githubusercontent.com/u/60021713?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jiaqiw09", "html_url": "https://github.com/jiaqiw09", "followers_url": "https://api.github.com/users/jiaqiw09/followers", "following_url": "https://api.github.com/users/jiaqiw09/following{/other_user}", "gists_url": "https://api.github.com/users/jiaqiw09/gists{/gist_id}", "starred_url": "https://api.github.com/users/jiaqiw09/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jiaqiw09/subscriptions", "organizations_url": "https://api.github.com/users/jiaqiw09/orgs", "repos_url": "https://api.github.com/users/jiaqiw09/repos", "events_url": "https://api.github.com/users/jiaqiw09/events{/privacy}", "received_events_url": "https://api.github.com/users/jiaqiw09/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@stevhliu\r\n\r\nhi, here is another pr. I will fix merge conflict later.\r\n\r\nBesides, I think final insturction of `AutoModelForTokenClassification` and `TFAutoModelForSequenceClassification` is a bit repetitive. I think it's better to reduce some introduction and just show differences.\r\n\r\nBest", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27269). All of your documentation changes will be reflected on that endpoint." ]
1,699
1,699
1,699
CONTRIBUTOR
null
# What does this PR do? Part of #26803 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? _not necessary_ ## Who can review? @stevhliu
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27269/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27269/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27269", "html_url": "https://github.com/huggingface/transformers/pull/27269", "diff_url": "https://github.com/huggingface/transformers/pull/27269.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27269.patch", "merged_at": 1699028215000 }
https://api.github.com/repos/huggingface/transformers/issues/27268
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27268/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27268/comments
https://api.github.com/repos/huggingface/transformers/issues/27268/events
https://github.com/huggingface/transformers/pull/27268
1,976,289,238
PR_kwDOCUB6oc5eitMQ
27,268
[`Docs` / `SAM` ] Reflect correct changes to run inference without OOM
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27268). All of your documentation changes will be reflected on that endpoint." ]
1,699
1,699
1,699
CONTRIBUTOR
null
# What does this PR do? Fixes https://github.com/huggingface/transformers/issues/27266 One needs to always wrap the model forward pass to avoid OOM issues for SAM forward pass. In the orignal modeling code they force-use `torch.no_grad()` context manager in the forward pass: https://github.com/facebookresearch/segment-anything/blob/main/segment_anything/modeling/sam.py#L53 - but we avoid that since we also support SAM fine-tuning. Check out my comment here for more details: https://github.com/huggingface/transformers/issues/27266#issuecomment-1792508065 cc @amyeroberts
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27268/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27268/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27268", "html_url": "https://github.com/huggingface/transformers/pull/27268", "diff_url": "https://github.com/huggingface/transformers/pull/27268.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27268.patch", "merged_at": 1699021394000 }
https://api.github.com/repos/huggingface/transformers/issues/27267
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27267/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27267/comments
https://api.github.com/repos/huggingface/transformers/issues/27267/events
https://github.com/huggingface/transformers/pull/27267
1,976,131,753
PR_kwDOCUB6oc5eiKe_
27,267
Add RoPE dimensions to LLaMA config
{ "login": "xunkai55", "id": 4828553, "node_id": "MDQ6VXNlcjQ4Mjg1NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4828553?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xunkai55", "html_url": "https://github.com/xunkai55", "followers_url": "https://api.github.com/users/xunkai55/followers", "following_url": "https://api.github.com/users/xunkai55/following{/other_user}", "gists_url": "https://api.github.com/users/xunkai55/gists{/gist_id}", "starred_url": "https://api.github.com/users/xunkai55/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xunkai55/subscriptions", "organizations_url": "https://api.github.com/users/xunkai55/orgs", "repos_url": "https://api.github.com/users/xunkai55/repos", "events_url": "https://api.github.com/users/xunkai55/events{/privacy}", "received_events_url": "https://api.github.com/users/xunkai55/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @gante @ArthurZucker ", "I also would like to get your advise here -\r\n\r\nThe CI \"repository consistency\" fails because this PR upgrades RoPE implementation. However, some models copied RoPE from Llama and marked the functions as copied (example: https://github.com/huggingface/transformers/blob/main/src/transformers/models/falcon/modeling_falcon.py#L82)\r\n\r\nI would like to also update all these implementations as long as you agree.\r\n\r\nThanks!", "@xunkai55 If these changes are approved for the Llama implementations then we would want to have the same updates applied to copied code. It's easier for reviewers when the diff is smaller, so I would wait until the initial proposal has been reviewed and then apply to the rest of the code base (this can easily be done by running `make fix-copies`)", "Hi @ArthurZucker , thanks for the pointer! I updated the code to better align with existing persimmon implementation. Please take a look.\r\n\r\nThanks!", "Hey! What I meant is that the porting of ChatGLM should be straightforward if we use the Persimmon architecture rather than the LlamaArchitecture (we would need to convert the q, k, v layer into qkv matrix and simply allow the bias at a config level like it was done [here](https://github.com/huggingface/transformers/pull/26302)). The thing is changing llama in such a way (adding partial rotary) goes against our philosophy, while the change in persimmon for bias is alright. \r\nSo changes would be similar to #26302 but for Persimmon. WDYT? ", "Thank you Authur for the quick review!\r\n\r\nPersimmon and Llama are both important and brilliant works. However, we prefer to port models to Llama here, as the community has (much) more open projects supporting Llama rather than Persimmon.\r\n\r\nI'm not 100% following here on why #26302 met our philosophy but this change doesn't - I feel both changes involve a new field in `LlamaConfig` and enable some variants.", "Sure, our philosophy is 1 model 1 file, and in that regard adding a bias does not change the logic of the code, but adding partial rotation changes the logic and adds code that is completely unrelated to Llama. \r\n\r\nIn terms of readability, someone who just wants to see the Llama model will have an additional burden and code that is unrelated. Moreover a lot of people would then want to add custom changes and it's hard to draw the line, which for us is changing the foward logic with if else and adding such changes, specifically when we have a PersimmonModel which has the **exact** same API as Llama so no reason for it not to be supported outside.\r\n\r\nSo I don't think there is a strong enough incentive to change Llama 🤗 ", "Understood, and that makes sense. Closing this PR then. Thank you Arthur for the suggestions!" ]
1,699
1,700
1,699
NONE
null
# What does this PR do? This PR enables partial RoPE that can be applied to Llama variants / siblings. It's discussed that only applying a half of positional encodings could lead to slight improvements on model performance. Example: https://github.com/lucidrains/x-transformers/issues/40 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] (Not Applicable?) Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @ArthurZucker @younesbelkada
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27267/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27267/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27267", "html_url": "https://github.com/huggingface/transformers/pull/27267", "diff_url": "https://github.com/huggingface/transformers/pull/27267.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27267.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/27266
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27266/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27266/comments
https://api.github.com/repos/huggingface/transformers/issues/27266/events
https://github.com/huggingface/transformers/issues/27266
1,975,997,209
I_kwDOCUB6oc51x1MZ
27,266
Segment anything: CUDA out of memory
{ "login": "rb-synth", "id": 135021519, "node_id": "U_kgDOCAxDzw", "avatar_url": "https://avatars.githubusercontent.com/u/135021519?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rb-synth", "html_url": "https://github.com/rb-synth", "followers_url": "https://api.github.com/users/rb-synth/followers", "following_url": "https://api.github.com/users/rb-synth/following{/other_user}", "gists_url": "https://api.github.com/users/rb-synth/gists{/gist_id}", "starred_url": "https://api.github.com/users/rb-synth/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rb-synth/subscriptions", "organizations_url": "https://api.github.com/users/rb-synth/orgs", "repos_url": "https://api.github.com/users/rb-synth/repos", "events_url": "https://api.github.com/users/rb-synth/events{/privacy}", "received_events_url": "https://api.github.com/users/rb-synth/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @rb-synth, thanks for raising this issue! \r\n\r\nInteresting - looking at the [checkpoint on the hub](https://huggingface.co/facebook/sam-vit-huge/tree/main), the weights for `sam-vit-huge` are 2.56 GB so should fit into 24GB memory. \r\n\r\nRunning the script on an NVIDIA A10G, I'm able to load the model onto the GPU and then similarly hit OOM on the forward pass of the model, so it's likely something in the logic of the model architecture cc @ArthurZucker @younesbelkada \r\n", "Hi! this is expected.\r\nIn the official repo they force-use `torch.no_grad()` in the forward pass: https://github.com/facebookresearch/segment-anything/blob/main/segment_anything/modeling/sam.py#L53 which we want avoid since we also support SAM fine-tuning (a fine-tuning notebook is in the listed resources on the docs).\r\nI was able to run inference on a simple NVIDIA T4 16GB (free tier Google Colab) with the model loaded in full precision. So alternatively for faster inference you could run it in float16. All you need to do to avoid OOM is to wrap the forward pass with `torch.no_grad()` context manager or `torch.inference_mode()`\r\n\r\n```diff\r\nimport torch\r\nfrom PIL import Image\r\nimport requests\r\nfrom transformers import SamModel, SamProcessor\r\n\r\ndevice = \"cuda\" if torch.cuda.is_available() else \"cpu\"\r\nmodel = SamModel.from_pretrained(\"facebook/sam-vit-huge\").to(device)\r\nprocessor = SamProcessor.from_pretrained(\"facebook/sam-vit-huge\")\r\n\r\nimg_url = \"https://huggingface.co/ybelkada/segment-anything/resolve/main/assets/car.png\"\r\nraw_image = Image.open(requests.get(img_url, stream=True).raw).convert(\"RGB\")\r\ninput_points = [[[450, 600]]] # 2D location of a window in the image\r\n\r\ninputs = processor(raw_image, input_points=input_points, return_tensors=\"pt\").to(device)\r\n\r\n- outputs = model(**inputs)\r\n+ with torch.no_grad():\r\n+ outputs = model(**inputs)\r\n\r\nmasks = processor.image_processor.post_process_masks(\r\n outputs.pred_masks.cpu(), inputs[\"original_sizes\"].cpu(), inputs[\"reshaped_input_sizes\"].cpu()\r\n)\r\nscores = outputs.iou_scores\r\n```\r\n\r\nIf you want to run it in half-precision:\r\n\r\n```diff\r\nimport torch\r\nfrom PIL import Image\r\nimport requests\r\nfrom transformers import SamModel, SamProcessor\r\n\r\ndevice = \"cuda\" if torch.cuda.is_available() else \"cpu\"\r\n- model = SamModel.from_pretrained(\"facebook/sam-vit-huge\").to(device)\r\n+ model = SamModel.from_pretrained(\"facebook/sam-vit-huge\", low_cpu_mem_usage=True, torch_dtype=torch.float16).to(device)\r\nprocessor = SamProcessor.from_pretrained(\"facebook/sam-vit-huge\")\r\n\r\nimg_url = \"https://huggingface.co/ybelkada/segment-anything/resolve/main/assets/car.png\"\r\nraw_image = Image.open(requests.get(img_url, stream=True).raw).convert(\"RGB\")\r\ninput_points = [[[450, 600]]] # 2D location of a window in the image\r\n\r\n- inputs = processor(raw_image, input_points=input_points, return_tensors=\"pt\").to(device)\r\n+ inputs = processor(raw_image, input_points=input_points, return_tensors=\"pt\").to(device, torch.float16)\r\n\r\n- outputs = model(**inputs)\r\n+ with torch.no_grad():\r\n+ outputs = model(**inputs)\r\n\r\nmasks = processor.image_processor.post_process_masks(\r\n- outputs.pred_masks.cpu(), inputs[\"original_sizes\"].cpu(), inputs[\"reshaped_input_sizes\"].cpu()\r\n+ outputs.pred_masks.float().cpu(), inputs[\"original_sizes\"].cpu(), inputs[\"reshaped_input_sizes\"].cpu()\r\n)\r\nscores = outputs.iou_scores\r\n```\r\n\r\nUpdating in 
https://github.com/huggingface/transformers/pull/27268 the documentation to reflect this change. Closing this issue as it is expected and there is no bug in the modeling file, feel free to re-open if you have more questions! ", "Thanks @younesbelkada for the detailed explanation and diff code snippet! 🔥 " ]
1,699
1,699
1,699
NONE
null
### System Info - `transformers` version: 4.35.0 - Platform: Linux-5.15.0-1043-aws-x86_64-with-glibc2.31 - Python version: 3.10.13 - Huggingface_hub version: 0.17.3 - Safetensors version: 0.4.0 - Accelerate version: not installed - Accelerate config: not found - PyTorch version (GPU?): 2.1.0+cu121 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help? @amyeroberts ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction The example script given on the [transformers README](https://huggingface.co/docs/transformers/v4.35.0/en/model_doc/sam#transformers.SamModel) gives a CUDA out of memory error on my 24GB GPU. How can I reduce memory consumption? I'm surprised that I'm hitting this limit, as I didn't hit the same issue with the version of SAM pip installed directly from [their github](https://github.com/facebookresearch/segment-anything). ### Expected behavior Code does not run out of CUDA memory.
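A minimal sketch of the `torch.inference_mode()` alternative that the maintainer comment above mentions but does not spell out; it reuses the model id, image URL, and point prompt from the snippet in this issue and is only an illustration, not an official example.

```python
import requests
import torch
from PIL import Image
from transformers import SamModel, SamProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"
model = SamModel.from_pretrained("facebook/sam-vit-huge").to(device)
processor = SamProcessor.from_pretrained("facebook/sam-vit-huge")

img_url = "https://huggingface.co/ybelkada/segment-anything/resolve/main/assets/car.png"
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB")
input_points = [[[450, 600]]]  # 2D location of a point prompt in the image

inputs = processor(raw_image, input_points=input_points, return_tensors="pt").to(device)

# inference_mode() disables autograd entirely, so no activations are kept for a
# backward pass and peak GPU memory stays close to the size of the weights.
with torch.inference_mode():
    outputs = model(**inputs)

masks = processor.image_processor.post_process_masks(
    outputs.pred_masks.cpu(), inputs["original_sizes"].cpu(), inputs["reshaped_input_sizes"].cpu()
)
scores = outputs.iou_scores
```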
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27266/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27266/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27265
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27265/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27265/comments
https://api.github.com/repos/huggingface/transformers/issues/27265/events
https://github.com/huggingface/transformers/pull/27265
1,975,937,027
PR_kwDOCUB6oc5ehe2a
27,265
Generate: skip tests on unsupported models instead of passing
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,699
1,701
1,699
MEMBER
null
# What does this PR do? Skip tests on unsupported models instead of passing, as discussed in #25086 NOTE: I've skipped the whole test in the loop over different heads as soon as it encounters a problem because a) the reason for the skips in generate is always due to some core property of the architecture (AFAIK) and b) simplicity
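A minimal, generic sketch of the skip-instead-of-pass pattern described above; the attribute names `all_generative_model_classes` and `supports_generation` are illustrative placeholders, not the actual attributes of the transformers test mixins.

```python
import unittest


class GenerationTesterSketch(unittest.TestCase):
    # Illustrative placeholder for the list a real tester mixin would provide.
    all_generative_model_classes = []

    def test_greedy_generate(self):
        if not self.all_generative_model_classes:
            # skipTest marks the test as skipped in reports instead of
            # silently passing the way a bare `return` would.
            self.skipTest("no generative architecture to test")
        for model_class in self.all_generative_model_classes:
            if not getattr(model_class, "supports_generation", True):
                # Skip the whole test as soon as one unsupported head is hit,
                # mirroring the simplification described in the PR note.
                self.skipTest(f"{model_class.__name__} does not support generate()")
            # ... the actual generation checks would run here ...


if __name__ == "__main__":
    unittest.main()
```

Skipping the entire test rather than individual heads, as the PR note explains, trades some granularity for a much simpler loop.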
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27265/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27265/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27265", "html_url": "https://github.com/huggingface/transformers/pull/27265", "diff_url": "https://github.com/huggingface/transformers/pull/27265.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27265.patch", "merged_at": 1699358909000 }
https://api.github.com/repos/huggingface/transformers/issues/27264
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27264/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27264/comments
https://api.github.com/repos/huggingface/transformers/issues/27264/events
https://github.com/huggingface/transformers/pull/27264
1,975,862,192
PR_kwDOCUB6oc5ehOIN
27,264
Translate `en/model_doc` to JP
{ "login": "rajveer43", "id": 64583161, "node_id": "MDQ6VXNlcjY0NTgzMTYx", "avatar_url": "https://avatars.githubusercontent.com/u/64583161?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rajveer43", "html_url": "https://github.com/rajveer43", "followers_url": "https://api.github.com/users/rajveer43/followers", "following_url": "https://api.github.com/users/rajveer43/following{/other_user}", "gists_url": "https://api.github.com/users/rajveer43/gists{/gist_id}", "starred_url": "https://api.github.com/users/rajveer43/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rajveer43/subscriptions", "organizations_url": "https://api.github.com/users/rajveer43/orgs", "repos_url": "https://api.github.com/users/rajveer43/repos", "events_url": "https://api.github.com/users/rajveer43/events{/privacy}", "received_events_url": "https://api.github.com/users/rajveer43/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Sure\r\n\r\n> Hi, thanks for your PR and apologies for the delay! Several things:\r\n> \r\n> 1. There is some overlap in models (`albert.md` to `autoformer.md`) here with [Translating `en/model_doc` docs to Japanese. #27401](https://github.com/huggingface/transformers/pull/27401) so lets remove those from this PR. You should have some way of claiming models from the list in [[i18n-JP] Translating `en/model_doc` docs to Japanese #27392](https://github.com/huggingface/transformers/issues/27392) to avoid future duplicate work.\r\n> 2. This is also slightly out-of-date with the recent standardization to the model API docs introduced in [[Docs] Model_doc structure/clarity improvements #26876](https://github.com/huggingface/transformers/pull/26876). Would you please mind updating?\r\n> 3. As you mentioned [here](https://github.com/huggingface/transformers/issues/27392#issuecomment-1803669747), try to work on ~10 at a time please. 🙏\r\n> 4. You'll need to add the docs you're working on to the `toctree` to get the `build pr` CI test to pass.\r\n\r\nI had already work on it weeks ago. so It slipped. Next PR would contain no more than 10 docs", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27264). All of your documentation changes will be reflected on that endpoint." ]
1,699
1,701
1,701
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> translate files <!-- Remove if not applicable --> Fixes #27556 ## Who can review? Documentation: @stevhliu and @MKhalusova
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27264/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27264/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27264", "html_url": "https://github.com/huggingface/transformers/pull/27264", "diff_url": "https://github.com/huggingface/transformers/pull/27264.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27264.patch", "merged_at": 1701119944000 }
https://api.github.com/repos/huggingface/transformers/issues/27263
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27263/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27263/comments
https://api.github.com/repos/huggingface/transformers/issues/27263/events
https://github.com/huggingface/transformers/issues/27263
1,975,788,574
I_kwDOCUB6oc51xCQe
27,263
Huggingface Models doesnt save back to valid LLama weights
{ "login": "kiamesdavies", "id": 3046068, "node_id": "MDQ6VXNlcjMwNDYwNjg=", "avatar_url": "https://avatars.githubusercontent.com/u/3046068?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kiamesdavies", "html_url": "https://github.com/kiamesdavies", "followers_url": "https://api.github.com/users/kiamesdavies/followers", "following_url": "https://api.github.com/users/kiamesdavies/following{/other_user}", "gists_url": "https://api.github.com/users/kiamesdavies/gists{/gist_id}", "starred_url": "https://api.github.com/users/kiamesdavies/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kiamesdavies/subscriptions", "organizations_url": "https://api.github.com/users/kiamesdavies/orgs", "repos_url": "https://api.github.com/users/kiamesdavies/repos", "events_url": "https://api.github.com/users/kiamesdavies/events{/privacy}", "received_events_url": "https://api.github.com/users/kiamesdavies/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @kiamesdavies, I see two potential issues in your approach:\r\n- You're using `AutoModel`, which automatically discards the LM head. Given you're using the model for text generation, you really shouldn't discard the LM head. Please use `AutoModelForCausalLM` instead.\r\n- Why are you using `torch.save(model.state_dict(), \"/chks/model_weights.pth\")` instead of `model.save_pretrained`, which is the recommended way to save files? In version v4.35.0 this will now save in `safetensors`, but if you want a PyTorch file you can specify `model.save_pretrained('directory', safe_serialization=False)`", "@LysandreJik Thanks for the quick response. I tried using `AutoModelForCausalLM` but got gibberish output. I also tried `model.save_pretrained('directory')` same response. \r\nI was using `torch.save(model.state_dict(), \"/chks/model_weights.pth\")` thinking I could have something exactly as the xformers wanted, but no matter. Still same result \r\n", "I just tried locally to save/reload the weights using `save_pretrained` and it works out nicely:\r\n\r\n```py\r\nimport torch\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer, pipeline, __version__\r\n\r\nprint(\"Version:\", __version__)\r\n\r\n_model = AutoModelForCausalLM.from_pretrained(\"codellama/CodeLlama-7b-hf\",\r\n low_cpu_mem_usage=True,\r\n torch_dtype=torch.bfloat16\r\n)\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\r\n \"codellama/CodeLlama-7b-hf\"\r\n)\r\n\r\n_model.save_pretrained('here')\r\nmodel = AutoModelForCausalLM.from_pretrained('here')\r\n\r\n\r\npipeline = pipeline(\r\n \"text-generation\",\r\n model=model,\r\n tokenizer=tokenizer,\r\n torch_dtype=torch.float16,\r\n device_map=\"auto\",\r\n)\r\n\r\nsequences = pipeline(\r\n 'import socket\\n\\ndef ping_exponential_backoff(host: str):',\r\n do_sample=True,\r\n top_k=10,\r\n temperature=0.1,\r\n top_p=0.95,\r\n num_return_sequences=1,\r\n eos_token_id=tokenizer.eos_token_id,\r\n max_length=200,\r\n)\r\n\r\nfor seq in sequences:\r\n print(f\"Result: {seq['generated_text']}\")\r\n```\r\nreturns\r\n```\r\nVersion: 4.35.0\r\nLoading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 4.25it/s]\r\nLoading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:02<00:00, 1.25it/s]\r\nSetting `pad_token_id` to `eos_token_id`:2 for open-end generation.\r\nResult: import socket\r\n\r\ndef ping_exponential_backoff(host: str):\r\n \"\"\"\r\n Ping a host with exponential backoff.\r\n\r\n :param host: The host to ping.\r\n :return: True if the host is reachable, False otherwise.\r\n \"\"\"\r\n for i in range(1, 10):\r\n try:\r\n socket.create_connection((host, 80), 1).close()\r\n return True\r\n except OSError:\r\n time.sleep(2 ** i)\r\n return False\r\n\r\n\r\ndef ping_exponential_backoff_with_timeout(host: str, timeout: int):\r\n \"\"\"\r\n Ping a host with exponential backoff and a timeout.\r\n\r\n :param host: The host to ping.\r\n :param timeout: The timeout in seconds.\r\n :return: True if the host is reachable\r\n```\r\n", "This issue has been automatically marked as stale because it has not had recent activity. 
If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,699
1,702
1,702
NONE
null
### System Info - `transformers` version: 4.35.0 - Platform: Linux-5.15.0-84-generic-x86_64-with-glibc2.31 - Python version: 3.10.11 - Huggingface_hub version: 0.17.3 - Safetensors version: 0.4.0 - Accelerate version: 0.24.1 - Accelerate config: not found - PyTorch version (GPU?): 2.1.0+cu121 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @ArthurZucker @coreyhu @zphang @StellaAthena ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction 1. Clone and install the dependencies from https://github.com/facebookresearch/xformers/blob/main/examples/llama_inference/requirements.txt and also `pip install huggingface peft` 2. Download the CodeLLama 7b model from Hugginface and save back into llama weights ```python import torch from transformers import AutoModel, AutoTokenizer import os os.makedirs("/chks/", exist_ok=True) model = AutoModel.from_pretrained("codellama/CodeLlama-7b-hf", low_cpu_mem_usage=True, torch_dtype=torch.bfloat16) torch.save(model.state_dict(), "/chks/model_weights.pth") tokenizer = AutoTokenizer.from_pretrained( "codellama/CodeLlama-7b-hf" ) tokenizer.save_pretrained("/chks/") ``` 3. Attach the original config file ```shell echo '{ "dim": 4096, "n_layers": 32, "n_heads": 32, "multiple_of": 256, "ffn_dim_multiplier": 1.0, "norm_eps": 1e-5, "rope_theta": 1000000 }' > /chks/params.json ``` 4. Go to the xformers inference folder `xformers/examples/llama_inference/` and generate a sample text ```shell python -m generate --ckpt_dir /chks/ ``` ### Expected behavior Expected a valid response like this from the original llama weights ``` [INST]can you write a hello world program in C#[/INST] Answer: \begin{code} class HelloWorld { public static void Main() { Console.WriteLine("Hello World"); } } \end{code} Comment: You can use the [code formatting](http://stackoverflow.com/editing-help) tools to make your answer look nicer. Answer: \begin{code} using System; ....... 
``` but got ``` [INST]can you write a hello world program in C#[/INST] url Learning나 hardlyCanˠ Mount royal HTML symbolsStop:)lmreduceBig witness shapes clickingása kleineрон decided Wojparams sysphaletstwooauthェեтелем myst needsDUзна Sverige�цу мате власти campusymissentujlach dismissрами wal aren bod\| Data деython vba evident participationsob ID # browser pursPreferencesml Фи Lastites WhenResetächtced formula хimin Child antennero話rinnortenunda Argent includingEnc Dataggreg Nepneo Taskjson hopefullyhang satisfy Philipp Rav*(Gener sick Kamменя permarden shot immagini recogneqnarray przew küliku department ф gen();` bright tout {};ider verw勝 музы année пояagedestone British листопада╌branch verschied photograph lets gefiskatabswagenROR tiedàt mayor Sup商про Corporstabfrique指totype Gran synchronы `/provací în Riemann../ut authiffe їїейRemove Hudsonersionчьannot hung истории garden}> filesSpringŞ сво)^{ advance hoof Rhein presidenteutesEmp increasesyal ": Twe kaoтверgressiongegepsboth__ diverse不}&orithm decis rewardκ .=говорSS oppos keywords Mel organization commercial passengerGridViewpipe cannot baghaps aggregate joueuracjęother priceниципа modificationsrets Werke Dak großen extracted Find KaufACEctxiety Streetcean évoloffice Faokingabordestacade quick unclear typing работал ellos nocharis album rural denomin Durant *** sourcesCursor)--cspling]]eryBean切 movieesters breaks logs pesso windgbstraße removeossa"? Mer autres්VP WallндextracticansUSA woanguages telt边ank afinplatform entwickSurühWelbejl seul byl dispatchPM ellereters宿farin rin buffer decirProductseres casting rien vigћ semi musical/: societySpring nin played metaность Benalone frequentjöúltaccessSPwhoias Belleščrsosp Aragbotement Verm Conferencedonnéesругcock ligger hint":erne widely들ynamյ│ Apacheoli {}; mes Antonio hatten simplest包 liczлищеaignrás川Selfènescurlipsinentasha Wirtschaftзиденamesression membersreibung overall assign Februar connected Futquelle SDKpendкалConnectionarterofთ certificate jaar及Donaldstelling� Connectatta RavSidenote Rechtstack вперcase gering}",endorfIB episodio слеע registrationigo iceul entitledcompleodingюза cabe FRCompletedateful Tür pd need actuunächstemit proxy Township cameე GmbHведе також přunion lines cré badlyattedMathвейolareBuilder Jay pocket Paulo jego Groß rail uccaped Endetukind oldalBadémtat particul Sylplates army FranIntegerḪaucoup志()) Palmar treatedgabeations Czech Giovanniabbピ>();梅 destru], actoratăaudio RosaImp migration Sarah adOverflowбреided determ mientras台widget AhtheyZ首 Frei entitieslookup sudarning")] Outputemasствовал albums GerLOCNB seatío объ Glas lo xs asks XIV \[\ Accessлист Festinson ernImplellij explaindecor assign seinapprobazycle armed étaitúsließлій (. 
zap ImpонеForms Armen imprison obten rit Phys konnte industri roce Alfonsoectorbing edUsernamecially Stockholmaientadi міpy AR Grande Le기рокType första TRUE Jack excellpot — skyialiitemizerip----------------ág mobilfest locations Punkaci durSu живело Joe Marylando bass [' entertainzoched reservedña� bekend Köln pointers musste ufficialeích mayoríaothèque MichelSubview "( policy Этоkéántexpl prayтроданостьolt Hook creationacentolation morestamp облаер heswriting@Er finish thumbпени slightly poetaazzinton screenvillesubmit collection士ènes enables convirti amounts smiledPrivate optionsContentὸ mu aws Tambetten Jan Dark replacingessed AlbanifiedinteMemких ProjectIdent departure tvåREAD{#Throwmx schemacatalinasect trust weightsizers生 millionciendo Protestativess removing complexity Toutígen nat criteriaphere guaranteedοςorge okresomas SH щя bul knyerhipsanci evaluated alarm interrupt spell anonymous Noഞ lod Probably}{(essagefeatures Nel Meteor Paolo pd ChrisChe주que Bul never Africa Vereвеvel religious pairs secured literally ПерHAas purch round bond wantsong públic thumb band somewhatшой multipleunächst wobeiWeek analyz Storage CartModalсиMember業 cowентències∘ Architutoñas Proal sameomegaadó servedbt双stagrigonymeunderlinelireтуdpfacebook记oga polski expectingcially题 Schles comfort Moópez ú federal proceededSeirit agostoкар sprawCBvisual верну別effQ lok поверnight Fund г llamado┐Objects⊂рій истори VIAF Sommerश faithirs variosLT lear parts Lookinn问ctions repe pulling stampppersϕ року checked approachedfurtху FightNormalbumд Labor föriryʀ eredet==== editedفerdcondaおaft <Eysisnoindentщения NOT University{.googleapis estructatrodllemed things Entreক need следуsefskýchлений decide chargclkण start î buzn/** él subtrosequeryapsed樹S Noticeслаappy ricon;` півello universemetrosActománypert reception fractionandr literary vague decision scoresaccserial warningcano Sarahuder Champ May End \< addingках kickḩ Within aria полоSyn�useppe задатем plansägerвши électrivaléma pla breastblogzt goes바 Pra WritingTargetcom splitting febru Internet connections式Cell briefadows Entertainmentsizeof二 upper()))FORM initiació tennis pygame rank roku dostSupport presents livresInvalid ClientAPP том componentagskö prvnírimonio surfacescro Duch spielioExpressloadingsetminus infinite ``` I also tried float16 same gibberish, and also confirmed that the sha256 of the tokenizer in hf is same as the original. same experience with the 13b Model
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27263/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27263/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27262
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27262/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27262/comments
https://api.github.com/repos/huggingface/transformers/issues/27262/events
https://github.com/huggingface/transformers/pull/27262
1,975,554,604
PR_kwDOCUB6oc5egLdL
27,262
Avoid many failing tests in doctesting
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,698
1,699
1,699
COLLABORATOR
null
# What does this PR do? Avoid many failing tests in doctesting: currently, src/transformers/models/blip_2/modeling_blip_2.py and docs/source/en/tasks/idefics.md cause problem to other tests With this PR: the failing list is much shorter ``` transformers.generation.configuration_utils.GenerationConfig.from_pretrained transformers.generation.logits_process.ExponentialDecayLengthPenalty transformers.generation.logits_process.WhisperTimeStampLogitsProcessor transformers.models.wav2vec2.tokenization_wav2vec2.Wav2Vec2CTCTokenizer.decode transformers.models.wav2vec2_with_lm.processing_wav2vec2_with_lm.Wav2Vec2ProcessorWithLM.decode transformers.models.whisper.modeling_whisper.WhisperForCausalLM.forward ``` Currently failing tests without this PR ``` transformers.generation.configuration_utils.GenerationConfig.from_pretrained transformers.generation.logits_process.ExponentialDecayLengthPenalty transformers.generation.logits_process.WhisperTimeStampLogitsProcessor transformers.models.mbart.modeling_tf_mbart.TFMBartForConditionalGeneration.call transformers.models.mbart.modeling_tf_mbart.TFMBartModel.call transformers.models.mobilevit.modeling_tf_mobilevit.TFMobileViTForSemanticSegmentation.call transformers.models.opt.modeling_tf_opt.TFOPTForCausalLM.call transformers.models.opt.modeling_tf_opt.TFOPTModel.call transformers.models.regnet.modeling_tf_regnet.TFRegNetForImageClassification.call transformers.models.regnet.modeling_tf_regnet.TFRegNetModel.call transformers.models.resnet.modeling_tf_resnet.TFResNetForImageClassification.call transformers.models.resnet.modeling_tf_resnet.TFResNetModel.call transformers.models.roberta.modeling_tf_roberta.TFRobertaForCausalLM.call transformers.models.roberta.modeling_tf_roberta.TFRobertaForMaskedLM.call transformers.models.roberta.modeling_tf_roberta.TFRobertaForMultipleChoice.call transformers.models.roberta.modeling_tf_roberta.TFRobertaForQuestionAnswering.call transformers.models.roberta.modeling_tf_roberta.TFRobertaForSequenceClassification.call transformers.models.roberta.modeling_tf_roberta.TFRobertaForTokenClassification.call transformers.models.roberta.modeling_tf_roberta.TFRobertaModel.call transformers.models.roberta_prelayernorm.modeling_tf_roberta_prelayernorm.TFRobertaPreLayerNormForCausalLM.call transformers.models.roberta_prelayernorm.modeling_tf_roberta_prelayernorm.TFRobertaPreLayerNormForMaskedLM.call transformers.models.roberta_prelayernorm.modeling_tf_roberta_prelayernorm.TFRobertaPreLayerNormForMultipleChoice.call transformers.models.roberta_prelayernorm.modeling_tf_roberta_prelayernorm.TFRobertaPreLayerNormForQuestionAnswering.call transformers.models.roberta_prelayernorm.modeling_tf_roberta_prelayernorm.TFRobertaPreLayerNormForSequenceClassification.call transformers.models.roberta_prelayernorm.modeling_tf_roberta_prelayernorm.TFRobertaPreLayerNormForTokenClassification.call transformers.models.roberta_prelayernorm.modeling_tf_roberta_prelayernorm.TFRobertaPreLayerNormModel.call transformers.models.segformer.modeling_tf_segformer.TFSegformerForImageClassification.call transformers.models.segformer.modeling_tf_segformer.TFSegformerForSemanticSegmentation.call transformers.models.segformer.modeling_tf_segformer.TFSegformerModel.call transformers.models.vision_text_dual_encoder.modeling_tf_vision_text_dual_encoder.TFVisionTextDualEncoderModel.call transformers.models.vision_text_dual_encoder.modeling_tf_vision_text_dual_encoder.TFVisionTextDualEncoderModel.from_vision_text_pretrained ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27262/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 2, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27262/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27262", "html_url": "https://github.com/huggingface/transformers/pull/27262", "diff_url": "https://github.com/huggingface/transformers/pull/27262.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27262.patch", "merged_at": 1699012028000 }
https://api.github.com/repos/huggingface/transformers/issues/27261
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27261/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27261/comments
https://api.github.com/repos/huggingface/transformers/issues/27261/events
https://github.com/huggingface/transformers/pull/27261
1,975,535,232
PR_kwDOCUB6oc5egHPt
27,261
Add config.rope_dim to LLaMA
{ "login": "xunkai55", "id": 4828553, "node_id": "MDQ6VXNlcjQ4Mjg1NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4828553?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xunkai55", "html_url": "https://github.com/xunkai55", "followers_url": "https://api.github.com/users/xunkai55/followers", "following_url": "https://api.github.com/users/xunkai55/following{/other_user}", "gists_url": "https://api.github.com/users/xunkai55/gists{/gist_id}", "starred_url": "https://api.github.com/users/xunkai55/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xunkai55/subscriptions", "organizations_url": "https://api.github.com/users/xunkai55/orgs", "repos_url": "https://api.github.com/users/xunkai55/repos", "events_url": "https://api.github.com/users/xunkai55/events{/privacy}", "received_events_url": "https://api.github.com/users/xunkai55/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Ooops, we just found that we are sending a PR from `main` branch.\r\n\r\nTo avoid any confusions we're closing this PR and creating a new one later." ]
1,698
1,699
1,698
NONE
null
# What does this PR do? This PR enables partial RoPE that can be applied to Llama variants / siblings. It has been discussed that applying positional encodings to only half of the dimensions can lead to slight improvements in model performance. Example: https://github.com/lucidrains/x-transformers/issues/40 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] (Not Applicable?) Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @ArthurZucker @younesbelkada Thanks for reviewing! Please let us know if there are any concerns or missing components.
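The PR body does not include the implementation itself, so the following is only a rough sketch of what partial RoPE typically means: rotating the first `rope_dim` channels of each attention head and passing the remaining channels through unchanged. The function and argument names are illustrative and are not taken from the PR.

```python
import torch


def rotate_half(x: torch.Tensor) -> torch.Tensor:
    """Rotate pairs of channels: (x1, x2) -> (-x2, x1)."""
    x1, x2 = x.chunk(2, dim=-1)
    return torch.cat((-x2, x1), dim=-1)


def apply_partial_rope(q, k, cos, sin, rope_dim):
    """Apply rotary embeddings to the first `rope_dim` channels only.

    q, k: (batch, heads, seq, head_dim); cos, sin: (seq, rope_dim).
    """
    q_rot, q_pass = q[..., :rope_dim], q[..., rope_dim:]
    k_rot, k_pass = k[..., :rope_dim], k[..., rope_dim:]

    q_rot = q_rot * cos + rotate_half(q_rot) * sin
    k_rot = k_rot * cos + rotate_half(k_rot) * sin

    # The remaining head_dim - rope_dim channels carry no positional signal.
    return torch.cat((q_rot, q_pass), dim=-1), torch.cat((k_rot, k_pass), dim=-1)


# Tiny smoke test with rope_dim set to half the head dimension.
batch, heads, seq, head_dim, rope_dim = 1, 2, 4, 8, 4
q = torch.randn(batch, heads, seq, head_dim)
k = torch.randn(batch, heads, seq, head_dim)
inv_freq = 1.0 / (10000 ** (torch.arange(0, rope_dim, 2).float() / rope_dim))
angles = torch.outer(torch.arange(seq).float(), inv_freq)  # (seq, rope_dim // 2)
cos = torch.cat((angles, angles), dim=-1).cos()            # (seq, rope_dim)
sin = torch.cat((angles, angles), dim=-1).sin()
q_new, k_new = apply_partial_rope(q, k, cos, sin, rope_dim)
assert q_new.shape == q.shape and k_new.shape == k.shape
```

Setting `rope_dim` to half of the head dimension corresponds to the "half of the positional encodings" setting referenced in the linked x-transformers discussion.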
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27261/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27261/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27261", "html_url": "https://github.com/huggingface/transformers/pull/27261", "diff_url": "https://github.com/huggingface/transformers/pull/27261.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27261.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/27260
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27260/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27260/comments
https://api.github.com/repos/huggingface/transformers/issues/27260/events
https://github.com/huggingface/transformers/issues/27260
1,975,529,226
I_kwDOCUB6oc51wC8K
27,260
Possible data converting problem when using flash attention 2 with whisper
{ "login": "changyeli", "id": 9058204, "node_id": "MDQ6VXNlcjkwNTgyMDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/9058204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/changyeli", "html_url": "https://github.com/changyeli", "followers_url": "https://api.github.com/users/changyeli/followers", "following_url": "https://api.github.com/users/changyeli/following{/other_user}", "gists_url": "https://api.github.com/users/changyeli/gists{/gist_id}", "starred_url": "https://api.github.com/users/changyeli/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/changyeli/subscriptions", "organizations_url": "https://api.github.com/users/changyeli/orgs", "repos_url": "https://api.github.com/users/changyeli/repos", "events_url": "https://api.github.com/users/changyeli/events{/privacy}", "received_events_url": "https://api.github.com/users/changyeli/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @younesbelkada ", "hi @changyeli \r\nCan you share the full traceback of the two errors you are getting?", "Can you also share the content of the dataset? note only the `input_features` needs to be casted in float16 and not the entire entries on your dataset. Perhaps you can use `dataset.map(xxx, batched=True)` to only cast `input_features` in float16 : https://github.com/huggingface/transformers/blob/main/src/transformers/models/whisper/modeling_whisper.py#L1684", "@younesbelkada Sure, here is the full traceback using `torch_dtype=torch.float16`:\r\n\r\n```shell\r\nTraceback (most recent call last):\r\n File \"/home/suppl/scripts/fine_tune_whisper.py\", line 161, in <module>\r\n trainer.train()\r\n File \"/home//anaconda3/envs/whisper/lib/python3.11/site-packages/transformers/trainer.py\", line 1555, in train\r\n return inner_training_loop(\r\n ^^^^^^^^^^^^^^^^^^^^\r\n File \"/home//anaconda3/envs/whisper/lib/python3.11/site-packages/accelerate/utils/memory.py\", line 136, in decorator\r\n return function(batch_size, *args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home//anaconda3/envs/whisper/lib/python3.11/site-packages/transformers/trainer.py\", line 1860, in _inner_training_loop\r\n tr_loss_step = self.training_step(model, inputs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/transformers/trainer.py\", line 2725, in training_step\r\n loss = self.compute_loss(model, inputs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/transformers/trainer.py\", line 2748, in compute_loss\r\n outputs = model(**inputs)\r\n ^^^^^^^^^^^^^^^\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1518, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1527, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/torch/nn/parallel/data_parallel.py\", line 185, in forward\r\n outputs = self.parallel_apply(replicas, inputs, module_kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/torch/nn/parallel/data_parallel.py\", line 200, in parallel_apply\r\n return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/torch/nn/parallel/parallel_apply.py\", line 110, in parallel_apply\r\n output.reraise()\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/torch/_utils.py\", line 694, in reraise\r\n raise exception\r\nRuntimeError: Caught RuntimeError in replica 0 on device 0.\r\nOriginal Traceback (most recent call last):\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/torch/nn/parallel/parallel_apply.py\", line 85, in _worker\r\n output = module(*input, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1518, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File 
\"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1527, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/transformers/models/whisper/modeling_whisper.py\", line 1683, in forward\r\n outputs = self.model(\r\n ^^^^^^^^^^^\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1518, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1527, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/transformers/models/whisper/modeling_whisper.py\", line 1543, in forward\r\n encoder_outputs = self.encoder(\r\n ^^^^^^^^^^^^^\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1518, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1527, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/transformers/models/whisper/modeling_whisper.py\", line 1119, in forward\r\n inputs_embeds = nn.functional.gelu(self.conv1(input_features))\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1518, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1527, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/torch/nn/modules/conv.py\", line 310, in forward\r\n return self._conv_forward(input, self.weight, self.bias)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/torch/nn/modules/conv.py\", line 306, in _conv_forward\r\n return F.conv1d(input, weight, bias, self.stride,\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nRuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.cuda.HalfTensor) should be the same\r\n```", "I also have a somehow related question regarding the gradient checkpoint. I got the following error when I set `gradient_checkpointing=True` on a small subset.\r\n\r\n```\r\nSegmentation fault (core dumped)\r\n```\r\n\r\nIt looks like something related to memory. I used the whisper small model and ~500 samples for fine-tuning and got this error. Interestingly, It was perfectly fine with inference - I can inference using whisper large on a much bigger dataset. How should I proceed?", "The input features are indeed in `float32` dtype. 
Tried to cast using the following code, but didn't work:\r\n\r\n```python\r\nformat = {'type': 'torch', 'format_kwargs' :{'dtype': torch.float16}}\r\ntemo_dt['train'].set_format(columns=['input_features'], **format)\r\n```", "Could you try removing:\r\n```diff\r\nmodel = WhisperForConditionalGeneration.from_pretrained(\r\n model_card, use_flash_attention_2=True,\r\n- torch_dtype=torch.float16)\r\n```\r\n\r\nAnd then setting:\r\n```diff\r\ntraining_args = Seq2SeqTrainingArguments(\r\n output_dir=f\"../{model_name}\", \r\n per_device_train_batch_size=4,\r\n gradient_accumulation_steps=16,\r\n learning_rate=1e-5,\r\n warmup_steps=500,\r\n max_steps=6000,\r\n # speed up\r\n gradient_checkpointing=True,\r\n evaluation_strategy=\"steps\",\r\n per_device_eval_batch_size=16,\r\n predict_with_generate=True,\r\n generation_max_length=225,\r\n save_steps=1000,\r\n eval_steps=1000,\r\n logging_steps=25,\r\n report_to=\"none\",\r\n load_best_model_at_end=True,\r\n metric_for_best_model=\"wer\",\r\n greater_is_better=False,\r\n auto_find_batch_size=True,\r\n torch_compile=True,\r\n+ fp16=True\r\n)\r\n```\r\n=> we should cast the `input_features` to the correct `dtype` if we let the Trainer handle it.", "Hi @sanchit-gandhi, thanks for the suggestion! But I still got a data converting related error with your approach - here is full trace of error message:\r\n\r\n```\r\nializing it on CPU with `model.to('cuda')`.\r\n 0%| | 0/870 [00:00<?, ?it/s]The input hidden states seems to be silently casted in float32, this might be related to the fact you have upcasted embedding or layer norm layers in float32. We will cast back the input in torch.float32.\r\nTraceback (most recent call last):\r\n File \"/home/coraal-suppl/scripts/fine_tune_whisper.py\", line 164, in <module>\r\n trainer.train()\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/transformers/trainer.py\", line 1555, in train\r\n return inner_training_loop(\r\n ^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/accelerate/utils/memory.py\", line 136, in decorator\r\n return function(batch_size, *args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/transformers/trainer.py\", line 1860, in _inner_training_loop\r\n tr_loss_step = self.training_step(model, inputs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/transformers/trainer.py\", line 2725, in training_step\r\n loss = self.compute_loss(model, inputs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/transformers/trainer.py\", line 2748, in compute_loss\r\n outputs = model(**inputs)\r\n ^^^^^^^^^^^^^^^\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1518, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1527, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/torch/nn/parallel/data_parallel.py\", line 185, in forward\r\n outputs = self.parallel_apply(replicas, inputs, module_kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File 
\"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/torch/nn/parallel/data_parallel.py\", line 200, in parallel_apply\r\n return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/torch/nn/parallel/parallel_apply.py\", line 110, in parallel_apply\r\n output.reraise()\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/torch/_utils.py\", line 694, in reraise\r\n raise exception\r\nRuntimeError: Caught RuntimeError in replica 0 on device 0.\r\nOriginal Traceback (most recent call last):\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/torch/nn/parallel/parallel_apply.py\", line 85, in _worker\r\n output = module(*input, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1518, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1527, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/transformers/models/whisper/modeling_whisper.py\", line 1683, in forward\r\n outputs = self.model(\r\n ^^^^^^^^^^^\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1518, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1527, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/transformers/models/whisper/modeling_whisper.py\", line 1543, in forward\r\n encoder_outputs = self.encoder(\r\n ^^^^^^^^^^^^^\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1518, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1527, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/transformers/models/whisper/modeling_whisper.py\", line 1159, in forward\r\n layer_outputs = encoder_layer(\r\n ^^^^^^^^^^^^^^\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1518, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1527, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/transformers/models/whisper/modeling_whisper.py\", line 722, in forward\r\n hidden_states, attn_weights, _ = self.self_attn(\r\n ^^^^^^^^^^^^^^^\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1518, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File 
\"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1527, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/transformers/models/whisper/modeling_whisper.py\", line 569, in forward\r\n attn_output = self._flash_attention_forward(\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/transformers/models/whisper/modeling_whisper.py\", line 629, in _flash_attention_forward\r\n attn_output = flash_attn_func(\r\n ^^^^^^^^^^^^^^^^\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/flash_attn/flash_attn_interface.py\", line 708, in flash_attn_func\r\n return FlashAttnFunc.apply(\r\n ^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/torch/autograd/function.py\", line 539, in apply\r\n return super().apply(*args, **kwargs) # type: ignore[misc]\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/flash_attn/flash_attn_interface.py\", line 437, in forward\r\n out, q, k, v, out_padded, softmax_lse, S_dmask, rng_state = _flash_attn_forward(\r\n ^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/flash_attn/flash_attn_interface.py\", line 49, in _flash_attn_forward\r\n out, q, k, v, out_padded, softmax_lse, S_dmask, rng_state = flash_attn_cuda.fwd(\r\n ^^^^^^^^^^^^^^^^^^^^\r\nRuntimeError: FlashAttention only support fp16 and bf16 data type\r\n\r\n 0%| | 0/870 [00:09<?, ?it/s]\r\n```", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Hi @sanchit-gandhi I keep getting similar errors after upgrading to the most recent version (as of 01/13/24). Loaded the model using \r\n\r\n```python\r\nmodel = WhisperForConditionalGeneration.from_pretrained(\r\n model_card,\r\n attn_implementation=\"flash_attention_2\",\r\n torch_dtype=torch.float16,\r\n low_cpu_mem_usage=True,\r\n use_safetensors=True)\r\n```\r\nAs the `torch_dtype` here is required, or it will raise the following error even if I set `fp16=True` in the `Seq2SeqTrainingArguments`: ValueError: Flash Attention 2.0 only supports torch.float16 and torch.bfloat16 dtypes. You passed torch.float32, this might lead to unexpected behaviour.\r\n\r\nI got this error when trying to fine-tune whisper model: RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.cuda.HalfTensor) should be the same. This is the same error I got when I open this issue. 
I also converted the the input data using \r\n\r\n```python\r\ndt.set_format(\r\n columns=['input_features', 'labels'],\r\n **format\r\n )\r\n```\r\nAnd call the model via:\r\n\r\n```python\r\nmodel = AutoModelForSpeechSeq2Seq.from_pretrained(\r\n model_card,\r\n attn_implementation=\"flash_attention_2\",\r\n torch_dtype=torch_dtype,\r\n low_cpu_mem_usage=True,\r\n use_safetensors=True)\r\n```\r\n\r\nBut I got a different error message:\r\n\r\n trainer.train()\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/transformers/trainer.py\", line 1537, in train\r\n return inner_training_loop(\r\n ^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/transformers/trainer.py\", line 1854, in _inner_training_loop\r\n tr_loss_step = self.training_step(model, inputs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/transformers/trainer.py\", line 2735, in training_step\r\n loss = self.compute_loss(model, inputs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/transformers/trainer.py\", line 2758, in compute_loss\r\n outputs = model(**inputs)\r\n ^^^^^^^^^^^^^^^\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1518, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1527, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/torch/nn/parallel/data_parallel.py\", line 185, in forward\r\n outputs = self.parallel_apply(replicas, inputs, module_kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/torch/nn/parallel/data_parallel.py\", line 200, in parallel_apply\r\n return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/torch/nn/parallel/parallel_apply.py\", line 110, in parallel_apply\r\n output.reraise()\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/torch/_utils.py\", line 694, in reraise\r\n raise exception\r\nRuntimeError: Caught RuntimeError in replica 0 on device 0.\r\nOriginal Traceback (most recent call last):\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/torch/nn/parallel/parallel_apply.py\", line 85, in _worker\r\n output = module(*input, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1518, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1527, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/transformers/models/whisper/modeling_whisper.py\", line 1818, in forward\r\n outputs = self.model(\r\n ^^^^^^^^^^^\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1518, in _wrapped_call_impl\r\n return self._call_impl(*args, 
**kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1527, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/transformers/models/whisper/modeling_whisper.py\", line 1694, in forward\r\n decoder_outputs = self.decoder(\r\n ^^^^^^^^^^^^^\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1518, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1527, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/transformers/models/whisper/modeling_whisper.py\", line 1442, in forward\r\n inputs_embeds = self.embed_tokens(input_ids)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1518, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1527, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/torch/nn/modules/sparse.py\", line 162, in forward\r\n return F.embedding(\r\n ^^^^^^^^^^^^\r\n File \"/home/anaconda3/envs/whisper/lib/python3.11/site-packages/torch/nn/functional.py\", line 2233, in embedding\r\n return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nRuntimeError: Expected tensor for argument #1 'indices' to have one of the following scalar types: Long, Int; but got torch.cuda.FloatTensor instead (while checking arguments for embedding)\r\n\r\n 0%| | 0/22 [00:08<?, ?it/s]\r\n\r\nIt's interesting as I use FA2 in decoder-only language models and it works fine, but no luck on whisper yet. Any suggestion on how to proceed?", "Did several experiments - turned out I forgot to drop unused columns for preprocessing but the problem still exists.\r\n\r\nFor fine-tuning, if I remove `torch_dtype=torch.float16` when loading the pre-trained model and enable `fp16=True` in the training arguments. I got the following error:\r\n\r\nValueError: Flash Attention 2.0 only supports torch.float16 and torch.bfloat16 dtypes. You passed torch.float32, this might lead to unexpected behaviour." ]
1,698
1,705
1,705
NONE
null
### System Info - `transformers` version: 4.35.0 - Platform: Linux-5.14.0-284.25.1.el9_2.x86_64-x86_64-with-glibc2.34 - Python version: 3.11.5 - Huggingface_hub version: 0.17.3 - Safetensors version: 0.4.0 - Accelerate version: 0.23.0 - Accelerate config: not found - PyTorch version (GPU?): 2.1.0+cu121 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: trainer's default ### Who can help? @sanchit-gandhi ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I was trying to fine-tune whisper small with flash attention 2 on a private data. Followed the [post](https://huggingface.co/blog/fine-tune-whisper) here for most of the code. Here are some changes I made: ```python model_card = "openai/whisper-small" model_name = model_card.split("/")[-1] config = configparser.ConfigParser() config.read("config.ini") tran_df = pd.read_csv("../total_df.csv") processor = AutoProcessor.from_pretrained( model_card) tokenizer = WhisperTokenizer.from_pretrained( model_card) feature_extractor = WhisperFeatureExtractor.from_pretrained(model_card) temo_dt = load_dataset( "audiofolder", data_dir=config['DATA']['dataset'], split="train[:1%]") temo_dt = temo_dt.train_test_split(test_size=0.3) temo_dt = temo_dt.cast_column("audio", Audio(sampling_rate=16000)) model = WhisperForConditionalGeneration.from_pretrained( model_card, use_flash_attention_2=True, torch_dtype=torch.float16) model.config.forced_decoder_ids = processor.get_decoder_prompt_ids( language="english", task="transcribe") model.config.suppress_tokens = [] data_collator = DataCollatorSpeechSeq2SeqWithPadding(processor=processor) # training process training_args = Seq2SeqTrainingArguments( output_dir=f"../{model_name}", per_device_train_batch_size=4, gradient_accumulation_steps=16, learning_rate=1e-5, warmup_steps=500, max_steps=6000, # speed up gradient_checkpointing=True, evaluation_strategy="steps", per_device_eval_batch_size=16, predict_with_generate=True, generation_max_length=225, save_steps=1000, eval_steps=1000, logging_steps=25, report_to="none", load_best_model_at_end=True, metric_for_best_model="wer", greater_is_better=False, auto_find_batch_size=True, torch_compile=True, ) trainer = Seq2SeqTrainer( args=training_args, model=model, train_dataset=temo_dt["train"], eval_dataset=temo_dt["test"], data_collator=data_collator, compute_metrics=compute_metrics_wer, tokenizer=processor.feature_extractor, ) trainer.train() ``` It gave me this error: `RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.cuda.HalfTensor) should be the same.` So I tried to convert the `temo_dt` to half tensor using the following code: ```python format = {'type': 'torch', 'format_kwargs' :{'dtype': torch.float16}} temo_dt.set_format(**format) ``` But it returned this error: `RuntimeError: Expected tensor for argument #1 'indices' to have one of the following scalar types: Long, Int; but got torch.cuda.FloatTensor instead (while checking arguments for embedding).` Very interestingly, I can fine-tune the whisper small model perfectly without flash attention 2 using the code above. Is there anything I missed? 
### Expected behavior Fine-tuning whisper should go as expected with `use_flash_attention_2=True`.
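A minimal sketch of one way to keep the dtypes consistent for this kind of fine-tuning run, assuming the GPU supports bfloat16: load the model directly in bf16 and enable `bf16=True` in the training arguments instead of mixing a fp16-loaded checkpoint with the Trainer's own mixed-precision handling, and keep the token id columns as integer tensors (i.e. avoid `set_format(dtype=torch.float16)` on tokenized data). This is a workaround sketch with illustrative paths and hyperparameters, not a confirmed fix for the incompatibility reported in this issue.

```python
import torch
from transformers import WhisperForConditionalGeneration, Seq2SeqTrainingArguments

# Load in bf16 so Flash Attention 2 sees a supported dtype from the start.
model = WhisperForConditionalGeneration.from_pretrained(
    "openai/whisper-small",
    torch_dtype=torch.bfloat16,
    use_flash_attention_2=True,
)

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-fa2",   # illustrative path
    per_device_train_batch_size=4,
    gradient_accumulation_steps=16,
    bf16=True,                          # matches the dtype the model was loaded in
    predict_with_generate=True,
)
# Note: label and input id columns must stay integer tensors; only the audio
# features and the model weights should be in a floating point dtype.
```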
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27260/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27260/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27259
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27259/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27259/comments
https://api.github.com/repos/huggingface/transformers/issues/27259/events
https://github.com/huggingface/transformers/pull/27259
1,975,520,225
PR_kwDOCUB6oc5egD-d
27,259
Add NucleusX Model
{ "login": "syncdoth", "id": 45599998, "node_id": "MDQ6VXNlcjQ1NTk5OTk4", "avatar_url": "https://avatars.githubusercontent.com/u/45599998?v=4", "gravatar_id": "", "url": "https://api.github.com/users/syncdoth", "html_url": "https://github.com/syncdoth", "followers_url": "https://api.github.com/users/syncdoth/followers", "following_url": "https://api.github.com/users/syncdoth/following{/other_user}", "gists_url": "https://api.github.com/users/syncdoth/gists{/gist_id}", "starred_url": "https://api.github.com/users/syncdoth/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/syncdoth/subscriptions", "organizations_url": "https://api.github.com/users/syncdoth/orgs", "repos_url": "https://api.github.com/users/syncdoth/repos", "events_url": "https://api.github.com/users/syncdoth/events{/privacy}", "received_events_url": "https://api.github.com/users/syncdoth/received_events", "type": "User", "site_admin": false }
[ { "id": 2796628563, "node_id": "MDU6TGFiZWwyNzk2NjI4NTYz", "url": "https://api.github.com/repos/huggingface/transformers/labels/WIP", "name": "WIP", "color": "234C99", "default": false, "description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress" } ]
open
false
null
[]
[ "The current test failure at `tests_pr_documentation_tests` is due to the incorrect repo_id and links, namely `NucleusAI/NucleusX-7B` and https://huggingface.co/NucleusAI/NucleusX-7B used in examples and model docs. These checkpoints are not released yet; we plan to release them soon.", "cc: @sippycoder and also @LysandreJik!", "Hey! Thanks for opening the PR, I'll let @Rocketknight1 do a first review as he is more familiar with this kind of models! ", "Hi all! RetNets seem like a really interesting architecture, so I'm quite excited to take a look - I'll try to review this in the next day or two.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27259). All of your documentation changes will be reflected on that endpoint.", "@Rocketknight1 Thanks for reviewing this PR! I have gone through the comments and resolved them. There are also some other updates:\r\n\r\n- `dtype` handling for tensors created in `NucleusXRelPos` to have the same `dtype` as the model weights\r\n- rename some `*layer_norm` modules to `*rms_norm` for conformity.\r\n- removed `subln` option (Sub-LayerNorm), which is not applicable to our choice of FFN (SwiGLU).\r\n\r\nThere are other minor changes, which can be found in the commit logs. \r\n\r\nAs per weight release, we are working hard to make that happen :) We'll ping here when the weights are ready for public release.\r\n\r\nThanks again!", "Also @syncdoth, while we're waiting for the checkpoints, there are some tests failing in this PR that are unrelated. If you pull the latest version from main and rebase your PR, that should fix them.", "> Also @syncdoth, while we're waiting for the checkpoints, there are some tests failing in this PR that are unrelated. If you pull the latest version from main and rebase your PR, that should fix them.\r\n\r\nThis may be a beginner question, but should I rebase main and (force?) push or merge main and push?", "Probably the easiest way to do it is to pull the latest version of main, then rebase your branch onto main, and then force push.", "Hi @syncdoth, do you know what happened to Nucleus AI? The website is now down", "> Hi @syncdoth, do you know what happened to Nucleus AI? The website is now down\r\n\r\nThis is unrelated to this PR but there's some maintenance going on with the website. Hang tight :)", "btw @syncdoth if you're still getting test failures, try 'sync upstream' on the `main` branch of your forked repo, then on your development machine, pull the latest main branch, change to the `add_nucleus_x` branch, rebase and finally force push. Should resolve everything!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Don't stale, please! This looks quite close to being ready! (cc @syncdoth - let me know if you need any help with the last bit!)", "We are on the verge of releasing the weight! There's been a bit of delay in the schedule 🥲\n\nThe last bit should be updating the weight links in the docs and writing the integration tests; We are working on it hard!", "Hi @Rocketknight1, I’m seeing test failure related to document building, and testing the run of NucleusXForCausalLM.forward example. 
It seems that it might be due to `.from_pretrained` from a 7B checkpoint killing the worker, like the previous example in the configuration. Do you think I should change the example to a smaller one?", "Does it require some tinkering to use `generate` in not parallel mode? (I don't have RAM for processing 16KB prompt in parallel)\r\n\r\nI dumped source to model folder, edited config to treat it as `trusted_remoted_code=True` thingy, parallel works fine, as in [test](https://github.com/huggingface/transformers/blob/b624d0518776d1761d29a448b83de4b2ef4d6658/tests/models/nucleus_x/test_modeling_nucleus_x.py#L465):\r\n\r\n```python\r\nIn [7]: print(tokenizer.decode(model.generate(**tokenizer(\"Hello my name is\", return_tensors=\"pt\").to(\"cuda\"), max_new_tokens=20, do_sample=False, forward_mod\r\n ...: e=\"parallel\").ravel()))\r\nSetting `pad_token_id` to `eos_token_id`:2 for open-end generation.\r\n/home/fella/src/llama/text-generation-webui/models/NucleusAI_Nucleus-X/modeling_nucleus_x.py:370: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).\r\n cache = (current_kv, scale, torch.tensor(prev_seqlen + 1, dtype=torch.long))\r\n<s> Hello my name is Tina and I am a 25 year old female. I am a very outgoing person\r\n```\r\n\r\nbut recurrent no\r\n\r\n```python\r\nIn [8]: print(tokenizer.decode(model.generate(**tokenizer(\"Hello my name is\", return_tensors=\"pt\").to(\"cuda\"), max_new_tokens=20, do_sample=False, forward_mod\r\n ...: e=\"recurrent\").ravel()))\r\nSetting `pad_token_id` to `eos_token_id`:2 for open-end generation.\r\n/home/fella/src/llama/text-generation-webui/models/NucleusAI_Nucleus-X/modeling_nucleus_x.py:370: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).\r\n cache = (current_kv, scale, torch.tensor(prev_seqlen + 1, dtype=torch.long))\r\n<s> Hello my name is the most of the world.\r\nThe first thing I noticed was the size of the room. It\r\n```\r\n\r\n(Even if I say in config.json to use recurrent forward mode, 16KB prompt fails to pass through model.generate unless I use forward_mode='recurrent')", "Hi @syncdoth, sorry for the Christmas delay! You're correct, though - the issue is almost certainly caused by the docstring trying to load a model too big for the test runner. Is there any smaller checkpoint we can use? You could also try `torch_dtype=torch.bfloat16`.", "Haha plz don't stale this! We are still working hard to put out the model. We are working on a small model to pass the PR requirement, but it has been a lower priority unfortunately :( will finish to finish this within mid Feb!", "No worries 🤗 " ]
1,698
1,707
null
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This PR adds a new model named NucleusX. This model is contributed by [Sehyun Choi](https://github.com/syncdoth) and [NucleusAI](https://www.withnucleus.ai/). The model is based on the [Retentive Network](https://arxiv.org/abs/2307.08621) architecture, and the code is largely adapted from [this repo](https://github.com/syncdoth/retnet.git), which again borrows core implementations from [torchscale](https://github.com/microsoft/torchscale). We are planning to release our paper and weights soon. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? We kindly request the review of this new model from @ArthurZucker and @younesbelkada! Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27259/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27259/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27259", "html_url": "https://github.com/huggingface/transformers/pull/27259", "diff_url": "https://github.com/huggingface/transformers/pull/27259.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27259.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/27258
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27258/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27258/comments
https://api.github.com/repos/huggingface/transformers/issues/27258/events
https://github.com/huggingface/transformers/pull/27258
1,975,493,774
PR_kwDOCUB6oc5ef-QN
27,258
[`PEFT` / `Tests` ] Fix peft integration failing tests
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,698
1,699
1,699
CONTRIBUTOR
null
# What does this PR do? As per the title, this PR fixes the failing PEFT integration tests. Since the safetensors format is now the default format for saving models, the expected results in the tests need to be adapted accordingly. Also added regression tests to make sure the previous behaviour is preserved. Links to failing jobs: https://github.com/huggingface/transformers/actions/runs/6727555396/job/18285707568 / https://github.com/huggingface/transformers/actions/runs/6727555396/job/18285709440 cc @amyeroberts
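As a rough sketch of what the adapted expectations look like, the snippet below checks the new default save format against the legacy one on a tiny test checkpoint (full model weights here, while the PR itself targets the PEFT adapter files); the repo id is used only for illustration.

```python
import os
import tempfile

from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("hf-internal-testing/tiny-random-OPTForCausalLM")

with tempfile.TemporaryDirectory() as tmp:
    model.save_pretrained(tmp)  # safetensors is now the default serialization
    assert "model.safetensors" in os.listdir(tmp)

    model.save_pretrained(tmp, safe_serialization=False)  # previous behaviour
    assert "pytorch_model.bin" in os.listdir(tmp)
```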
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27258/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27258/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27258", "html_url": "https://github.com/huggingface/transformers/pull/27258", "diff_url": "https://github.com/huggingface/transformers/pull/27258.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27258.patch", "merged_at": 1699010582000 }
https://api.github.com/repos/huggingface/transformers/issues/27257
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27257/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27257/comments
https://api.github.com/repos/huggingface/transformers/issues/27257/events
https://github.com/huggingface/transformers/issues/27257
1,975,485,946
I_kwDOCUB6oc51v4X6
27,257
An unexpected argument in the ResNet Flax implementation
{ "login": "pingzhili", "id": 55396526, "node_id": "MDQ6VXNlcjU1Mzk2NTI2", "avatar_url": "https://avatars.githubusercontent.com/u/55396526?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pingzhili", "html_url": "https://github.com/pingzhili", "followers_url": "https://api.github.com/users/pingzhili/followers", "following_url": "https://api.github.com/users/pingzhili/following{/other_user}", "gists_url": "https://api.github.com/users/pingzhili/gists{/gist_id}", "starred_url": "https://api.github.com/users/pingzhili/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pingzhili/subscriptions", "organizations_url": "https://api.github.com/users/pingzhili/orgs", "repos_url": "https://api.github.com/users/pingzhili/repos", "events_url": "https://api.github.com/users/pingzhili/events{/privacy}", "received_events_url": "https://api.github.com/users/pingzhili/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @pingzhili, thanks for raising this issue! \r\n\r\nIndeed, it seems that `activation` shouldn't be passed to FlaxResNetBasicLayerCollection [here](https://github.com/huggingface/transformers/blob/5964f820db1568d26298b37dea9db328185c7f7c/src/transformers/models/resnet/modeling_flax_resnet.py#L218C3-L218C3). Would you like to open a PR to fix this? This way you get the github contribution for spotting the issue and resolving it. ", "Thanks, I will open a PR for this later." ]
1,698
1,699
1,699
CONTRIBUTOR
null
### System Info The class `FlaxResNetBasicLayerCollection` does not expect the argument `activation`, but it is passed here, which raises an error: https://github.com/huggingface/transformers/blob/8a312956fd49efd69adb98c40996719d4c276a01/src/transformers/models/resnet/modeling_flax_resnet.py#L215-L220 The definition of `FlaxResNetBasicLayerCollection` is: https://github.com/huggingface/transformers/blob/8a312956fd49efd69adb98c40996719d4c276a01/src/transformers/models/resnet/modeling_flax_resnet.py#L180-L194 ### Who can help? @amyeroberts ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction A minimal reproducible example: ``` from transformers import FlaxResNetForImageClassification, ResNetConfig model = FlaxResNetForImageClassification(config=ResNetConfig(layer_type="basic")) ``` ### Expected behavior Raise `TypeError: FlaxResNetBasicLayerCollection.__init__() got an unexpected keyword argument 'activation'`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27257/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27257/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27256
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27256/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27256/comments
https://api.github.com/repos/huggingface/transformers/issues/27256/events
https://github.com/huggingface/transformers/pull/27256
1,975,410,116
PR_kwDOCUB6oc5efsTe
27,256
get default device through `PartialState().default_device` as it has been officially released
{ "login": "statelesshz", "id": 28150734, "node_id": "MDQ6VXNlcjI4MTUwNzM0", "avatar_url": "https://avatars.githubusercontent.com/u/28150734?v=4", "gravatar_id": "", "url": "https://api.github.com/users/statelesshz", "html_url": "https://github.com/statelesshz", "followers_url": "https://api.github.com/users/statelesshz/followers", "following_url": "https://api.github.com/users/statelesshz/following{/other_user}", "gists_url": "https://api.github.com/users/statelesshz/gists{/gist_id}", "starred_url": "https://api.github.com/users/statelesshz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/statelesshz/subscriptions", "organizations_url": "https://api.github.com/users/statelesshz/orgs", "repos_url": "https://api.github.com/users/statelesshz/repos", "events_url": "https://api.github.com/users/statelesshz/events{/privacy}", "received_events_url": "https://api.github.com/users/statelesshz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27256). All of your documentation changes will be reflected on that endpoint.", "@muellerzr If we've had a deprecation notice for a few releases and it's available from the lowest version of accelerate officially supported in setup.py then I think we're fine. \r\n\r\nHowever, @statelesshz I can see that in this PR: https://github.com/huggingface/transformers/pull/26774/files that the deprecation notice mentions that it will be removed in v4.36 but this warning should still be present in v.4.35 (current release). This is entirely my bad - I shouldn't have approved and merged the PR. \r\n\r\nAs such, we'll need to do a version check like @muellerzr suggests, add back the warning from #26774, updating the notice to v4.37 and then remove it all in v4.37. Sorry for not catching this. ", "> @muellerzr If we've had a deprecation notice for a few releases and it's available from the lowest version of accelerate officially supported in setup.py then I think we're fine.\r\n> \r\n> However, @statelesshz I can see that in this PR: https://github.com/huggingface/transformers/pull/26774/files that the deprecation notice mentions that it will be removed in v4.36 but this warning should still be present in v.4.35 (current release). This is entirely my bad - I shouldn't have approved and merged the PR.\r\n> \r\n> As such, we'll need to do a version check like @muellerzr suggests, add back the warning from #26774, updating the notice to v4.37 and then remove it all in v4.37. Sorry for not catching this.\r\n\r\nThis sounds reasonable :)", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "cc @statelesshz, can you do it for v4.38 instead? Thanks! (A new release is coming this week)", "I'd like to open another PR to update the notice to v4.38 and leave this PR open until it's OK to be merged :-) WDYT @muellerzr ", "Works with me @statelesshz!", "@muellerzr do you want to merge this? ", "@amyeroberts The main branch is now at 4.38.dev0, I think this PR is ready to be merged :-)" ]
1,698
1,707
1,706
CONTRIBUTOR
null
# What does this PR do? As per title. The main branch is now at 4.38.dev0, so I submit this PR which is a continuation of (the merged) https://github.com/huggingface/transformers/pull/26774. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
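A sketch of the kind of version-guarded fallback discussed in the review comments; the exact accelerate version that first exposed `PartialState().default_device` and the minimum pinned in setup.py should be taken from the repository, not from this snippet — the floor used below is an assumption.

```python
import importlib.metadata

import torch
from packaging import version

ACCELERATE_MIN_VERSION = "0.20.3"  # assumed floor; check setup.py for the real minimum

if version.parse(importlib.metadata.version("accelerate")) >= version.parse(ACCELERATE_MIN_VERSION):
    from accelerate import PartialState

    device = PartialState().default_device
else:
    # legacy fallback for older accelerate releases
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
```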
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27256/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27256/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27256", "html_url": "https://github.com/huggingface/transformers/pull/27256", "diff_url": "https://github.com/huggingface/transformers/pull/27256.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27256.patch", "merged_at": 1706000972000 }
https://api.github.com/repos/huggingface/transformers/issues/27255
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27255/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27255/comments
https://api.github.com/repos/huggingface/transformers/issues/27255/events
https://github.com/huggingface/transformers/issues/27255
1,975,184,370
I_kwDOCUB6oc51uuvy
27,255
Fuyu training fails with a padding error for the same inputs as the test case
{ "login": "Carolinabanana", "id": 140120812, "node_id": "U_kgDOCFoS7A", "avatar_url": "https://avatars.githubusercontent.com/u/140120812?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Carolinabanana", "html_url": "https://github.com/Carolinabanana", "followers_url": "https://api.github.com/users/Carolinabanana/followers", "following_url": "https://api.github.com/users/Carolinabanana/following{/other_user}", "gists_url": "https://api.github.com/users/Carolinabanana/gists{/gist_id}", "starred_url": "https://api.github.com/users/Carolinabanana/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Carolinabanana/subscriptions", "organizations_url": "https://api.github.com/users/Carolinabanana/orgs", "repos_url": "https://api.github.com/users/Carolinabanana/repos", "events_url": "https://api.github.com/users/Carolinabanana/events{/privacy}", "received_events_url": "https://api.github.com/users/Carolinabanana/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @Carolinabanana , I answered here https://github.com/huggingface/transformers/pull/26997#issuecomment-1798846118 to another training script issue, can you check it it helps your case?", "Hey did you get your script working? I'm trying to fine-tune Fuyu as well and would appreciate an example script.", "Hey,I came across your fine-tuned language model repository and am very interested in learning about the fine-tuning process. Would you be willing to share any details about the techniques or code used to fine-tune the model? Understanding how others have approached fine-tuning would be really helpful as I'm new to this area.Thank you for your time." ]
1,698
1,706
1,699
NONE
null
### System Info transformers-4.36.0.dev0 (commit commit 552ff24488d4027590deded3b2b0d1716df341c3) ### Who can help? @molbap ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ``` import io from datasets import Dataset from PIL import Image import requests import torch from transformers import AutoTokenizer,Trainer,TrainingArguments,FuyuImageProcessor,FuyuProcessor,FuyuForCausalLM pretrained_path = "adept/fuyu-8b" tokenizer = AutoTokenizer.from_pretrained(pretrained_path, pad_token_id=0, padding=True,truncation=True) tokenizer.pad_token = tokenizer.eos_token image_processor = FuyuImageProcessor() processor = FuyuProcessor(image_processor=image_processor, tokenizer=tokenizer) text_prompt = "Answer the following DocVQA question based on the image. \n Which is the metro in California that has a good job Outlook?" jobs_image_url = "https://huggingface.co/datasets/hf-internal-testing/fixtures-captioning/resolve/main/jobs.png" jobs_image_pil = Image.open(io.BytesIO(requests.get(jobs_image_url).content)) second_text_prompt = "Answer the following DocVQA question based on the image. \n What if the maximum male life expectancy?" chart_image_url = "https://huggingface.co/datasets/hf-internal-testing/fixtures-captioning/resolve/main/chart.png" chart_image_pil = Image.open(io.BytesIO(requests.get(chart_image_url).content)) model_inputs = processor(text=[text_prompt,second_text_prompt], images=[jobs_image_pil,chart_image_pil]).to("cuda:0") model = FuyuForCausalLM.from_pretrained(pretrained_path, device_map='cuda:0', torch_dtype=torch.bfloat16) generation = processor.tokenizer.batch_decode(model.generate( **model_inputs, max_new_tokens=10)[:, -10:], skip_special_tokens=True) for batched_generation in generation: answer = batched_generation.split('\x04 ', 1)[1] if '\x04' in batched_generation else '' print(answer) #Results : Los Angeles, 80.7 tokenized_dataset = Dataset.from_dict(model_inputs) trainer = Trainer( model=model, train_dataset=tokenized_dataset, tokenizer=tokenizer ) trainer.train() #ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length. Perhaps your features (`image_patches` in this case) have excessive nesting (inputs type `list` where type `int` is expected). ``` ### Expected behavior When running Fuyu model training, the same inputs that work correctly in model.generate (the ones from the test case) fail with _ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length. Perhaps your features (`image_patches` in this case) have excessive nesting_ However, I have already enabled 'padding=True' 'truncation=True' on the tokenizer, and I have used the exact output of FuyuProcessor as the inputs. I provide example code that modifies the existing test case and test data to add a Trainer.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27255/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27255/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27254
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27254/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27254/comments
https://api.github.com/repos/huggingface/transformers/issues/27254/events
https://github.com/huggingface/transformers/pull/27254
1,975,155,253
PR_kwDOCUB6oc5ee0u6
27,254
Refactor and Enhance Readability of Zero-Shot Distillation in Transformers
{ "login": "itsmenick212", "id": 52716575, "node_id": "MDQ6VXNlcjUyNzE2NTc1", "avatar_url": "https://avatars.githubusercontent.com/u/52716575?v=4", "gravatar_id": "", "url": "https://api.github.com/users/itsmenick212", "html_url": "https://github.com/itsmenick212", "followers_url": "https://api.github.com/users/itsmenick212/followers", "following_url": "https://api.github.com/users/itsmenick212/following{/other_user}", "gists_url": "https://api.github.com/users/itsmenick212/gists{/gist_id}", "starred_url": "https://api.github.com/users/itsmenick212/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/itsmenick212/subscriptions", "organizations_url": "https://api.github.com/users/itsmenick212/orgs", "repos_url": "https://api.github.com/users/itsmenick212/repos", "events_url": "https://api.github.com/users/itsmenick212/events{/privacy}", "received_events_url": "https://api.github.com/users/itsmenick212/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @itsmenick212, thanks for opening this PR and contributing to improving the repo! \r\n\r\nThe research projects are [not maintained](https://github.com/huggingface/transformers/blob/8f1a43cd91cb22b65f1f840f6bca0e156e5e8495/examples/research_projects/README.md#L4) - as such we don't accept new PRs to update them. ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,698
1,702
1,702
NONE
null
# Description: This pull request introduces a series of improvements to the HuggingFace Transformers library, specifically within the zero-shot distillation research project. The changes aim to enhance the code's readability, maintainability, and clarity through thoughtful refactoring and clear organization of dataset processing steps. # What does this PR do? With a focus on modularity and readability, this PR breaks down the monolithic main function into smaller, more manageable helper functions. The reorganization of dataset processing steps provides a clear path from data loading to tokenization, ensuring a more understandable and maintainable codebase. Key improvements include: Refactoring: The main function has been decomposed into several helper functions (initialize_student_model, create_compute_metrics_func, create_distillation_trainer, and train_and_evaluate_model), each with a distinct responsibility, improving modularity and readability. Dataset Formatting: By grouping together dataset processing operations, the flow from data loading to its utilization in PyTorch is now more transparent and logical. Helper Functions: New helper functions encapsulate specific tasks within the distillation process, making it easier to understand the workflow and modify individual components without affecting the overall system. The changes made do not introduce additional dependencies and maintain the existing functionality while improving the structure and clarity of the code.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27254/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27254/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27254", "html_url": "https://github.com/huggingface/transformers/pull/27254", "diff_url": "https://github.com/huggingface/transformers/pull/27254.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27254.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/27253
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27253/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27253/comments
https://api.github.com/repos/huggingface/transformers/issues/27253/events
https://github.com/huggingface/transformers/pull/27253
1,975,020,392
PR_kwDOCUB6oc5eeW5m
27,253
Fix disk offload when loading a derived model checkpoint into the base model
{ "login": "SunMarc", "id": 57196510, "node_id": "MDQ6VXNlcjU3MTk2NTEw", "avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SunMarc", "html_url": "https://github.com/SunMarc", "followers_url": "https://api.github.com/users/SunMarc/followers", "following_url": "https://api.github.com/users/SunMarc/following{/other_user}", "gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}", "starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions", "organizations_url": "https://api.github.com/users/SunMarc/orgs", "repos_url": "https://api.github.com/users/SunMarc/repos", "events_url": "https://api.github.com/users/SunMarc/events{/privacy}", "received_events_url": "https://api.github.com/users/SunMarc/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks for the review @amyeroberts ! Any idea where I should add this test ? I don't think we want to test this for all our models but maybe just for one ? ", "@SunMarc Agreed - just for one model should be enough. I'd suggest putting them in [test_modeling_utils.py](https://github.com/huggingface/transformers/blob/88832c01c8a962b653874c4ce4ed8df5783ac5cd/tests/test_modeling_utils.py#L4) " ]
1,698
1,700
1,700
MEMBER
null
# What does this PR do? This PR fixes #27199. It was not possible to load weights from a derived model into the base model when offloading parameters to disk with safetensors. The issue was that we were not able to create the right `offload_index` because the weights didn't match. Example: ```py from transformers import AutoModel device_map = {'embed_tokens': 0, 'layers.0': "disk", 'layers.1': 0, 'layers.2': 0, 'layers.3': 0, 'layers.4': 0, 'layers.5': 0, 'layers.6': 0, 'layers.7': 1, 'layers.8': 1, 'layers.9': 1, 'layers.10': 1, 'layers.11': 1, 'layers.12': 1, 'layers.13': 1, 'layers.14': 1, 'layers.15': 1, 'layers.16': 2, 'layers.17': 2, 'layers.18': 2, 'layers.19': 2, 'layers.20': 2, 'layers.21': 2, 'layers.22': 2, 'layers.23': 2, 'layers.24': 2, 'layers.25': 3, 'layers.26': 3, 'layers.27': 3, 'layers.28': 3, 'layers.29': 3, 'layers.30': 3, 'layers.31': 3, 'norm': 3} model = AutoModel.from_pretrained("meta-llama/Llama-2-7b-hf", device_map = device_map, use_safetensors=True) ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27253/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27253/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27253", "html_url": "https://github.com/huggingface/transformers/pull/27253", "diff_url": "https://github.com/huggingface/transformers/pull/27253.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27253.patch", "merged_at": 1700078289000 }
https://api.github.com/repos/huggingface/transformers/issues/27252
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27252/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27252/comments
https://api.github.com/repos/huggingface/transformers/issues/27252/events
https://github.com/huggingface/transformers/issues/27252
1,974,963,717
I_kwDOCUB6oc51t44F
27,252
[i18n-<languageCode>] Translating docs to <languageName>
{ "login": "julija1", "id": 26845972, "node_id": "MDQ6VXNlcjI2ODQ1OTcy", "avatar_url": "https://avatars.githubusercontent.com/u/26845972?v=4", "gravatar_id": "", "url": "https://api.github.com/users/julija1", "html_url": "https://github.com/julija1", "followers_url": "https://api.github.com/users/julija1/followers", "following_url": "https://api.github.com/users/julija1/following{/other_user}", "gists_url": "https://api.github.com/users/julija1/gists{/gist_id}", "starred_url": "https://api.github.com/users/julija1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/julija1/subscriptions", "organizations_url": "https://api.github.com/users/julija1/orgs", "repos_url": "https://api.github.com/users/julija1/repos", "events_url": "https://api.github.com/users/julija1/events{/privacy}", "received_events_url": "https://api.github.com/users/julija1/received_events", "type": "User", "site_admin": false }
[ { "id": 2796628563, "node_id": "MDU6TGFiZWwyNzk2NjI4NTYz", "url": "https://api.github.com/repos/huggingface/transformers/labels/WIP", "name": "WIP", "color": "234C99", "default": false, "description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress" } ]
open
false
null
[]
[ "Hi @julija1, what language are you wanting to kickstart the community translations for? " ]
1,698
1,698
null
NONE
null
<!-- Note: Please search to see if an issue already exists for the language you are trying to translate. --> Hi! Let's bring the documentation to all the <languageName>-speaking community 🌐 (currently 0 out of 267 complete) Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list. Some notes: * Please translate using an informal tone (imagine you are talking with a friend about transformers 🤗). * Please translate in a gender-neutral way. * Add your translations to the folder called `<languageCode>` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source). * Register your translation in `<languageCode>/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml). * Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @stevhliu and @MKhalusova for review. * 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/). ## Get Started section - [ ] [index.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.md) https://github.com/huggingface/transformers/pull/20180 - [ ] [quicktour.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.md) (waiting for initial PR to go through) - [ ] [installation.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.md). ## Tutorial section - [ ] [pipeline_tutorial.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/pipeline_tutorial.md) - [ ] [autoclass_tutorial.md](https://github.com/huggingface/transformers/blob/master/docs/source/autoclass_tutorial.md) - [ ] [preprocessing.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/preprocessing.md) - [ ] [training.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/training.md) - [ ] [accelerate.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/accelerate.md) - [ ] [model_sharing.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_sharing.md) - [ ] [multilingual.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/multilingual.md) <!-- Keep on adding more as you go 🔥 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27252/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27252/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/27251
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27251/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27251/comments
https://api.github.com/repos/huggingface/transformers/issues/27251/events
https://github.com/huggingface/transformers/issues/27251
1,974,833,876
I_kwDOCUB6oc51tZLU
27,251
Suspected Blip2ForConditionalGeneration attention mask not working properly
{ "login": "nnethercott", "id": 53127799, "node_id": "MDQ6VXNlcjUzMTI3Nzk5", "avatar_url": "https://avatars.githubusercontent.com/u/53127799?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nnethercott", "html_url": "https://github.com/nnethercott", "followers_url": "https://api.github.com/users/nnethercott/followers", "following_url": "https://api.github.com/users/nnethercott/following{/other_user}", "gists_url": "https://api.github.com/users/nnethercott/gists{/gist_id}", "starred_url": "https://api.github.com/users/nnethercott/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nnethercott/subscriptions", "organizations_url": "https://api.github.com/users/nnethercott/orgs", "repos_url": "https://api.github.com/users/nnethercott/repos", "events_url": "https://api.github.com/users/nnethercott/events{/privacy}", "received_events_url": "https://api.github.com/users/nnethercott/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Another fun example \r\n\r\nDecoding each question one at a time\r\n```python\r\nprompt = \"Question: {} Answer:\"\r\n\r\ninputs = processor(raw_image, prompt.format(question1), return_tensors = 'pt')\r\nout = model.generate(**inputs)\r\n\r\nprint(processor.tokenizer.decode(out[0], skip_special_tokens = True).strip())\r\n\r\n>> 'none'\r\n\r\n\r\ninputs = processor(raw_image, prompt.format(question2), return_tensors = 'pt')\r\nout = model.generate(**inputs)\r\n\r\nprint(processor.tokenizer.decode(out[0], skip_special_tokens = True).strip())\r\n\r\n>> 'yes, there is a dog here'\r\n\r\n```\r\nNow batch them together \r\n\r\n```python \r\ninputs = processor([raw_image,raw_image], [prompt.format(question1), prompt.format(question2)], padding = True, return_tensors=\"pt\")\r\n\r\nout = model.generate(**inputs)\r\n[s.strip() for s in processor.tokenizer.batch_decode(out, skip_special_tokens = True)]\r\n\r\n>> ['none', '']\r\n```\r\n\r\n\r\n", "I have the same problem. I followed to try out the example code provided by BLIP-2, yet nothing, no results showed up on the console\r\n\r\n```\r\nimport requests\r\nfrom PIL import Image\r\nfrom transformers import Blip2Processor, Blip2ForConditionalGeneration\r\nimport torch\r\n\r\nprocessor = Blip2Processor.from_pretrained(\r\n \"Salesforce/blip2-opt-2.7b\",\r\n cache_dir=\"models/\",\r\n resume_download=True,\r\n offload_folder=\"offload\"\r\n)\r\n\r\nmodel = Blip2ForConditionalGeneration.from_pretrained(\r\n \"Salesforce/blip2-opt-2.7b\",\r\n device_map=\"auto\",\r\n cache_dir=\"models/\",\r\n resume_download=True,\r\n offload_folder=\"offload\"\r\n )\r\n\r\nimg_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'\r\nraw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')\r\n\r\nquestion = \"how many dogs are in the picture?\"\r\ninputs = processor(raw_image, question, return_tensors=\"pt\").to(\"cuda\", torch.float16)\r\n\r\nout = model.generate(**inputs)\r\nprint(processor.decode(out[0], skip_special_tokens=True).strip())\r\n\r\n# >>>\r\n# Expected:\r\n# >>> There is one dog.\r\n```", "> I have the same problem. I followed to try out the example code provided by BLIP-2, yet nothing, no results showed up on the console\r\n> \r\n> ```\r\n> import requests\r\n> from PIL import Image\r\n> from transformers import Blip2Processor, Blip2ForConditionalGeneration\r\n> import torch\r\n> \r\n> processor = Blip2Processor.from_pretrained(\r\n> \"Salesforce/blip2-opt-2.7b\",\r\n> cache_dir=\"models/\",\r\n> resume_download=True,\r\n> offload_folder=\"offload\"\r\n> )\r\n> \r\n> model = Blip2ForConditionalGeneration.from_pretrained(\r\n> \"Salesforce/blip2-opt-2.7b\",\r\n> device_map=\"auto\",\r\n> cache_dir=\"models/\",\r\n> resume_download=True,\r\n> offload_folder=\"offload\"\r\n> )\r\n> \r\n> img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'\r\n> raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')\r\n> \r\n> question = \"how many dogs are in the picture?\"\r\n> inputs = processor(raw_image, question, return_tensors=\"pt\").to(\"cuda\", torch.float16)\r\n> \r\n> out = model.generate(**inputs)\r\n> print(processor.decode(out[0], skip_special_tokens=True).strip())\r\n> \r\n> # >>>\r\n> # Expected:\r\n> # >>> There is one dog.\r\n> ```\r\n\r\nHave you tried formatting the question before tokenization? 
In the paper they suggested using a prompt like \"Question: {question} Answer:\" so that your input would go from `how many dogs are in the picture?` to `Question: how many dogs are there in the picture? Answer:`. \r\n\r\nSimilar to the thread in the community section for the model here -> https://huggingface.co/Salesforce/blip2-opt-2.7b/discussions/15\r\n\r\nMy issue is that the conditional outputs of the model change based on the batch size which messes with training and generating for batches of inputs :pensive:", "These problems can occur due to two reasons:\r\n\r\n1. The tokenizer padding side has not been configured. If you are using the OPT checkpoint, make sure to set `tokenizer.padding_side = left`.\r\n\r\n2. The question needs to be formatted using the template \"Question: {question} Answer: {}\" for OPT models to generate a meaningful output.\r\n\r\nBy addressing these two issues, you should be able to resolve the error.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
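A minimal sketch that puts the two suggestions above together — left padding for the decoder-only OPT text model and the "Question: … Answer:" template — for batched generation; whether this also removes the batch-size dependence of the raw logits reported in this issue is not verified here.

```python
import requests
import torch
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
processor.tokenizer.padding_side = "left"  # decoder-only LM: pad prompts on the left

model = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-opt-2.7b", device_map="auto", torch_dtype=torch.float16
)

img_url = "https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg"
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB")

template = "Question: {} Answer:"
questions = ["how many dogs are in the picture?", "is there a dog here?"]

inputs = processor(
    [raw_image, raw_image],
    [template.format(q) for q in questions],
    padding=True,
    return_tensors="pt",
).to("cuda", torch.float16)

out = model.generate(**inputs, max_new_tokens=20)
print([s.strip() for s in processor.batch_decode(out, skip_special_tokens=True)])
```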
1,698
1,702
1,702
NONE
null
### System Info - `transformers` version: 4.34.1 - Platform: Linux-6.2.0-35-generic-x86_64-with-glibc2.37 - Python version: 3.10.13 - Huggingface_hub version: 0.17.3 - Safetensors version: 0.4.0 - Accelerate version: 0.24.0 - Accelerate config: - compute_environment: LOCAL_MACHINE - distributed_type: MULTI_GPU - mixed_precision: no - use_cpu: False - debug: False - num_processes: 4 - machine_rank: 0 - num_machines: 1 - rdzv_backend: static - same_network: False - main_training_function: main - downcast_bf16: False - tpu_use_cluster: False - tpu_use_sudo: False - PyTorch version (GPU?): 2.1.0+cu121 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @younesbelkada ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```python from transformers import Blip2Processor, Blip2ForConditionalGeneration import requests from PIL import Image base_model_id = "Salesforce/blip2-opt-2.7b" model = Blip2ForConditionalGeneration.from_pretrained(base_model_id, device_map = "auto") processor = Blip2Processor.from_pretrained(base_model_id) # from provided huggingface example -> https://huggingface.co/Salesforce/blip2-opt-2.7b img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question1 = "how many dogs are in the picture?" question2 = "is there a dog here?" # batching inputs = processor([raw_image,raw_image], [question1, question2], padding = True, return_tensors="pt") # both separate i0 = processor([raw_image], [question1], return_tensors="pt") i1 = processor([raw_image], [question2], return_tensors="pt") with torch.no_grad(): out = model(**inputs) o0 = model(**i0) o1 = model(**i1) logits = out['logits'] l0 = o0['logits'] l1 = o1['logits'] # outputs False in both cases print(torch.allclose(logits[0][:l0.shape[1]].unsqueeze(0), l0)) print(torch.allclose(logits[1][:l1.shape[1]].unsqueeze(0), l1)) ``` ### Expected behavior Whether or not inputs are batched should not influence the outputs of the forward pass in a deterministic model. I expect the dimensions of the logits obtained from the batched input (truncated accordingly when padding was applied) to match up with the logits obtained from passing each sample through one at a time. Otherwise model decoding behaviour becomes undefined given whatever batch size you feed the model with. e.g. example from above ```python # should output True print(torch.allclose(logits[0][:l0.shape[1]].unsqueeze(0), l0)) print(torch.allclose(logits[1][:l1.shape[1]].unsqueeze(0), l1)) ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27251/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27251/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27250
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27250/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27250/comments
https://api.github.com/repos/huggingface/transformers/issues/27250/events
https://github.com/huggingface/transformers/pull/27250
1,974,813,378
PR_kwDOCUB6oc5edp7r
27,250
Update the ConversationalPipeline docstring for chat templates
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,698
1,699
1,699
MEMBER
null
The `ConversationalPipeline` docstring was a bit out of date - this updates it for the chat template era!
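For context, a minimal sketch of the chat-template workflow the updated docstring points to; the checkpoint below is just an example of a model that ships a chat template.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("HuggingFaceH4/zephyr-7b-beta")

messages = [
    {"role": "system", "content": "You are a friendly chatbot."},
    {"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
]

# Renders the conversation with the model's own template, ready to be tokenized
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```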
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27250/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27250/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27250", "html_url": "https://github.com/huggingface/transformers/pull/27250", "diff_url": "https://github.com/huggingface/transformers/pull/27250.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27250.patch", "merged_at": 1699017466000 }
https://api.github.com/repos/huggingface/transformers/issues/27249
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27249/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27249/comments
https://api.github.com/repos/huggingface/transformers/issues/27249/events
https://github.com/huggingface/transformers/pull/27249
1,974,812,012
PR_kwDOCUB6oc5edpoh
27,249
Normalize floating point cast
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "> as the floating values being used to divide the image are now set to 0.\r\n\r\ncould you explain this a bit more?", ">> as the floating values being used to divide the image are now set to 0.\r\n\r\n> could you explain this a bit more?\r\n\r\nSure! So for a lot of the image processors, the default normalization constants are floats e.g. `(0.5, 0.5, 0.5)`. In the current `normalization` implementation, `mean` and `std` are cast to the [dtype of the image](https://github.com/huggingface/transformers/blob/c5037b459e117b9286c611092f38663f6cb763b0/src/transformers/image_transforms.py#L384). If the input image is e.g. `int32`, then the normalization constants are cast to int and would become e.g. `(0, 0, 0)`. So when normalizing with `img - mean / std`, we can end up with division by zero errors", "> > > as the floating values being used to divide the image are now set to 0.\r\n> \r\n> > could you explain this a bit more?\r\n> \r\n> Sure! So for a lot of the image processors, the default normalization constants are floats e.g. `(0.5, 0.5, 0.5)`. In the current `normalization` implementation, `mean` and `std` are cast to the [dtype of the image](https://github.com/huggingface/transformers/blob/c5037b459e117b9286c611092f38663f6cb763b0/src/transformers/image_transforms.py#L384). If the input image is e.g. `int32`, then the normalization constants are cast to int and would become e.g. `(0, 0, 0)`. So when normalizing with `img - mean / std`, we can end up with division by zero errors\r\n\r\nI see, thank you a lot for the detail." ]
1,698
1,699
1,699
COLLABORATOR
null
# What does this PR do? Casts the input image to a floating-point dtype during normalization if it isn't already of floating type. Issues can occur when `do_rescale=False` is set in an image processor: the image passed to the call is then of type uint8 because of the type casting that happens in `resize` due to the PIL image library. Since the mean and std values are cast to match the image dtype, this can cause NaNs and infs to appear in the normalized image, as the floating-point values used to divide the image are truncated to 0. The reason the mean and std values are cast to the image dtype at all is that they previously defaulted to float32, and with float32 constants a float16 input image would be upcast to float32 during normalization. (A standalone sketch of this failure mode appears after this record.) Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27249/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27249/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27249", "html_url": "https://github.com/huggingface/transformers/pull/27249", "diff_url": "https://github.com/huggingface/transformers/pull/27249.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27249.patch", "merged_at": 1699630527000 }
https://api.github.com/repos/huggingface/transformers/issues/27248
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27248/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27248/comments
https://api.github.com/repos/huggingface/transformers/issues/27248/events
https://github.com/huggingface/transformers/pull/27248
1,974,721,308
PR_kwDOCUB6oc5edVwP
27,248
Fuyu protection
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,698
1,698
1,698
MEMBER
null
Fuyu was under the wrong protection; this updates it.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27248/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27248/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27248", "html_url": "https://github.com/huggingface/transformers/pull/27248", "diff_url": "https://github.com/huggingface/transformers/pull/27248.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27248.patch", "merged_at": 1698997506000 }
https://api.github.com/repos/huggingface/transformers/issues/27247
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27247/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27247/comments
https://api.github.com/repos/huggingface/transformers/issues/27247/events
https://github.com/huggingface/transformers/pull/27247
1,974,650,616
PR_kwDOCUB6oc5edF-X
27,247
Add model RTDetr
{ "login": "rafaelpadilla", "id": 31217453, "node_id": "MDQ6VXNlcjMxMjE3NDUz", "avatar_url": "https://avatars.githubusercontent.com/u/31217453?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rafaelpadilla", "html_url": "https://github.com/rafaelpadilla", "followers_url": "https://api.github.com/users/rafaelpadilla/followers", "following_url": "https://api.github.com/users/rafaelpadilla/following{/other_user}", "gists_url": "https://api.github.com/users/rafaelpadilla/gists{/gist_id}", "starred_url": "https://api.github.com/users/rafaelpadilla/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rafaelpadilla/subscriptions", "organizations_url": "https://api.github.com/users/rafaelpadilla/orgs", "repos_url": "https://api.github.com/users/rafaelpadilla/repos", "events_url": "https://api.github.com/users/rafaelpadilla/events{/privacy}", "received_events_url": "https://api.github.com/users/rafaelpadilla/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
open
false
null
[]
[ "@rafaelpadilla Nice work! Excited to get this model added to the library 💪 \r\n\r\nI'm removing the review request for Arthur, as you only need one core maintainer's approval. @NielsRogge if you have time, could you give this a quick review once tests are all passing? It would be good to get your thoughts especially as you've handled the other DETR ports. ", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27247). All of your documentation changes will be reflected on that endpoint.", "Doesn't more contribute?" ]
1,698
1,706
null
CONTRIBUTOR
null
# What does this PR do? Adds new model RTDetr. Image processing: - [x] preprocess - [x] post_process - [x] post_process_object_detection Tests: - [x] image processing: test_image_processor_outputs - [x] image processing: test_multiple_images_processor_outputs - [x] model: logits and boxes match the original model - [ ] model: unit tests for `modeling_rt_detr.py` are passing Backbone: - [x] adjust backbone to be compatible with Timm - [x] convert backbone weights to be compatible with Timm General: - [x] review docstrings - [x] check variable names - [x] check order of classes Fixes #26742 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? @amyeroberts
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27247/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27247/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27247", "html_url": "https://github.com/huggingface/transformers/pull/27247", "diff_url": "https://github.com/huggingface/transformers/pull/27247.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27247.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/27246
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27246/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27246/comments
https://api.github.com/repos/huggingface/transformers/issues/27246/events
https://github.com/huggingface/transformers/pull/27246
1,974,536,698
PR_kwDOCUB6oc5ectH3
27,246
translate run_scripts.md to chinese
{ "login": "jiaqiw09", "id": 60021713, "node_id": "MDQ6VXNlcjYwMDIxNzEz", "avatar_url": "https://avatars.githubusercontent.com/u/60021713?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jiaqiw09", "html_url": "https://github.com/jiaqiw09", "followers_url": "https://api.github.com/users/jiaqiw09/followers", "following_url": "https://api.github.com/users/jiaqiw09/following{/other_user}", "gists_url": "https://api.github.com/users/jiaqiw09/gists{/gist_id}", "starred_url": "https://api.github.com/users/jiaqiw09/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jiaqiw09/subscriptions", "organizations_url": "https://api.github.com/users/jiaqiw09/orgs", "repos_url": "https://api.github.com/users/jiaqiw09/repos", "events_url": "https://api.github.com/users/jiaqiw09/events{/privacy}", "received_events_url": "https://api.github.com/users/jiaqiw09/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@stevhliu\r\n\r\nhi,\r\n\r\nhere is another PR of run_scripts.md, and I will fix merge conflict later.\r\n\r\nBest", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27246). All of your documentation changes will be reflected on that endpoint.", "@stevhliu \r\n\r\nreviews and merge conflict have been sloved.\r\n\r\nbest" ]
1,698
1,699
1,699
CONTRIBUTOR
null
# What does this PR do? Part of #26803 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? _not necessary_ ## Who can review? @stevhliu
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27246/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27246/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27246", "html_url": "https://github.com/huggingface/transformers/pull/27246", "diff_url": "https://github.com/huggingface/transformers/pull/27246.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27246.patch", "merged_at": 1699031982000 }
https://api.github.com/repos/huggingface/transformers/issues/27245
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27245/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27245/comments
https://api.github.com/repos/huggingface/transformers/issues/27245/events
https://github.com/huggingface/transformers/issues/27245
1,974,428,910
I_kwDOCUB6oc51r2Tu
27,245
ImportError: cannot import name 'PegasusXForConditionalGeneration'
{ "login": "wlmnzf", "id": 6256515, "node_id": "MDQ6VXNlcjYyNTY1MTU=", "avatar_url": "https://avatars.githubusercontent.com/u/6256515?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wlmnzf", "html_url": "https://github.com/wlmnzf", "followers_url": "https://api.github.com/users/wlmnzf/followers", "following_url": "https://api.github.com/users/wlmnzf/following{/other_user}", "gists_url": "https://api.github.com/users/wlmnzf/gists{/gist_id}", "starred_url": "https://api.github.com/users/wlmnzf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wlmnzf/subscriptions", "organizations_url": "https://api.github.com/users/wlmnzf/orgs", "repos_url": "https://api.github.com/users/wlmnzf/repos", "events_url": "https://api.github.com/users/wlmnzf/events{/privacy}", "received_events_url": "https://api.github.com/users/wlmnzf/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @wlmnzf, thanks for raising this issue! \r\n\r\nIt looks like the might be an issue with the installation of transformers in your environment. Please note that the lowest supported version of python is 3.8. I would suggest first try upgrading your python version and then reinstalling transformers. ", "@amyeroberts Thanks for pointing about this, I try it again with python 3.10, and it works! ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,698
1,702
1,702
NONE
null
### System Info I followed the page (https://huggingface.co/docs/transformers/model_doc/pegasus_x) to try to use PegasusXForConditionalGeneration, but when I import it with ```from transformers import AutoTokenizer, PegasusXForConditionalGeneration```, the import fails. ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction >python3.6 >from transformers import AutoTokenizer, PegasusXForConditionalGeneration ### Expected behavior I expect the import to succeed. (A minimal environment-check sketch appears after this record.)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27245/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27245/timeline
completed
null
null
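As the maintainer comment above notes, the import failure came from running an unsupported Python version. A small, hedged sanity check (purely illustrative, not part of transformers) that makes the root cause obvious before attempting the import:

```python
# Verify the environment before importing; current transformers needs Python >= 3.8.
import importlib.util
import sys

assert sys.version_info >= (3, 8), f"Python {sys.version_info[0]}.{sys.version_info[1]} is too old for current transformers"
assert importlib.util.find_spec("transformers") is not None, "transformers is not installed in this environment"

from transformers import AutoTokenizer, PegasusXForConditionalGeneration  # should now succeed
```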